Portugal Bans Burqa: Is It Really About Women’s Rights?

Portugal has just approved a nationwide ban on full face coverings in public, adding another country to the long list of European nations banning burqas and niqabs. Does this protect rights, or restrict them? Is it even about rights at all?

Portugal’s Vote: What Passed

The country’s parliament approved a bill banning face coverings worn for religious or gender-related reasons in most public spaces. The measure targets burqas and niqabs with fines of €200–€4,000, and penalises anyone who forces somebody else to veil with up to three years in prison. The bill was introduced by Chega and backed by centre-right parties; left-wing parties oppose it, calling it discriminatory and unnecessary in a country where very few women wear full-face coverings.

What started 15 years ago in France as a way to tackle specific concerns about identification, social cohesion and security has since spread further than ever. It currently looks like a victory for those seeking improved cultural integration, but is there a bigger picture to consider?

The List Gets Longer

Here’s a recap of other European countries imposing similar bans in recent years: 

  • France was the first in Europe to enact a nationwide ban on full-face coverings, with the law passed in 2010 and effective from 2011 – it was later upheld by the European Court of Human Rights in 2014 
  • Belgium brought in a national ban in July 2011, with violators facing fines 
  • Bulgaria’s national ban was adopted in 2016 
  • Germany introduced partial bans focused on public servants and official duties in 2017 
  • Austria’s Anti-Face-Veiling Act came into force in October 2017 
  • Denmark passed a national ban in May 2018, effective from August that year 
  • Norway introduced a sectoral ban in schools and universities in 2018 
  • The Netherlands brought in a partial national ban in public buildings and transport in August 2019 
  • Switzerland’s nationwide ban was approved by referendum in March 2021, with federal law taking effect in January 2025 

Other countries like Italy, Spain and Luxembourg have local or limited measures rather than blanket national bans. 

What They Say the Ban Does

Supporters of Portugal’s new legislation argue that the measure aims to strengthen public safety, facilitate identification, and promote women’s rights and social integration. Chega’s leadership framed the proposal as a means of protecting women from coercion, maintaining that a woman forced to wear a burqa loses autonomy and becomes objectified. According to the party’s leader, immigrants and others arriving in Portugal must adhere to the country’s social norms, including the expectation that faces be visible in public. Members of supporting parties such as the Social Democrats, Liberal Initiative, and CDS-PP cited concerns about identification, public order, and the belief that no tradition or imposition should erase an individual’s presence in society.

Breaking the law carries fines of up to €4,000 – the highest of any European country. Fines are around €150 in France and Austria, and up to 1,000 CHF in Switzerland.

Is It Really About Security or Women’s Rights?

Supporters brand these bans as pro-women, claiming they protect girls from coercion and affirm equality in public life. Others argue that if the goal were women’s freedom, the policy would centre on choice and support rather than fines and police checks. In practice – especially in Portugal – the ban polices what a tiny minority of women wear, while doing little for victims of abuse or forced marriage, who need legal aid, shelters, and community support rather than fines for their clothing.

There’s another angle to consider here too. Since these rules extend beyond religious clothing, removing face coverings makes everyone machine-readable. As cities roll out CCTV with facial recognition, is the goal to keep everyone trackable? A continent-wide expectation of uncovered faces makes it easier to identify and profile hundreds of millions of people – even though the rule initially looks like it tackles widespread cultural and security concerns.

Consider protest anonymity, football ultras, or simply masking for privacy in tomorrow’s camera-tracked world. Broad bans today may satisfy voters by targeting religious coverings, but could be diverting attention from the real end-goal. Will it essentially become illegal to hide your face from recognition software in future? 

Keep reading

Microsoft OneDrive Limits How Often Users Can Restrict Facial Recognition Setting

Microsoft is quietly testing a new facial-recognition feature in OneDrive that automatically sorts photos based on who appears in them.

The experimental version is being rolled out to some early-access users, and it turns on by default while placing strict limits on how often it can be disabled.

Those with access will see a new privacy notice in the app stating: “OneDrive uses AI to recognize faces in your photos.”

Microsoft’s support page, which still labels the option as “coming soon,” explains that “Microsoft collects, uses, and stores facial scans and biometric information from your photos through the OneDrive app for facial grouping technologies.”

The company says this is intended to “help you quickly and easily organize photos of friends and family.”

Microsoft insists that the face groupings remain private even when users share albums. It also claims that “Microsoft does not use any of your facial scans and biometric information to train or improve the AI model overall.”

When asked why the system is enabled by default instead of requiring consent first, a company spokesperson told Slashdot that “Microsoft OneDrive inherits privacy features and settings from Microsoft 365 and SharePoint, where applicable.”

Keep reading

Australia Advances National Facial Recognition Network Despite Privacy Concerns

Australia is moving forward with a national facial recognition network that will link millions of citizens’ identity documents, despite ongoing uncertainty about privacy safeguards.

The National Driver Licence Facial Recognition Solution (NDLFRS) will merge biometric data from driver’s licenses and passports so that both government departments and private businesses can verify individuals’ identities.

The proposal dates back eight years but has recently accelerated. The Digital Transformation Agency confirmed that the Department of Home Affairs will host the system, while each state and territory will continue to manage its own data.

The agency stated that the project aims “to protect Australian people from identity theft, to manage and prevent crime, to increase road safety and to improve identity verification.”

It also noted that “Tasmania, Victoria and South Australia have provided data to the NDLFRS, with other states and territories to follow.”

Although the initiative remains marked as “unendorsed,” the government is preparing to activate key components.

The Attorney-General’s Department has announced that the accompanying Face Verification Service (FVS), which checks whether a person’s facial image matches the photo held in official records, is expected to begin operation in 2025.

Keep reading

Dover, NJ Implements AI Surveillance, Expanding Facial Recognition and Public Monitoring Systems

Dover, New Jersey, has joined a growing wave of municipalities embedding artificial intelligence into public spaces, advancing a surveillance system that includes facial recognition and automated video analysis across its government buildings.

The town partnered with technology firm Claro to retrofit its existing camera infrastructure with AI tools, avoiding the need for costly new hardware while expanding its monitoring capabilities.

The system brings a range of features into play, including facial recognition, visible weapons detection, and real-time behavioral analytics.

These tools are now active in locations such as the town hall, police department, fire station, and public library.

Town officials say the technology is being used for incident detection, crime prevention, crowd control, traffic monitoring, and illegal dumping enforcement.

“As a small municipality, we don’t have the budget for constant law enforcement presence,” said Mayor James Dodd. “Claro gave us the ability to enhance safety with cutting-edge technology that works with what we already have.”

The rollout reflects a broader trend where small towns turn to algorithmic systems to fill gaps traditionally addressed by human staff.

AI tools, particularly facial recognition, are increasingly being deployed in public settings, sparking ongoing concern about surveillance practices and the erosion of privacy rights.

Councilman Sergio Rodriguez, who helped lead the initiative, emphasized that the project came together through collaboration rather than off-the-shelf sales.

“Claro wasn’t just selling a product,” he said. “They listened to our needs and delivered solutions that worked for the Town of Dover.” He pointed to the technology’s role in optimizing public safety while helping stretch municipal budgets.

“With AI supporting day-to-day operations,” he said, “we can better protect residents and allocate our budget more effectively.”

Claro markets its AI platform as adaptable to existing surveillance systems and suitable for both real-time alerts and forensic investigations.

Keep reading

JP Morgan’s Biometric Mandate Signals New Era Of Workplace Surveillance In Finance

When employees begin reporting to JPMorgan Chase’s new Manhattan headquarters later this year, they will be required to submit their biometric data to enter the building.

The policy, a first among major U.S. banks, makes biometric enrollment mandatory for staff assigned to the $3 billion, 60-story tower at 270 Park Avenue.

JPMorgan says the system is part of a modern security program designed to protect workers and streamline access, but it has sparked growing concern over privacy, consent, and the expanding use of surveillance technology in the workplace.

Internal communications reviewed by the Financial Times and The Guardian confirm that JPMorgan employees assigned to the new building have been told they must enroll their fingerprints or undergo an eye scan to access the premises.

Earlier drafts of the plan described the system as voluntary, but reports say that language has quietly disappeared. A company spokesperson declined to clarify how data will be stored or how long it will be retained, citing security concerns. Some staff reportedly may retain the option of using a badge instead, though the criteria for exemption remain undisclosed.

The biometric access requirement is being rolled out alongside a Work at JPMC smartphone app that doubles as a digital ID badge and internal service platform, allowing staff to order meals, navigate the building, or register visitors.

According to its listing in the Google Play Store, the app currently claims “no data collected,” though that self-reported disclosure does not replace a formal employee privacy notice.

In combination, the app and access system will allow the bank to track who enters the building, when, and potentially how long they stay on each floor, a level of visibility that, while defensible as security modernization, unsettles those wary of the creeping normalization of biometric surveillance in the workplace.

Executives have promoted the new headquarters as the “most technologically advanced” corporate campus in New York, designed to embody efficiency and safety. Reports suggest that the decision to make biometrics mandatory followed a series of high-profile crimes in Midtown, including the December 2024 killing of UnitedHealthcare CEO Brian Thompson. Within the bank, the justification has been framed as protecting employees in a volatile urban environment.

Yet, the decision thrusts JPMorgan into largely uncharted territory. No other major U.S. bank has been publicly documented as requiring its employees to submit biometric data merely to enter a headquarters building.

Keep reading

UK Expands Live Facial Recognition as First Legal Challenge Targets Met Police Misidentification

Police forces across England are preparing to expand their use of live facial recognition (LFR) surveillance as the government moves forward with a national policy to guide deployments.

Policing minister Sarah Jones confirmed during the Labour Party conference that formal guidance is in development to instruct officers on when and where the technology should be used.

Funding from the Home Office has already been allocated to support LFR operations in seven additional regions: Greater Manchester, West Yorkshire, Bedfordshire, Surrey, Sussex, Thames Valley, and Hampshire.

Government officials have pointed to early deployments in London and Cardiff as successful, citing arrests.

Reflecting on those results, Jones stated:

“What we’ve seen in Croydon is that it has worked. We just need to make sure it’s clear what the technology is going to be useful for going forward. If we are going to use it more, if we do want to roll it out across the country, what are the parameters? Live facial recognition is a really good tool that has led to arrests that wouldn’t have come otherwise, and it’s very, very valuable.”

The software links live camera feeds to a watchlist of people wanted by police. When someone passes a camera, facial measurements are analyzed and compared against the database. If a match is found, officers are alerted to intervene.
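That match-and-alert loop can be sketched in a few lines of code. This is purely an illustrative sketch – police forces do not publish their matching algorithms – and the embeddings, identifiers, and similarity threshold below are all invented for the example:

```python
import math

# Hypothetical sketch of the LFR matching step described above: each face is
# reduced to a numeric embedding, compared against a watchlist, and an alert
# is raised only when similarity clears a threshold. All values are made up.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_against_watchlist(probe, watchlist, threshold=0.9):
    """Return the best watchlist entry above the threshold, else None."""
    best_id, best_score = None, threshold
    for person_id, template in watchlist.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

watchlist = {
    "wanted_001": [0.9, 0.1, 0.2],
    "wanted_002": [0.1, 0.8, 0.5],
}

# A camera frame whose embedding closely resembles wanted_001 triggers an alert;
# a passer-by matching nobody returns None, and their data would be discarded.
alert = match_against_watchlist([0.88, 0.12, 0.21], watchlist)
no_alert = match_against_watchlist([0.0, 0.0, 1.0], watchlist)
```

The key design point is the threshold: set it too low and passers-by are misidentified (the subject of the Met Police legal challenge above), too high and wanted individuals walk past unflagged.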

However, the use of LFR has expanded sharply. In London, the number of people included on watchlists has more than doubled between 2020 and 2025.

The volume of facial scans during deployments has also grown, with single-day scans now reaching into the tens of thousands.

The Metropolitan Police insists it has safeguards in place and maintains that data from individuals not on a watchlist is deleted immediately.

Keep reading

Britain’s policing minister punts facial recog nationwide

The government is to encourage police forces across England and Wales to adopt live facial recognition (LFR) technology, with a minister praising its use by London’s Metropolitan Police in a suburb in the south of the city.

Policing minister Sarah Jones confirmed the UK government is consulting on guidance on where, when, and how police forces can use LFR with publication due later this year. “What we’ve seen in Croydon is that it has worked,” she told a fringe event at the Labour party conference on September 29, referring to the Met’s installation of permanent LFR cameras in the town.

“We just need to make sure it’s clear what the technology is going to be useful for going forward. If we are going to use it more, if we do want to roll it out across the country, what are the parameters?” she added. “Live facial recognition is a really good tool that has led to arrests that wouldn’t have come otherwise and it’s very, very valuable.”

In August, the Home Office said that seven more police forces will start using ten new vans kitted out with LFR technology, in addition to existing use by the Metropolitan Police in London and South Wales Police. At the time it said the two forces have used LFR to make 580 arrests over the previous 12 months.

Keep reading

German Federal Police Sued Over Facial Recognition Database Use

Germany’s top criminal police authority is facing legal action over the handling of biometric data.

On September 19, 2025, IT-security expert Janik Besendorf, with the support of the Chaos Computer Club (CCC), brought a case before the Administrative Court in Wiesbaden.

He argues that photographs taken of him during a 2018 police matter, which was later dismissed, were unlawfully fed into facial recognition testing programs instead of being deleted.

According to Netzpolitik, images from Besendorf and millions of others stored in the police database INPOL-Z were repurposed without consent or statutory approval.

The legal complaint points in particular to the BKA’s “EGES” initiative, a 2019 project aimed at improving its facial recognition capabilities.

In this program, Fraunhofer IGD ran trials of four commercial recognition systems using roughly five million frontal photos of around three million individuals, in addition to volunteer contributions.

Freedom of information records show that officials had already raised doubts about whether the project had any legal footing.

The BKA maintains that the testing was conducted safely, emphasizing that all work was carried out on isolated internal systems, with no direct handover of personal data to outside partners.

It also insists that the program qualifies as research under the BKA Act. Regulators have taken a different view, arguing that benchmarking market-ready tools cannot be described as scientific research and pointing out the absence of a clear legal mandate.

A judgment against the BKA would set limits on how police photo archives can be repurposed for experimentation and product vetting.

Keep reading

YouTube Expands AI Age Checks, Users Face ID Verification

Google’s video platform is tightening its controls, and a growing number of YouTube users are suddenly finding themselves locked out of content unless they hand over proof of age through ID verification.

The new direction stems from YouTube’s decision to push its AI-driven age estimation system much more broadly than before.

The technology, which YouTube began experimenting with in Europe before confirming its US debut in July, surveils accounts, examining everything from account details and viewing habits to search activity.

Those surveillance signals are then used to guess whether someone is under 18.
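YouTube has not disclosed how these signals are combined, but signal-based age estimation generally takes the shape of the hypothetical sketch below: weighted behavioural signals are squashed into a probability, and accounts above a cutoff are flagged. Every signal name, weight, and threshold here is an assumption invented for illustration:

```python
import math

# Illustrative only: YouTube's actual model is not public. These signals,
# weights, and the threshold are made up to show the general shape of
# signal-based age estimation described in the text.

def estimate_under_18(signals, threshold=0.5):
    """Combine weighted behavioural signals into an under-18 probability."""
    weights = {
        "account_age_years": -0.1,    # long-lived accounts look less like minors
        "teen_content_ratio": 0.6,    # share of watch time on youth-oriented content
        "school_hours_activity": 0.3, # viewing concentrated on weekday daytimes
    }
    score = sum(weights[k] * signals.get(k, 0.0) for k in weights)
    prob = 1 / (1 + math.exp(-4 * score))  # logistic squash into (0, 1)
    return prob, prob >= threshold

# An account with heavy teen-content viewing gets flagged for verification;
# an established account with adult-typical habits does not.
_, flagged_teen = estimate_under_18(
    {"account_age_years": 0.5, "teen_content_ratio": 0.9, "school_hours_activity": 0.8}
)
_, flagged_adult = estimate_under_18(
    {"account_age_years": 8.0, "teen_content_ratio": 0.05, "school_hours_activity": 0.1}
)
```

The weakness such systems share is visible even in this toy version: the model never sees a birthdate, only behaviour, so adults with "young-looking" viewing habits get swept into the ID-verification flow.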

The pace of enforcement has accelerated sharply. Within the 24 hours leading up to September 24, Reddit’s r/youtube subreddit saw a flood of posts from users saying their accounts were suddenly flagged for verification, suggesting a big uptick in restrictions.

Accounts flagged as too young are immediately restricted: targeted advertising is reduced, recommendations deemed unsafe are cut back, and access to adult material is blocked outright.

When YouTube’s system determines that an account likely belongs to someone under 18, the platform automatically enforces a set of limits designed to control how the service is used.

Keep reading

American travelers to Europe will be forced to hand over biodata before flights starting next month

Americans flying to Europe will need to be fingerprinted under new EU regulations being brought in next month. 

From October 12, US citizens will have to go through the EU’s Entry/Exit System (EES) to enter 29 countries, including France, Germany, Italy and Spain.

Under the new system, passport control agents will take fingerprints, a facial image and passport details. 

It will be introduced gradually over six months, according to advice from the US Department of State website, which also includes the full list of countries impacted.  

The new digital border program is likely to prompt longer wait times at security on entry to the EU countries as travelers have to register upon their first entry to the impacted zone, known as the Schengen Area. 

American passengers will pass through e-gates and a computerized system which will automatically check passports on entry to the 29 countries within this zone. 

However, once a traveler is within the borders of the Schengen Area, they are free to travel between the 29 countries with minimal security checks. 

The zone includes 25 EU member states, and four non-EU member states – Switzerland, Iceland, Liechtenstein and Norway. 

Keep reading