British Transport Police Launch Facial Recognition Trials in London Stations

Some people, when they want to improve public transport safety, hire more staff, fix the lighting, or maybe even try being on time.

The British Transport Police, however, have gone full Black Mirror, deciding the best way to protect you from crime on your morning commute is by pointing cameras at your face and feeding your biometric soul into a machine.

Yes, for many Britons, facial recognition is coming to a railway station near them. Smile. Or don’t. It makes no difference. The algorithm will be watching anyway.

In the coming weeks, British Transport Police (BTP) will be trialling Live Facial Recognition (LFR) tech in London stations. It’s being sold as a six-month pilot program, which in government-speak usually means it will last somewhere between forever and the heat death of the universe.

The idea is to deploy these cameras in “key transport hubs,” which is bureaucratic code for: “places you’re likely to be standing around long enough for a camera to decide whether or not you look criminal.”

BTP assures us that the system is “intelligence-led,” which doesn’t mean they’ll be targeting shady characters with crowbars, but rather that the cameras will be feeding your face into a watchlist generated from police data systems.

They’re looking for criminals and missing people, they say. But here’s how it works: if your face doesn’t match anyone on the list, it gets deleted immediately. Allegedly. If it does match, an officer gets a ping, stares at a screen, and decides whether you’re a knife-wielding fugitive or just a man who looks like one.
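Stripped of the press-release gloss, that match-or-delete pipeline is simple enough to sketch. Below is a minimal illustration in Python; it is not BTP's actual system, and the embedding representation, similarity measure, and threshold are all assumptions:

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # assumed cutoff; real deployments tune this against error rates

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def process_face(embedding: np.ndarray, watchlist: dict[str, np.ndarray]) -> str | None:
    """Compare one detected face against the watchlist.

    Returns the matched watchlist ID for a human officer to review,
    or None, in which case the biometric template is discarded
    immediately (the "deleted, allegedly" step above).
    """
    best_id, best_score = None, 0.0
    for person_id, reference in watchlist.items():
        score = cosine_similarity(embedding, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score >= MATCH_THRESHOLD:
        return best_id  # ping an officer, who makes the final call
    return None  # no match: drop the template

Note that the "decision" here is nothing more than a similarity score crossing a threshold, which is exactly why the human review step carries so much weight.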

And you have to love the quaint touch: QR codes and signs stuck up around the station letting you know that, yes, your biometric identity is being scanned in real time.

Chief Superintendent Chris Casey would like you to know that “we’re absolutely committed to using LFR ethically and in line with privacy safeguards.”

The deployments, we’re told, will come with “internal governance” and even “external engagement with ethics and independent advisory groups.”

Thailand orders suspension of iris scans and deletion of data collected from 1.2 million users

Thailand’s Personal Data Protection Committee has ordered TIDC Worldverse to suspend its iris-scanning services and delete biometric data collected from 1.2 million users, citing non-compliance with Thailand’s Personal Data Protection Act.

TIDC Worldverse is part of Sam Altman’s World ID project, which has faced scrutiny over potential links to cryptocurrency scams and unauthorised data use, including cases where people were allegedly hired to scan irises for others.

Thailand’s Personal Data Protection Committee has ordered the suspension of iris biometric data collection by TIDC Worldverse and has demanded the deletion of biometric data already collected from approximately 1.2 million Thai citizens.

TIDC Worldverse is the Thai representative of Sam Altman’s Tools for Humanity, which operates the World ID project (formerly Worldcoin) in Thailand. The initiative uses iris-scanning “Orb” devices to provide a digital “proof-of-human” credential. Participants receive Worldcoin (“WLD”) tokens as an incentive for biometric verification.

Explaining in simple terms what the “Orb” is, Business Insider said, “The Orb is a polished, volleyball-sized metal sphere that scans irises to generate a ‘World ID’ – a kind of digital passport meant to distinguish humans from machines online.”
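Mechanically, the “proof-of-human” credential boils down to a uniqueness check: derive a code from the iris, then refuse to issue a second ID for a code already enrolled. A hedged sketch of that idea in Python follows; the real system matches noisy iris codes within a tolerance and layers on cryptographic protections, none of which appear here:

```python
import hashlib

enrolled: set[str] = set()  # hypothetical registry of iris-code digests

def enroll(iris_code: bytes) -> str | None:
    """Issue a World-ID-style credential only if this iris is new.

    Real iris codes are noisy, so production systems match within a
    Hamming-distance tolerance; exact hashing here is a simplification.
    """
    digest = hashlib.sha256(iris_code).hexdigest()
    if digest in enrolled:
        return None  # duplicate: one human, one credential
    enrolled.add(digest)
    return digest  # stands in for the issued World ID
```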

German States Expand Police Powers to Train AI Surveillance Systems with Personal Data

Several German states are preparing to widen police powers by allowing personal data to be used in the training of surveillance technologies.

North Rhine-Westphalia and Baden-Württemberg are introducing legislative changes that would let police feed identifiable information such as names and facial images into commercial AI systems.

Both drafts permit this even without anonymization or pseudonymization, so long as the police deem those safeguards “impossible” or achievable only with “disproportionate effort.”
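For context, pseudonymization normally means swapping direct identifiers for keyed tokens before data reaches a training pipeline. A minimal Python sketch of that safeguard (the very step the drafts would let police skip) is below; the key and record layout are invented for illustration:

```python
import hashlib
import hmac

SECRET_KEY = b"held-by-the-data-controller"  # illustrative only

def pseudonymize(record: dict) -> dict:
    """Replace the name with a keyed token so the training data carries
    no direct identifier, while the key holder can still re-link records."""
    token = hmac.new(SECRET_KEY, record["name"].encode(), hashlib.sha256).hexdigest()[:16]
    return {**record, "name": token}

print(pseudonymize({"name": "Max Mustermann", "role": "witness"}))
```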

Hamburg adopted similar rules earlier this year, and its example appears to have encouraged other regions to follow. These developments together mark a clear move toward normalizing the use of personal information as fuel for surveillance algorithms.

The chain reaction began in Bavaria, where police in early 2024 tested Palantir’s surveillance software with real personal data.

The experiment drew objections from the state’s data protection authority, but still served as a model for others.

Hamburg used the same idea in January 2025 to amend its laws, granting permission to train “learning IT systems” on data from bystanders. Now Baden-Württemberg and North Rhine-Westphalia plan to adopt nearly identical language.

In North Rhine-Westphalia, police would be allowed to upload clear identifiers such as names or faces into commercial systems like Palantir’s and to refine behavioral or facial recognition programs with real, unaltered data.

Bettina Gayk, the state’s data protection officer, warned that the proposed regulation raises “significant constitutional concerns.”

She argued that using data from people listed as victims or complainants was excessive and added that “products from commercial providers are improved with the help of state-collected and stored data,” which she found unacceptable.

The state government has embedded this expansion of surveillance powers into a broader revision of the Police Act, a change initially required by the Federal Constitutional Court.

The court had previously ruled that long-term video monitoring under the existing law violated the Basic Law.

Instead of narrowing these powers, the new draft introduces a clause allowing police to “develop, review, change or train IT products” with personal data.

This wording effectively enables continued use of Palantir’s data analysis platform while avoiding the constitutional limits the court demanded.

Across North Rhine-Westphalia, Baden-Württemberg, and Hamburg, the outcome will be similar: personal data can be used for training as soon as anonymization is judged to be disproportionately difficult, with the assessment left to police discretion.

Gayk has urged that the use of non-anonymized data be prohibited entirely, warning that the exceptions are written so broadly that “they will ultimately not lead to any restrictions in practice.”

Baden-Württemberg’s green-black coalition plans to pass its bill this week.

DHS Expands Nationwide Airport Biometric Tracking

The Department of Homeland Security has introduced a new rule that will greatly expand biometric tracking at US borders, establishing a system to photograph and identify every non-citizen who enters or leaves the country.

Although the regulation applies to non-citizens, the cameras do not distinguish citizens from non-citizens in real time.

CBP says US citizens may opt out by presenting their passports manually, and that photos of citizens are deleted within twelve hours once nationality is confirmed. However, that’s after the fact.
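As described, this is a photograph-first, sort-later design: everyone is captured, and confirmed citizens’ photos are purged on a twelve-hour clock. A minimal sketch of that retention logic, with every field name assumed:

```python
from datetime import datetime, timedelta

CITIZEN_RETENTION = timedelta(hours=12)  # CBP's stated deletion window

def purge(captures: list[dict], now: datetime) -> list[dict]:
    """Drop photos of confirmed citizens once the window has passed;
    everything else is retained. The photo is taken either way, so any
    deletion necessarily happens after the fact."""
    return [
        c for c in captures
        if not (c["confirmed_citizen"] and now - c["captured_at"] > CITIZEN_RETENTION)
    ]
```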

Starting December 26, Customs and Border Protection will have authority to take photographs of “all aliens” not only at airports and land crossings but at “any other point of departure” the agency designates.

We obtained a copy of the rule for you here.

DHS describes the change as “operational modernization.”

Are Your Identification Photos in a Face Recognition Database?

A majority of Americans are in face recognition databases in use by the U.S. government. Are you one of them? The Electronic Frontier Foundation (EFF) has launched a new quiz called “Who Has Your Face” to help you find out.

“Your driver’s license picture and other ID photos are often shared with law enforcement and other agencies like Immigration and Customs Enforcement (ICE),” said EFF Digital Strategist Jason Kelley. “Those agencies use facial recognition technology to compare your face with those in mugshots and with other photos of people suspected of committing crimes—putting you at risk of being misidentified. So we created this quiz to help show people what we know about who has their face.”

To create the Who Has Your Face quiz, EFF and the Center on Privacy & Technology at Georgetown Law reviewed thousands of pages of public records to determine as much as possible which government photos of U.S. citizens, residents, and travelers are shared with which agencies for facial recognition purposes.

We learned that government agencies—including ICE, the Department of Homeland Security, and the FBI—could all have some access to these photos. However, despite hundreds of hours of research, it’s nearly impossible to know precisely which agencies are sharing which photos, and with whom. For example, each state DMV shares access to its photos differently, depending on agreements with local police, other states, and federal agencies. Our Who Has Your Face quiz asks you questions like what kind of ID you have and which state you live in to help you narrow down which agencies might have copies of your photos.
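Under the hood, a quiz like this is essentially a lookup: answers about your ID type and state index into a table distilled from those public records. A toy version of that structure, with the table contents invented purely to show the shape:

```python
# Hypothetical sharing table; the real quiz draws on EFF's and
# Georgetown's public-records research, not this sample data.
SHARING = {
    ("drivers_license", "ExampleState"): ["state police", "FBI (via agreement)"],
    ("passport", "any"): ["Department of State", "DHS"],
}

def who_might_have_your_face(id_type: str, state: str) -> list[str]:
    agencies = SHARING.get((id_type, state), []) + SHARING.get((id_type, "any"), [])
    return sorted(set(agencies))

print(who_might_have_your_face("passport", "ExampleState"))
```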

“These public records have shown us that biometric database sharing is widespread and completely unregulated—and this is still just a partial picture,” said Clare Garvie, senior associate with the Center on Privacy & Technology. “Americans deserve to know how their biometric information is being used, especially when it may put them at risk of being misidentified as a criminal suspect.”

“Here’s the truth: it should be easy to learn the full list of which entities have personal data that you’ve been required to hand over in exchange for a driver’s license or for re-entry into the country after visiting family abroad—especially when that’s a photo of your face,” said EFF Surveillance Litigation Director Jennifer Lynch. “Most people realize that their photos are scanned into a database, but they don’t realize this effectively makes them part of a perpetual police line-up. That’s what’s happening to millions of people, without their knowledge, and it’s practically impossible to opt out.”

Despite the proliferation of federal, state, and local face databases, we can fight back. Laws banning government use of face recognition are increasingly passing around the country. Several states already prohibit or simply don’t use face recognition at DMVs. Cities like San Francisco, Berkeley, and Oakland, California, as well as Somerville, Massachusetts, have also passed bans on its use. To help ban government use of face recognition in your city, visit our About Face campaign.

Portugal Bans Burqa: Is It Really About Women’s Rights?

Portugal has just approved a nationwide ban on full face coverings in public, joining the long list of European nations outlawing burqas and niqabs. Does this protect rights, or restrict them? Is it even about rights at all?

Portugal’s Vote: What Passed

The country’s parliament approved a bill banning face coverings worn for religious or gender-related reasons in most public spaces. The measure targets burqas and niqabs with fines of €200-€4,000 and penalises anyone forcing somebody else to veil with up to three years in prison. Introduced by Chega and backed by centre-right parties, the bill is opposed by the left-wing parties, which call it discriminatory and unnecessary in a country where very few women wear full-face coverings.

What started 15 years ago in France as a way to tackle specific concerns about identification, social cohesion and security continues to spread further and wider than ever. It currently looks like a victory for those seeking improved cultural integration, but is there a bigger picture to consider?

The List Gets Longer

Here’s a recap of other European countries imposing similar bans in recent years: 

  • France was the first in Europe to enact a nationwide ban on full-face coverings, with the law passed in 2010 and effective from 2011 – it was later upheld by the European Court of Human Rights in 2014 
  • Belgium brought in a national ban in July 2011, with violators facing fines 
  • Bulgaria’s national ban was adopted in 2016 
  • Germany introduced partial bans focused on public servants and official duties in 2017 
  • Austria’s Anti-Face-Veiling Act came into force in October 2017 
  • Denmark passed a national ban in May 2018, effective from August that year 
  • Norway introduced a sectoral ban in schools and universities in 2018 
  • Netherlands brought in a partial national ban in public buildings and transport in August 2019 
  • Switzerland’s nationwide ban was approved by referendum in March 2021, with federal law taking effect in January 2025 

Other countries like Italy, Spain and Luxembourg have local or limited measures rather than blanket national bans. 

What They Say the Ban Does

Supporters of Portugal’s new legislation argue that the measure aims to strengthen public safety, facilitate identification, and promote women’s rights and social integration. Chega’s leadership framed the proposal as a means of protecting women from coercion, maintaining that a woman forced to wear a burqa loses autonomy and becomes objectified. According to the party’s leader, immigrants and others arriving in Portugal must adhere to Portuguese social norms, including the expectation that faces be visible in public. Members of supporting parties such as the Social Democrats, Liberal Initiative, and CDS-PP cited concerns about identification, public order, and the belief that no tradition or imposition should erase an individual’s presence in society.

Breaking the law in Portugal carries fines of up to €4,000 – the highest of any European country. By comparison, fines are around €150 in France and Austria, and up to 1,000 CHF in Switzerland.

Is It Really About Security or Women’s Rights?

Supporters brand these bans as pro-women, claiming they protect girls from coercion and affirm equality in public life. Others argue that if the goal were women’s freedom, the policy would centre around choice and support rather than fines and police checks. In practice – especially in Portugal – the ban polices what a tiny minority of women wear, while doing little for victims of abuse or forced marriage who need legal aid, shelters, and community support – not fines for what they wear. 

There’s another angle to consider here too. Keeping in mind that these rules extend beyond just religious clothing, removing face coverings makes everyone machine-readable. As cities roll out CCTV with facial recognition, is the goal to keep everyone trackable? A continent-wide expectation of uncovered faces makes it easier to identify and profile hundreds of millions of people – even though the rule initially looks like it tackles widespread cultural and security concerns.  

Consider protest anonymity, football ultras, or simply masking for privacy in tomorrow’s camera-tracked world. Broad bans today may satisfy voters by targeting religious coverings, but could be diverting attention from the real end-goal. Will it essentially become illegal to hide your face from recognition software in future? 

Microsoft OneDrive Limits How Often Users Can Restrict Facial Recognition Setting

Microsoft is quietly testing a new facial-recognition feature in OneDrive that automatically sorts photos based on who appears in them.

The experimental version is being rolled out to some early-access users, and it turns on by default while placing strict limits on how often it can be disabled.

Those with access will see a new privacy notice in the app stating: “OneDrive uses AI to recognize faces in your photos.”

Microsoft’s support page, which still labels the option as “coming soon,” explains that “Microsoft collects, uses, and stores facial scans and biometric information from your photos through the OneDrive app for facial grouping technologies.”

The company says this is intended to “help you quickly and easily organize photos of friends and family.”
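Facial grouping of this kind is typically built by embedding each detected face as a vector and clustering the vectors so that photos of the same person fall into one group. A generic sketch of the technique (not Microsoft’s implementation; the greedy clustering and threshold are placeholders):

```python
import numpy as np

def group_faces(embeddings: list[np.ndarray], threshold: float = 0.7) -> list[list[int]]:
    """Greedy clustering: each face joins the first group whose
    representative it resembles closely enough, else starts a new group."""
    groups: list[list[int]] = []
    reps: list[np.ndarray] = []
    for i, emb in enumerate(embeddings):
        emb = emb / np.linalg.norm(emb)
        for g, rep in enumerate(reps):
            if float(np.dot(emb, rep)) >= threshold:
                groups[g].append(i)
                break
        else:  # no existing group matched
            groups.append([i])
            reps.append(emb)
    return groups
```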

Microsoft insists that the face groupings remain private even when users share albums. It also claims that “Microsoft does not use any of your facial scans and biometric information to train or improve the AI model overall.”

When asked why the system is enabled by default instead of requiring consent first, a company spokesperson told Slashdot that “Microsoft OneDrive inherits privacy features and settings from Microsoft 365 and SharePoint, where applicable.”

Australia Advances National Facial Recognition Network Despite Privacy Concerns

Australia is moving forward with a national facial recognition network that will link millions of citizens’ identity documents, despite ongoing uncertainty about privacy safeguards.

The National Driver Licence Facial Recognition Solution (NDLFRS) will merge biometric data from driver’s licences and passports so that both government departments and private businesses can verify individuals’ identities.

The proposal dates back eight years but has recently accelerated. The Digital Transformation Agency confirmed that the Department of Home Affairs will host the system, while each state and territory will continue to manage its own data.

The agency stated that the project aims “to protect Australian people from identity theft, to manage and prevent crime, to increase road safety and to improve identity verification.”

It also noted that “Tasmania, Victoria and South Australia have provided data to the NDLFRS, with other states and territories to follow.”

Although the initiative remains marked as “unendorsed,” the government is preparing to activate key components.

The Attorney-General’s Department has announced that the accompanying Face Verification Service (FVS), which checks whether a person’s facial image matches the photo held in official records, is expected to begin operation in 2025.
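Unlike the one-to-many watchlist searches elsewhere in this roundup, a face verification service is a one-to-one check: the live image is scored against the single photo on file. A hedged Python sketch of that distinction, with the threshold purely illustrative:

```python
import numpy as np

def verify(live: np.ndarray, on_file: np.ndarray, threshold: float = 0.8) -> bool:
    """1:1 verification: does the presented face match the enrolled photo?
    Operational systems set the threshold from measured false-accept and
    false-reject trade-offs rather than a fixed constant."""
    a = live / np.linalg.norm(live)
    b = on_file / np.linalg.norm(on_file)
    return float(np.dot(a, b)) >= threshold
```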

Dover, NJ Implements AI Surveillance, Expanding Facial Recognition and Public Monitoring Systems

Dover, New Jersey, has joined a growing wave of municipalities embedding artificial intelligence into public spaces, advancing a surveillance system that includes facial recognition and automated video analysis across its government buildings.

The town partnered with technology firm Claro to retrofit its existing camera infrastructure with AI tools, avoiding the need for costly new hardware while expanding its monitoring capabilities.

The system brings a range of features into play, including facial recognition, visible weapons detection, and real-time behavioral analytics.

These tools are now active in locations such as the town hall, police department, fire station, and public library.

Town officials say the technology is being used for incident detection, crime prevention, crowd control, traffic monitoring, and illegal dumping enforcement.

“As a small municipality, we don’t have the budget for constant law enforcement presence,” said Mayor James Dodd. “Claro gave us the ability to enhance safety with cutting-edge technology that works with what we already have.”

The rollout reflects a broader trend where small towns turn to algorithmic systems to fill gaps traditionally addressed by human staff.

AI tools, particularly facial recognition, are increasingly being deployed in public settings, sparking ongoing concern about surveillance practices and the erosion of privacy rights.

Councilman Sergio Rodriguez, who helped lead the initiative, emphasized that the project came together through collaboration rather than off-the-shelf sales.

“Claro wasn’t just selling a product,” he said. “They listened to our needs and delivered solutions that worked for the Town of Dover.” He pointed to the technology’s role in optimizing public safety while helping stretch municipal budgets.

“With AI supporting day-to-day operations,” he said, “we can better protect residents and allocate our budget more effectively.”

Claro markets its AI platform as adaptable to existing surveillance systems and suitable for both real-time alerts and forensic investigations.

JPMorgan’s Biometric Mandate Signals New Era Of Workplace Surveillance In Finance

When employees begin reporting to JPMorgan Chase’s new Manhattan headquarters later this year, they will be required to submit their biometric data to enter the building.

The policy, a first among major U.S. banks, makes biometric enrollment mandatory for staff assigned to the $3 billion, 60-story tower at 270 Park Avenue.

JPMorgan says the system is part of a modern security program designed to protect workers and streamline access, but it has sparked growing concern over privacy, consent, and the expanding use of surveillance technology in the workplace.

Internal communications reviewed by the Financial Times and The Guardian confirm that JPMorgan employees assigned to the new building have been told they must enroll their fingerprints or undergo an eye scan to access the premises.

Earlier drafts of the plan described the system as voluntary, but reports say that language has quietly disappeared. A company spokesperson declined to clarify how data will be stored or how long it will be retained, citing security concerns. Some staff reportedly may retain the option of using a badge instead, though the criteria for exemption remain undisclosed.

The biometric access requirement is being rolled out alongside a Work at JPMC smartphone app that doubles as a digital ID badge and internal service platform, allowing staff to order meals, navigate the building, or register visitors.

According to its listing in the Google Play Store, the app currently claims “no data collected,” though that self-reported disclosure does not replace a formal employee privacy notice.

In combination, the app and access system will allow the bank to track who enters the building, when, and potentially how long they stay on each floor, a level of visibility that, while defensible as security modernization, unsettles those wary of the creeping normalization of biometric surveillance in the workplace.

Executives have promoted the new headquarters as the “most technologically advanced” corporate campus in New York, saying it is designed to embody efficiency and safety. Reports suggest that the decision to make biometrics mandatory followed a series of high-profile crimes in Midtown, including the December 2024 killing of UnitedHealthcare CEO Brian Thompson. Within the bank, the justification has been framed as protecting employees in a volatile urban environment.

Yet, the decision thrusts JPMorgan into largely uncharted territory. No other major U.S. bank has been publicly documented as requiring its employees to submit biometric data merely to enter a headquarters building.
