Mexico Mandates Biometric SIM Registration for All Phone Numbers

Anonymous prepaid SIM cards are dying in Mexico. By July 1, 2026, every active cell phone number in the country must be biometrically linked to a named, government-credentialed individual or face suspension. That’s around 127 million numbers, each one tethered to an identity the Mexican government can look up by name.

The mobile registration law took effect January 9, 2026, covering prepaid and postpaid plans, physical SIMs, and eSIMs alike. Existing subscribers have until June 30 to complete registration. New lines activated after January 9 get 30 days. Miss the window, and the line goes dark.

The enforcement mechanism runs through the CURP Biométrica, Mexico’s biometric upgrade to its existing population registry code. The new credential embeds a photograph, electronic signature, and QR code that ties directly to biometrically verified records held in the national registry.

Residents registering a mobile line must provide their CURP number alongside a valid government ID, which makes biometric enrollment not optional but structurally required. You cannot register a phone number without first handing your biometric data to the state.

What Mexico is building here is a national phone network where every number has a face attached to it.

Keep reading

Meta Considers Timed Face Recognition Launch to Exploit Distracted Society

Meta is weighing whether to add face recognition to its camera-equipped smart glasses, and The New York Times obtained an internal company document that reveals more than just the plan itself.

It reveals how Meta thinks about when to launch it: “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

Read that plainly: Meta wants to release a mass biometric surveillance product while the people most likely to fight it are too distracted to respond.

The technology would scan the face of every person who enters the glasses’ field of view, building a faceprint to match against a database. Every passerby. Every stranger on the subway. Every person who happens to walk through the frame of someone else’s device. None of them consented. Most of them won’t even know they were captured.

Faceprints are among the most sensitive data a company can collect. Unlike a password, a face cannot be changed after a breach. Once collected, this data enables mass surveillance, fuels discrimination, and creates a permanent identification trail attached to a person’s physical movement through the world.

Putting that capability into wearable glasses carried by ordinary people in ordinary places moves it off servers and into every room, street, and gathering that people enter.

Meta ran this experiment before and lost.

The company shut down (only kind of) its photo face-scanning tool in November 2021, simultaneously announcing it would delete (if you believe them) over a billion stored face templates. That retreat came after years of mounting legal exposure that produced a very expensive record.

In July 2019, Facebook settled a Federal Trade Commission investigation for $5 billion. The allegations included that the company’s face recognition settings were confusing and deceptive, and the settlement required the company to obtain consent before running face recognition on users going forward.

Less than two years later, Meta agreed to pay $650 million to settle a class action brought by Illinois residents under that state’s biometric privacy law. Then, in July 2024, it settled with Texas for $1.4 billion over the same defunct system. More than $7 billion across three settlements, all tied to face recognition practices the company ultimately abandoned.

Keep reading

Discord to Demand Face Scan or ID to Access All Features

Discord is preparing to make age classification a constant background process across its platform. Beginning next month, every account will default to a teen-appropriate experience unless the user takes steps to prove adulthood.

Age determination will sit underneath routine activity, shaping what people can see, say, and join.

For accounts that are not verified as adult, access will narrow immediately. Age-restricted servers and channels will be blocked, voice participation in live “stage” channels will be disabled, and automated filters will apply to content Discord identifies as graphic or sensitive.

Friend requests from unfamiliar users will trigger warning prompts, and direct messages from unknown accounts will be routed into a separate inbox.

Core features such as direct messages with known contacts and servers without age restrictions will continue to function. Age-restricted servers will effectively disappear until verification is completed, including servers that a user joined years earlier.

The global rollout reflects a broader regulatory environment that is pushing platforms toward more aggressive age controls. Discord has already tested similar systems.

Last year, age checks were introduced in the UK and Australia.

For many adult users, the concern is less about access to content and more about surveillance and the ability to communicate anonymously. Verification systems introduce new forms of monitoring, whether through documents, facial analysis, or ongoing behavioral assessment.

Keep reading

ICE observer says her Global Entry was revoked after agent scanned her face

Minnesota resident Nicole Cleland had her Global Entry and TSA PreCheck privileges revoked three days after she observed immigration agents at work, according to a declaration she filed in US District Court for the District of Minnesota. An agent told Cleland that he had used facial recognition technology to identify her, she wrote.

Cleland, a 56-year-old resident of Richfield and a director at Target Corporation, volunteers with a group that tracks potential Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) vehicles in her neighborhood, according to her declaration. On the morning of January 10, she “observed a white Dodge Ram being driven by what I believed to be federal enforcement agents” and “maneuvered behind the vehicle with the intent of observing the agents’ actions.”

Cleland said that she and another observer in a different car followed the Dodge Ram because of “concern about a local apartment building being raided.” She followed the car for a short time and from a safe distance until “the Dodge Ram stopped in front of the other commuter’s vehicle,” she wrote. Cleland said two other vehicles apparently driven by federal agents stopped in front of the Dodge Ram, and her path forward was blocked.

“An agent exited the vehicle and approached my vehicle,” Cleland wrote. “I remained in my vehicle. The agent addressed me by my name and informed me that they had ‘facial recognition’ and that his body cam was recording. The agent stated that he worked for border patrol. He wore full camouflage fatigues. The agent stated that I was impeding their work. He indicated he was giving me a verbal warning and if I was found to be impeding again, I would be arrested.”

Cleland acknowledged that she heard what the agent said, and they drove off in opposite directions, according to her declaration. Cleland submitted the declaration on January 21 in a lawsuit filed by Minnesota residents against US government officials with the Department of Homeland Security and ICE. Cleland’s court filing was mentioned yesterday in a Boston Globe column about tactics used by ICE agents to intimidate protesters.

Keep reading

Britain To Roll Out Facial Recognition in Police Overhaul

Britain’s policing system, we are told, is broken. And on Monday, the home secretary, Shabana Mahmood, announced that the fix would arrive in the form of algorithms, facial recognition vans, and a large check made out to the future.

The government plans to spend £140 million ($191 million) on artificial intelligence and related technology, with the promise that it will free up six million police hours a year, the equivalent of 3,000 officers.

It is being billed as the biggest overhaul of policing in England and Wales in 200 years, aimed at dragging a creaking system into the modern world.

The ambition is serious. The implications are too.

The plan is for AI software that will analyze CCTV, doorbell, and mobile phone footage, detect deepfakes, carry out digital forensics, and handle administrative tasks such as form filling, redaction, and transcription. Mahmood’s argument is that criminals are getting smarter, while parts of the police service are stuck with tools that belong to another era.

She put it plainly: “Criminals are operating in increasingly sophisticated ways. However, some police forces are still fighting crime with analogue methods.”

And she promised results: “We will roll out state-of-the-art tech to get more officers on the streets and put rapists and murderers behind bars.”

There is logic here. Few people would argue that trained officers should be buried in paperwork. Technology can help with that. The concern is what else comes with it.

Live facial recognition is being expanded aggressively. The number of police vans equipped with the technology will increase fivefold, from ten to fifty, operating across the country. These systems scan faces in public spaces and compare them to watch lists of wanted individuals.

This is a form of mass surveillance, and when automated systems get things wrong, the consequences fall on real people.

Keep reading

British Transport Police Launch Facial Recognition Trials in London Stations

Some people, when they want to improve public transport safety, hire more staff, fix the lighting, or maybe even try being on time.

The British Transport Police, however, have gone full Black Mirror, deciding the best way to protect you from crime on your morning commute is by pointing cameras at your face and feeding your biometric soul into a machine.

Yes, for many Britons, facial recognition is coming to a railway station near them. Smile. Or don’t. It makes no difference. The algorithm will be watching anyway.

In the coming weeks, British Transport Police (BTP) will be trialling Live Facial Recognition (LFR) tech in London stations. It’s being sold as a six-month pilot program, which in government-speak usually means it will last somewhere between forever and the heat death of the universe.

The idea is to deploy these cameras in “key transport hubs,” which is bureaucratic code for: “places you’re likely to be standing around long enough for a camera to decide whether or not you look criminal.”

BTP assures us that the system is “intelligence-led,” which doesn’t mean they’ll be targeting shady characters with crowbars, but rather that the cameras will be feeding your face into a watchlist generated from police data systems.

They’re looking for criminals and missing people, they say. But here’s how it works: if your face doesn’t match anyone on the list, it gets deleted immediately. Allegedly. If it does match, an officer gets a ping, stares at a screen, and decides whether you’re a knife-wielding fugitive or just a man who looks like one.

And you have to love the quaint touch of QR codes and signs stuck up around the station letting you know that, yes, your biometric identity is being scanned in real time.

Chief Superintendent Chris Casey would like you to know that “we’re absolutely committed to using LFR ethically and in line with privacy safeguards.”

The deployments, we’re told, will come with “internal governance” and even “external engagement with ethics and independent advisory groups.”

Keep reading

Thailand orders suspension of iris scans and deletion of data collected from 1.2 million users

Thailand’s Personal Data Protection Committee has ordered TIDC Worldverse to suspend its iris-scanning services and delete biometric data collected from 1.2 million users, citing non-compliance with Thailand’s Personal Data Protection Act.

TIDC Worldverse is part of Sam Altman’s World ID project, which has faced scrutiny over potential links to cryptocurrency scams and unauthorised data use, including cases where people were allegedly hired to scan irises for others.

The regulator’s order requires TIDC Worldverse to halt iris biometric data collection and to delete the biometric data already collected from approximately 1.2 million Thai citizens.

TIDC Worldverse is the Thai representative of Sam Altman’s Tools for Humanity, which operates the World ID project (formerly Worldcoin) in Thailand. The initiative uses iris-scanning “Orb” devices to provide a digital “proof-of-human” credential. Participants receive Worldcoin (“WLD”) tokens as an incentive for biometric verification.

Explaining in simple terms what the “Orb” is, Business Insider said, “The Orb is a polished, volleyball-sized metal sphere that scans irises to generate a ‘World ID’ – a kind of digital passport meant to distinguish humans from machines online.”

Keep reading

German States Expand Police Powers to Train AI Surveillance Systems with Personal Data

Several German states are preparing to widen police powers by allowing personal data to be used in the training of surveillance technologies.

North Rhine-Westphalia and Baden-Württemberg are introducing legislative changes that would let police feed identifiable information such as names and facial images into commercial AI systems.

Both drafts permit this even when anonymization or pseudonymization is bypassed because the police consider it “impossible” or achievable only with “disproportionate effort.”

Hamburg adopted similar rules earlier this year, and its example appears to have encouraged other regions to follow. These developments together mark a clear move toward normalizing the use of personal information as fuel for surveillance algorithms.

The chain reaction began in Bavaria, where police in early 2024 tested Palantir’s surveillance software with real personal data.

The experiment drew objections from the state’s data protection authority, but still served as a model for others.

Hamburg used the same idea in January 2025 to amend its laws, granting permission to train “learning IT systems” on data from bystanders. Now Baden-Württemberg and North Rhine-Westphalia plan to adopt nearly identical language.

In North Rhine-Westphalia, police would be allowed to upload clear identifiers such as names or faces into commercial systems like Palantir’s and to refine behavioral or facial recognition programs with real, unaltered data.

Bettina Gayk, the state’s data protection officer, warned that the proposed regulation “raises significant constitutional concerns.”

She argued that using data from people listed as victims or complainants was excessive and added that “products from commercial providers are improved with the help of state-collected and stored data,” which she found unacceptable.

The state government has embedded this expansion of surveillance powers into a broader revision of the Police Act, a change initially required by the Federal Constitutional Court.

The court had previously ruled that long-term video monitoring under the existing law violated the Basic Law.

Instead of narrowing these powers, the new draft introduces a clause allowing police to “develop, review, change or train IT products” with personal data.

This wording effectively enables continued use of Palantir’s data analysis platform while avoiding the constitutional limits the court demanded.

Across North Rhine-Westphalia, Baden-Württemberg, and Hamburg, the outcome will be similar: personal data can be used for training as soon as anonymization is judged to be disproportionately difficult, with the assessment left to police discretion.

Gayk has urged that the use of non-anonymized data be prohibited entirely, warning that the exceptions are written so broadly that “they will ultimately not lead to any restrictions in practice.”

Baden-Württemberg’s green-black coalition plans to pass its bill this week.

Keep reading

DHS Expands Nationwide Airport Biometric Tracking

The Department of Homeland Security has introduced a new rule that will greatly expand biometric tracking at US borders, establishing a system to photograph and identify every non-citizen who enters or leaves the country.

Although the regulation applies to non-citizens, the cameras do not distinguish citizens from non-citizens in real time.

CBP says US citizens may opt out by presenting their passports manually, and that photos of citizens are deleted within twelve hours once nationality is confirmed. However, that’s after the fact.

Starting December 26, Customs and Border Protection will have authority to take photographs of “all aliens” not only at airports and land crossings but at “any other point of departure” the agency designates.

We obtained a copy of the rule for you here.

DHS describes the change as “operational modernization.”

Keep reading

Are Your Identification Photos in a Face Recognition Database?

A majority of Americans are in face recognition databases in use by the U.S. government. Are you one of them? The Electronic Frontier Foundation (EFF) has launched a new quiz called “Who Has Your Face” to help you find out.

“Your driver’s license picture and other ID photos are often shared with law enforcement and other agencies like Immigration and Customs Enforcement (ICE),” said EFF Digital Strategist Jason Kelley. “Those agencies use facial recognition technology to compare your face with those in mugshots and with other photos of people suspected of committing crimes—putting you at risk of being misidentified. So we created this quiz to help show people what we know about who has their face.”

To create the Who Has Your Face quiz, EFF and the Center on Privacy & Technology at Georgetown Law reviewed thousands of pages of public records to determine as much as possible which government photos of U.S. citizens, residents, and travelers are shared with which agencies for facial recognition purposes.

We learned that government agencies—including ICE, the Department of Homeland Security, and the FBI—could all have some access to these photos. However, despite hundreds of hours of research it’s nearly impossible to know precisely which agencies are sharing which photos, and with whom. For example, each state DMV shares access to their photos differently, depending on agreements with local police, other states, and federal agencies.  Our Who Has Your Face quiz asks you questions like what kind of ID you have and which state you live in to help you narrow down which agencies might have copies of your photos.

“These public records have shown us that biometric database sharing is widespread and completely unregulated—and this is still just a partial picture,” said Clare Garvie, senior associate with the Center on Privacy & Technology. “Americans deserve to know how their biometric information is being used, especially when it may put them at risk of being misidentified as a criminal suspect.”

“Here’s the truth: it should be easy to learn the full list of which entities have personal data that you’ve been required to hand over in exchange for a driver’s license or for re-entry into the country after visiting family abroad—especially when that’s a photo of your face,” said EFF Surveillance Litigation Director Jennifer Lynch. “Most people realize that their photos are scanned into a database, but they don’t realize this effectively makes them part of a perpetual police line-up. That’s what’s happening to millions of people, without their knowledge, and it’s practically impossible to opt out.”

Despite the proliferation of federal, state, and local face databases, we can fight back. Laws that ban government use of face recognition are increasingly passing around the country. Several states already don’t allow or don’t have face recognition at DMVs. Cities like San Francisco, Berkeley, and Oakland, California, as well as Somerville, Massachusetts have also passed bans on its use.  To help ban government use of face recognition in your city, visit our About Face campaign.

Keep reading