Another Wrongful Arrest Based on Faulty Facial Recognition Raises Growing Concerns

Jason Killinger walked into Reno’s Peppermill Casino in September 2023; later that evening, as he was exiting the building, he was arrested. The casino’s facial recognition system had flagged him as a “100 percent match” for an individual who had previously been banned from the property. The only problem? It was completely wrong.

After the system flagged Killinger, casino security approached him, addressing him as “Mike,” the name of the man who had previously been removed from the property. Despite Killinger’s insistence that he was not Mike, and his ability to prove it, security surrounded and handcuffed him before calling the Reno Police Department. Shortly thereafter, rookie Officer Richard Jager arrived on the scene.

Killinger quickly proved he was not the man identified in the system: he was carrying three valid forms of identification, including a Nevada Real ID-compliant driver’s license, his Peppermill player’s card, and a debit card, all bearing his name. When this wasn’t enough, he offered to retrieve more documents from his vehicle, including a pay stub, vehicle registration, and a medical card. Despite this copious documentation, Officer Jager declined to investigate further, not bothering to look at any of the additional documents. Killinger was arrested and charged with criminal trespass.

He then spent 11 hours in police custody; only after a fingerprint check at the Washoe County jail confirmed his identity was he finally released.

Keep reading

Mexico Speeds Up Biometric ID Rollout

Mexico’s government wants you to believe that handing over your fingerprints, iris scans, and facial data is voluntary. President Claudia Sheinbaum has said so publicly.

But by July 2026, every one of the country’s roughly 130 million mobile phone lines must be linked to a biometric national ID, and unregistered numbers get suspended on July 1.

Refuse the biometric credential and lose your phone.

The CURP Biométrica upgrades Mexico’s existing population registry code, the Clave Única de Registro de Población, from an 18-character alphanumeric string into something far more personal. The updated system captures face, fingerprint, and iris biometrics, packages them with a QR code and digital signature, and produces what amounts to a mobile-readable identity document tied to your body.

Registration happens at RENAPO and Civil Registry offices, where staff scan all ten fingerprints, both irises, take a facial photograph, and record a digital signature. You’ll need a valid photo ID, a certified CURP, and an original or certified birth certificate just to walk in.

The government has framed this primarily as a tool for addressing Mexico’s crisis of forced disappearances. The biometric data feeds into a Unified Identity Platform connecting the National Population Registry with the National Forensic Data Bank and records held by prosecutors and intelligence agencies, enabling real-time identity searches. That’s the stated purpose.

The actual system being built does considerably more than locate missing people. The legislation gives broad access to biometric and personal information to law enforcement, intelligence agencies, and the National Guard, and the law doesn’t require authorities to notify citizens when their data gets accessed. You won’t know who’s looking at your biometrics, or why, or how often.

Keep reading

FC Barcelona Fined for Privacy Violations Over Biometric Data Collection

FC Barcelona got fined €500,000 ($579,219) for scanning the faces and recording the voices of over 100,000 members without doing the legal homework first.

Spain’s data protection authority, the AEPD, found the club had deployed biometric identity verification during a membership census update and processed all of it without a valid Data Protection Impact Assessment.

Members renewing their details remotely were required to either submit a facial scan through their device camera or record their voice. Both systems were live, both were processing biometric data at scale, and the documentation Barcelona produced to justify any of it didn’t meet the bar GDPR sets for high-risk processing.

Article 35 of the GDPR requires organizations to conduct a DPIA before deploying any system likely to create a high risk for individuals. Biometric data used for identification qualifies automatically. Processing that touches more than 100,000 people, including minors, qualifies. Using new technologies qualifies. Barcelona’s system hit all three. The AEPD concluded the club’s documentation was missing the essential components of a genuine assessment: no real necessity and proportionality analysis, no adequate evaluation of what the processing actually risks for the people whose faces and voices it captured.

The AEPD’s decision in case PS-00450-2024 makes one point with particular clarity: consent doesn’t substitute for a DPIA. Barcelona had asked members to agree to biometric data collection, and members had agreed.

That agreement is legally irrelevant to the separate procedural obligation to assess risk before the system goes live. The GDPR treats them as independent requirements. Satisfying one doesn’t discharge the other.

What a valid DPIA actually requires, according to the decision, is a clear description of the processing, a genuine necessity and proportionality assessment, a detailed risk evaluation, proposed mitigation measures, and a residual risk assessment after mitigations are applied. Organizations that generate DPIA documentation as a compliance checkbox, without substantively working through those questions, remain exposed regardless of what consent language they put in front of users.

The appetite for facial biometric data has become near-universal across industries, and the Barcelona case lands in a moment when that appetite is accelerating faster than the rules meant to govern it.

Keep reading

The Dark Side of AI: Innocent Grandmother Wrongfully Jailed for 6 Months After Facial Recognition Error

A Tennessee grandmother spent nearly six months behind bars in North Dakota, a state she had never even set foot in, after being wrongfully identified by AI facial recognition technology in a bank fraud investigation.

The Grand Forks Herald reports that Angela Lipps, a 50-year-old mother of three and grandmother of five from Tennessee, found herself trapped in a nightmare that began last July when U.S. Marshals arrested her at gunpoint while she was babysitting four young children. Fargo police had used facial recognition software to identify her as the primary suspect in an organized bank fraud case, despite the fact that she had never set foot in North Dakota.

The case began in April and May 2025 when Fargo Police Department detectives investigated several bank fraud incidents. Surveillance footage captured a woman using a fraudulent U.S. Army military identification card to withdraw tens of thousands of dollars from local banks. To identify the suspect, investigators employed facial recognition software, which incorrectly matched the woman in the videos to Lipps.

According to court documents obtained through an open records request, the detective assigned to the case reviewed Lipps’ social media accounts and Tennessee driver’s license photo after receiving the facial recognition match. In the charging document, the detective stated that Lipps appeared to be the suspect based on facial features, body type, hairstyle, and hair color. Notably, no one from the Fargo Police Department contacted Lipps to question her before filing charges.

Lipps was arrested on July 14 and booked into a county jail in Tennessee as a fugitive from justice. She faced four counts of unauthorized use of personal identifying information and four counts of theft in North Dakota. Held without bail due to her fugitive status, Lipps spent 108 days in the Tennessee jail before North Dakota officers transported her to Fargo on October 30.

“It was so scary, I can still see it in my head, over and over again,” Lipps said during an interview about her ordeal.

Keep reading

UK Lords Back Facial Recognition Overreach, Protest Crackdown Powers

The UK Lords spent March 9 dismantling what little legal cover existed for anonymous protest and privacy, and building new tools to suppress it entirely.

Start with what they refused to protect. Peers voted down an amendment that would have kept the DVLA database (the equivalent of the DMV in the US) out of live facial recognition searches.

That database isn’t a surveillance archive. It was built to verify driving licenses. It contains photographs linked to the confirmed real-world identities of most UK drivers, and the Lords just cleared the path for police to run it against faces captured in real time at public gatherings. A licensing bureaucracy would become an identification engine. The repurposing happened quietly, through a vote most people won’t read about.

The Lords also voted down a proposed “defence of reasonable excuse” for concealing identity at protests. The amendment would have shifted the burden of proof onto police officers to justify why a face covering made someone arrestable.

Keep reading

Mexico Mandates Biometric SIM Registration for All Phone Numbers

Anonymous prepaid SIM cards are dying in Mexico. By July 1, 2026, every active cell phone number in the country must be biometrically linked to a named, government-credentialed individual or face suspension. That’s around 127 million numbers, each one tethered to an identity the Mexican government can look up by name.

The mobile registration law took effect January 9, 2026, covering prepaid and postpaid plans, physical SIMs, and eSIMs alike. Existing subscribers have until June 30 to complete registration. New lines activated after January 9 get 30 days. Miss the window, and the line goes dark.

The enforcement mechanism runs through the CURP Biométrica, Mexico’s biometric upgrade to its existing population registry code. The new credential embeds a photograph, electronic signature, and QR code that ties directly to biometrically verified records held in the national registry.

Residents registering a mobile line must provide their CURP number alongside a valid government ID, which makes biometric enrollment not optional but structurally required. You cannot register a phone number without first handing your biometric data to the state.

What Mexico is building here is a national phone network where every number has a face attached to it.

Keep reading

Meta Considers Timed Face Recognition Launch to Exploit Distracted Society

Meta is weighing whether to add face recognition to its camera-equipped smart glasses, and The New York Times obtained an internal company document that reveals more than just the plan itself.

It reveals how Meta thinks about when to launch it: “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

Read that plainly: Meta wants to release a mass biometric surveillance product while the people most likely to fight it are too distracted to respond.

The technology would scan the face of every person who enters the glasses’ field of view, building a faceprint to match against a database. Every passerby. Every stranger on the subway. Every person who happens to walk through the frame of someone else’s device. None of them consented. Most of them won’t even know they were captured.

Faceprints are among the most sensitive data a company can collect. Unlike a password, a face cannot be changed after a breach. Once collected, this data enables mass surveillance, fuels discrimination, and creates a permanent identification trail attached to a person’s physical movement through the world.

Putting that capability into wearable glasses carried by ordinary people in ordinary places moves it off servers and into every room, street, and gathering that people enter.

Meta ran this experiment before and lost.

The company shut down (only kind of) its photo face-scanning tool in November 2021, simultaneously announcing it would delete (if you believe them) over a billion stored face templates. That retreat came after years of mounting legal exposure that produced a very expensive record.

In July 2019, Facebook settled a Federal Trade Commission investigation for $5 billion. The allegations included that the company’s face recognition settings were confusing and deceptive, and the settlement required the company to obtain consent before running face recognition on users going forward.

Less than two years later, Meta agreed to pay $650 million to settle a class action brought by Illinois residents under that state’s biometric privacy law. Then, in July 2024, it settled with Texas for $1.4 billion over the same defunct system. Just over $7 billion across three settlements, all tied to face recognition practices the company ultimately abandoned.

Keep reading

Discord to Demand Face Scan or ID to Access All Features

Discord is preparing to make age classification a constant background process across its platform. Beginning next month, every account will default to a teen-appropriate experience unless the user takes steps to prove adulthood.

Age determination will sit underneath routine activity, shaping what people can see, say, and join.

For accounts that are not verified as adult, access will narrow immediately. Age-restricted servers and channels will be blocked, voice participation in live “stage” channels will be disabled, and automated filters will apply to content Discord identifies as graphic or sensitive.

Friend requests from unfamiliar users will trigger warning prompts, and direct messages from unknown accounts will be routed into a separate inbox.

Core features such as direct messages with known contacts and servers without age restrictions will continue to function. Age-restricted servers will effectively disappear until verification is completed, including servers that a user joined years earlier.

The global rollout reflects a broader regulatory environment that is pushing platforms toward more aggressive age controls. Discord has already tested similar systems.

Last year, age checks were introduced in the UK and Australia.

For many adult users, the concern is less about access to content and more about surveillance and the ability to communicate anonymously. Verification systems introduce new forms of monitoring, whether through documents, facial analysis, or ongoing behavioral assessment.

Keep reading

ICE observer says her Global Entry was revoked after agent scanned her face

Minnesota resident Nicole Cleland had her Global Entry and TSA PreCheck privileges revoked three days after an incident in which she observed immigration agents at work, according to a declaration she filed in US District Court for the District of Minnesota. An agent told Cleland that he had used facial recognition technology to identify her, she wrote.

Cleland, a 56-year-old resident of Richfield and a director at Target Corporation, volunteers with a group that tracks potential Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) vehicles in her neighborhood, according to her declaration. On the morning of January 10, she “observed a white Dodge Ram being driven by what I believed to be federal enforcement agents” and “maneuvered behind the vehicle with the intent of observing the agents’ actions.”

Cleland said that she and another observer in a different car followed the Dodge Ram because of “concern about a local apartment building being raided.” She followed the car for a short time and from a safe distance until “the Dodge Ram stopped in front of the other commuter’s vehicle,” she wrote. Cleland said two other vehicles apparently driven by federal agents stopped in front of the Dodge Ram, and her path forward was blocked.

“An agent exited the vehicle and approached my vehicle,” Cleland wrote. “I remained in my vehicle. The agent addressed me by my name and informed me that they had ‘facial recognition’ and that his body cam was recording. The agent stated that he worked for border patrol. He wore full camouflage fatigues. The agent stated that I was impeding their work. He indicated he was giving me a verbal warning and if I was found to be impeding again, I would be arrested.”

Cleland acknowledged that she heard what the agent said, and they drove off in opposite directions, according to her declaration. Cleland submitted the declaration on January 21 in a lawsuit filed by Minnesota residents against US government officials with the Department of Homeland Security and ICE. Cleland’s court filing was mentioned yesterday in a Boston Globe column about tactics used by ICE agents to intimidate protesters.

Keep reading

Britain To Roll Out Facial Recognition in Police Overhaul

Britain’s policing system, we are told, is broken. And on Monday, the home secretary, Shabana Mahmood, announced that the fix would arrive in the form of algorithms, facial recognition vans, and a large check made out to the future.

The government plans to spend £140m ($191m) on artificial intelligence and related technology, with the promise that it will free up six million police hours a year, the equivalent of 3,000 officers.

It is being billed as the biggest overhaul of policing in England and Wales in 200 years, aimed at dragging a creaking system into the modern world.

The ambition is serious. The implications are too.

The plan is for AI software that will analyze CCTV, doorbell, and mobile phone footage, detect deepfakes, carry out digital forensics, and handle administrative tasks such as form filling, redaction, and transcription. Mahmood’s argument is that criminals are getting smarter, while parts of the police service are stuck with tools that belong to another era.

She put it plainly: “Criminals are operating in increasingly sophisticated ways. However, some police forces are still fighting crime with analogue methods.”

And she promised results: “We will roll out state-of-the-art tech to get more officers on the streets and put rapists and murderers behind bars.”

There is logic here. Few people would argue that trained officers should be buried in paperwork. Technology can help with that. The concern is what else comes with it.

Live facial recognition is being expanded aggressively. The number of police vans equipped with the technology will increase fivefold, from ten to fifty, operating across the country. These systems scan faces in public spaces and compare them to watch lists of wanted individuals.

This is a form of mass surveillance, and when automated systems get things wrong, the consequences fall on real people.

Keep reading