UK Expands Live Facial Recognition as First Legal Challenge Targets Met Police Misidentification

Police forces across England are preparing to expand their use of live facial recognition (LFR) surveillance as the government moves forward with a national policy to guide deployments.

Policing minister Sarah Jones confirmed during the Labour Party conference that formal guidance is in development to instruct officers on when and where the technology should be used.

Funding from the Home Office has already been allocated to support LFR operations in seven additional regions: Greater Manchester, West Yorkshire, Bedfordshire, Surrey, Sussex, Thames Valley, and Hampshire.

Government officials have pointed to early deployments in London and Cardiff as successful, citing arrests.

Reflecting on those results, Jones stated:

“What we’ve seen in Croydon is that it has worked. We just need to make sure it’s clear what the technology is going to be useful for going forward. If we are going to use it more, if we do want to roll it out across the country, what are the parameters? Live facial recognition is a really good tool that has led to arrests that wouldn’t have come otherwise, and it’s very, very valuable.”

The software links live camera feeds to a watchlist of people wanted by police. When someone passes a camera, facial measurements are analyzed and compared against the database. If a match is found, officers are alerted to intervene.
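The matching step described above can be sketched in miniature. Everything here is illustrative: the names, templates, and alert threshold are invented for the example, and real LFR systems compare high-dimensional embeddings produced by trained face-analysis models.

```python
import math

# Illustrative sketch of watchlist matching. The 4-number "templates"
# stand in for the facial measurements the article describes; real
# systems use much larger embeddings and tuned thresholds.

def cosine_similarity(a, b):
    """Similarity between two face templates, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def check_against_watchlist(live_template, watchlist, threshold=0.8):
    """Return the best watchlist entry scoring at or above the threshold, or None."""
    best_name, best_score = None, threshold
    for name, stored in watchlist.items():
        score = cosine_similarity(live_template, stored)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

watchlist = {
    "subject_a": [0.9, 0.1, 0.2, 0.1],
    "subject_b": [0.1, 0.8, 0.3, 0.2],
}
passerby = [0.88, 0.12, 0.21, 0.09]  # geometrically close to subject_a
match = check_against_watchlist(passerby, watchlist)
if match:
    print(f"ALERT: possible match with {match}")  # officers would be alerted
```

The key design point is the threshold: set it too low and passers-by are misidentified (as in the Cardiff case later in this roundup); set it too high and genuine matches are missed.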

Meanwhile, the use of LFR has expanded sharply. In London, the number of people included on watchlists more than doubled between 2020 and 2025.

The volume of facial scans during deployments has also grown, with single-day scans now reaching into the tens of thousands.

The Metropolitan Police insists it has safeguards in place and maintains that data from individuals not on a watchlist is deleted immediately.

In August, the Home Office said that seven more police forces will start using ten new vans kitted out with LFR technology, in addition to existing use by the Metropolitan Police in London and South Wales Police. At the time it said the two forces have used LFR to make 580 arrests over the previous 12 months.

Keep reading

German Federal Police Sued Over Facial Recognition Database Use

Germany’s top criminal police authority is facing legal action over the handling of biometric data.

On September 19, 2025, IT-security expert Janik Besendorf, with the support of the Chaos Computer Club (CCC), brought a case before the Administrative Court in Wiesbaden.

He argues that photographs taken of him during a 2018 police matter, which was later dismissed, were unlawfully fed into facial recognition testing programs instead of being deleted.

According to Netzpolitik, images from Besendorf and millions of others stored in the police database INPOL-Z were repurposed without consent or statutory approval.

The legal complaint points in particular to the “EGES” initiative of Germany’s Federal Criminal Police Office (BKA), a 2019 project aimed at improving its facial recognition capabilities.

In this program, Fraunhofer IGD ran trials of four commercial recognition systems using roughly five million frontal photos of around three million individuals, in addition to volunteer contributions.

Freedom of information records show that officials had already raised doubts about whether the project had any legal footing.

The BKA maintains that the testing was conducted safely, emphasizing that all work was carried out on isolated internal systems, with no direct handover of personal data to outside partners.

It also insists that the program qualifies as research under the BKA Act. Regulators have taken a different view, arguing that benchmarking market-ready tools cannot be described as scientific research and pointing out the absence of a clear legal mandate.

A judgment against the BKA would set limits on how police photo archives can be repurposed for experimentation and product vetting.

Keep reading

YouTube Expands AI Age Checks, Users Face ID Verification

Google’s video platform is tightening its controls, and a growing number of YouTube users are suddenly finding themselves locked out of content unless they hand over proof of age through ID verification.

The new direction stems from YouTube’s decision to push its AI-driven age estimation system much more broadly than before.

The technology, which YouTube began experimenting with in Europe before confirming its US debut in July, surveils users, examining everything from account details and viewing habits to search activity.

Those surveillance signals are then used to guess whether someone is under 18.

The pace of enforcement has accelerated sharply. Within the 24 hours leading up to September 24, Reddit’s r/youtube subreddit saw a flood of posts from users saying their accounts were suddenly flagged for verification, suggesting a big uptick in restrictions.

When YouTube’s system determines that an account likely belongs to someone under 18, the platform automatically enforces a set of limits designed to control how the service is used: targeted advertising is reduced, recommendations deemed unsafe are cut back, and access to adult material is blocked outright.

Keep reading

Chicago Cubs Sued for Allegedly Using Facial Recognition on Fans Without Consent at Wrigley Field

A federal lawsuit filed in Illinois claims the Chicago Cubs and their security contractors used facial recognition technology on fans at Wrigley Field without following basic legal requirements under state privacy law.

The proposed class action, lodged on September 15, 2025, says the team and two private security firms captured biometric data without proper notice or permission.

The legal complaint names the Cubs alongside Blue Star Security, LLC and Security Services Holdings LLC, which operates under the name Protos Security.

Together, they are accused of deploying facial recognition tools at Wrigley Field that scanned the faces of attendees without providing the written disclosures or obtaining the signed releases required by Illinois law.

The suit states that this happened to “millions of fans.”

Illinois’ Biometric Information Privacy Act (BIPA), passed in 2008, sets out some of the most protective rules in the country when it comes to biometric data.

The statute prohibits the collection of biometric identifiers, like fingerprints or facial geometry, unless the person is informed in writing and gives signed authorization.

Lawsuits can be brought by individuals even if they haven’t suffered financial or emotional harm, a position backed by the Illinois Supreme Court in Rosenbach v. Six Flags.

Keep reading

Facial, Fingerprint, & Iris Scans: WEF/Bill Gates’ Vision For Biometric Digital Wallet Quietly Gaining Momentum

The push for a tightly controlled payment and identity system took a quiet but alarming step forward with a little-noticed deal between credit card giant Visa and an obscure tech firm called TECH5. Their seven-year agreement aims to fast-track digital identity and payment systems under the deceptively tame “Digital Public Infrastructure” (DPI), Biometric Update reports.

The troubling partnership, signed last week in Dubai, merges Visa’s massive financial network with TECH5’s invasive biometric tech, which includes facial, fingerprint, and iris scans, setting the stage for a surveillance-friendly future, all packaged as “convenience.” The goal? Integrated platforms to store your verified credentials for so-called seamless access to services and transactions. The companies claim these systems will adapt to “local laws and markets,” but that’s a thin promise when privacy protections so often lag behind. And the “identity wallets” they’re touting aren’t just for verifying who you are; they’ll have payment features built in, powered by Visa’s global payment infrastructure and TECH5’s AI-driven biometric tools.

If you weren’t already uneasy, Reclaim The Net has previously reported on how the usual globalist cheerleaders are all-in on digital identities for financial transactions:

The initiative, formalized in Dubai, supports a vision promoted by organizations including the United Nations, the European Union, the World Economic Forum, and Bill Gates. DPI strategies are being pushed as part of a global roadmap to digitize identity and financial access by 2030.

The move reflects a broader international push to integrate verified digital identity with financial services. This is often presented as a way to reduce friction in service delivery, expand inclusion, and prevent fraud. However, privacy advocates continue to raise alarms over the implications of centralizing both identification and payment systems.

Unsurprisingly, Visa’s leadership tried to soften the blow to civil liberties and privacy concerns.

“At Visa, we believe that secure, inclusive, and scalable digital identity is foundational to the future of payments,” said Dr. Svyatoslav Senyuta, Head of Visa Government Solutions in the CEMEA region.

“Our partnership with Tech5 reflects our commitment to advancing Digital Public Infrastructure globally. By combining Tech5’s biometric and identity innovations with Visa’s trusted payment technologies, we aim to empower governments and institutions to drive financial inclusion and digital trust at scale.”

Keep reading

Cardiff Man Wrongly Accused of Theft After Facial Recognition Error Triggers Privacy Complaint

A Cardiff man has filed a formal complaint with the Information Commissioner’s Office after being wrongly accused of theft in a store using facial recognition software.

The case is now drawing wider attention to the unchecked spread of biometric surveillance in everyday retail environments.

On 29 April 2025, Byron Long, 66, arrived at the B&M outlet in Cardiff Bay Retail Park expecting an ordinary shopping trip.

Instead, he was approached by staff and told he was barred from the premises. In front of other customers, he was accused of stealing £75 ($101) worth of goods during a visit earlier that month.

That accusation was entirely false. During the visit in question on 9 April, Long had bought a single item: a £7 ($9.50) packet of cat treats. He paid for them in full. He later obtained CCTV footage showing himself at the checkout in a Red Bull Formula 1 jacket, clearly completing the purchase.

“It was a horrible experience, and I haven’t been back to the store since. The incident has had a very serious impact on my mental health, which is very fragile anyway, and I am now very anxious whenever I go shopping,” Long said, as reported by Nation Cymru.

The misidentification came from Facewatch, a private firm contracted by retailers to run facial recognition scans on customers. Images from Long’s previous visit were processed and matched to a database of alleged offenders. That match triggered the alert that led B&M staff to accuse him.

B&M later acknowledged the error, issuing a written apology and stating: “Our B&M store and security teams have a duty of care to all our customers and to our company, and this includes challenging people that they believe are potentially shoplifting. This is an extremely difficult task, and sadly we don’t always get it right; your case would be one of these instances… We can confirm your data has been removed from Facewatch.”

They also offered a £25 ($34) voucher as compensation, an offer Long flatly rejected.

Facewatch responded to the incident by suspending the user who had submitted the incorrect data. Michele Bond, the company’s Head of Incident Review and Data Protection Enquiries, said: “Facewatch Incident data is submitted by authorized users, who must confirm the accuracy of the information provided. Once the error was identified, the user responsible was immediately suspended from using the Facewatch system.”

Long has since taken the matter to Big Brother Watch, a civil liberties group focused on privacy and surveillance. The organization has now submitted a complaint to the ICO on his behalf.

Keep reading

Thailand – A Case Study for Biometric Data Control

Thailand has become a test case for the use of biometric data in every facet of life. Facial recognition data is required for any single transfer above 50,000 baht (around $1,580), daily transfers above 200,000 baht, and any international transfers from personal accounts. All major Thai banks, such as Bangkok Bank, Kasikorn (KBank), SCB, Krungthai, and Krungsri, require customers to submit biometric data, and the Bank of Thailand (BOT) provides the general guidelines that these banks must follow.
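The thresholds described above amount to a simple decision rule. The function below is a hypothetical sketch of that logic only; the Bank of Thailand's actual guidelines involve more conditions and per-bank variation than this.

```python
def requires_facial_scan(amount_baht, daily_total_baht, international):
    """Hypothetical sketch of the biometric-verification triggers
    described in the article; the real BOT rules are more detailed."""
    SINGLE_TRANSFER_LIMIT = 50_000   # per-transaction threshold (baht)
    DAILY_TOTAL_LIMIT = 200_000      # cumulative daily threshold (baht)
    if international:                # any international personal transfer
        return True
    if amount_baht > SINGLE_TRANSFER_LIMIT:
        return True
    return daily_total_baht > DAILY_TOTAL_LIMIT

# A 60,000-baht domestic transfer trips the per-transaction threshold.
print(requires_facial_scan(60_000, 60_000, international=False))  # True
```

Note that the daily cumulative check means even small individual transfers trigger verification once a customer's running total for the day crosses 200,000 baht.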

It may begin with banking and documentation, but the ultimate goal is to develop digital IDs that are stored on a centralized database. The board of Thailand’s National Broadcasting and Telecommunications Commission (NBTC) proposed that users must submit biometric data to register SIM cards. The rule went into effect in August and applies to everyone in Thailand, including tourists.

The Thai Ministry of Public Health (MOPH), the Thai Red Cross Society, and the National Science and Technology Development Agency (NSTDA) have implemented the use of biometric data to track undocumented persons. Health agencies claim the technology can help track the spread of disease and assist in providing humanitarian aid and medical services. The MOPH claims the technology is 99.75% accurate. According to the Department of Labour’s Bureau of Alien Workers Administration, over 1 million undocumented migrants were in the nation as of July 2025.

“The application of biometric technology not only improves healthcare, disease prevention and control, medical services, and humanitarian aid with accuracy and inclusivity, but also reflects the protection of human rights and dignity of undocumented people in Thailand. It also creates opportunities for education and research by Thai public health professionals to develop further benefits for the general population,” Health Minister Somsak Thepsuthin stated.

The Thai Red Cross Society is part of the global Red Cross movement. Thailand’s Personal Data Protection Act (PDPA) promises that all personal data will be securely protected, yet agencies have already begun sharing it with international organizations.

Keep reading

Australia Orders Tech Giants to Enforce Age Verification Digital ID by December 10

Australia is preparing to enforce one of the most invasive online measures in its history under the guise of child safety.

With the introduction of mandatory age verification across social media platforms, privacy advocates are warning that the policy, set to begin December 10, 2025, risks eroding fundamental digital rights for every user, not just those under 16.

eSafety Commissioner Julie Inman Grant has told tech giants like Google, Meta, TikTok, and Snap that they must be ready to detect and shut down accounts held by Australians under the age threshold.

She has made it clear that platforms are expected to implement broad “age assurance” systems across their services, and that “self-declaration of age will not, on its own, be enough to constitute reasonable steps.”

The new rules stem from the Online Safety Amendment (Social Media Minimum Age) Act 2024, which gives the government sweeping new authority to dictate how users verify their age before accessing digital services. Any platform that doesn’t comply could be fined up to US$31 million.

While the government claims the law isn’t a ban on social media for children under 16, in practice, it forces platforms to block these users unless they can pass age checks, which means a digital ID.

There will be no penalties for children or their parents, but platforms face immense legal and financial pressure to enforce restrictions, pressure that almost inevitably leads to surveillance-based systems.

The Commissioner said companies must “detect and de-activate these accounts from 10 December, and provide account holders with appropriate information and support before then.”

These expectations extend to providing “clear, age-appropriate communications” and making sure users can download their data and find emotional or mental health resources when their accounts are terminated.

She further stated that “efficacy will require layered safety measures, sometimes known as a ‘waterfall approach’,” a term often associated with collecting increasing amounts of personal data at multiple steps of user interaction.

Such layered systems often rely on facial scanning, government ID uploads, biometric estimation, or AI-powered surveillance tools to estimate age.
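The "waterfall" idea can be sketched as a chain of increasingly invasive checks, each falling through to the next when it cannot reach a verdict. Everything below, including the check names and verdict labels, is hypothetical and meant only to show the shape of such a system, not any platform's actual implementation.

```python
def assure_age(user, checks):
    """Hypothetical 'waterfall' age assurance: run successively more
    invasive checks until one returns a confident verdict."""
    for check in checks:
        verdict = check(user)   # "adult", "minor", or None (inconclusive)
        if verdict is not None:
            return verdict
    return "minor"  # fail closed: restrict when nothing is conclusive

# Invented example checks, ordered from least to most invasive.
def self_declared(user):
    return None  # self-declaration alone is "not enough" per the rules

def behavioural_estimate(user):
    # e.g. an AI estimate from viewing habits and account history
    return "adult" if user.get("estimated_age", 0) >= 18 else None

def id_upload(user):
    # final, most invasive step: government ID or facial scan
    return "adult" if user.get("id_verified") else "minor"

verdict = assure_age({"estimated_age": 22},
                     [self_declared, behavioural_estimate, id_upload])
print(verdict)  # "adult"
```

The privacy concern follows directly from the structure: each fall-through step demands more personal data than the last, so users the early checks cannot classify end up supplying the most.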

Keep reading

Dystopian Rollout Of Digital IDs & CBDCs Is Happening

This isn’t conspiracy; it’s all in their own documentation.

They are building a full-spectrum digital cage, and its two locked doors are Digital Identity and Central Bank Digital Currencies (CBDCs). You cannot have one without the other.

The plan is to replace your government-issued ID with a Digital ID, but it’s not just a card in your phone. It is fundamentally built upon your immutable biometrics: your fingerprints, the precise structure of your face, the unique pattern of your iris.

This biometric data is the key.

It is the hard link that ties your physical body directly to your digital identity credential.

Your very body becomes your password. The reason this is so critical for them is the financial system. UN and Bank for International Settlements (BIS) documents overtly state that Digital ID and CBDCs are designed to be integrated.

The system cannot exist without this biometric digital ID.

Why?

Know Your Customer (KYC) protocols.

For this new digital financial system to function, they must absolutely “know” every single participant. Your digital wallet will be tied to your digital ID, which is mapped to your biometrics. Total financial-biological linkage.

We see the prototypes being rolled out now:

  • Sam Altman’s WorldCoin lures people to scan their irises for a “unique identifier” and a digital wallet. This is the exact model.
  • The UN’s “Building Blocks” program forces refugees to scan their iris at checkout to receive food rations. The value is deducted from a wallet tied to that biometric ID.

They justify this total surveillance under the guise of closing the “identity gap,” claiming the world’s poor need digital IDs to access essential services like banking and healthcare.

The reality?

This is the ultimate onboarding mechanism into a system of programmable control, where your access to society and your own money is permissioned and revocable based on your compliance.

This is the bedrock of the new global financial system.

It is not about convenience. It is about control.

Keep reading