Thailand – A Case Study for Biometric Data Control

Thailand has become a test case for the use of biometric data in every facet of life. Facial recognition verification is required for any single transfer above 50,000 baht (around $1,580), for daily transfers totalling more than 200,000 baht, and for any international transfer from a personal account. All major Thai banks, including Bangkok Bank, Kasikorn (KBank), SCB, Krungthai, and Krungsri, require customers to submit biometric data, and the Bank of Thailand (BOT) sets the general guidelines these banks must follow.

It may begin with banking and documentation, but the ultimate goal is to develop digital IDs stored on a centralized database. The board of Thailand’s National Broadcasting and Telecommunications Commission (NBTC) proposed requiring users to submit biometric data to register SIM cards; the rule went into effect in August and applies to everyone in Thailand, including tourists.

The Thai Ministry of Public Health (MOPH), the Thai Red Cross Society, and the National Science and Technology Development Agency (NSTDA) have implemented the use of biometric data to track undocumented persons. Health agencies claim the technology can identify the spread of disease and assist in providing humanitarian aid and medical services; the MOPH claims it is 99.75% accurate. According to the Department of Labour’s Bureau of Alien Workers Administration, over 1 million undocumented migrants were in the country as of July 2025.

“The application of biometric technology not only improves healthcare, disease prevention and control, medical services, and humanitarian aid with accuracy and inclusivity, but also reflects the protection of human rights and dignity of undocumented people in Thailand. It also creates opportunities for education and research by Thai public health professionals to develop further benefits for the general population,” Health Minister Somsak Thepsuthin stated.

The Thai Red Cross Society is a national member of the International Red Cross and Red Crescent Movement. Thailand’s Personal Data Protection Act (PDPA) promises that all personal data will be securely protected, but the agencies involved have already begun sharing it with international bodies.

Keep reading

Australia Orders Tech Giants to Enforce Age Verification Digital ID by December 10

Australia is preparing to enforce one of the most invasive online measures in its history under the guise of child safety.

With the introduction of mandatory age verification across social media platforms, privacy advocates are warning that the policy, set to begin December 10, 2025, risks eroding fundamental digital rights for every user, not just those under 16.

eSafety Commissioner Julie Inman Grant has told tech giants like Google, Meta, TikTok, and Snap that they must be ready to detect and shut down accounts held by Australians under the age threshold.

She has made it clear that platforms are expected to implement broad “age assurance” systems across their services, and that “self-declaration of age will not, on its own, be enough to constitute reasonable steps.”

The new rules stem from the Online Safety Amendment (Social Media Minimum Age) Act 2024, which gives the government sweeping new authority to dictate how users verify their age before accessing digital services. Any platform that fails to comply could be fined up to $31 million USD.

While the government claims the law isn’t a ban on social media for children under 16, in practice it forces platforms to block these users unless they can pass age checks, which effectively means a digital ID.

There will be no penalties for children or their parents, but platforms face immense legal and financial pressure to enforce restrictions, pressure that almost inevitably leads to surveillance-based systems.

The Commissioner said companies must “detect and de-activate these accounts from 10 December, and provide account holders with appropriate information and support before then.”

These expectations extend to providing “clear, age-appropriate communications” and making sure users can download their data and find emotional or mental health resources when their accounts are terminated.

She further stated that “efficacy will require layered safety measures, sometimes known as a ‘waterfall approach’,” a term often associated with collecting increasing amounts of personal data at multiple steps of user interaction.

Such layered systems often rely on facial scanning, government ID uploads, biometric estimation, or AI-powered surveillance tools to estimate age.

Keep reading

Dystopian Rollout Of Digital IDs & CBDCs Is Happening

This isn’t conspiracy; it’s all in their own documentation.

They are building a full-spectrum digital cage, and its two locked doors are Digital Identity and Central Bank Digital Currencies (CBDCs). You cannot have one without the other.

The plan is to replace your government-issued ID with a Digital ID, but it’s not just a card in your phone. It is fundamentally built upon your immutable biometrics: your fingerprints, the precise structure of your face, the unique pattern of your iris.

This biometric data is the key.

It is the hard link that ties your physical body directly to your digital identity credential.

Your very body becomes your password. The reason this is so critical for them is the financial system: documents from the UN and the Bank for International Settlements (BIS) overtly state that Digital ID and CBDCs are designed to be integrated.

The system cannot exist without this biometric digital ID.

Why?

Know Your Customer (KYC) protocols.

For this new digital financial system to function, they must absolutely “know” every single participant. Your digital wallet will be tied to your digital ID, which is mapped to your biometrics. Total financial-biological linkage.

We see the prototypes being rolled out now:

  • Sam Altman’s WorldCoin lures people to scan their irises for a “unique identifier” and a digital wallet. This is the exact model.
  • The UN’s “Building Blocks” program forces refugees to scan their iris at checkout to receive food rations. The value is deducted from a wallet tied to that biometric ID.

They justify this total surveillance under the guise of closing the “identity gap,” claiming the world’s poor need digital IDs to access essential services like banking and healthcare.

The reality?

This is the ultimate onboarding mechanism into a system of programmable control, where your access to society and your own money is permissioned and revocable based on your compliance.

This is the bedrock of the new global financial system.

It is not about convenience. It is about control.

Keep reading

CIA and Mossad-linked Surveillance System Quietly Being Installed Throughout the US

Launched in 2016 in response to a Tel Aviv shooting and the Pulse Nightclub shooting in Orlando, Florida, Gabriel offers a suite of surveillance products for “security and safety” incidents at “so-called soft targets and communal spaces, including schools, community centers, synagogues and churches.” The company makes the lofty promise that its products “stop mass shootings.” According to a 2018 report on Gabriel published in the Jerusalem Post, there were an estimated 475,000 such “soft targets” across the U.S., meaning that “the potential market for Gabriel is huge.”

Gabriel, since its founding, has been backed by “an impressive group of leaders,” mainly “former leaders of Mossad, Shin Bet [Israel’s domestic intelligence agency], FBI and CIA.” In recent years, even more former leaders of Israeli and American intelligence agencies have found their way onto Gabriel’s advisory board and have promoted the company’s products.

While the adoption of its surveillance technology was slower than expected in the United States, that dramatically changed last year, when an “anonymous philanthropist” gave the company $1 million to begin installing its products throughout schools, houses of worship and community centers throughout the country. That same “philanthropist” has promised to recruit others to match his donation, with the ultimate goal of installing Gabriel’s system in “every single synagogue, school and campus community in the country.”

With this CIA, FBI and Mossad-backed system now being installed throughout the United States for “free,” it is worth taking a critical look at Gabriel and its products, particularly the company’s future vision for its surveillance system. Perhaps unsurprisingly, much of the company’s future vision coincides with the vision of the intelligence agencies backing it – pre-crime, robotic policing and biometric surveillance.

Keep reading

Sequins, feathers… and a groundbreaking arrest using facial recognition cameras: The Daily Mail sees police deploy slick new technology at Notting Hill Carnival

Even the harshest critics of the Metropolitan Police admit the force has its work cut out with the Notting Hill Carnival.

Describing Europe’s biggest street party as a policing challenge would be a bit like referring to the Second World War as an unfortunate diplomatic incident.

Of course, it is not just the crowds of more than two million that put a strain on police resources every year on the August bank holiday. In recent years, it has also been the criminality – drugs, violence, knife crime, sexual offences, even murder – that all too frequently overshadows the celebrations.

So even with around 7,000 officers on duty, it is perhaps unsurprising that Met chiefs have introduced the use of live facial recognition (LFR) – previously deployed at the King’s coronation as well as Premier League matches – for the 2025 carnival.

Festivities officially began yesterday morning with the Children’s Day Parade. Thousands of revellers – many wearing ornate costumes of sequins and feathers – danced through the west London streets as drummers pounded unrelenting rhythms. Elsewhere, more than 30 sound systems blared out Caribbean and electronic dance music.

Meanwhile, officers were putting in place the final touches to their LFR system, which records images of people via sophisticated cameras. It uses biometric software to assess head size and other facial features, then converts these details into digital data. According to experts, any individual whose image scores 0.64 or higher (on a scale of zero to one) is highly likely to be a match for someone whose photo is on file.
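As an illustration only (the Met’s actual system is proprietary), threshold-based matching of this kind can be sketched in a few lines, assuming face images have already been converted into numeric embedding vectors. The 0.64 cutoff is the figure cited above; cosine similarity is a common, but here assumed, choice of comparison function:

```python
import math

MATCH_THRESHOLD = 0.64  # score cited in the report, on a 0-to-1 scale


def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def check_against_watchlist(probe, watchlist):
    """Return (person_id, score) for the best match at or above the
    threshold, or None if nobody on the watchlist scores high enough.

    `watchlist` maps a person identifier to a stored face embedding.
    """
    best_id, best_score = None, 0.0
    for person_id, template in watchlist.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= MATCH_THRESHOLD else None
```

Anyone scoring below the cutoff is simply discarded; everything above it is flagged for an officer to review, which is why the choice of threshold directly trades false alarms against missed matches.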

At 6.23am yesterday, several hours before the parade got underway, specialists at the Met finalised a ‘watchlist’ of 16,231 individuals of interest to them. They included people wanted by the courts or being sought for alleged criminal activity that would merit jail time of ‘a year or more’.

Others on the list included those who have been freed under certain restrictions – including former prisoners released on licence from life sentences – to ensure they are sticking to the conditions imposed on them by the authorities.

Keep reading

Game Day Just Got Creepy: Florida Stadium Swaps Tickets for Faces

The University of Florida has launched a facial recognition-based entry system for football games, making it the first college in the country to introduce this technology at a stadium.

Instead of showing a ticket or scanning a phone, participating fans will now be able to walk into games by having their face scanned at dedicated lanes.

The system, called Express Entry, was created by Wicket and reflects a larger pattern of biometric screening being integrated into major sporting events.

To sign up, fans must link their Ticketmaster accounts and submit a selfie.

Once registered, they can skip traditional lines and enter the stadium through special facial recognition lanes. The University claims the process is quick, easy, and designed to relieve congestion. “With Express Entry, fans can bypass the lines and enter games using their face instead of their phone or ticket. Enrollment is free and simple,” the University Athletic Association explained.

This move is part of a shift in how universities are beginning to experiment with surveillance-oriented technologies under the banner of convenience.

Though the program is optional and traditional ticketing methods remain available, the arrival of facial recognition at a public university venue introduces serious concerns around biometric data collection and surveillance practices in educational and public entertainment settings.

Keep reading

Airlines urge senators to reject bill limiting facial recognition

A group representing several major airlines alongside travel companies and airports is opposing a Senate bill that would require the Transportation Security Administration (TSA) to generally use manual ID verification at security checkpoints instead of facial recognition.

The bill, introduced by Sen. Jeff Merkley (D-Ore.), would broadly restrict TSA’s ability to use biometrics and facial recognition, carving out a few exemptions for the agency’s PreCheck and other Trusted Traveler programs. Passengers may still opt in to the use of facial recognition at the checkpoint.

In a letter Monday to Sens. Ted Cruz (R-Texas) and Maria Cantwell (D-Wash.), the air industry groups said the bill was a “step backward” and that facial recognition technology made security screenings far more efficient.

“The future of seamless and secure travel relies on the appropriate use of this technology to ensure security effectiveness and operational efficiency as daily travel volume continues to rise,” they wrote. “We are concerned that the vague and confusing exceptions to this blanket ban will have major consequences for the identity verification process, screening operations, and trusted traveler enrollment programs.”

Cruz and Cantwell are their parties’ highest-ranking members of the Senate Commerce, Science and Transportation Committee, which is scheduled to mark up the bill Wednesday.

In addition to limiting the use of facial recognition, Merkley’s bill would also require TSA to delete most images collected at checkpoints within 24 hours of a passenger’s departure.

Travelers going through a TSA checkpoint are generally able to opt out of facial recognition, the agency says. Merkley has argued the agency’s enforcement is inconsistent, posting on social media in February about his difficulties navigating the policy at Reagan Washington National Airport.

“This is big government coming to take away your privacy, trying to set up a national surveillance system,” the Oregon Democrat said in February. 

The airlines, however, warned that restricting the use of facial recognition could slow down security and divert TSA’s resources toward maintaining officer staffing, rather than focusing on automated innovations. The group also said it felt it had been insufficiently consulted on the legislation, “despite the major impact the bill would have on aviation security, airports, airlines, travelers, and technology companies.”

Keep reading

London is the Testing Lab for Big Brother Mass Facial Scanning Tech

Since the start of 2024, the Metropolitan Police has been quietly transforming London into a testing ground for live facial recognition (LFR).

Depending on who you ask, this is either a technological triumph that’s making the capital safer or a mass surveillance experiment that would make any privacy advocate wince.

The numbers are eye-watering: in just over 18 months, the Met has scanned the faces of around 2.4 million people. And from that sea of biometric data, they’ve made 1,035 arrests. That’s a hit rate of 0.04%. Or, to put it plainly, more than 99.9% of those scanned had done absolutely nothing wrong.
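The arithmetic behind those figures is easy to verify with the approximate numbers reported:

```python
scanned = 2_400_000  # approximate faces scanned since the start of 2024
arrests = 1_035      # arrests reported by the Met

hit_rate = arrests / scanned
print(f"Hit rate: {hit_rate:.2%}")          # Hit rate: 0.04%
print(f"Not arrested: {1 - hit_rate:.2%}")  # Not arrested: 99.96%
```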

The police, of course, are eager to present this as a success story. Lindsey Chiswick, who oversees the Met’s facial recognition program, calls it a game-changer. “This milestone of 1,000 arrests is a demonstration of how cutting-edge technology can make London safer by removing dangerous offenders from our streets,” she said.

Of those arrested, 773 were charged or cautioned. Some were suspects in serious cases, including violent crimes against women and girls.

But here’s where things get complicated. To secure those 1,000 arrests, millions of innocent people have had their faces scanned and processed.

What’s being billed as precision policing can start to look more like casting an enormous net and hoping you catch something worthwhile.

Keep reading

ICE Is Using A New Facial Recognition App To Identify People, Leaked Emails Show

Immigration and Customs Enforcement (ICE) is using a new mobile phone app that can identify someone based on their fingerprints or face by simply pointing a smartphone camera at them, according to internal ICE emails viewed by 404 Media. The underlying system used for the facial recognition component of the app is ordinarily used when people enter or exit the U.S. Now, that system is being used inside the U.S. by ICE to identify people in the field.

The news highlights the Trump administration’s growing use of sophisticated technology for its mass deportation efforts and ICE’s enforcement of its arrest quotas. The document also shows how biometric systems built for one reason can be repurposed for another, a constant fear and critique from civil liberties proponents of facial recognition tools.

“Face recognition technology is notoriously unreliable, frequently generating false matches and resulting in a number of known wrongful arrests across the country. Immigration agents relying on this technology to try to identify people on the street is a recipe for disaster. Congress has never authorized DHS to use face recognition technology in this way, and the agency should shut this dangerous experiment down,” Nathan Freed Wessler, deputy director of the American Civil Liberties Union’s Speech, Privacy, and Technology Project, told 404 Media in an email.

“The Mobile Fortify App empowers users with real-time biometric identity verification capabilities utilizing contactless fingerprints and facial images captured by the camera on an ICE issued cell phone without a secondary collection device,” one of the emails, which was sent to all Enforcement and Removal Operations (ERO) personnel and seen by 404 Media, reads. ERO is the section of ICE specifically focused on deporting people.

The idea is for ICE to use this new tool to identify people whose identity ICE officers do not know. “This information can be used to identify unknown subjects in the field,” the email continues. “Officers are reminded that the fingerprint matching is currently the most accurate biometric indicator available in the application,” it adds, indicating that the fingerprint functionality is more accurate than the facial recognition component.

The emails also show the app has a “training range,” a feature that lets ICE officers practice capturing facial images and fingerprints in a “training non-live environment.”

A video posted to social media this month shows apparent ICE officers carefully pointing their phones at a protester in his vehicle, but it is not clear if the officers were taking ordinary photos or using this tool.

Broadly, facial recognition tools work by taking one image to be tested and comparing it to a database of other images. Clearview AI, for example, a commercially available facial recognition tool used by law enforcement but which doesn’t appear to be related to this ICE tool, compares a photo to a massive database of people’s photos scraped from social media and the wider web.
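One reason civil liberties groups worry about this comparison step is the base-rate problem: when almost everyone scanned is not in the database, even a small false-match rate produces far more wrong flags than right ones. A back-of-the-envelope sketch, using purely hypothetical numbers (none of these figures come from the article):

```python
# Illustrative base-rate arithmetic with hypothetical inputs.
false_match_rate = 0.001    # assume 0.1% of non-matches are wrongly flagged
people_scanned = 1_000_000  # hypothetical crowd scanned against a watchlist
true_matches = 50           # hypothetical people actually on the watchlist

false_positives = (people_scanned - true_matches) * false_match_rate
print(f"Expected false positives: {false_positives:.0f}")  # Expected false positives: 1000
print(f"Flags that are wrong: {false_positives / (false_positives + true_matches):.0%}")
# Flags that are wrong: 95%
```

Under these assumptions, roughly 19 out of every 20 people the system flags would be innocent, which is the statistical shape of the “wrongful arrest” risk the ACLU describes.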

Keep reading

Guilt by Algorithm: Woman Wrongly Accused of Shoplifting Due to Facial Recognition Error

A woman was left “fuming” after being erroneously accused of stealing toilet paper and ejected from two Home Bargains stores in Greater Manchester, UK, due to an apparent mix-up with a facial recognition system designed to prevent shoplifting.

BBC News reports that Danielle Horan, a makeup business owner, found herself in a distressing situation when she was escorted out of Home Bargains branches in Salford and Manchester, without initially being given any explanation for her removal. It was later discovered that Horan had been falsely accused of stealing approximately £10 worth of items after her profile was added to a facial recognition watchlist used by the stores.

The incident unfolded on May 24, when Horan visited the Home Bargains store on Regent Road in Salford. As she was shopping, the store manager approached her and asked her to leave, causing Horan to feel embarrassed and confused in front of other customers. Despite her protestations, the manager advised her to contact Facewatch, the retail security firm that provides the facial recognition technology, directly.

Horan’s attempts to reach out to both Facewatch and Home Bargains initially proved futile. However, when she visited another Home Bargains store in Fallowfield, Manchester, with her 81-year-old mother on June 4, she was once again surrounded by staff and told to leave as soon as she entered the premises. This time, Horan stood her ground and demanded an explanation for her treatment.

After persistent emails to Facewatch and Home Bargains, Horan finally learned that there had been an allegation of theft involving approximately £10 worth of toilet rolls in early May. Somehow, her picture had been circulated to local stores, alerting them not to allow her entry. Horan checked her bank account and confirmed that she had, in fact, paid for the items in question.

Eventually, Facewatch responded to Horan, stating that a review of the incident showed she had not stolen anything. The firm acknowledged the distressing nature of Horan’s experience and noted that the retailer had since undertaken additional staff training. However, Horan’s ordeal had already taken a toll on her mental well-being, causing anxiety and stress as she questioned her actions and felt sick to her stomach for a week.

Keep reading