New York Is Quietly Rolling Out Precrime Surveillance Tech

Picture this: it’s rush hour in New York City. A guy in a Mets cap mutters to himself on the F train platform, pacing in tight circles. Nearby, a woman checks her phone five times in ten seconds. Overhead, cameras are watching. Behind the cameras? A machine. And behind that machine? An army of bureaucrats who’ve convinced themselves that bad vibes are now a crime category.

Welcome to the MTA’s shiny new plan for keeping you safe: an AI surveillance system designed to detect “irrational or concerning conduct” before anything happens. Not after a crime. Not even during. Before. The sort of thing that, in less tech-horny times, might’ve been called “having a bad day.”

MTA Chief Security Officer Michael Kemper, the man standing between us and a future where talking to yourself means a visit from the NYPD, is calling it “predictive prevention.”

“AI is the future,” Kemper assured the MTA’s safety committee.

So far, the MTA insists this isn’t about watching you, per se. It’s watching your behavior. Aaron Donovan, MTA spokesperson and professional splitter of hairs, clarified: “The technology being explored by the MTA is designed to identify behaviors, not people.”

And don’t worry about facial recognition, they say. That’s off the table. For now. Just ignore the dozens of vendors currently salivating over multimillion-dollar public contracts to install “emotion detection” software that’s about as accurate as your aunt’s horoscope app.

Keep reading

Asda launches massive trial of live facial recognition technology that can pick up on thieves in SECONDS in an attempt to combat shoplifting ‘epidemic’

In a move branded ‘disproportionate’ and ‘chilling’ by anti-surveillance groups, the retailer is introducing the scheme in five shops across the Manchester area.

The technology has been integrated into Asda’s existing CCTV network and works by scanning images and comparing them against a known list of individuals who have previously committed crimes in one of its stores.

If the automated system finds a match, head office security conducts a check within seconds and immediately alerts the store in question.
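For the curious, here is roughly what that kind of automated watchlist check looks like under the hood. This is a minimal, hypothetical Python sketch, not Asda’s actual system: the embedding format, the threshold, and every name in it are assumptions. The only real point is that a “match” is just a similarity score clearing a cutoff before a human ever looks at it.

```python
# Hypothetical sketch of a CCTV watchlist check: a face embedding from live video
# is compared against embeddings of known offenders, and a confident match is
# escalated for human review. All names and thresholds are invented for illustration.
import math
from dataclasses import dataclass

@dataclass
class WatchlistEntry:
    subject_id: str
    embedding: list[float]  # precomputed face embedding for a known offender

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def check_against_watchlist(live_embedding: list[float],
                            watchlist: list[WatchlistEntry],
                            threshold: float = 0.85) -> str | None:
    """Return the best-matching subject_id if its similarity clears the threshold, else None."""
    best_id, best_score = None, 0.0
    for entry in watchlist:
        score = cosine_similarity(live_embedding, entry.embedding)
        if score > best_score:
            best_id, best_score = entry.subject_id, score
    # Only escalate confident matches; per the article, head office staff
    # still review the alert before the store is notified.
    return best_id if best_score >= threshold else None

# Toy example with a 3-dimensional "embedding" purely for illustration.
watchlist = [WatchlistEntry("subject-042", [0.12, 0.88, 0.47])]
live_face = [0.11, 0.90, 0.45]
match = check_against_watchlist(live_face, watchlist)
if match:
    print(f"Possible match with {match}; flag for head-office review.")
```

Everything hinges on that threshold: set it low and innocent shoppers get flagged; set it high and the system quietly misses the people it was sold to catch.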

The trial is just one of a battery of measures being taken by major stores to combat an ‘epidemic’ of retail crime – just as plummeting conviction rates have led to accusations that shoplifters are able to ‘act with impunity’.

They include Co-op, which has installed ‘fortified’ kiosks featuring toughened screens and keycode-controlled entry in hundreds of stores.

It is also trialling AI that uses CCTV to track suspicious behaviour.

Meanwhile, Tesco controversially introduced weighing scales at its Gateshead store to check that customers using ‘Scan as you shop’ aren’t taking home extra goods.

Keep reading

Facial Recognition Company Clearview Attempted to Buy Social Security Numbers and Mugshots for its Database

Controversial facial recognition company Clearview AI attempted to purchase hundreds of millions of arrest records including social security numbers, mugshots, and even email addresses to incorporate into its product, 404 Media has learned. 

For years, Clearview AI has collected billions of photos from social media websites including Facebook, LinkedIn and others and sold access to its facial recognition tool to law enforcement. The collection and sale of user-generated photos by a private surveillance company to police without that person’s knowledge or consent sparked international outcry when it was first revealed by the New York Times in 2020. 

New documents obtained by 404 Media reveal that Clearview AI spent nearly a million dollars in a bid to purchase “690 million arrest records and 390 million arrest photos” from all 50 states from an intelligence firm. The contract further describes the records as including current and former home addresses, dates of birth, arrest photos, social security and cell phone numbers, and email addresses. Clearview attempted to purchase this data from Investigative Consultant, Inc. (ICI), which billed itself as an intelligence company with access to tens of thousands of databases and the ability to create unique data streams for its clients. The contract was signed in mid-2019, when Clearview AI was quietly collecting billions of photos off the internet and was still relatively unknown.

Ultimately, the entire deal fell apart after Clearview and ICI clashed about the utility of the data, with each company filing breach of contract claims. The dispute went to arbitration, where such disputes are commonly settled privately. The arbitrator sided with Clearview AI in 2024 and ordered ICI to return the contract money. To date, ICI has not paid Clearview, and the company is now seeking a court order to enforce the arbitrator’s ruling. The president of ICI, Donald Berlin, has previously been accused in a lawsuit of fabricating intelligence reports and of libel. Clearview currently advertises to customers that its technology “includes the largest known database of 50+ billion facial images sourced from public-only web sources, including news media, mugshot websites, public social media, and many other open sources,” and Clearview has previously told customers that it was “working to acquire all U.S. mugshots nationally from the last 15 years.”

ICI and Clearview did not respond to multiple requests for comment. 

Keep reading

Finland’s Big Bet on Biometrics: Crime-Fighting Tool or Privacy Nightmare?

Finland has come out with a plan to expand the use of biometric data, with a new proposal from the country’s Interior Ministry.

Even as the push to introduce various forms of advanced biometric surveillance, including systems that incorporate facial recognition, gains momentum in countries around the world, so does the pushback from civil rights and privacy campaigners, which ensures that such initiatives these days rarely fly under the radar.

Finland’s Interior Ministry announced on its website that the proposal aims to amend existing rules on biometric data stored by the police and the immigration service – stored, that is, in Finnish citizens’ ID cards and in registers containing biometric data of foreigners.

The government says the intent is not only to strengthen crime prevention – but also to “improve the conditions for using biometrics in law enforcement.”

In addition to data captured by facial recognition devices, the proposal covers DNA samples and fingerprints taken from suspects. This biometric data would then be matched against other data already held in law enforcement databases – for “crime prevention and investigative purposes.”

The groups keeping a close eye on this development are warning about some of the issues that crop up time and again around similar legislative efforts: wording that allows for future “mission creep” – as well as inadequate provisions to guard against abuse of such highly sensitive personal information.

Currently, the Finnish proposal is yet to be presented to lawmakers – the Interior Ministry is seeking comments before this can happen. And while the announcement goes into the intent driving the proposal, it is short on detail regarding the elephant in the room – privacy safeguards.

Keep reading

Is ATF Using AI-Powered Facial Recognition To ID Gun Owners?

Several reports now indicate that ATF has access to and utilizes facial recognition technology to identify gun owners. In fact, two Government Accountability Office reports confirm that ATF does have access to various facial recognition systems, including Clearview AI, Vigilant Solutions, and other systems owned by other federal, state, local, tribal, or territorial law enforcement agencies.

The Government Accountability Office described the Clearview AI database as: 

“A web-based facial recognition service using 30+ billion facial images sourced from publicly available websites, including news media, mugshot, and social media websites, among others.”

A 2021 GAO report found that ATF did not have sufficient accountability mechanisms “to track what non-federal systems with facial recognition technology are used by employees.”

According to testimony given to the U.S. Commission on Civil Rights, ATF:

“Initially used these services without requiring their staff to take training on topics such as how the technology works, what photos are appropriate to use, and how to interpret the results.”

GAO scolded federal agencies, stating that they “must consider the potential impact of its use on civil rights and civil liberties because the potential for error and the potential to misidentify someone could lead to the arrest and prosecution of an innocent person.”

GAO was also concerned that government use of facial recognition technology can have “a chilling effect on an individual’s exercise of their First Amendment rights.”

Similarly, ATF’s use of Facial Recognition Technology could have a chilling effect on the People’s exercise of their Second Amendment rights.

According to GAO, ATF is reported to have conducted at least 549 facial recognition searches on gun owners between October 2019 and March 2022.

GAO indicates that “as of April 2023, ATF… reported that they had halted their use of such services.”

However, the most recent reports suggest otherwise.

Keep reading

EU AI Act Effectively Legalizes Biometric Mass Surveillance

On February 2, the EU AI Act, which came into force in August, reached its first compliance deadline. The provisions now in effect contain the rules that allow the bloc to legalize biometric mass surveillance.

This is happening via Article 5, which on the face of it seeks to introduce protections against using AI systems that represent “unacceptable risk.” But there are also exceptions, defined in the act as the “strictly necessary” cases in which those same systems are allowed to be used.

This is what gave rise to the fear that one consequence of the AI Act is, in fact, to legalize some highly contentious deployments of biometric data-fueled mass surveillance.

Article 5 prohibits real-time remote biometric ID systems from being used by law enforcement in public spaces – but only “unless and in so far as such use is strictly necessary.”

The “strictly necessary” instances are described as those in which law enforcement is searching for “specific victims” of crimes like abduction, trafficking, and sexual exploitation, but also when it is looking for missing persons.

The second exception is less precise in scope, as it allows AI surveillance systems to be used to prevent crime. This covers a threat to the life or physical safety of individuals that is deemed “specific, substantial, and imminent” – or the threat of a terrorist attack that law enforcement decides is “genuine and present,” or merely “genuine and foreseeable.”

Lastly, the AI Act treats the following as “strictly necessary” and therefore exempt from the prohibition: “Localization or identification of a person suspected of having committed a criminal offense, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offenses.”

Keep reading

Face Scanning App Developed to Identify Unconscious Patients Decades After Human Microchips Failed to Gain Traction

The Dubai Corporation for Ambulance Services has announced a smartphone app that uses facial recognition technology to identify unconscious patients in emergency scenarios. By the middle of 2025 it is expected to be operational in the United Arab Emirates’ ambulance setting.

“With the app, a patient in an emergency situation such as in a state of unconsciousness can have their face scanned by paramedics for their personal identification information to be retrieved, thanks to the system’s integration with the UAE national identity database,” Biometric Update said Friday.

Dubai is undergoing a so-called ‘digital transformation’ in which facial recognition is taking center stage in public transit and palm print scanning is being implemented in payments and commerce.

Being face scanned while unconscious may be the least of someone’s worries, as in the U.S. unconscious patients are being enrolled into medical experiments and administered lethal Covid vaccines.

While face scanning may be seen as invasive by some, the American company VeriChip Corporation took things a step further, much further, when it developed an Orwellian implantable human microchip called the VeriChip in the early 2000s with the express purpose of identifying patients in medical settings as well as children.

Keep reading

Automation in Retail Is Even Worse Than You Thought

Brianna Bagley’s favorite hobby is playing Horizon Zero Dawn, a role-playing game featuring a young hunter who battles murderous robotic organisms on a postapocalyptic planet overrun by machines. When she isn’t leveling up in the game, Bagley is hard at work in the produce department of a chain supermarket in Salt Lake City, Utah. Seven years in the grocery industry has given her plenty of experience with the real-world technology that is automating stores.

During the pandemic, Bagley earned about $15 per hour in a supermarket e-commerce department dedicated to filling online orders and preparing them for delivery. The department was unable to fill the flood of orders that came in each day. Managers pulled employees from other parts of the store to double the department’s staff—but only about half were actually employed in the e-commerce department. The rest were cashiers, baggers, and others conscripted into emergency service. Bagley was grateful for the help, but recognized that it came at others’ expense. “It was harder for those departments to provide customer service with fewer employees,” the 26-year-old said.

Bagley’s experience is of a piece with the broader trend in retail toward automation and other technological shortcuts. From self-checkout machines to payment by app, technology is rapidly changing the way we buy groceries. Progressive members of Congress are sounding the alarm: Representative Rashida Tlaib of Michigan and 13 colleagues wrote to the CEO of the supermarket behemoth Kroger in November about electronic price tags (often called electronic shelf labels or ESLs). These digital displays allow companies to change prices automatically from a mobile app. Tlaib warned that this so-called “dynamic pricing” permits retailers to adjust prices based on their whims. Just as Uber raises prices during storms or rush hour, retailers like Kroger use ESLs to adjust prices based on factors like time of day or the weather. Supermarkets could conceivably mine a shopper’s personal data to set prices as high as possible. “My concern is that these tools will be abused in the pursuit of profit, surging prices on essential goods in areas with fewer and fewer grocery stores,” Tlaib wrote.
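To make the “dynamic pricing” concern concrete, here is a purely illustrative Python sketch of how a retailer could compute surge prices from contextual signals and push them to electronic shelf labels. None of this reflects Kroger’s actual logic; the items, multipliers, and the pretend ESL update call are all assumptions.

```python
# Illustrative sketch of "dynamic pricing" pushed to electronic shelf labels (ESLs).
# All multipliers, items, and the update mechanism are invented for illustration.
from datetime import datetime

# Hypothetical base prices; a real system would pull these from a pricing database.
BASE_PRICES = {"ice_cream": 4.99, "umbrella": 12.00}

def dynamic_price(item: str, now: datetime, weather: str) -> float:
    """Adjust a base price using contextual signals (all factors are assumptions)."""
    price = BASE_PRICES[item]
    if 17 <= now.hour <= 19:          # surge during the evening rush, Uber-style
        price *= 1.10
    if item == "umbrella" and weather == "rain":
        price *= 1.25                 # weather-sensitive demand bump
    if item == "ice_cream" and weather == "hot":
        price *= 1.15
    return round(price, 2)

def push_to_shelf_labels(prices: dict[str, float]) -> None:
    # A real deployment would call the ESL vendor's API; here we just print.
    for item, price in prices.items():
        print(f"ESL update -> {item}: ${price:.2f}")

now = datetime.now()
push_to_shelf_labels({item: dynamic_price(item, now, weather="rain") for item in BASE_PRICES})
```

Swap the weather input for a shopper profile and you have exactly the kind of personalized pricing Tlaib is warning about.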

Keep reading

US Expands Biometric Technology in Airports Despite Privacy Concerns

Biometric technology is being rolled out at US airports at an unprecedented pace, with plans to extend these systems to hundreds more locations in the coming years. The Transportation Security Administration (TSA) is driving a significant push toward facial recognition and other biometric tools, claiming improved efficiency and security. However, the expansion has sparked growing pushback, with privacy advocates and lawmakers voicing concerns about data security, transparency, and the potential for misuse of the technology.

US Customs and Border Protection (CBP) has already implemented its Biometric Facial Comparison system at 238 airports, including 14 international locations. This includes all CBP Preclearance sites and several major departure hubs. CBP says its Biometric Exit program is rapidly gaining traction, with new airport partners joining monthly and positive feedback reported from passengers.

Meanwhile, the TSA has equipped nearly 84 airports with its next-generation Credential Authentication Technology (CAT-2) scanners, which incorporate facial recognition. This rollout is part of a broader effort to bring biometrics to over 400 airports nationwide. These advancements are detailed in a TSA fact sheet aimed at building public awareness of the initiative.

Keep reading

Apple patent uses FRT with ‘body data’ so cameras can ID people without seeing faces

Apple has been granted a patent for “identity recognition utilizing face-associated body characteristics.” The face recognition technology is anticipated to appear in a forthcoming smart security product from the tech giant.

Patent No. 12,154,386 B2, filed in May 2022 and granted on November 26, 2024, describes a system that associates facial recognition with other body characteristics, which might include things like clothing, gait, or gesture, to recognize certain people even if their faces are not visible to the camera.

The patent outlines the problem it intends to solve in clear terms: “sometimes a video camera may not be able to perform facial recognition of a person, given a particular video feed.”

It then describes the capability to monitor a video feed and determine, based on the analysis of video frames and previously stored face and body biometrics, whether an identification can be made with a primary body characteristic (face) or requires a secondary characteristic.

The system might work by linking a gallery of “body croppings” such as torso, arms or legs with their face biometrics, then comparing the data with a live video feed. It proceeds in a stepped approach, identifying face, then body parts, then, if needed, “physical characteristics” that could include body shape, skin color, or the texture or color of clothing. The order of operations is adaptable to the scenario.

The resulting data constitutes a cluster of “bodyprints” which can be assigned a confidence score against a person’s faceprint and other characteristics. Since there is a limited time in which certain identifiers are useful (clothing, for instance), the technology can utilize storage periods as brief as 24 hours.
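Here is a rough Python sketch of the stepped matching the patent describes: try the faceprint first, fall back to stored “bodyprints” such as torso or clothing features, and let those short-lived identifiers expire after a retention window. The data model, similarity metric, and thresholds are assumptions for illustration, not Apple’s implementation.

```python
# Hypothetical sketch of stepped face-then-body identification with short-lived bodyprints.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class PersonProfile:
    name: str
    faceprint: list[float]
    # "Bodyprints" keyed by body region; each carries a timestamp because
    # clothing- or gait-based identifiers go stale quickly.
    bodyprints: dict[str, tuple[list[float], datetime]] = field(default_factory=dict)

def similarity(a: list[float], b: list[float]) -> float:
    # Placeholder metric; a real system would compare learned embeddings.
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def identify(frame_face, frame_body, profiles, now,
             face_thresh=0.9, body_thresh=0.8, retention=timedelta(hours=24)):
    for person in profiles:
        # Step 1: the primary characteristic (the face), if the camera can see it.
        if frame_face is not None and similarity(frame_face, person.faceprint) >= face_thresh:
            return person.name, "face"
        # Step 2: secondary characteristics, using only bodyprints still inside
        # the short retention window the patent mentions (as brief as 24 hours).
        for part, (stored_vec, stored_at) in person.bodyprints.items():
            observed = frame_body.get(part)
            if observed is not None and now - stored_at <= retention:
                if similarity(observed, stored_vec) >= body_thresh:
                    return person.name, f"bodyprint:{part}"
    return None, "unidentified"

# Example: the face is hidden from the camera, but the torso/clothing still matches.
alice = PersonProfile("Alice", faceprint=[0.2, 0.7, 0.5],
                      bodyprints={"torso": ([0.9, 0.1, 0.4], datetime.now())})
print(identify(None, {"torso": [0.88, 0.12, 0.41]}, [alice], datetime.now()))
```

The expiring bodyprints are the notable design choice: clothing only identifies you until you change it, so the system keeps the match useful without pretending those features are permanent biometrics.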

Keep reading