New Orleans Police Secretly Used Prohibited Facial Recognition Surveillance for Years

The New Orleans Police Department (NOPD) secretly received real-time, AI-generated alerts from 200 facial recognition cameras throughout the city for two years, despite a city ordinance barring generalized surveillance of the public.

“Police increasingly use facial recognition software to identify unknown culprits from still images, usually taken by surveillance cameras at or near the scene of a crime,” an exposé by The Washington Post explains. However, “New Orleans police took this technology a step further,” drawing on a private network of cameras to send officers real-time alerts, through a mobile app, with the names and locations of people flagged as possible matches to wanted suspects.
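In principle, a real-time alert loop of this kind continuously compares faces detected in live camera frames against a watchlist and pushes the name and camera location of any match to officers. Below is a minimal sketch of that idea using the open-source face_recognition library; the watchlist image, camera metadata, and alert handling are hypothetical placeholders, not Project NOLA's or the NOPD's actual system.

```python
# Minimal sketch (hypothetical, not Project NOLA's code): compare faces in a
# live camera frame against a small watchlist and emit name/location alerts.
import face_recognition  # open-source library; the real system is proprietary

# Hypothetical watchlist: one reference photo per wanted person.
WATCHLIST = {
    "SUSPECT_A": face_recognition.face_encodings(
        face_recognition.load_image_file("suspect_a.jpg"))[0],
}

def scan_frame(frame, camera_location, tolerance=0.6):
    """Return (name, camera_location) pairs for every watchlist match."""
    alerts = []
    for encoding in face_recognition.face_encodings(frame):
        hits = face_recognition.compare_faces(
            list(WATCHLIST.values()), encoding, tolerance=tolerance)
        for name, hit in zip(WATCHLIST, hits):
            if hit:
                # In the system described above, this is where a push
                # notification would go to officers' phones.
                alerts.append((name, camera_location))
    return alerts
```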

“This is the facial recognition technology nightmare scenario that we have been worried about,” Nathan Freed Wessler, a deputy director for the American Civil Liberties Union’s Speech, Privacy, and Technology project, told the Post. “This is the government giving itself the power to track anyone—for that matter, everyone—as we go about our lives walking around in public.” According to Wessler, New Orleans is the first known instance in which a major American city has used artificial intelligence to identify people through live footage for the purpose of making arrests.

The use of these automatic alerts may have violated a city ordinance meant to protect the public’s privacy from a generalized surveillance tool and prevent wrongful arrests due to software errors. 

Passed in 2022 in response to New Orleans’ post-pandemic crime wave, the Surveillance Technology and Data Protection Ordinance removed a previous prohibition on surveillance technology in criminal investigations to increase public safety. Mayor LaToya Cantrell said at the time that the NOPD needed “every tool available at their disposal” to keep the city’s “residents, businesses and visitors safe.” However, the ordinance stopped short of allowing the NOPD to utilize a “face surveillance system”—defined as “any computer software or application that performs face surveillance”—while limiting data collection to “only the minimum amount of personal information needed to fulfill a narrow well-defined purpose.”  

While violent crime in New Orleans has declined since 2022, so have the crime rates in most major American cities that do not use real-time facial recognition surveillance systems. 

Anne Kirkpatrick, superintendent of the NOPD since September 2023, paused the automatic alerts in April after learning about potential legal problems with using the system. Records obtained by the Post reveal that Kirkpatrick sent an email to Project NOLA, the nonprofit that provides the NOPD with facial recognition services, on April 8 stating “that the automated alerts must be turned off until she is ‘sure that the use of the app meets all the requirements of the law and policies.'” The network of cameras remains in place. 

While automatic pings of potential suspect matches to NOPD officers are paused, Kirkpatrick maintains that facial recognition technology is essential to law enforcement. On May 16, 10 inmates escaped from the New Orleans jail, prompting a manhunt (five inmates remain at large). Facial recognition is credited with the capture of two of the escaped inmates. Kirkpatrick told WVUE, the local Fox affiliate, that such a situation is “the exact reason facial recognition technology is so critical and well within our boundaries of the ordinance here.” Bryan Lagarde, Project NOLA’s executive director, confirmed that NOPD is not currently using real-time, AI-generated alerts but is still utilizing facial recognition technology and footage from 5,000 cameras across New Orleans to track and apprehend the escapees. Lagarde described to WVUE an instance in which officers narrowly missed an inmate by a matter of minutes, insinuating that automated alerts might be necessary to protect public safety, despite the cost to privacy. 

Keep reading

Google Reaches $1.375 Billion Settlement with Texas Over Privacy Violations Involving Location Tracking and Biometric Data

Google has reached a $1.375 billion settlement with Texas over allegations the tech giant intruded on user privacy by collecting data without proper consent. The resolution, announced by Texas Attorney General Ken Paxton, concludes two lawsuits centered on the company’s handling of sensitive information across several of its products.

The lawsuits focused on practices involving Google’s location tracking, biometric data collection, and its private browsing tool, Incognito mode. According to Paxton, the company engaged in prolonged surveillance of individuals’ movements, online activity, and even biometric identifiers like voiceprints and facial features, activities he claimed were conducted without user knowledge or agreement.

“In Texas, Big Tech is not above the law,” said Paxton. “For years, Google secretly tracked people’s movements, private searches, and even their voiceprints and facial geometry through their products and services. I fought back and won.”

Although the total settlement figure has been made public, specific terms remain undisclosed, and the state has not explained how the funds will be distributed.

Google has denied any wrongdoing and emphasized that the agreement resolves claims based on policies that have already been updated. “This settles a raft of old claims, many of which have already been resolved elsewhere, concerning product policies we have long since changed,” said Google spokesperson José Castañeda. He added, “We are pleased to put them behind us, and we will continue to build robust privacy controls into our services.”

The original lawsuits, filed in 2022, accused Google of circumventing user privacy settings, continuing to track locations despite users believing the feature was off. They also charged that the company’s so-called private browsing mode did not actually provide meaningful privacy and that Google had collected biometric data from Texans without obtaining legally required consent.

Keep reading

The Next Phase Of Surveillance? Getting Under Your Skin

AI and transhumanism: Hackable animals

My friends, let me introduce you to Yuval Noah Harari, a man chock-full of big ideas. He explained during the COVID crisis:

“COVID is critical because this is what convinces people to accept, to legitimize, total biometric surveillance. If we want to stop this epidemic, we need not just to monitor people, we need to monitor what’s happening under their skin.”

In a 60 Minutes interview with Anderson Cooper, Harari repeated this idea: “What we have seen so far is corporations and governments collecting data about where we go, who we meet, what movies we watch. The next phase is the surveillance going under our skin.”

He likewise told India Today, when commenting on changes accepted by the population during COVID-19:

“We now see mass surveillance systems established even in democratic countries which previously rejected them, and we also see a change in the nature of surveillance. Previously, surveillance was mainly above the skin; now we want it under the skin.

“Governments want to know not just where we go or who we meet. They want to know what’s happening under our skin: what is our body temperature; what is our blood pressure; what is our medical condition?”

Harari is clearly a man who wants to … get under your skin. He just might succeed.

Another recent interview finds him waxing philosophical:

“Now humans are developing even bigger powers than ever before. We are really acquiring divine powers of creation and destruction. We are really upgrading humans into gods. We are acquiring, for instance, the power to re-engineer human life.”

As Kierkegaard once said of Hegel talking about the Absolute, when Harari talks about the future, he sounds as if he’s going up in a balloon.

Forgive me, but a few last nuggets from professor Harari will round out the picture of his philosophy, and his lofty hopes and dreams:

“Humans are now hackable animals. You know, the whole idea that humans have this soul or spirit, and they have free will and nobody knows what’s happening inside me, so, whatever I choose, whether in the election or in the supermarket, that’s my free will — that’s over.”

Harari explains that to hack human beings, you need a lot of computing power and a lot of biometric data, which was not possible until the recent advent of AI.

Keep reading

Asda launches massive trial of live facial recognition technology that can pick up on thieves in SECONDS in an attempt to combat shoplifting ‘epidemic’

In a move branded ‘disproportionate’ and ‘chilling’ by anti-surveillance groups, the retailer is introducing the scheme in five shops across the Manchester area.

The technology has been integrated into Asda’s existing CCTV network and works by scanning images and comparing the results to a known list of individuals who have previously committed criminal activity in one of its stores.

If the automated system finds a match, head office security conducts a check within seconds and immediately reports it to the store in question.
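What this amounts to is a human-in-the-loop pipeline: the automated matcher only queues a candidate, and a person at head office confirms it before the store is told. The sketch below illustrates that flow; the Match fields and the review_match and notify_store callbacks are assumptions for illustration, not Asda's actual system.

```python
# Illustrative sketch of a match-then-review pipeline (not Asda's real code).
from dataclasses import dataclass
from queue import Queue

@dataclass
class Match:
    store_id: str          # which shop the CCTV frame came from
    watchlist_person: str  # person previously recorded offending in a store
    confidence: float      # similarity score from the automated matcher

pending_review: Queue = Queue()  # candidate matches awaiting a human check

def on_automated_match(match: Match) -> None:
    """Step 1: the automated system flags a possible match from CCTV."""
    pending_review.put(match)

def head_office_review(review_match, notify_store) -> None:
    """Step 2: head office security confirms or rejects each candidate;
    only confirmed matches are reported to the store in question."""
    while not pending_review.empty():
        match = pending_review.get()
        if review_match(match):  # human decision, not automated
            notify_store(match.store_id, match.watchlist_person)
```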

The trial is just one of a battery of measures being taken by major stores to combat an ‘epidemic’ of retail crime – just as plummeting conviction rates have led to accusations that shoplifters are able to ‘act with impunity’.

They include Co-op, which has installed ‘fortified’ kiosks featuring toughened screens and keycode-controlled entry in hundreds of stores.

It is also trialling AI that uses CCTV to track suspicious behaviour.

Meanwhile Tesco has controversially introduced weighing scales at its Gateshead store to check that customers using ‘Scan as you shop’ aren’t taking home extra goods.

Keep reading

Finland’s Big Bet on Biometrics: Crime-Fighting Tool or Privacy Nightmare?

Finland has come out with a plan to expand the use of biometric data, under a new proposal from the country’s Interior Ministry.

Even as the push to introduce various forms of advanced biometric surveillance, including facial recognition, gains momentum in countries around the world, so does the pushback from civil rights and privacy campaigners, which ensures that such initiatives rarely fly under the radar these days.

Finland’s Interior Ministry announced on its website that the proposal aims to amend existing rules on biometric data stored by the police and the immigration service – stored, that is, in Finnish citizens’ ID cards, and registers containing biometric data of foreigners.

The government says the intent is not only to strengthen crime prevention – but also to “improve the conditions for using biometrics in law enforcement.”

In addition to data captured by facial recognition devices, the proposal covers DNA samples and fingerprints taken from suspects. This biometric data would then be matched against records already held in law enforcement databases – for “crime prevention and investigative purposes.”

The groups keeping a close eye on this development are warning about issues that crop up time and again around similar legislative efforts: wording that allows for future “mission creep,” as well as inadequate provisions to guard against abuse of such highly sensitive personal information.

Currently, the Finnish proposal has yet to be presented to lawmakers – the Interior Ministry is seeking comments before that can happen. And while the announcement of the proposal explains the intent driving it, it is short on detail regarding the elephant in the room – privacy safeguards.

Keep reading

Is ATF Using AI-Powered Facial Recognition To ID Gun Owners?

Several reports now indicate that ATF has access to and utilizes facial recognition technology to identify gun owners. In fact, two Government Accountability Office reports confirm that ATF has access to various facial recognition systems, including Clearview AI, Vigilant Solutions, and systems owned by other federal, state, local, tribal, or territorial law enforcement agencies.

The Government Accountability Office described the Clearview AI database as: 

“A web-based facial recognition service using 30+ billion facial images sourced from publicly available websites, including news media, mugshot, and social media websites, among others.”

A 2021 GAO report found that ATF did not have sufficient accountability mechanisms “to track what non-federal systems with facial recognition technology are used by employees.”

According to testimony given to the U.S. Commission on Civil Rights, ATF:

“Initially used these services without requiring their staff to take training on topics such as how the technology works, what photos are appropriate to use, and how to interpret the results.”

GAO scolded federal agencies, stating that they “must consider the potential impact of its use on civil rights and civil liberties because the potential for error and the potential to misidentify someone could lead to the arrest and prosecution of an innocent person.”

GAO was also concerned that government use of facial recognition technology can have “a chilling effect on an individual’s exercise of their First Amendment rights.”

Similarly, ATF’s use of Facial Recognition Technology could have a chilling effect on the People’s exercise of their Second Amendment rights.

According to GAO, ATF is reported to have conducted at least 549 facial recognition searches on gun owners between October 2019 and March 2022.

GAO indicates that “as of April 2023, ATF… reported that they had halted their use of such services.”

However, the most recent reports suggest otherwise.

Keep reading

EU AI Act Effectively Legalizes Biometric Mass Surveillance

On February 2, the EU AI Act, which came into force in August, reached its first compliance deadline, covering the provisions that effectively allow the bloc to legalize biometric mass surveillance.

This is happening via Article 5, which on the face of it seeks to introduce protections against using AI systems that represent “unacceptable risk.” But there are also exceptions to this, defined in the act as the “strictly necessary” cases in which those same systems are allowed to be used.

It is this that has given rise to fears that one consequence of the AI Act is, in fact, to legalize some highly contentious deployments of biometric data-fueled mass surveillance.

Article 5 prohibits real-time remote biometric ID systems from being used by law enforcement in public spaces – but only “unless and in so far as such use is strictly necessary.”

The “strictly necessary” instances are described as those when law enforcement is searching for “specific victims” of crimes like abduction, trafficking, and sexual exploitation, but also when they look for missing persons.

The second category is less precise in scope, as it allows AI surveillance systems to be used to prevent crime. This includes a threat to the life or physical safety of individuals deemed “specific, substantial, and imminent,” or a threat of a terrorist attack that law enforcement judges to be “genuine and present” – or merely “genuine and foreseeable.”

Lastly, the AI Act treats the following as “strictly necessary,” and therefore exempt from the prohibition: “Localization or identification of a person suspected of having committed a criminal offense, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offenses.”

Keep reading

Face Scanning App Developed to Identify Unconscious Patients Decades After Human Microchips Failed to Gain Traction

The Dubai Corporation for Ambulance Services has announced a smartphone app that uses facial recognition technology to identify unconscious patients in emergency scenarios. It is expected to be operational across the United Arab Emirates’ ambulance services by mid-2025.

“With the app, a patient in an emergency situation such as in a state of unconsciousness can have their face scanned by paramedics for their personal identification information to be retrieved, thanks to the system’s integration with the UAE national identity database,” Biometric Update said Friday.
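The flow described there reduces to a single on-demand query: a captured face image is sent to the national identity register and, if a confident match comes back, the patient's record is returned to the paramedic. The sketch below illustrates that lookup; the lookup_identity callback, the match-score threshold, and the returned fields are assumed for illustration, not the actual app or UAE database API.

```python
# Illustrative sketch of the identification lookup (hypothetical interfaces).
from typing import Optional

def identify_unconscious_patient(face_image: bytes,
                                 lookup_identity) -> Optional[dict]:
    """Return the best-matching identity record, or None if no confident match."""
    candidate = lookup_identity(face_image)  # query against the ID register
    if candidate and candidate["match_score"] >= 0.90:  # assumed threshold
        return {
            "name": candidate["name"],
            "id_number": candidate["id_number"],
            "medical_flags": candidate.get("medical_flags", []),
        }
    return None  # fall back to conventional identification
```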

Dubai is undergoing a so-called ‘digital transformation‘ in which facial recognition is taking center stage in public transit and palm print scanning is being rolled out for payments and commerce.

Being face scanned while unconscious may be the least of someone’s worries, as in the U.S. unconscious patients are being enrolled into medical experiments and administered lethal Covid vaccines.

While face scanning may be seen as invasive by some, American company VeriChip Corporation took things a step further, much further, when they developed an Orwellian implantable human microchip called the VeriChip in the early 2000s with the express purpose of identifying patients in the medical setting as well as identifying children.

Keep reading

Automation in Retail Is Even Worse Than You Thought

Brianna Bagley’s favorite hobby is playing Horizon Zero Dawn, a role-playing game featuring a young hunter who battles murderous robotic organisms on a postapocalyptic planet overrun by machines. When she isn’t leveling up in the game, Bagley is hard at work in the produce department of a chain supermarket in Salt Lake City, Utah. Seven years in the grocery industry has given her plenty of experience with the real-world technology that is automating stores.

During the pandemic, Bagley earned about $15 per hour in a supermarket e-commerce department dedicated to filling online orders and preparing them for delivery. The department was unable to fill the flood of orders that came in each day. Managers pulled employees from other parts of the store to double the department’s staff—but only about half were actually employed in the e-commerce department. The rest were cashiers, baggers, and others conscripted into emergency service. Bagley was grateful for the help, but recognized that it came at others’ expense. “It was harder for those departments to provide customer service with fewer employees,” the 26-year-old said.

Bagley’s experience is of a piece with the broader trend in retail toward automation and other technological shortcuts. From self-checkout machines to payment by app, technology is rapidly changing the way we buy groceries. Progressive members of Congress are sounding the alarm: Representative Rashida Tlaib of Michigan and 13 colleagues wrote to the CEO of the supermarket behemoth Kroger in November about electronic price tags (often called electronic shelf labels or ESLs). These digital displays allow companies to change prices automatically from a mobile app. Tlaib warned that this so-called “dynamic pricing” permits retailers to adjust prices based on their whims. Just as Uber raises prices during storms or rush hour, retailers like Kroger use ESLs to adjust prices based on factors like time of day or the weather. Supermarkets could conceivably mine a shopper’s personal data to set prices as high as possible. “My concern is that these tools will be abused in the pursuit of profit, surging prices on essential goods in areas with fewer and fewer grocery stores,” Tlaib wrote.
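To make the concern concrete, here is a deliberately simple sketch of the kind of time- and weather-driven rule that electronic shelf labels make possible; the factors and multipliers are invented for illustration and are not Kroger's actual pricing logic.

```python
# Hypothetical dynamic-pricing rule of the kind critics describe.
def shelf_price(base_price: float, hour: int, is_hot_day: bool) -> float:
    """Return the price pushed to the electronic shelf label right now."""
    multiplier = 1.0
    if 16 <= hour <= 19:   # evening rush: shoppers are less price-sensitive
        multiplier *= 1.10
    if is_hot_day:         # e.g. surge bottled water or ice cream
        multiplier *= 1.15
    return round(base_price * multiplier, 2)

# Example: a $3.99 item at 5 p.m. on a hot day would display as $5.05.
```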

Keep reading

US Expands Biometric Technology in Airports Despite Privacy Concerns

Biometric technology is being rolled out at US airports at an unprecedented pace, with plans to extend these systems to hundreds more locations in the coming years. The Transportation Security Administration (TSA) is driving a significant push toward facial recognition and other biometric tools, claiming improved efficiency and security. However, the expansion has sparked growing pushback, with privacy advocates and lawmakers voicing concerns about data security, transparency, and the potential for misuse of such technology.

US Customs and Border Protection (CBP) has already implemented its Biometric Facial Comparison system at 238 airports, including 14 international locations. This includes all CBP Preclearance sites and several major departure hubs. CBP says its Biometric Exit program is rapidly gaining traction, with new airport partners joining monthly and positive feedback reported from passengers.

Meanwhile, the TSA has equipped nearly 84 airports with its next-generation Credential Authentication Technology (CAT-2) scanners, which incorporate facial recognition. This rollout is part of a broader effort to bring biometrics to over 400 airports nationwide. These advancements are detailed in a TSA fact sheet aimed at building public awareness of the initiative.

Keep reading