Airlines urge senators to reject bill limiting facial recognition

A group representing several major airlines alongside travel companies and airports is opposing a Senate bill that would require the Transportation Security Administration (TSA) to generally use manual ID verification at security checkpoints instead of facial recognition.

The bill, introduced by Sen. Jeff Merkley (D-Ore.), would broadly restrict TSA’s ability to use biometrics and facial recognition, carving out a few exemptions for the agency’s PreCheck and other Trusted Traveler programs. Passengers may still opt in to the use of facial recognition at the checkpoint.

In a letter Monday to Sens. Ted Cruz (R-Texas) and Maria Cantwell (D-Wash.), the air industry groups said the bill was a “step backward” and that facial recognition technology made security screenings far more efficient.

“The future of seamless and secure travel relies on the appropriate use of this technology to ensure security effectiveness and operational efficiency as daily travel volume continues to rise,” they wrote. “We are concerned that the vague and confusing exceptions to this blanket ban will have major consequences for the identity verification process, screening operations, and trusted traveler enrollment programs.”

Cruz and Cantwell are their parties’ highest-ranking members of the Senate Commerce, Science and Transportation Committee, which is scheduled to mark up the bill Wednesday.

In addition to limiting the use of facial recognition, Merkley’s bill would also require TSA to delete most images collected at checkpoints within 24 hours of a passenger’s departure.

Travelers going through a TSA checkpoint are generally able to opt out of facial recognition, the agency says. Merkley has argued the agency’s enforcement is inconsistent, posting on social media in February about his difficulties navigating the policy at Reagan Washington National Airport.

“This is big government coming to take away your privacy, trying to set up a national surveillance system,” the Oregon Democrat said in February. 

The airlines, however, warned that restricting the use of facial recognition could slow down security and divert TSA’s resources toward maintaining officer staffing, rather than focusing on automated innovations. The group also said it felt it had been insufficiently consulted on the legislation, “despite the major impact the bill would have on aviation security, airports, airlines, travelers, and technology companies.”

London is the Testing Lab for Big Brother Mass Facial Scanning Tech

Since the start of 2024, the Metropolitan Police has been quietly transforming London into a testing ground for live facial recognition (LFR).

Depending on who you ask, this is either a technological triumph that’s making the capital safer or a mass surveillance experiment that would make any privacy advocate wince.

The numbers are eye-watering: in just over 18 months, the Met has scanned the faces of around 2.4 million people. And from that sea of biometric data, they’ve made 1,035 arrests. That’s a hit rate of 0.04%. Or, to put it plainly, more than 99.9% of those scanned had done absolutely nothing wrong.
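The arithmetic behind those figures is straightforward. A quick sketch using the article's own numbers (2.4 million scans, 1,035 arrests):

```python
# Figures reported in the article.
scanned = 2_400_000   # faces scanned by the Met since the start of 2024
arrests = 1_035       # arrests resulting from those scans

hit_rate = arrests / scanned
print(f"hit rate: {hit_rate:.2%}")            # ~0.04%
print(f"no match / no arrest: {1 - hit_rate:.3%}")  # more than 99.9%
```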

The police, of course, are eager to present this as a success story. Lindsey Chiswick, who oversees the Met’s facial recognition program, calls it a game-changer. “This milestone of 1,000 arrests is a demonstration of how cutting-edge technology can make London safer by removing dangerous offenders from our streets,” she said.

Of those arrested, 773 were charged or cautioned. Some were suspects in serious cases, including violent crimes against women and girls.

But here’s where things get complicated. To secure those 1,000 arrests, millions of innocent people have had their faces scanned and processed.

What’s being billed as precision policing can start to look more like casting an enormous net and hoping you catch something worthwhile.

ICE Is Using A New Facial Recognition App To Identify People, Leaked Emails Show

Immigration and Customs Enforcement (ICE) is using a new mobile phone app that can identify someone based on their fingerprints or face by simply pointing a smartphone camera at them, according to internal ICE emails viewed by 404 Media. The underlying system used for the facial recognition component of the app is ordinarily used when people enter or exit the U.S. Now, that system is being used inside the U.S. by ICE to identify people in the field.

The news highlights the Trump administration’s growing use of sophisticated technology for its mass deportation efforts and ICE’s enforcement of its arrest quotas. The document also shows how biometric systems built for one purpose can be repurposed for another, a long-standing fear among civil liberties advocates and a frequent critique of facial recognition tools.

“Face recognition technology is notoriously unreliable, frequently generating false matches and resulting in a number of known wrongful arrests across the country. Immigration agents relying on this technology to try to identify people on the street is a recipe for disaster. Congress has never authorized DHS to use face recognition technology in this way, and the agency should shut this dangerous experiment down,” Nathan Freed Wessler, deputy director of the American Civil Liberties Union’s Speech, Privacy, and Technology Project, told 404 Media in an email.

“The Mobile Fortify App empowers users with real-time biometric identity verification capabilities utilizing contactless fingerprints and facial images captured by the camera on an ICE issued cell phone without a secondary collection device,” one of the emails, which was sent to all Enforcement and Removal Operations (ERO) personnel and seen by 404 Media, reads. ERO is the section of ICE specifically focused on deporting people.

The idea is for ICE to use this new tool to identify people whose identity ICE officers do not know. “This information can be used to identify unknown subjects in the field,” the email continues. “Officers are reminded that the fingerprint matching is currently the most accurate biometric indicator available in the application,” it adds, indicating that the fingerprint functionality is more accurate than the facial recognition component.

The emails also show the app has a “training range,” a feature that lets ICE officers practice capturing facial images and fingerprints in a “training non-live environment.”

A video posted to social media this month shows apparent ICE officers carefully pointing their phones at a protester in his vehicle, but it is not clear if the officers were taking ordinary photos or using this tool.

Broadly, facial recognition tools work by taking one image to be tested and comparing it to a database of other images. Clearview AI, for example, a commercially available facial recognition tool which is used by law enforcement but which doesn’t appear to be related to this ICE tool, compares a photo to a massive database of peoples’ photos scraped from social media and the wider web.
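In practice, that one-to-many comparison is usually done on numeric "embeddings" derived from each face rather than on raw pixels. The following is a minimal illustrative sketch of the matching step only; the embedding vectors, names, and threshold are invented, and real systems use learned models to produce the embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(probe, database, threshold=0.9):
    """Compare a probe embedding against every entry in the database;
    return (name, score) pairs above the match threshold, best first."""
    hits = [(name, cosine_similarity(probe, emb)) for name, emb in database.items()]
    hits = [(name, score) for name, score in hits if score >= threshold]
    return sorted(hits, key=lambda h: -h[1])

# Toy database of pre-computed face embeddings (entirely made up).
database = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.1, 0.8, 0.5],
}
probe = [0.88, 0.12, 0.31]  # embedding of the image being tested
print(search(probe, database))  # person_a scores near 1.0; person_b falls below the threshold
```

The key design point is the threshold: set it low and the system produces more false matches (the wrongful-arrest scenario critics describe); set it high and it misses genuine matches.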

Guilt by Algorithm: Woman Wrongly Accused of Shoplifting Due to Facial Recognition Error

A woman was left “fuming” after being erroneously accused of stealing toilet paper and ejected from two Home Bargains stores in Greater Manchester, UK, due to an apparent mix-up with a facial recognition system designed to prevent shoplifting.

BBC News reports that Danielle Horan, a makeup business owner, found herself in a distressing situation when she was escorted out of Home Bargains branches in Salford and Manchester, without initially being given any explanation for her removal. It was later discovered that Horan had been falsely accused of stealing approximately £10 worth of items after her profile was added to a facial recognition watchlist used by the stores.

The incident unfolded on May 24, when Horan visited the Home Bargains store on Regent Road in Salford. As she was shopping, the store manager approached her and asked her to leave, causing Horan to feel embarrassed and confused in front of other customers. Despite her protestations, the manager advised her to contact Facewatch, the retail security firm that provides the facial recognition technology, directly.

Horan’s attempts to reach out to both Facewatch and Home Bargains initially proved futile. However, when she visited another Home Bargains store in Fallowfield, Manchester, with her 81-year-old mother on June 4, she was once again surrounded by staff and told to leave the premises as soon as she entered the store. This time, Horan stood her ground and demanded an explanation for her treatment.

After persistent emails to Facewatch and Home Bargains, Horan finally learned that there had been an allegation of theft involving approximately £10 worth of toilet rolls in early May. Somehow, her picture had been circulated to local stores, alerting them not to allow her entry. Horan checked her bank account and confirmed that she had, in fact, paid for the items in question.

Eventually, Facewatch responded to Horan, stating that a review of the incident showed she had not stolen anything. The firm acknowledged the distressing nature of Horan’s experience and noted that the retailer had since undertaken additional staff training. However, Horan’s ordeal had already taken a toll on her mental well-being, causing anxiety and stress as she questioned her actions and felt sick to her stomach for a week.

Biometric Surveillance Expands: American Airlines Rolls Out Facial Recognition at Four Major Airports

American Airlines has begun using facial recognition to verify passenger identities at airport security, further embedding biometric technology into the air travel experience. The airline’s new Touchless ID program, now live at several major airports, allows select travelers to move through TSA PreCheck without showing ID or boarding passes.

As of May 29, travelers passing through Ronald Reagan Washington National, LaGuardia, Hartsfield-Jackson Atlanta, and Salt Lake City International can now confirm who they are simply by standing in front of a camera. That image is instantly compared against official federal photo databases such as passports or Global Entry records. If there’s a match, the traveler proceeds; no physical documents required.

This identity-verification option is available only to American Airlines AAdvantage members who are 18 or older, have a valid passport, and have an active TSA PreCheck membership with a Known Traveler Number. Users can enroll through the airline’s website or app, and participation lasts for a year, with the freedom to opt out and revert to standard ID screening at any time.

The integration of facial recognition at TSA checkpoints may seem like a convenience upgrade, but it introduces concrete privacy risks that go far beyond the airport.

Cities nationwide are quietly deploying facial recognition technology to track your every move

Police in cities across America want to deploy AI-driven facial-recognition technology that’s capable of tracking and identifying every human being who enters public spaces in real time.

Even politicians in some cities are calling for a pause or an outright ban to keep this technology from ever getting into the hands of cops. But the battle is shaping up to be a big one in cities nationwide, and if I were a betting man I would put my money on the technocrats and the cops. They will likely win out over the few politicians and taxpaying citizens who are concerned about privacy and civil liberties. They almost always do. They have the money and the media propaganda machine on their side.

According to an article in Biometric Update, two-thirds of Milwaukee’s city council says no, they don’t want this technology given to cops. An article in the Milwaukee Journal-Sentinel says 11 of 15 city alderpersons signed a letter opposing use of the facial recognition technology by the Milwaukee Police Department, citing concerns about bias, ethics and potential overreach.

Below is an excerpt from the article in Biometric Update, and notice the rationale — it’s always the same whenever technocrats are involved: Safety, speed and efficiency.

Milwaukee police currently don’t have a facial recognition system, but they want one and have tested the technology. They say it makes solving crimes faster and “can be done with the appropriate parameters in place to ensure that the use will not violate individual civil rights.” They say it would not be, and has never been, used exclusively as probable cause to arrest someone. They have pledged to engage in public consultation as part of any formal acquisition process.

Nonetheless, the Council’s letter, written “in strong opposition to the deployment of facial recognition technology by the Milwaukee Police Department,” says that “while we understand the desire to enhance public safety and the promises people have made for this emerging technology, we believe these benefits are significantly outweighed by the risks.”

The article goes on to note that the council’s letter “names potential overreach by the administration of President Donald Trump as a risk factor, as well as studies showing that the majority of facial-recognition algorithms are more likely to misidentify people with darker skin, women and the elderly.”

How absurdly shortsighted that their major concern is Trump using this technology. This suggests they’d be perfectly fine with facial-recognition being deployed if we just had a different person in the White House, someone with a “D” in front of their name like Gavin Newsom or Kamala Harris.

The American Civil Liberties Union of Wisconsin has asked the Milwaukee Council to adopt a two-year pause on any new surveillance technology across city services, including police.

New Orleans Police Secretly Used Prohibited Facial Recognition Surveillance for Years

The New Orleans Police Department (NOPD) secretly received real-time, AI-generated alerts from 200 facial recognition cameras throughout the city for two years, despite a city ordinance barring generalized surveillance of the public.

“Police increasingly use facial recognition software to identify unknown culprits from still images, usually taken by surveillance cameras at or near the scene of a crime,” an exposé by The Washington Post explains. However, “New Orleans police took this technology a step further,” automatically alerting officers with real-time updates of names and locations of possible matches of wanted suspects from a private network of cameras through a mobile app. 

“This is the facial recognition technology nightmare scenario that we have been worried about,” Nathan Freed Wessler, a deputy director for the American Civil Liberties Union’s Speech, Privacy, and Technology project, told the Post. “This is the government giving itself the power to track anyone—for that matter, everyone—as we go about our lives walking around in public.” According to Wessler, New Orleans is the first known instance in which a major American city has used artificial intelligence to identify people through live footage for the purpose of making arrests.

The use of these automatic alerts may have violated a city ordinance meant to protect the public’s privacy from a generalized surveillance tool and prevent wrongful arrests due to software errors. 

Passed in 2022 in response to New Orleans’ post-pandemic crime wave, the Surveillance Technology and Data Protection Ordinance removed a previous prohibition on surveillance technology in criminal investigations to increase public safety. Mayor LaToya Cantrell said at the time that the NOPD needed “every tool available at their disposal” to keep the city’s “residents, businesses and visitors safe.” However, the ordinance stopped short of allowing the NOPD to utilize a “face surveillance system”—defined as “any computer software or application that performs face surveillance”—while limiting data collection to “only the minimum amount of personal information needed to fulfill a narrow well-defined purpose.”  

While violent crime in New Orleans has declined since 2022, so have the crime rates in most major American cities that do not use real-time facial recognition surveillance systems. 

Anne Kirkpatrick, superintendent of the NOPD since September 2023, paused the automatic alerts in April after learning about potential legal problems with using the system. Records obtained by the Post reveal that Kirkpatrick sent an email to Project NOLA, the nonprofit that provides the NOPD with facial recognition services, on April 8 stating “that the automated alerts must be turned off until she is ‘sure that the use of the app meets all the requirements of the law and policies.'” The network of cameras remains in place. 

While automatic pings of potential suspect matches to NOPD officers are paused, Kirkpatrick maintains that facial recognition technology is essential to law enforcement. On May 16, 10 inmates escaped from the New Orleans jail, prompting a manhunt (five inmates remain at large). Facial recognition is credited with the capture of two of the escaped inmates. Kirkpatrick told WVUE, the local Fox affiliate, that such a situation is “the exact reason facial recognition technology is so critical and well within our boundaries of the ordinance here.” Bryan Lagarde, Project NOLA’s executive director, confirmed that NOPD is not currently using real-time, AI-generated alerts but is still utilizing facial recognition technology and footage from 5,000 cameras across New Orleans to track and apprehend the escapees. Lagarde described to WVUE an instance in which officers narrowly missed an inmate by a matter of minutes, insinuating that automated alerts might be necessary to protect public safety, despite the cost to privacy. 

New York Is Quietly Rolling Out Precrime Surveillance Tech

Picture this: it’s rush hour in New York City. A guy in a Mets cap mutters to himself on the F train platform, pacing in tight circles. Nearby, a woman checks her phone five times in ten seconds. Overhead, cameras are watching. Behind the cameras? A machine. And behind that machine? An army of bureaucrats who’ve convinced themselves that bad vibes are now a crime category.

Welcome to the MTA’s shiny new plan for keeping you safe: an AI surveillance system designed to detect “irrational or concerning conduct” before anything happens. Not after a crime. Not even during. Before. The sort of thing that, in less tech-horny times, might’ve been called “having a bad day.”

MTA Chief Security Officer Michael Kemper, the man standing between us and a future where talking to yourself means a visit from the NYPD, is calling it “predictive prevention.”

“AI is the future,” Kemper assured the MTA’s safety committee.

So far, the MTA insists this isn’t about watching you, per se. It’s watching your behavior. Aaron Donovan, MTA spokesperson and professional splitter of hairs, clarified: “The technology being explored by the MTA is designed to identify behaviors, not people.”

And don’t worry about facial recognition, they say. That’s off the table. For now. Just ignore the dozens of vendors currently salivating over multimillion-dollar public contracts to install “emotion detection” software that’s about as accurate as your aunt’s horoscope app.

Asda launches massive trial of live facial recognition technology that can pick up on thieves in SECONDS in an attempt to combat shoplifting ‘epidemic’

In a move branded ‘disproportionate’ and ‘chilling’ by anti-surveillance groups, the retailer is introducing the scheme in five shops across the Manchester area.

The technology has been integrated into Asda’s existing CCTV network and works by scanning images and comparing the results to a known list of individuals who have previously committed criminal activity in one of its stores.

If the automated system finds a match, head office security conducts a check within seconds and immediately notifies the store in question.

The trial is just one of a battery of measures being taken by major stores to combat an ‘epidemic’ of retail crime – just as plummeting conviction rates have led to accusations that shoplifters are able to ‘act with impunity’.

They include Co-op, which has installed ‘fortified’ kiosks featuring toughened screens and keycode-controlled entry in hundreds of stores.

It is also trialling AI that uses CCTV to track suspicious behaviour.

Meanwhile, Tesco controversially introduced weighing scales at its Gateshead store to check that customers using ‘Scan as you shop’ aren’t taking home extra goods.

Facial Recognition Company Clearview Attempted to Buy Social Security Numbers and Mugshots for its Database

Controversial facial recognition company Clearview AI attempted to purchase hundreds of millions of arrest records including social security numbers, mugshots, and even email addresses to incorporate into its product, 404 Media has learned. 

For years, Clearview AI has collected billions of photos from social media websites including Facebook, LinkedIn and others and sold access to its facial recognition tool to law enforcement. The collection and sale of user-generated photos by a private surveillance company to police without that person’s knowledge or consent sparked international outcry when it was first revealed by the New York Times in 2020. 

New documents obtained by 404 Media reveal that Clearview AI spent nearly a million dollars in a bid to purchase “690 million arrest records and 390 million arrest photos” from all 50 states from an intelligence firm. The contract further describes the records as including current and former home addresses, dates of birth, arrest photos, social security and cell phone numbers, and email addresses. Clearview attempted to purchase this data from Investigative Consultant, Inc. (ICI), which billed itself as an intelligence company with access to tens of thousands of databases and the ability to create unique data streams for its clients. The contract was signed in mid-2019, when Clearview AI was quietly collecting billions of photos off the internet and was still relatively unknown.

Ultimately, the entire deal fell apart after Clearview and ICI clashed over the utility of the data, with each company filing breach of contract claims. The dispute went to arbitration, where such claims are commonly settled privately. The arbiter sided with Clearview AI in 2024 and ordered ICI to return the contract money. To date, ICI has not paid Clearview, and the company is now seeking a court order to enforce the arbiter’s ruling. ICI’s president, Donald Berlin, has previously been accused in a lawsuit of fabricating intelligence reports and of libel. Clearview currently advertises to customers that its technology “includes the largest known database of 50+ billion facial images sourced from public-only web sources, including news media, mugshot websites, public social media, and many other open sources,” and Clearview has previously told customers that it was “working to acquire all U.S. mugshots nationally from the last 15 years.”

ICI and Clearview did not respond to multiple requests for comment. 
