Biometric Surveillance Expands: American Airlines Rolls Out Facial Recognition at Four Major Airports

American Airlines has begun using facial recognition to verify passenger identities at airport security, further embedding biometric technology into the air travel experience. The airline’s new Touchless ID program, now live at several major airports, allows select travelers to move through TSA PreCheck without showing ID or boarding passes.

As of May 29, travelers passing through Ronald Reagan Washington National, LaGuardia, Hartsfield-Jackson Atlanta, and Salt Lake City International can confirm who they are simply by standing in front of a camera. That image is instantly compared against official federal photo records, such as passport or Global Entry photos. If there’s a match, the traveler proceeds; no physical documents required.
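Neither American nor the TSA has published implementation details, but identity checks of this kind typically reduce to comparing numeric “embeddings” of face images: the live capture is encoded by a face-recognition model, then scored against the embedding of the traveler’s enrolled government photo. A minimal sketch of that one-to-one comparison, where the function names and the 0.6 cutoff are this sketch’s own assumptions, not the deployed system’s:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_identity(live_embedding, enrolled_embedding, threshold=0.6):
    """1:1 check: does the camera capture match this traveler's enrolled photo?
    The threshold trades false accepts against false rejects; 0.6 is illustrative."""
    return cosine_similarity(live_embedding, enrolled_embedding) >= threshold
```

In a real deployment the embeddings come from a deep face-recognition model and the threshold is tuned against measured error rates; a failed match presumably falls back to the standard document check described above.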

This identity-verification option is available only to American Airlines AAdvantage members who are 18 or older, have a valid passport, and have an active TSA PreCheck membership with a Known Traveler Number. Users can enroll through the airline’s website or app, and participation lasts for a year, with the freedom to opt out and revert to standard ID screening at any time.

The integration of facial recognition at TSA checkpoints may seem like a convenience upgrade, but it introduces concrete privacy risks that go far beyond the airport.


Cities nationwide are quietly deploying facial recognition technology to track your every move

Police in cities across America want to deploy AI-driven facial-recognition technology that’s capable of tracking and identifying every human being who enters public spaces in real time.

Even politicians in some cities are calling for a pause or an outright ban on this technology ever getting into the hands of cops. But the battle is shaping up to be a big one in cities nationwide, and if I were a betting man I would put my money on the technocrats and the cops. They will likely win out over the few politicians and taxpaying citizens who are concerned about privacy and civil liberties. They almost always do. They have the money and the media propaganda machine on their side.

According to an article in Biometric Update, two-thirds of Milwaukee’s city council says no, they don’t want this technology given to cops. An article in the Milwaukee Journal-Sentinel says 11 of 15 city alderpersons signed a letter opposing use of the facial recognition technology by the Milwaukee Police Department, citing concerns about bias, ethics and potential overreach.

Below is an excerpt from the article in Biometric Update, and notice the rationale — it’s always the same whenever technocrats are involved: Safety, speed and efficiency.

Milwaukee police currently don’t have a facial recognition system — but they want one, and have tested the technology. They say it makes solving crimes faster, and “can be done with the appropriate parameters in place to ensure that the use will not violate individual civil rights.” They say it would not be, and had never been, used exclusively as probable cause to arrest someone. They have pledged to engage in public consultation as part of any formal acquisition process.

Nonetheless, the Council’s letter, written “in strong opposition to the deployment of facial recognition technology by the Milwaukee Police Department,” says that “while we understand the desire to enhance public safety and the promises people have made for this emerging technology, we believe these benefits are significantly outweighed by the risks.”

The article goes on to note that the council’s letter “names potential overreach by the administration of President Donald Trump as a risk factor, as well as studies showing that the majority of facial-recognition algorithms are more likely to misidentify people with darker skin, women and the elderly.”

How absurdly shortsighted that their major concern is Trump using this technology. This suggests they’d be perfectly fine with facial-recognition being deployed if we just had a different person in the White House, someone with a “D” in front of their name like Gavin Newsom or Kamala Harris.

The American Civil Liberties Union of Wisconsin has asked the Milwaukee Council to adopt a two-year pause on any new surveillance technology across city services, including police.


Third World Countries Continue Rolling Out Digital Biometric IDs as ID4AFRICA Agenda is Underway

Numerous third world countries are continuing the global push to implement biometric identification systems for their citizens and to incorporate these systems into digital ID smartphone apps with expanded tracking abilities. Behind this push are globalist organizations such as the World Bank and the World Economic Forum. Notably, a digital ID app could theoretically be shut off if the user fails to meet specific requirements, such as vaccination update schedules, although this has not happened yet.

The ID4AFRICA event, taking place May 20-23 in Ethiopia, “unites the global identity community to advance the ID4D agenda, and to explore how digital identity and aligned services can support Africa’s socio-economic development and individual empowerment.”

Language on the site says that ‘stakeholders’ are able to attend the event – the word ‘stakeholder’ likely referencing the new globalist economy of ‘stakeholder capitalism’ that the World Economic Forum promotes.

“The ID4Africa AGM welcomes participation from all stakeholder groups,” the ID4AFRICA about page said.

The ID4D agenda is an operation by the World Bank to roll out digital identification systems globally to help achieve the Sustainable Development Goals (SDGs), which are part of the United Nations’ Agenda 2030 plans for a world government.

“According to the ID4D Dataset, approximately 850 million people lack official ID, and 3.3 billion do not have access to digital ID for official transactions online. The ID4D Initiative works with countries towards reducing this staggering number, and ensure that identification systems are accessible, protect people’s rights and data, and capable of facilitating transactions in the digital age,” the organization said.

In addition to Ghana, Venezuela, Ivory Coast, Edo State in Nigeria, and Iraq, which take biometrics of voters, a number of other third world countries are now implementing systems which are wider reaching in both scope and use case.


New Orleans Police Secretly Used Prohibited Facial Recognition Surveillance for Years

The New Orleans Police Department (NOPD) secretly received real-time, AI-generated alerts from 200 facial recognition cameras throughout the city for two years, despite a city ordinance barring generalized surveillance of the public.

“Police increasingly use facial recognition software to identify unknown culprits from still images, usually taken by surveillance cameras at or near the scene of a crime,” an exposé by The Washington Post explains. However, “New Orleans police took this technology a step further,” automatically alerting officers with real-time updates of names and locations of possible matches of wanted suspects from a private network of cameras through a mobile app. 
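The Post’s description implies a one-to-many pipeline: every face detected by a camera is scored against a watchlist of wanted persons, and any score above a cutoff triggers a name-and-location push to officers’ phones. A minimal sketch of that loop — the threshold, field names, and data structures here are assumptions for illustration, not Project NOLA’s actual software:

```python
import math

ALERT_THRESHOLD = 0.75  # hypothetical cutoff; tuning it governs the false-alert rate

def cosine(a, b):
    # Cosine similarity between two face-embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def scan_frame(detections, watchlist):
    """Match each face detected in one camera frame against the watchlist
    (a 1:N search) and return the alerts that would be pushed in real time."""
    alerts = []
    for det in detections:  # det: {"camera_location": str, "embedding": [float, ...]}
        for name, wanted in watchlist.items():
            if cosine(det["embedding"], wanted) >= ALERT_THRESHOLD:
                alerts.append({"name": name, "location": det["camera_location"]})
    return alerts
```

Running a loop like this continuously over live feeds from hundreds of cameras is what distinguishes real-time tracking from the after-the-fact, still-image searches police typically run.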

“This is the facial recognition technology nightmare scenario that we have been worried about,” Nathan Freed Wessler, a deputy director for the American Civil Liberties Union’s Speech, Privacy, and Technology project, told the Post. “This is the government giving itself the power to track anyone—for that matter, everyone—as we go about our lives walking around in public.” According to Wessler, New Orleans is the first known instance in which a major American city has used artificial intelligence to identify people through live footage for the purpose of making arrests.

The use of these automatic alerts may have violated a city ordinance meant to protect the public’s privacy from a generalized surveillance tool and prevent wrongful arrests due to software errors. 

Passed in 2022 in response to New Orleans’ post-pandemic crime wave, the Surveillance Technology and Data Protection Ordinance removed a previous prohibition on surveillance technology in criminal investigations to increase public safety. Mayor LaToya Cantrell said at the time that the NOPD needed “every tool available at their disposal” to keep the city’s “residents, businesses and visitors safe.” However, the ordinance stopped short of allowing the NOPD to utilize a “face surveillance system”—defined as “any computer software or application that performs face surveillance”—while limiting data collection to “only the minimum amount of personal information needed to fulfill a narrow well-defined purpose.”  

While violent crime in New Orleans has declined since 2022, so have the crime rates in most major American cities that do not use real-time facial recognition surveillance systems. 

Anne Kirkpatrick, superintendent of the NOPD since September 2023, paused the automatic alerts in April after learning about potential legal problems with using the system. Records obtained by the Post reveal that Kirkpatrick sent an email to Project NOLA, the nonprofit that provides the NOPD with facial recognition services, on April 8 stating “that the automated alerts must be turned off until she is ‘sure that the use of the app meets all the requirements of the law and policies.'” The network of cameras remains in place. 

While automatic pings of potential suspect matches to NOPD officers are paused, Kirkpatrick maintains that facial recognition technology is essential to law enforcement. On May 16, 10 inmates escaped from the New Orleans jail, prompting a manhunt (five inmates remain at large). Facial recognition is credited with the capture of two of the escaped inmates. Kirkpatrick told WVUE, the local Fox affiliate, that such a situation is “the exact reason facial recognition technology is so critical and well within our boundaries of the ordinance here.” Bryan Lagarde, Project NOLA’s executive director, confirmed that NOPD is not currently using real-time, AI-generated alerts but is still utilizing facial recognition technology and footage from 5,000 cameras across New Orleans to track and apprehend the escapees. Lagarde described to WVUE an instance in which officers narrowly missed an inmate by a matter of minutes, insinuating that automated alerts might be necessary to protect public safety, despite the cost to privacy. 


Google Reaches $1.375 Billion Settlement with Texas Over Privacy Violations Involving Location Tracking and Biometric Data

Google has reached a $1.375 billion settlement with Texas over allegations the tech giant intruded on user privacy by collecting data without proper consent. The resolution, announced by Texas Attorney General Ken Paxton, concludes two lawsuits centered on the company’s handling of sensitive information across several of its products.

The lawsuits focused on practices involving Google’s location tracking, biometric data collection, and its private browsing tool, Incognito mode. According to Paxton, the company engaged in prolonged surveillance of individuals’ movements, online activity, and even biometric identifiers like voiceprints and facial features, activities he claimed were conducted without user knowledge or agreement.

“In Texas, Big Tech is not above the law,” said Paxton. “For years, Google secretly tracked people’s movements, private searches, and even their voiceprints and facial geometry through their products and services. I fought back and won.”

Although the total settlement figure has been made public, specific terms remain undisclosed, and the state has not explained how the funds will be distributed.

Google has denied any wrongdoing and emphasized that the agreement resolves claims based on policies that have already been updated. “This settles a raft of old claims, many of which have already been resolved elsewhere, concerning product policies we have long since changed,” said Google spokesperson José Castañeda. He added, “We are pleased to put them behind us, and we will continue to build robust privacy controls into our services.”

The original lawsuits, filed in 2022, accused Google of circumventing user privacy settings, continuing to track locations despite users believing the feature was off. They also charged that the company’s so-called private browsing mode did not actually provide meaningful privacy and that Google had collected biometric data from Texans without obtaining legally required consent.


The Next Phase Of Surveillance? Getting Under Your Skin

AI and transhumanism: Hackable animals

My friends, let me introduce you to Yuval Noah Harari, a man chock-full of big ideas. He explained during the COVID crisis:

“COVID is critical because this is what convinces people to accept, to legitimize, total biometric surveillance. If we want to stop this epidemic, we need not just to monitor people, we need to monitor what’s happening under their skin.”

In a 60 Minutes interview with Anderson Cooper, Harari repeated this idea: “What we have seen so far is corporations and governments collecting data about where we go, who we meet, what movies we watch. The next phase is the surveillance going under our skin.”

He likewise told India Today, when commenting on changes accepted by the population during COVID-19:

“We now see mass surveillance systems established even in democratic countries which previously rejected them, and we also see a change in the nature of surveillance. Previously, surveillance was mainly above the skin; now we want it under the skin.

“Governments want to know not just where we go or who we meet. They want to know what’s happening under our skin: what is our body temperature; what is our blood pressure; what is our medical condition?”

Harari is clearly a man who wants to … get under your skin. He just might succeed.

Another recent interview finds him waxing philosophical:

“Now humans are developing even bigger powers than ever before. We are really acquiring divine powers of creation and destruction. We are really upgrading humans into gods. We are acquiring, for instance, the power to re-engineer human life.”

As Kierkegaard once said of Hegel and the Absolute: when Harari talks about the future, he sounds like he’s going up in a balloon.

Forgive me, but a few last nuggets from professor Harari will round out the picture of his philosophy, and his lofty hopes and dreams:

“Humans are now hackable animals. You know, the whole idea that humans have this soul or spirit, and they have free will and nobody knows what’s happening inside me, so, whatever I choose, whether in the election or in the supermarket, that’s my free will — that’s over.”

Harari explains that to hack human beings, you need a lot of computing power and a lot of biometric data, neither of which was available until recently, with the advent of AI.


Asda launches massive trial of live facial recognition technology that can pick up on thieves in SECONDS in an attempt to combat shoplifting ‘epidemic’

In a move branded ‘disproportionate’ and ‘chilling’ by anti-surveillance groups, the retailer is introducing the scheme in five shops across the Manchester area.

The technology has been integrated into Asda’s existing CCTV network and works by scanning images and comparing them against a watchlist of individuals who have previously committed criminal activity in one of its stores.

If the automated system finds a match, head office security conducts a check within seconds and immediately reports it to the store in question.

The trial is just one of a battery of measures being taken by major stores to combat an ‘epidemic’ of retail crime – just as plummeting conviction rates have led to accusations that shoplifters are able to ‘act with impunity’.

They include Co-op, which has installed ‘fortified’ kiosks featuring toughened screens and keycode-controlled entry in hundreds of stores.

It is also trialling AI that uses CCTV to track suspicious behaviour.

Meanwhile, Tesco controversially introduced weighing scales at its Gateshead store to check that customers using ‘Scan as you shop’ aren’t taking home extra goods.


Finland’s Big Bet on Biometrics: Crime-Fighting Tool or Privacy Nightmare?

Finland has come out with a plan to expand the use of biometric data, via a new proposal from the country’s Interior Ministry.

Even as the push to introduce various forms of advanced biometric surveillance, including facial recognition, gains momentum in countries around the world, so does the pushback from civil rights and privacy campaigners, which ensures that such initiatives these days rarely fly under the radar.

Finland’s Interior Ministry announced on its website that the proposal aims to amend existing rules on biometric data stored by the police and the immigration service – stored, that is, in Finnish citizens’ ID cards, and registers containing biometric data of foreigners.

The government says the intent is not only to strengthen crime prevention – but also to “improve the conditions for using biometrics in law enforcement.”

In addition to data captured by facial recognition devices, the proposal covers DNA samples and fingerprints taken from suspects. This biometric data would then be matched against other records already held in law enforcement databases, for “crime prevention and investigative purposes.”

The groups keeping a close eye on this development are warning about issues that crop up time and again around similar legislative efforts: wording that allows for future “mission creep,” as well as an unsatisfactory level of provisions guarding against abuse of such highly sensitive personal information.

Currently, the Finnish proposal has yet to be presented to lawmakers – the Interior Ministry is seeking comments first. And while the announcement of the proposal explains the intent driving it, it is short on detail regarding the elephant in the room – privacy safeguards.


Is ATF Using AI-Powered Facial Recognition To ID Gun Owners?

Several reports now indicate that ATF has access to and utilizes facial recognition technology to identify gun owners. In fact, two Government Accountability Office reports confirm that ATF does have access to various facial recognition systems, including Clearview AI, Vigilant Solutions, and other systems owned by other federal, state, local, tribal, or territorial law enforcement agencies.

The Government Accountability Office described the Clearview AI database as: 

“A web-based facial recognition service using 30+ billion facial images sourced from publicly available websites, including news media, mugshot, and social media websites, among others.”

A 2021 GAO report found that ATF did not have sufficient accountability mechanisms “to track what non-federal systems with facial recognition technology are used by employees.”

According to testimony given to the U.S. Commission on Civil Rights, ATF:

“Initially used these services without requiring their staff to take training on topics such as how the technology works, what photos are appropriate to use, and how to interpret the results.”

GAO scolded federal agencies, stating that they “must consider the potential impact of its use on civil rights and civil liberties because the potential for error and the potential to misidentify someone could lead to the arrest and prosecution of an innocent person.”

GAO was also concerned that government use of facial recognition technology can have “a chilling effect on an individual’s exercise of their First Amendment rights.”

Similarly, ATF’s use of Facial Recognition Technology could have a chilling effect on the People’s exercise of their Second Amendment rights.

According to GAO, ATF is reported to have conducted at least 549 facial recognition searches on gun owners between October 2019 and March 2022.

GAO indicates that “as of April 2023, ATF… reported that they had halted their use of such services.”

However, the most recent reports suggest otherwise.


EU AI Act Effectively Legalizes Biometric Mass Surveillance

On February 2, the EU AI Act, which came into force in August, reached its first compliance deadline. The provisions now in effect contain rules that allow the bloc to legalize biometric mass surveillance.

This is happening via Article 5, which on the face of it seeks to introduce protections against AI systems that represent “unacceptable risk.” But there are also exceptions, defined in the act as the “strictly necessary” cases in which those same systems are allowed to be used.

It is this that gave rise to the fear that one consequence of the AI Act is in fact to legalize some highly contentious deployments of biometric data-fueled mass surveillance.

Article 5 prohibits real-time remote biometric ID systems from being used by law enforcement in public spaces – but only “unless and in so far as such use is strictly necessary.”

The “strictly necessary” instances are described as those when law enforcement is searching for “specific victims” of crimes like abduction, trafficking, and sexual exploitation, but also when they look for missing persons.

The second definition gets less precise in scope as it allows for AI surveillance systems to be used to prevent crime. This includes a threat to life or physical safety of individuals that is deemed to be “specific, substantial, and imminent” – or threat of a terrorist attack that law enforcement decides is “genuine and present” but also – “genuine and foreseeable.”

Lastly, the AI Act exempts from its prohibitions, as “strictly necessary,” the following: “Localization or identification of a person suspected of having committed a criminal offense, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offenses.”
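The structure of Article 5, as summarized above, is a broad default prohibition with enumerated carve-outs; the critics’ worry is how much fits through them. A simplified, non-legal encoding of that structure — the category names are this sketch’s own shorthand, and the Act’s actual tests involve authorization procedures and far more detail:

```python
# Purposes the excerpt lists as "strictly necessary" exceptions to the ban on
# real-time remote biometric ID in public spaces. Names are illustrative shorthand.
STRICTLY_NECESSARY = {
    "targeted_victim_search",   # specific victims of abduction, trafficking, sexual exploitation
    "missing_person_search",
    "imminent_threat_to_life",  # "specific, substantial, and imminent" threats
    "terrorist_threat",         # "genuine and present" or "genuine and foreseeable"
    "criminal_suspect_search",  # localization/identification for investigation or prosecution
}

def realtime_biometric_id_permitted(purpose: str) -> bool:
    """Default is prohibition; a use is permitted only under an enumerated exception."""
    return purpose in STRICTLY_NECESSARY
```

Note how broad the last category is: almost any police investigation involves locating a suspect, which is precisely the "mission creep" critics point to.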
