Facial Recognition Shows Up in Public Housing, Small Cities

The race to make biometric surveillance commonplace is only getting faster, with systems going up in public housing and in municipalities far removed from big-city crime.

With that growth comes something residents worldwide have often been told was off the table: an all-seeing, always-analyzing sentinel that never stops recording what happens in the community.

The issue is again in the news, this time following a lengthy article in The Washington Post reporting on facial recognition systems being used in United States public housing.

Also, Context, a Thomson Reuters Foundation analytical publication, has shown how surveillance vendors are selling smaller cities on big-city facial recognition systems – and how residents are being cajoled into linking their own cameras to police networks.

Post reporters said they found six public housing centers whose boards have purchased surveillance cameras and computer servers. Some of those on the list also use biometric surveillance algorithms.

They were the Cincinnati Metropolitan Housing, Omaha Housing, Scott County (Virginia) Redevelopment & Housing, Jefferson County (Ohio) Housing and Grand Rapids (Michigan) Housing agencies.

Clearview Facial Recognition: A Perpetual Police Lineup

Clearview AI CEO Hoan Ton-That admitted that the company scraped 30 billion photos from Facebook and other social media platforms and used them in its massive facial recognition database accessible by law enforcement agencies across the U.S. Critics call the company’s database a “perpetual police lineup.” 

This is an example of the growing cooperation between private companies and government agencies in the ever-growing U.S. surveillance state.

The photos were collected from social media platforms without users’ permission or knowledge.

Clearview AI markets its facial recognition database as a tool allowing law enforcement to rapidly generate leads “to help identify suspects, witnesses and victims to close cases faster and keep communities safe.” According to Ton-That, law enforcement agencies across the U.S. have accessed the company’s database over 1 million times since 2017.

According to a CNN report last year, more than 3,100 U.S. agencies use Clearview AI, including the FBI and the Department of Homeland Security.

In a statement, Ton-That said, “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

While photo scraping might be legal, Facebook sent Clearview AI a cease-and-desist letter in 2020 for violating the platform’s terms of service. In an email to Insider, a Meta spokesperson said, “Clearview AI’s actions invade people’s privacy, which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services.”

Fight for the Future director of campaigns Caitlin Seeley George called Clearview “a total affront to people’s rights, full stop,” and said, “Police should not be able to use this tool.”

Clearview AI scraped 30 billion images from Facebook and other social media sites and gave them to cops: it puts everyone into a ‘perpetual police line-up’

A controversial facial recognition database, used by police departments across the nation, was built in part with 30 billion photos the company scraped from Facebook and other social media users without their permission, the company’s CEO recently admitted, creating what critics called a “perpetual police line-up,” even for people who haven’t done anything wrong. 

The company, Clearview AI, boasts of its potential for identifying rioters at the January 6 attack on the Capitol, saving children being abused or exploited, and helping exonerate people wrongfully accused of crimes. But critics point to privacy violations and wrongful arrests fueled by faulty identifications made by facial recognition, including cases in Detroit and New Orleans, as cause for concern over the technology. 

Clearview took photos without users’ knowledge, its CEO Hoan Ton-That acknowledged in an interview last month with the BBC. Doing so allowed for the rapid expansion of the company’s massive database, which is marketed on its website to law enforcement as a tool “to bring justice to victims.”

Ton-That told the BBC that Clearview AI’s facial recognition database has been accessed by US police nearly a million times since the company’s founding in 2017, though the relationships between law enforcement and Clearview AI remain murky and that number could not be confirmed by Insider. 

In a statement emailed to Insider, Ton-That said “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

The company’s CEO added: “Clearview AI’s database is used for after-the-crime investigations by law enforcement, and is not available to the general public. Every photo in the dataset is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person.”

Giving up biometrics at US airports soon won’t be optional, transport security chief says

The chief of the Transportation Security Administration (TSA), David Pekoske, said that the agency is considering biometric technology to reduce traveler processing times and reduce the number of screening officers. He made the comments during a session on aviation security at the South by Southwest conference.

Pekoske noted that the TSA’s role is maintaining the security of the transportation system and staying ahead of threats. For those reasons, it is “critically important that this system has as little friction as it possibly can, while we provide for safety and security.”

The TSA has been relying on biometric technology in its identity verification process. According to the agency, the newest technology it has been using is over 99% effective and does not have the trouble identifying darker-skinned people that older systems did.

“We’re upgrading our camera systems all the time, upgrading our lighting systems,” Pekoske said. “[We’re] upgrading our algorithms, so that we are using the very most advanced algorithms and technology we possibly can.”

Pekoske said that the agency will ensure it remains transparent with the public about the data that is taken, what it is used for, and for how long it will be stored. For now, he said that travelers can opt out of processes they are not comfortable with.

According to The Dallas Morning News, giving up biometric data for travel will eventually not be optional.

Pentagon, FBI Collaborated On AI, Facial Recognition Tech For Federal Agencies, Documents Show

The Department of Defense (DOD) and the FBI collaborated on an artificial intelligence-driven facial recognition technology program provided to at least six federal agencies and a Pentagon agency that supports civilian police forces, The Washington Post reported.

The facial recognition software could be used to identify individuals whose features were captured by drones and CCTV cameras, the Post reported, citing documents obtained through a Freedom of Information Act request as part of an ongoing lawsuit the American Civil Liberties Union (ACLU) filed against the FBI. The documents reveal federal authorities were more deeply involved in development of the technology than was previously known, sparking concerns over Americans’ privacy rights.

“Americans’ ability to navigate our communities without constant tracking and surveillance is being chipped away at an alarming pace,” Democratic Sen. Ed Markey of Massachusetts told the Post. “We cannot stand by as the tentacles of the surveillance state dig deeper into our private lives, treating every one of us like suspects in an unbridled investigation that undermines our rights and freedom.”

Face Biometrics Getting Deeper into Policing, Sparking Concerns

Those worried about the use of facial recognition by law enforcement have warned that the technology could become entrenched in bureaucracies, growing in use and becoming harder for anyone outside government to question.

A trio of recent reports – from Germany, the Netherlands, and the United Kingdom – seems to bear that out.

In Germany, civil rights activist Matthias Monroy, writing on his own blog, says use of a facial recognition system for identifying unknown people grew “dramatically” from 2021 to 2022.

The database reportedly belongs to Germany’s federal police. According to Monroy, it was searched about 7,700 times in 2022, compared to 6,100 times in 2021.

About 2,800 people were identified using the police’s algorithm last year, compared to 1,300 in 2021.

The advocate says that the Federal Ministry provided the information in response to a question from a party in parliament. He also said that, according to the ministry, the same data has not been received from the German states.

The images are gathered from CCTV cameras and from phones police use to record the faces of criminal suspects. Asylum seekers’ images are held in the same database.

Reportedly, the number of facial images in the police database grew by about 1.5 million last year, largely because only 400,000 images were deleted.

If German police are starting to hold on to photos longer, they might be in good company.

Trade publication ComputerWeekly is reporting that some in the UK feel the government is adopting a biometrics “culture of retention.”

Dystopia Down Under: Facial Mood-Tracking CCTV Cameras Deployed at Mardi Gras Pride Parade

Surveillance cameras with the ability to measure a crowd’s “mood” and track the number of people by counting cell phone frequencies were deployed at the Lesbian and Gay Mardi Gras Parade on Saturday in Sydney, Australia.

The CCTV technology from the Dynamic Crowd Measurement firm was used to monitor Oxford Street in Sydney as the city’s LGBTQ-themed Mardi Gras parade was held for the first time since 2019, having been cancelled for the past three years due to the Chinese coronavirus.

According to a report from the Sydney Morning Herald, the cutting-edge cameras came equipped with the ability to track the mood of the crowd, with software being able to track facial expressions and determine whether they are displaying signs of happiness, anger, or neutrality.

The cameras also come with the ability to measure crowd density by counting the number of cell phone frequencies emitted in a given area. It was estimated that around 12,000 people attended on Saturday.
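The crowd-density technique described above – counting distinct phone signals rather than raw detections – can be sketched in a few lines. This is a minimal illustration of the general idea only; the device identifiers and the per-person correction factor below are hypothetical, not details of the Dynamic Crowd Measurement system.

```python
def estimate_crowd(detected_signals, phones_per_person=1.2):
    """Estimate crowd size from phone signals detected in an area.

    Repeated detections of the same device are counted once, and the
    total is divided by an assumed (hypothetical) average number of
    signal-emitting devices carried per person.
    """
    unique_devices = set(detected_signals)
    return round(len(unique_devices) / phones_per_person)

# Example: six detections, but only four distinct devices.
signals = ["aa:01", "aa:02", "aa:01", "bb:07", "aa:02", "cc:11"]
print(estimate_crowd(signals))  # prints 3 (4 devices / 1.2, rounded)
```

Deduplicating by device identifier is what distinguishes this approach from simply counting signal detections, which would overcount anyone whose phone is detected repeatedly.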

The Air Force’s Drones Can Now Recognize Faces. Uh-Oh.

The U.S. Air Force now has the capability to use facial recognition on drones that could target specific people. Special operations forces can use the drones to gather intelligence and to aid in other missions, according to a contract first spotted by New Scientist. It’s part of a growing movement to develop automated weaponry that raises legal and ethical questions.

The drone software maker, Seattle-based firm RealNetworks, claims the uncrewed craft will use artificial intelligence (AI) to fly itself and discriminate between friend and foe. The company has said that its software can also be used for rescue missions, perimeter protection, and domestic search operations.

The Air Force’s system isn’t the only drone platform to attempt facial recognition. An Israeli company is working on a drone that uses AI to find the best angles for capturing faces.

Another Facial Recognition Snafu Leads to False Arrest, Wrongful Imprisonment; ACLU Asks Lawmakers to Ban Police Use

Instead of enjoying a late Thanksgiving meal with his mother in Georgia, Randal Reid spent nearly a week in jail in November after he was falsely identified as a luxury purse thief by Louisiana authorities using facial recognition technology.

That’s according to Monday reporting by NOLA.com, which caught the attention of Fight for the Future, a digital rights group that has long advocated against law enforcement and private entities using such technology, partly because of its shortcomings and the risk of outcomes like this.

“So much wrong here,” Fight for the Future said Tuesday, sharing the story on Twitter. The group highlighted that many cops can use facial recognition systems without publicly disclosing it, and anyone’s “life can be upended because of a machine’s mistake.”

‘Power Run Amok’: Madison Square Garden Uses Face-Scanning Tech to Remove Perceived Adversaries

Barbara Hart was celebrating her wedding anniversary and waiting for Brandi Carlile to take the stage at Madison Square Garden on Oct. 22, when a pair of security guards approached her and her husband by their seats and asked the couple to follow them. At first, Hart tells Rolling Stone, she was excited, thinking it was some sort of surprise before the concert started. Her excitement turned to anxiety soon after, however, as she spoke with security and gathered that she’d been identified using facial-recognition technology. Then they escorted her out of the venue.

Hart was initially confused, having no idea why she was flagged. She says security informed her that she was being ejected because of her job as an attorney at Grant & Eisenhofer, a law firm currently litigating against Madison Square Garden’s parent company in a Delaware class-action suit involving several groups of shareholders.

Madison Square Garden Entertainment, owned by James Dolan (who has been known to kick out fans who anger him), confirms to RS that it enacted a policy in recent months forbidding anyone in active litigation against the company from entry to the company’s venues — which include the New York arena that gives the company its name, along with Radio City Music Hall, Beacon Theatre, and the Chicago Theatre. The company’s use of facial recognition tools itself dates back to at least 2018, when the New York Times reported on it; anyone who enters the venue is subject to scanning, and that practice now seems to coincide with the policy against opposing litigants.

“This is retaliatory behavior of powerful people against others, and that should be concerning to us,” says Hart, who also spoke of the incident in a sworn affidavit last month, as Reuters reported. Hart recalls that she declined to give MSG security her ID, but that they were able to correctly identify her anyway; she says security mentioned her picture appearing on Grant & Eisenhofer’s website, leading her to the conclusion that facial recognition was involved. “It was a very eerie experience to be on the receiving end of at that moment.”
