The Greatest Trick Big Brother Ever Pulled

“The greatest trick the Devil ever pulled was convincing the world he didn’t exist” is a quote generally attributed to Charles Baudelaire – or possibly Keyser Söze depending on who you ask on the internet. Something similar can be said about Big Brother.

When you think about what our emerging surveillance state will look like, you think 1984. You imagine East Germany powered by Google and Amazon. You recall your favorite dystopian sci-fi film – or maybe horror stories of China’s social credit system. Thoughts of a frustrated middle-aged police chief from a mid-sized Midwestern town attempting to procure security cameras with innovative new features probably don’t come to mind. You definitely don’t think of a guy in a lawn chair jotting down the license plate numbers of passing vehicles in a notebook. And yet that’s partly how the surveillance state is going to emerge: by creeping in, one small town at a time.

Whether a surveillance state is the end goal is hard to say. The police chief of Pawnee, Indiana probably isn’t plotting the development of his own mini-Oceania. But 18,000-plus mini-Oceanias operating across multiple platforms, with varying degrees of local and national integration, is undoubtedly the direction in which we are heading. Salespeople are peddling shiny new surveillance gadgets to cities big and small, making often unverified but intuitively appealing claims about how their devices will decrease crime or prove useful as investigative tools.

Facial recognition tends to be the surveillance gadget that receives the most attention these days. You’ve seen it in movies and maybe feel some unease over visions of government agents sitting in a penumbrous room illuminated only by the faint glow of countless monitors, little boxes tracking the faces of every person walking down a busy city street. By now, you’ve probably also heard of facial recognition being used for relatively petty purposes, or leading to incidents in which innocent people were harassed or arrested because a program made a mistake. Maybe you’ve even been following the efforts to ban the technology.

Yet, other surveillance gadgets that aren’t quite as sexy or as prevalent in pop culture manage to remain under the radar of even the most privacy-conscious as they are promoted through law enforcement peer referral programs organized by surveillance gadget companies seeking to have their devices in every town in America.

Keep reading

Facial recognition used after Sunglass Hut robbery led to man’s wrongful jailing, says suit

A 61-year-old man is suing Macy’s and the parent company of Sunglass Hut over the stores’ alleged use of a facial recognition system that misidentified him as the culprit behind an armed robbery and led to his wrongful arrest. While in jail, he was beaten and raped, according to his suit.

Harvey Eugene Murphy Jr was accused of robbing a Houston-area Sunglass Hut of thousands of dollars’ worth of merchandise in January 2022, though his attorneys say he was living in California at the time of the robbery. He was arrested on 20 October 2023, according to his lawyers.

According to Murphy’s lawsuit, an employee of EssilorLuxottica, Sunglass Hut’s parent company, worked with its retail partner Macy’s and used facial recognition software to identify Murphy as the robber. The image that was run through the facial recognition system came from low-quality cameras, according to the lawsuit. While the Houston police department was investigating the armed robbery, the EssilorLuxottica employee called police to say they could stop the investigation because the employee had identified one of the two robbers with the technology. The employee also said the system had pointed to Murphy as having committed two other robberies, according to the lawsuit.

When Murphy returned to Texas from California, he went to the department of motor vehicles (DMV) to renew his license. Murphy told the Guardian that within minutes of identifying himself to a DMV clerk, he was approached by a police officer who notified him there was a warrant out for his arrest for an aggravated robbery. Murphy said he was not told any details about his supposed crime except for the date the robbery occurred. He realized he had been in Sacramento, California, at the time of the robbery – more than a thousand miles away.

“I almost thought it was a joke,” Murphy said.

Still, he was arrested and taken to the local county jail, where he was held for 10 days before being transferred to and processed in Harris county jail.

After a few days at Harris county, his alibi was confirmed by both his court-appointed defense attorney and the prosecutor, and the charges against him were ultimately dropped, according to the lawsuit.

Murphy was never convicted of a crime. Nonetheless, he says his detainment left him with deep scars. He was brutally beaten and gang-raped by three other men in the jail hours before he was released, he alleges. They threatened to kill him if he tried to report them to the jail staff, according to Murphy. After the alleged attack, Murphy remained in the same cell as them until he was released.

“That was kind of terrifying,” Murphy said. “Your anxiety is up so high, you’re still shaking the entire time. And I just got up on my bunk and just faced the wall and was just praying that something would come through and get me out of that tank.”

“The attack left him with permanent injuries that he has to live with every day of his life,” the lawsuit reads. “All of this happened to Murphy because the Defendants relied on facial recognition technology that is known to be error prone and faulty.”

Murphy did not realize facial recognition technology may have been used as evidence against him until two weeks ago, when he began working with his attorney, Daniel Dutko.

Dutko said he discovered from police documents that the Sunglass Hut worker shared camera footage with Macy’s, which employees from the department store chain used to identify Murphy. After that, Macy’s and Sunglass Hut contacted the police together, according to Dutko. Though Macy’s has retail partnerships with the eyewear brand in several locations, Macy’s had no connection to this robbery as the Sunglass Hut in question is a standalone location, he said.

“We feel very comfortable saying facial recognition software is the only possible explanation, and it’s the only reason why [Sunglass Hut] would go to Macy’s to try to identify him,” Dutko said.

Keep reading

Miami Police Used Clearview AI Facial Recognition in Arrest of Homeless Man

Facial recognition technology is increasingly being deployed by police officers across the country, but the scope of its use has been hard to pin down.

In Miami, it’s used for cases big and exceedingly small, as one case Reason recently reviewed showed: Miami police used facial recognition technology to identify a homeless man who refused to give his name to an officer. That man was arrested, but prosecutors quickly dropped the case after determining the officer lacked probable cause for the arrest. 

The case was barely a blip in the daily churn of Miami’s criminal justice system, but it shows the spread of facial recognition technology and the use of retaliatory charges against those who annoy the police.

Lisa Femia, a staff attorney at the Electronic Frontier Foundation (EFF), which advocates for digital privacy rights, calls the case “a particularly egregious example of mission creep with facial recognition technology.”

“It’s often advertised as a way for law enforcement to solve the worst of the worst crimes,” Femia says. “And instead we have law enforcement here using it to harass the homeless.”

According to a police incident report, a man, who Reason is not identifying because he was ultimately not prosecuted, was sleeping on a bench in a parking garage at Miami International Airport on the morning of November 13, 2023, when he was approached by a Miami-Dade County police officer.

“While on routine patrol at the Miami International Airport I observed defendant sleeping on a bench in the Dolphin garage, covered with a blanket and unbagged personal items on airport luggage cart,” the officer wrote in his report. “The bench is provided for passengers waiting for vehicles to and from the airport. It is not designated for housing.”

The report notes that Miami-Dade police have been directed to address homelessness at the airport and that the officer initiated contact to see if the man had been previously issued a trespass warning.

The man didn’t have an ID, and he gave the officer a fake name and 2010 date of birth.

“Defendant was obviously not a 13-year-old juvenile,” the report says. “I provided defendant several opportunities to provide correct information and he refused.”

Under Florida law, police can demand identification from a pedestrian only when there is reasonable suspicion that they have committed a crime. For example, two Florida sheriff’s deputies were disciplined in 2022 after they arrested a legally blind man for refusing to show his ID.

This officer had other means at his disposal, though. “I identified defendant via facial recognition from Clearview, with assistance from C. Perez, analyst at the MDPD real time crime center,” the report says.

Keep reading

DATA SOLUTIONS PROVIDER TELUS INTERNATIONAL IS PAYING $50 FOR IMAGES OF KIDS TO TRAIN GOOGLE’S AI

In a recent initiative, Google and TELUS International, a subsidiary of the Canadian tech conglomerate TELUS, have collaborated to collect biometric data from children for age verification purposes. This project, running from November 2023 to January 2024, involved parents filming their children’s faces, capturing details such as eyelid shape, skin tone, and facial geometry. Parents who participated were paid $50 per child.

First reported by 404media, the project asked parents to take 11 short videos of their children wearing things like face masks or hats; it also requested footage of the children’s faces with no coverings at all. Each video had to be less than 40 seconds long, and participants were expected to spend 30 to 45 minutes on the task.

According to the summary document, which has now been taken down, a TELUS International moderator would be on a call while the parent took these videos of the child.

According to TELUS International, the purpose of this project was to capture a diverse range of biometric data to ensure that their customer’s services and products are representative of various demographics. Google told 404media that the goal was to enhance authentication methods, thus providing more secure tools for users. 

“As part of our commitment to delivering age-appropriate experiences and to comply with laws and regulations around the world, we’re exploring ways to help our users verify their age. Last year, TELUS helped us find volunteers for a project exploring whether this could be done via selfies. From there, Google collected videos and images of faces, clearly explaining how the content would be used, and, as with all research involving minors, we required parental consent for participants under the age of 18. We’ve also put strict privacy protections in place, including limiting the amount of time the data will be retained and providing all participants the option to delete their data at any time,” Google told 404media in a statement.

While this aligns with Google’s broader commitment to developing responsible and ethical facial recognition technology, the project has raised significant concerns regarding children’s privacy and consent.

Parents had to consent to Google and TELUS International collecting their child’s personal and biometric information in order to participate. This included the shape of their eyelids, the color of their skin and their “facial geometry.” According to the TELUS International summary, Google would then keep the data for five years at most, which, for some participants, would stretch into their early adulthood.

Keep reading

UK porn watchers could have faces scanned

Porn users could have their faces scanned to prove their age, with extra checks for young-looking adults, draft guidance from Ofcom suggests.

The watchdog has set out a number of ways explicit sites could prevent children from viewing pornography.

The average age at which children first view pornography is 13, a survey suggests.

Explicit website Pornhub said regulations requiring the collection of “highly sensitive personal information” could jeopardise user safety.

Privacy campaigners have also criticised the proposals, warning of “catastrophic” consequences if data from age checks is leaked.

A large chunk of the UK population watches online pornography – nearly 14 million people, according to a recent report by Ofcom.

But the ease of access to online pornography has also raised concerns that children are viewing explicit websites – with one in ten children seeing it by age nine, according to a survey by the Children’s Commissioner.

The Online Safety Act, which recently became law, requires social media platforms and search engines to protect children from harmful content online.

It will be enforced by Ofcom, which can issue large fines if firms fail to comply.

Ofcom has now outlined how it expects firms to become “highly effective” at complying with the new regulations, which come into force sometime in 2025.

Keep reading

Sharp Rise in Facial Recognition Use by Scottish Police, UK Protest Footage Scanned

The police in Scotland have tripled their use of retrospective facial recognition over the last five years, jumping from just under 1,300 searches in 2018 to nearly 4,000 in 2022.

The rising trend has continued during 2023, with more than 2,000 searches carried out in the first four months of the year, according to data obtained through a freedom of information request by the UK investigative journalism organizations Liberty Investigates and The Ferret.

The trend has been rising in other parts of the country. In 2014, the total number of searches using retrospective facial recognition by all police forces in the UK amounted to just 3,360. By 2022, that number jumped to 85,158, according to UK Home Office data.

The Scottish police rank fourth in use of the technology in the UK. The leader is the London Metropolitan Police, which accounted for 30 percent of the total, or 27,677 searches, last year.

The UK police have been using retrospective facial recognition to match faces captured with CCTV cameras with millions of images stored in the Police National Database. The practice has proved controversial as the database still contains many images of people who were released without charge.
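At a technical level, retrospective systems of this kind typically reduce each face image to a numeric embedding (a vector produced by a neural network) and rank database entries by their similarity to the probe image, flagging anything above a tuned threshold as a candidate match. The sketch below is purely illustrative and assumes nothing about any real police system: the record names, three-dimensional vectors, and 0.95 threshold are all invented, and real embeddings run to hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "database" of face embeddings. In a real system these would be
# high-dimensional vectors computed from millions of custody images.
database = {
    "record_001": [0.9, 0.1, 0.3],
    "record_002": [0.2, 0.8, 0.5],
    "record_003": [0.4, 0.4, 0.9],
}

def search(probe, threshold=0.95):
    """Return records whose similarity to the probe clears the threshold,
    best match first. A probe derived from a blurry CCTV still is noisy,
    which is one way false candidates can surface."""
    scored = sorted(
        ((cosine_similarity(probe, emb), rid) for rid, emb in database.items()),
        reverse=True,
    )
    return [(rid, round(score, 3)) for score, rid in scored if score >= threshold]

# A probe close to record_001's embedding surfaces only that record.
candidates = search([0.88, 0.12, 0.28])
```

The threshold is the crux of such systems: set it too permissively and noisy probes return innocent people as candidates, which is why the retention of images of people released without charge makes the practice controversial.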

Police in Scotland operate a policy distinct from that of other UK forces, uploading custody images to the database only once an individual has been charged with a crime and removing images of those found innocent after six months.

Facial recognition use by the police has been a target of criticism from some lawmakers, non-governmental organizations and policy experts.

Keep reading

The Israel-Hamas War is ALREADY Pushing the Great Reset Agenda

A few days ago we published an article discussing how the Great Reset agenda is still moving forward behind the scenes, while the headlines are full of Israel-Palestine.

But it’s also true that, in its thirteen days of existence, the war itself has already pushed that agenda forward as well.

CENSORSHIP

Normalising the suppression of dissent and creating a culture of fear around free expression are a major part of the Great Reset; after all, the other steps are so much easier if you outlaw inconvenient protests.

And, naturally, calls for the suppression of freedom of expression have sprouted up everywhere since the war started. We covered this in our article “Israel-Hamas “war” – another excuse to shut down free speech”.

Since that article was published, this campaign has gained momentum.

European Union Commissioner Thierry Breton sent warning letters out to every major social media platform, claiming they needed to “combat disinformation” regarding Israel and threatening them with fines.

In yet another blow to the “China is on our side” narrative, Chinese video-sharing service TikTok has eagerly agreed to “combat disinformation”.

Students from Harvard and Berkeley have been threatened with “blacklisting” for voicing support for Palestine.

German and French police are breaking up pro-Palestine demonstrations, while – in both the UK and US – there are calls to arrest people for waving Palestinian flags, or deport those who “support Hamas”.

Creating a culture of fear, making people afraid to express themselves or their political opinions, is just one of the many things that Covid, Ukraine, Climate Change and now Israel have in common.

Keep reading

GAO Report Shows the Government Uses Face Recognition with No Accountability, Transparency, or Training

Federal agents are using face recognition software without training, policies, or oversight, according to the Government Accountability Office (GAO).

The government watchdog issued yet another report this month about the dangerously inadequate and nonexistent rules for how federal agencies use face recognition, underlining what we’ve already known: the government cannot be trusted with this flawed and dangerous technology.

The GAO review covered seven agencies within the Department of Homeland Security (DHS) and Department of Justice (DOJ), which together account for more than 80 percent of all federal officers and a majority of face recognition searches conducted by federal agents.

Across each of the agencies, GAO found that most law enforcement officers using face recognition have no training before being given access to the powerful surveillance tool. No federal laws or regulations mandate specific face recognition training for DHS or DOJ employees, and Homeland Security Investigations (HSI) and the Marshals Service were the only agencies reviewed that now require training specific to face recognition. Though each agency has its own general policies on handling personally identifiable information (PII), such as the facial images used for face recognition, none of the seven agencies included in the GAO review fully complied with them.

Thousands of face recognition searches have been conducted by federal agents without training or policies. In the period GAO studied, at least 63,000 searches took place, but this number is a known undercount. A complete count of face recognition use is not possible: the number of federal agents with access to face recognition, the number of searches conducted, and the reasons for the searches are simply not recorded, because some systems used by the Federal Bureau of Investigation (FBI) and Customs and Border Protection (CBP) don’t track these numbers.

Our faces are unique and mostly permanent — people don’t usually just get a new one — and face recognition technology, particularly when used by law enforcement and government, puts into jeopardy many of our important rights. Privacy, free expression, information security, and social justice are all at risk. The technology facilitates covert mass surveillance of the places we frequent and the people we know. It can be used to make judgments about how we feel and behave. Mass adoption of face recognition means being able to track people automatically as they go about their day visiting doctors, lawyers, houses of worship, as well as friends and family. It also means that law enforcement could, for example, fly a drone over a protest against police violence and walk away with a list of everyone in attendance. Either instance would create a chilling effect wherein people would be hesitant to attend protests or visit certain friends or romantic partners knowing there would be a permanent record of it.

Keep reading

Twitter can now harvest YOUR ‘biometric’ information including fingerprint, face recognition and eye tracking data – as Musk’s site quietly updates its T&Cs ‘for safety purposes’

The social media platform formerly known as Twitter can now harvest your biometric data and DNA.

A new update quietly added to the platform’s privacy policy says that X now has permission to harvest its users’ fingerprints, retinal scans, voice and face recognition and keystroke patterns.

The update would mean that anyone who uses fingerprint verification to log in to the app from their phone, posts selfies or videos to the platform or speaks their mind on X ‘spaces’ could see their unique biometric data catalogued by the company. 

The new policy, which describes its interest in users’ biometrics as ‘for safety, security, and identification purposes,’ also added the platform’s intent to scrape up data on users’ job history, educational background and ‘job search activity.’

The move follows nearly a year of turmoil for the microblogging app, which has included Musk requesting that its users pay subscription fees for premium services and verification: part of his larger plan to recover from cratering advertising revenue.

Keep reading

Microsoft Files For a Face-Tracking Patent

A patent filed in the US shows that Microsoft is working on technology that would allow it to track a person’s face in a way so comprehensive that the device powered by the tech could be referred to as a “face reader.”

And it could be used for gaming, but also for tracking remote employees. And who knows what else in between.

Microsoft says it needs the patent approved to develop mixed reality headsets that would be cheaper yet better at “understanding” expressions on human faces.

The patent filing doesn’t go into many considerations other than those of a purely technical nature. In a nutshell, this is how the technology under development works.

Currently, the tracking relies on converter components that are not only bulky when rendering high-resolution tracking but also costly to manufacture.

Now Microsoft wants to replace this method of tracking with directly embedded elements, circumventing the need for converters as well as what’s referred to as processing circuit area.

Keep reading