The Israel-Hamas War is ALREADY Pushing the Great Reset Agenda

A few days ago we published an article discussing how the Great Reset agenda is still moving forward behind the scenes, while the headlines are full of Israel-Palestine.

But it’s also true that, in its thirteen days of existence, the war itself has already pushed that agenda forward as well.

CENSORSHIP

Normalising the suppression of dissent and creating a culture of fear around free expression are a major part of the Great Reset; after all, the other steps are so much easier if you outlaw inconvenient protests.

And, naturally, calls for the suppression of freedom of expression have sprouted up everywhere since the war started. We covered this in our article “Israel-Hamas “war” – another excuse to shut down free speech”.

Since that article was published, this campaign has gained momentum.

European Union Commissioner Thierry Breton sent warning letters out to every major social media platform, claiming they needed to “combat disinformation” regarding Israel and threatening them with fines.

In yet another blow to the “China is on our side” narrative, Chinese video-sharing service TikTok has eagerly agreed to “combat disinformation”.

Students from Harvard and Berkeley have been threatened with “blacklisting” for voicing support for Palestine.

German and French police are breaking up pro-Palestine demonstrations, while – in both the UK and US – there are calls to arrest people for waving Palestinian flags, or deport those who “support Hamas”.

Creating a culture of fear, making people afraid to express themselves or their political opinions, is just one of the many things that Covid, Ukraine, Climate Change and now Israel have in common.

Keep reading

GAO Report Shows the Government Uses Face Recognition with No Accountability, Transparency, or Training

Federal agents are using face recognition software without training, policies, or oversight, according to the Government Accountability Office (GAO).

The government watchdog issued yet another report this month about the dangerously inadequate and nonexistent rules for how federal agencies use face recognition, underlining what we’ve already known: the government cannot be trusted with this flawed and dangerous technology.

The GAO review covered seven agencies within the Department of Homeland Security (DHS) and Department of Justice (DOJ), which together account for more than 80 percent of all federal officers and a majority of face recognition searches conducted by federal agents.

Across each of the agencies, GAO found that most law enforcement officers using face recognition receive no training before being given access to the powerful surveillance tool. No federal laws or regulations mandate specific face recognition training for DHS or DOJ employees, and Homeland Security Investigations (HSI) and the Marshals Service were the only agencies reviewed that now require training specific to face recognition. Though each agency has its own general policies on handling personally identifiable information (PII), such as the facial images used for face recognition, none of the seven agencies included in the GAO review fully complied with them.

Thousands of face recognition searches have been conducted by federal agents without training or policies. In the period GAO studied, at least 63,000 searches took place, but that number is a known undercount: a complete tally of face recognition use is not possible, because some systems used by the Federal Bureau of Investigation (FBI) and Customs and Border Protection (CBP) don’t track the number of federal agents with access, the number of searches conducted, or the reasons for the searches.

Our faces are unique and mostly permanent – people don’t usually just get a new one – and face recognition technology, particularly when used by law enforcement and government, puts many of our important rights in jeopardy. Privacy, free expression, information security, and social justice are all at risk. The technology facilitates covert mass surveillance of the places we frequent and the people we know. It can be used to make judgments about how we feel and behave. Mass adoption of face recognition means being able to track people automatically as they go about their day visiting doctors, lawyers, houses of worship, as well as friends and family. It also means that law enforcement could, for example, fly a drone over a protest against police violence and walk away with a list of everyone in attendance. Either instance would create a chilling effect wherein people would be hesitant to attend protests or visit certain friends or romantic partners knowing there would be a permanent record of it.

Keep reading

Twitter can now harvest YOUR ‘biometric’ information including fingerprint, face recognition and eye tracking data – as Musk’s site quietly updates its T&Cs ‘for safety purposes’

The social media platform formerly known as Twitter can now harvest your biometric data and DNA.

A new update quietly added to the platform’s privacy policy says that X now has permission to harvest its users’ fingerprints, retinal scans, voice and facial recognition data, and keystroke patterns.

The update would mean that anyone who uses fingerprint verification to log in to the app from their phone, posts selfies or videos to the platform or speaks their mind on X ‘spaces’ could see their unique biometric data catalogued by the company. 

The new policy, which describes its interest in users’ biometrics as ‘for safety, security, and identification purposes,’ also added the platform’s intent to scrape up data on users’ job history, educational background and ‘job search activity.’

The move follows nearly a year of turmoil for the microblogging app, which has included Musk requesting that its users pay subscription fees for premium services and verification: part of his larger plan to recover from cratering advertising revenue.

Keep reading

Microsoft Files For a Face-Tracking Patent

A patent filed in the US shows that Microsoft is working on technology that would allow it to track a person’s face in a way so comprehensive that the device powered by the tech could be referred to as a “face reader.”

And it could be used for gaming, but also for tracking remote employees. And who knows what else in between.

Microsoft says it needs the patent approved to develop mixed reality headsets that would be cheaper yet better at “understanding” expressions on human faces.

The patent filing doesn’t go into many considerations other than purely technical ones, and this, in a nutshell, is how the technology under development works.

Currently, the tracking relies on converters that are not only bulkier when rendering high-resolution tracking but also more expensive to manufacture.

Now Microsoft wants to replace this method of tracking with directly embedded elements, circumventing the need for the converters as well as what’s referred to as the processing circuit area.

Keep reading

From Covert To Overt: UK Govt & Businesses Unleash Facial Recognition Technologies Across Urban Landscape

The Home Office is encouraging police forces across the country to make use of live facial recognition technologies for routine law enforcement. Retailers are also embracing the technology to monitor their customers. 

It increasingly seems that the UK decoupled from the European Union, its rules and regulations, only for its government to take the country in a progressively more authoritarian direction. This is, of course, a generalised trend among ostensibly “liberal democracies” just about everywhere, including EU Member States, as they increasingly adopt the trappings and tactics of more authoritarian regimes, such as restricting free speech, cancelling people and weakening the rule of law. But the UK is most definitely at the leading edge of this trend. A case in point is the Home Office’s naked enthusiasm for biometric surveillance and control technologies.

This week, for example, The Guardian revealed that the Minister for Policing Chris Philp and other senior figures of the Home Office had held a closed-door meeting in March with Simon Gordon, the founder of Facewatch, a leading facial recognition retail security company. The main outcome of the meeting was that the government would lobby the Information Commissioner’s Office (ICO) on the benefits of using live facial recognition (LFR) technologies in retail settings. LFR involves hooking up facial recognition cameras to databases containing photos of people. Images from the cameras can then be screened against those photos to see if they match.
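At a technical level, the screening described here typically boils down to comparing numeric "embeddings" of faces. The sketch below is purely illustrative (the subject IDs, the 4-dimensional vectors, and the 0.8 threshold are all hypothetical; real systems use learned embeddings of 128 or more dimensions): each enrolled photo and each camera frame is reduced to a vector, and a match is flagged when their similarity crosses a threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def screen_face(probe, watchlist, threshold=0.8):
    """Return IDs of watchlist entries whose embedding matches the probe."""
    return [subject_id for subject_id, embedding in watchlist.items()
            if cosine_similarity(probe, embedding) >= threshold]

# Hypothetical enrolled "subjects of interest" (toy 4-dimensional embeddings).
watchlist = {
    "subject_017": [0.9, 0.1, 0.3, 0.2],
    "subject_042": [0.1, 0.8, 0.1, 0.6],
}

# Embedding computed from a camera frame of someone entering the shop.
probe = [0.88, 0.12, 0.28, 0.22]

print(screen_face(probe, watchlist))  # → ['subject_017']
```

The threshold is the operationally important knob: set it too low and innocent shoppers are flagged as "subjects of interest" (a false positive), too high and enrolled faces slip through. This trade-off is why critics focus on error rates and not just on whether the technology "works".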

The lobbying effort was apparently successful. Just weeks after the government reached out, the ICO sent a letter to Facewatch affirming that the company “has a legitimate purpose for using people’s information for the detection and prevention of crime” and that its services broadly comply with UK Data Protection laws, which the Sunak government and UK intelligence agencies are trying to gut. As the Guardian report notes, “the UK’s data protection and information bill proposes to abolish the role of the government-appointed surveillance camera commissioner along with the requirement for a surveillance camera code of practice.”

The ICO’s approval gives legal cover to a practice that is already well established. Facewatch has been scanning the faces of British shoppers in thousands of retail stores across the UK for years. The cameras scan faces as people enter a store and screen them against a database of known offenders, alerting shop assistants if a “subject of interest” has entered. Shops using the technologies have placed notices in their windows (such as the one below) informing customers that facial recognition technologies are in operation, “to protect” the shop’s “employees, customers and stock.” But it is far from clear how many shoppers actually take notice of the notices.

Keep reading

Portland Goes Full China: Store Requires Looking at Camera to Enter

A convenience store in Portland, Oregon has implemented a digital face scan system that requires customers to look at a camera before they are able to enter.

“Please look at the camera for entry,” a computerized voice says.

“Facial recognition in use,” the sign under the camera reads.

China has hundreds of facial recognition stores.

It’s convenient to pay for your purchase with just your face, until your social credit score doesn’t permit you to enter the store. https://t.co/aTYt8eXuus pic.twitter.com/5FyxsZcR07

— Songpinganq (@songpinganq) July 10, 2023

The door remains locked until the customer consents and looks up at the camera.

Social media users have reacted to the development by drawing comparisons to China, which regularly requires citizens to use facial recognition technology to access society. In communist China, citizens need to scan their face to buy products, and their social-credit scores get adjusted accordingly.

But with Democrat-run Portland having some of the highest crime rates in the United States, some stores have turned to the technology to save their businesses.

Keep reading

Consumer Group Warns Smartphone Facial Recognition Apps Are Vulnerable to Spoofing

Smartphone face biometrics from many leading brands are vulnerable to spoof attacks with 2D photographs, according to a new report from UK-based consumer testing and review group Which?, as reported by Yahoo Finance UK.

The group says the vulnerability is “unacceptable,” and has “worrying implications” for users’ security.

On-device biometrics are used for device unlocking and local authentication, while know-your-customer (KYC) processes for customer onboarding and strong remote identity verification are typically carried out with server-side biometrics and other signals, with a layer of liveness or presentation attack detection.
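The distinction matters because a matcher that only looks at 2D image features cannot tell a face from a photograph of that face. The toy sketch below (every function name, feature vector, and threshold is hypothetical, chosen only to illustrate the principle) shows why a printed photo defeats a 2D-only matcher, while a depth-based liveness check rejects flat input – which is consistent with Apple's 3D FaceID resisting the photo attack in the Which? tests.

```python
def match_2d(probe, enrolled, tolerance=0.05):
    """Accept if every 2D feature is within `tolerance` of the enrolled face."""
    return all(abs(p - e) <= tolerance for p, e in zip(probe, enrolled))

def depth_liveness_check(depth_samples_mm, min_relief_mm=5.0):
    """A real face has millimetres of relief (nose vs. cheeks); paper is flat."""
    return max(depth_samples_mm) - min(depth_samples_mm) >= min_relief_mm

enrolled_features = [0.31, 0.77, 0.52]

# A printed photo of the owner reproduces the 2D features almost exactly...
photo_features = [0.30, 0.78, 0.51]
photo_depth_mm = [0.1, 0.2, 0.1]  # a flat sheet of paper

# ...so a 2D-only system unlocks for the photograph:
unlocked_2d_only = match_2d(photo_features, enrolled_features)

# A system that also requires depth-based liveness does not:
unlocked_with_liveness = (match_2d(photo_features, enrolled_features)
                          and depth_liveness_check(photo_depth_mm))

print(unlocked_2d_only)        # True  – spoofed by the photo
print(unlocked_with_liveness)  # False – flat input rejected
```

Real presentation attack detection is far more sophisticated (infrared dot projection, texture analysis, challenge-response blinking), but the structure is the same: a second, independent signal that a photograph cannot reproduce.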

The phones tested include Honor, Motorola, Nokia, Oppo, Samsung, Vivo and Xiaomi handsets. Apple’s 3D FaceID biometrics were not fooled by the photos. The devices tested range in price from £89.99 to nearly £1,000 (approximately US$112 to $1,244), but the majority of phones that failed the test are lower-cost or mid-range models.

Out of 48 new smartphone models tested, 60 percent were not vulnerable to spoofing with a photograph.

Keep reading

Facial Recognition Shows Up in Public Housing, Small Cities

The race to make biometric surveillance commonplace is only getting faster, with systems going up in public housing and in municipalities far from big-city crime.

With the growth comes a capability that residents worldwide have often been told is off the table: the all-seeing, always-analyzing sentinel that never stops recording what happens in the community.

The issue is again in the news, this time following a lengthy article in The Washington Post reporting on facial recognition systems being used in United States public housing.

Also, Context, a Thomson Reuters Foundation analytical publication, has shown how surveillance vendors are selling smaller cities on big-city facial recognition systems – and how residents are being cajoled into linking their own cameras to police networks.

Post reporters said they found six public housing centers whose boards have purchased surveillance cameras and computer servers. Some of those on the list also use biometric surveillance algorithms.

They were the Cincinnati Metropolitan Housing, Omaha Housing, Scott County (Virginia) Redevelopment & Housing, Jefferson County (Ohio) Housing and Grand Rapids (Michigan) Housing agencies.

Keep reading

Clearview Facial Recognition: A Perpetual Police Lineup

Clearview AI CEO Hoan Ton-That admitted that the company scraped 30 billion photos from Facebook and other social media platforms and used them in its massive facial recognition database accessible by law enforcement agencies across the U.S. Critics call the company’s database a “perpetual police lineup.” 

This is an example of the growing cooperation between private companies and government agencies in the ever-growing U.S. surveillance state.

The photos were collected from social media platforms without users’ permission or knowledge.

Clearview AI markets its facial recognition database as a tool allowing law enforcement to rapidly generate leads “to help identify suspects, witnesses and victims to close cases faster and keep communities safe.” According to Ton-That, law enforcement agencies across the U.S. have accessed the company’s database over 1 million times since 2017.

According to a CNN report last year, more than 3,100 U.S. agencies use Clearview AI, including the FBI and the Department of Homeland Security.

In a statement, Ton-That said, “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

While photo scraping might be legal, Facebook sent Clearview AI a cease and desist order in 2020 for violation of the platform’s terms of service. In an email to Insider, a Meta spokesperson said, “Clearview AI’s actions invade people’s privacy, which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services.”

Fight for the Future director of campaigns Caitlin Seeley George called Clearview “a total affront to peoples’ rights, full stop,” and said, “Police should not be able to use this tool.”

Keep reading

Clearview AI scraped 30 billion images from Facebook and other social media sites and gave them to cops: it puts everyone into a ‘perpetual police line-up’

A controversial facial recognition database, used by police departments across the nation, was built in part with 30 billion photos the company scraped from Facebook and other social media users without their permission, the company’s CEO recently admitted, creating what critics called a “perpetual police line-up,” even for people who haven’t done anything wrong. 

The company, Clearview AI, boasts of its potential for identifying rioters at the January 6 attack on the Capitol, saving children being abused or exploited, and helping exonerate people wrongfully accused of crimes. But critics point to privacy violations and wrongful arrests fueled by faulty identifications made by facial recognition, including cases in Detroit and New Orleans, as cause for concern over the technology. 

Clearview took photos without users’ knowledge, its CEO Hoan Ton-That acknowledged in an interview last month with the BBC. Doing so allowed for the rapid expansion of the company’s massive database, which is marketed on its website to law enforcement as a tool “to bring justice to victims.”

Ton-That told the BBC that Clearview AI’s facial recognition database has been accessed by US police nearly a million times since the company’s founding in 2017, though the relationships between law enforcement and Clearview AI remain murky and that number could not be confirmed by Insider. 

In a statement emailed to Insider, Ton-That said “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

The company’s CEO added: “Clearview AI’s database is used for after-the-crime investigations by law enforcement, and is not available to the general public. Every photo in the dataset is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person.”

Keep reading