DATA SOLUTIONS PROVIDER TELUS INTERNATIONAL IS PAYING $50 FOR IMAGES OF KIDS TO TRAIN GOOGLE’S AI

In a recent initiative, Google and TELUS International, a subsidiary of the Canadian tech conglomerate TELUS, have collaborated to collect biometric data from children for age verification purposes. This project, running from November 2023 to January 2024, involved parents filming their children’s faces, capturing details such as eyelid shape, skin tone, and facial geometry. Parents who participated were paid $50 per child.

First reported by 404 Media, the project asked parents to take 11 short videos of their children wearing items such as face masks or hats. It also requested videos of the children’s faces with no coverings at all. Each video had to be under 40 seconds, and participants were expected to spend 30 to 45 minutes on the task.

According to the summary document, which has now been taken down, a TELUS International moderator would be on a call while the parent took these videos of the child.

According to TELUS International, the purpose of this project was to capture a diverse range of biometric data to ensure that their customer’s services and products are representative of various demographics. Google told 404 Media that the goal was to enhance authentication methods, thus providing more secure tools for users.

“As part of our commitment to delivering age-appropriate experiences and to comply with laws and regulations around the world, we’re exploring ways to help our users verify their age. Last year, TELUS helped us find volunteers for a project exploring whether this could be done via selfies. From there, Google collected videos and images of faces, clearly explaining how the content would be used, and, as with all research involving minors, we required parental consent for participants under the age of 18. We’ve also put strict privacy protections in place, including limiting the amount of time the data will be retained and providing all participants the option to delete their data at any time,” Google told 404 Media in a statement.

While this aligns with Google’s broader commitment to developing responsible and ethical facial recognition technology, the project has raised significant concerns regarding children’s privacy and consent.

Parents had to consent to Google and TELUS International collecting their child’s personal and biometric information in order to participate. This included the shape of their eyelids, the color of their skin and their “facial geometry.” According to the TELUS International summary, Google would then keep the data for at most five years, which, for some participants, would extend into their early adulthood.
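To make the retention timeline concrete, here is a back-of-the-envelope sketch of the arithmetic; the collection date, the participant’s age and the simple year-adding logic are hypothetical illustrations, not anything from the TELUS International summary.

```python
# Worked example of the five-year retention window described above.
# The collection date and the child's age are hypothetical.

from datetime import date

RETENTION_YEARS = 5  # "five years at most", per the summary document

def retention_deadline(collected: date) -> date:
    """Latest date the footage could be held under a five-year cap (simplified)."""
    return collected.replace(year=collected.year + RETENTION_YEARS)

collected_on = date(2024, 1, 15)   # hypothetical collection date
age_at_collection = 15             # hypothetical participant age

deadline = retention_deadline(collected_on)
age_at_deadline = age_at_collection + RETENTION_YEARS

print(f"Data collected on {collected_on} could be held until {deadline}, "
      f"when the participant would be {age_at_deadline}, i.e. an adult.")
```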

Keep reading

UK porn watchers could have faces scanned

Porn users could have their faces scanned to prove their age, with extra checks for young-looking adults, draft guidance from Ofcom suggests.

The watchdog has set out a number of ways explicit sites could prevent children from viewing pornography.

The average age children first view pornography is 13, a survey suggests.

Explicit website Pornhub said regulations requiring the collection of “highly sensitive personal information” could jeopardise user safety.

Privacy campaigners have also criticised the proposals, warning of “catastrophic” consequences if data from age checks is leaked.

A large chunk of the UK population watch online pornography – nearly 14 million people, according to a recent report by Ofcom.

But the ease of access to online pornography has also raised concerns that children are viewing explicit websites – with one in ten children seeing it by age nine, according to a survey by the Children’s Commissioner.

The Online Safety Act, which recently became law, requires social media platforms and search engines to protect children from harmful content online.

It will be enforced by Ofcom, which can issue large fines if firms fail to comply.

Ofcom has now outlined how it expects firms to become “highly effective” at complying with the new regulations, which come into force sometime in 2025.
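To illustrate the “extra checks for young-looking adults” idea mentioned above, here is a minimal sketch of a facial age-estimation gate with a challenge buffer. The threshold and buffer values, and the assumption that an age estimate arrives from some upstream face model, are illustrative only and are not taken from Ofcom’s draft guidance.

```python
# Hypothetical sketch of an age-assurance gate with a "buffer" for
# young-looking adults. All values are illustrative assumptions.

from dataclasses import dataclass

ADULT_AGE = 18        # legal threshold
CHALLENGE_BUFFER = 5  # extra margin applied to borderline-looking faces

@dataclass
class AgeCheckResult:
    decision: str   # "allow", "secondary_check", or "deny"
    reason: str

def age_gate(estimated_age: float) -> AgeCheckResult:
    """Decide access from a facial age estimate.

    estimated_age would come from some face-based age estimation model;
    here it is simply passed in as a number.
    """
    if estimated_age >= ADULT_AGE + CHALLENGE_BUFFER:
        return AgeCheckResult("allow", "estimate comfortably above 18")
    if estimated_age >= ADULT_AGE:
        # Looks adult but close to the line: require stronger proof such as
        # an ID document (the "extra checks for young-looking adults").
        return AgeCheckResult("secondary_check", "estimate within the buffer zone")
    return AgeCheckResult("deny", "estimate below 18")

if __name__ == "__main__":
    for age in (30.0, 20.5, 16.0):
        result = age_gate(age)
        print(f"estimated age {age}: {result.decision} ({result.reason})")
```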

Keep reading

Sharp Rise in Facial Recognition Use by Scottish Police, UK Protest Footage Scanned

Police in Scotland have tripled their use of retrospective facial recognition over the last five years, with searches jumping from just under 1,300 in 2018 to nearly 4,000 in 2022.

The rising trend has continued during 2023, with more than 2,000 searches carried out in the first four months of the year, according to data obtained through a freedom of information request by UK investigative journalism organizations Liberty Investigates and The Ferret.

The trend has also been rising in other parts of the UK. In 2014, the total number of retrospective facial recognition searches by all police forces in the UK amounted to just 3,360. By 2022, that number had jumped to 85,158, according to UK Home Office data.

The Scottish force ranks fourth in the UK for use of the technology. The leader is the London Metropolitan Police, which accounted for 30 percent of searches last year, or 27,677.

The UK police have been using retrospective facial recognition to match faces captured with CCTV cameras with millions of images stored in the Police National Database. The practice has proved controversial as the database still contains many images of people who were released without charge.
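For readers unfamiliar with how a retrospective search works in principle, here is a minimal sketch: a probe face from CCTV is reduced to an embedding vector and ranked against embeddings of stored custody images by cosine similarity, with the top candidates passed to a human reviewer. The random “database”, the embedding size and the plain cosine-similarity matcher are stand-in assumptions; real systems use proprietary matchers against the Police National Database.

```python
# Minimal sketch of retrospective facial recognition search: rank the
# closest stored identities to a probe embedding. All data here is random
# stand-in data, not anything drawn from a real police system.

import numpy as np

rng = np.random.default_rng(0)

# Pretend gallery: 10,000 identities, each a 128-dimensional face embedding.
database = rng.normal(size=(10_000, 128))
database /= np.linalg.norm(database, axis=1, keepdims=True)

def retrospective_search(probe: np.ndarray, gallery: np.ndarray, top_k: int = 5):
    """Return indices and cosine similarities of the top_k closest gallery faces."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe                    # cosine similarity per identity
    best = np.argsort(scores)[::-1][:top_k]     # highest similarity first
    return list(zip(best.tolist(), scores[best].tolist()))

# A probe embedding extracted from a CCTV still (here: just a random vector).
probe_embedding = rng.normal(size=128)

for idx, score in retrospective_search(probe_embedding, database):
    print(f"candidate {idx}: similarity {score:.3f}")
```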

Police in Scotland operate a different policy from other UK forces, uploading custody images to the database only once an individual has been charged with a crime and removing the images of those found innocent after six months.

Facial recognition use by the police has been a target of criticism from some lawmakers, non-governmental organizations and policy experts.

Keep reading

The Israel-Hamas War is ALREADY Pushing the Great Reset Agenda

A few days ago we published an article discussing how the Great Reset agenda is still moving forward behind the scenes, while the headlines are full of Israel-Palestine.

But it’s also true that, in its thirteen days of existence, the war itself has already pushed that agenda forward as well.

CENSORSHIP

Normalising the suppression of dissent and creating a culture of fear around free expression are a major part of the Great Reset; after all, the other steps are so much easier if you outlaw inconvenient protests.

And, naturally, calls for the suppression of freedom of expression have sprouted up everywhere since the war started. We covered this in our article “Israel-Hamas ‘war’ – another excuse to shut down free speech”.

Since that article was published this campaign has gained momentum.

European Union Commissioner Thierry Breton sent warning letters out to every major social media platform, claiming they needed to “combat disinformation” regarding Israel and threatening them with fines.

In yet another blow to the “China is on our side” narrative, Chinese video-sharing service TikTok has eagerly agreed to “combat disinformation”.

Students from Harvard and Berkeley have been threatened with “blacklisting” for voicing support for Palestine.

German and French police are breaking up pro-Palestine demonstrations, while – in both the UK and US – there are calls to arrest people for waving Palestinian flags, or deport those who “support Hamas”.

Creating a culture of fear, making people afraid to express themselves or their political opinions, is just one of the many things that Covid, Ukraine, Climate Change and now Israel have in common.

Keep reading

GAO Report Shows the Government Uses Face Recognition with No Accountability, Transparency, or Training

Federal agents are using face recognition software without training, policies, or oversight, according to the Government Accountability Office (GAO).

The government watchdog issued yet another report this month about the dangerously inadequate and nonexistent rules for how federal agencies use face recognition, underlining what we’ve already known: the government cannot be trusted with this flawed and dangerous technology.

The GAO review covered seven agencies within the Department of Homeland Security (DHS) and Department of Justice (DOJ), which together account for more than 80 percent of all federal officers and a majority of face recognition searches conducted by federal agents.

Across each of the agencies, GAO found that most law enforcement officers using face recognition have no training before being given access to the powerful surveillance tool. No federal laws or regulations mandate specific face recognition training for DHS or DOJ employees, and Homeland Security Investigations (HSI) and the Marshals Service were the only agencies reviewed that now require training specific to face recognition. Though each agency has its own general policies on handling personally identifiable information (PII), such as the facial images used for face recognition, none of the seven agencies included in the GAO review fully complied with them.

Thousands of face recognition searches have been conducted by federal agents without training or policies. In the period GAO studied, at least 63,000 searches took place, but this number is a known undercount. A complete count of face recognition use is not possible: the number of federal agents with access to face recognition, the number of searches conducted, and the reasons for the searches are simply not recorded, because some systems used by the Federal Bureau of Investigation (FBI) and Customs and Border Protection (CBP) don’t track these numbers.

Our faces are unique and mostly permanent — people don’t usually just get a new one — and face recognition technology, particularly when used by law enforcement and government, puts into jeopardy many of our important rights. Privacy, free expression, information security, and social justice are all at risk. The technology facilitates covert mass surveillance of the places we frequent and the people we know. It can be used to make judgments about how we feel and behave. Mass adoption of face recognition means being able to track people automatically as they go about their day, visiting doctors, lawyers and houses of worship, as well as friends and family. It also means that law enforcement could, for example, fly a drone over a protest against police violence and walk away with a list of everyone in attendance. Either instance would create a chilling effect wherein people would be hesitant to attend protests or visit certain friends or romantic partners knowing there would be a permanent record of it.

Keep reading

Twitter can now harvest YOUR ‘biometric’ information including fingerprint, face recognition and eye tracking data – as Musk’s site quietly updates its T&Cs ‘for safety purposes’

The social media platform formerly known as Twitter can now harvest your biometric data and DNA.

A new update quietly added to the platform’s privacy policy says that X now has permission to harvest its users’ fingerprints, retinal scans, voice and face recognition and keystroke patterns.

The update would mean that anyone who uses fingerprint verification to log in to the app from their phone, posts selfies or videos to the platform or speaks their mind on X ‘spaces’ could see their unique biometric data catalogued by the company. 

The new policy, which describes its interest in users’ biometrics as ‘for safety, security, and identification purposes,’ also added the platform’s intent to scrape up data on users’ job history, educational background and ‘job search activity.’

The move follows nearly a year of turmoil for the microblogging app, which has included Musk requesting that its users pay subscription fees for premium services and verification: part of his larger plan to recover from cratering advertising revenue.

Keep reading

Microsoft Files For a Face-Tracking Patent

A patent filed in the US shows that Microsoft is working on technology that would allow it to track a person’s face in a way so comprehensive that the device powered by the tech could be referred to as a “face reader.”

And it could be used for gaming, but also for tracking remote employees. And who knows what else in between.

Microsoft says it needs the patent approved to develop mixed reality headsets that would be cheaper yet better at “understanding” expressions on human faces.

The patent filing doesn’t go into many considerations other than those of a purely technical nature, and this, in a nutshell, is how the technology under development works.

Currently, high-resolution tracking relies on converters that are not only bulkier but also more expensive to manufacture.

Now Microsoft wants to replace this method of tracking with directly embedded elements, circumventing the need for converters and for what’s referred to as processing circuit area.

Keep reading

From Covert To Overt: UK Govt & Businesses Unleash Facial Recognition Technologies Across Urban Landscape

The Home Office is encouraging police forces across the country to make use of live facial recognition technologies for routine law enforcement. Retailers are also embracing the technology to monitor their customers. 

It increasingly seems that the UK decoupled from the European Union, its rules and regulations, only for its government to take the country in a progressively more authoritarian direction. This is, of course, a generalised trend among ostensibly “liberal democracies” just about everywhere, including EU Member States, as they increasingly adopt the trappings and tactics of more authoritarian regimes, such as restricting free speech, cancelling people and weakening the rule of law. But the UK is most definitely at the leading edge of this trend. A case in point is the Home Office’s naked enthusiasm for biometric surveillance and control technologies.

This week, for example, The Guardian revealed that the Minister for Policing, Chris Philp, and other senior figures of the Home Office had held a closed-door meeting with Simon Gordon, the founder of Facewatch, a leading facial recognition retail security company, in March. The main outcome of the meeting was that the government would lobby the Information Commissioner’s Office (ICO) on the benefits of using live facial recognition (LFR) technologies in retail settings. LFR involves hooking up facial recognition cameras to databases containing photos of people. Images from the cameras can then be screened against those photos to see if they match.

The lobbying effort was apparently successful. Just weeks after the government reached out, the ICO sent a letter to Facewatch affirming that the company “has a legitimate purpose for using people’s information for the detection and prevention of crime” and that its services broadly comply with UK Data Protection laws, which the Sunak government and UK intelligence agencies are trying to gut. As the Guardian report notes, “the UK’s data protection and information bill proposes to abolish the role of the government-appointed surveillance camera commissioner along with the requirement for a surveillance camera code of practice.”

The ICO’s approval gives legal cover to a practice that is already well established. Facewatch has been scanning the faces of British shoppers in thousands of retail stores across the UK for years. The cameras scan faces as people enter a store and screen them against a database of known offenders, alerting shop assistants if a “subject of interest” has entered. Shops using the technologies have placed notices in their windows informing customers that facial recognition technologies are in operation, “to protect” the shop’s “employees, customers and stock.” But it is far from clear how many shoppers actually take notice of the notices.
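As a rough illustration of the live screening flow described above, the sketch below compares each entrant’s face embedding against a small watchlist and raises an alert when the best match clears a threshold. The watchlist names, the random embeddings and the threshold value are invented for the example; this is not Facewatch’s actual implementation.

```python
# Minimal sketch of live watchlist screening at a shop entrance.
# Embeddings, names and threshold are illustrative stand-ins.

from typing import Optional

import numpy as np

rng = np.random.default_rng(1)

WATCHLIST_NAMES = ["subject_A", "subject_B", "subject_C"]
watchlist = rng.normal(size=(len(WATCHLIST_NAMES), 128))
watchlist /= np.linalg.norm(watchlist, axis=1, keepdims=True)

MATCH_THRESHOLD = 0.6   # tuning this trades false alerts against missed matches

def screen_entrant(face_embedding: np.ndarray) -> Optional[str]:
    """Return the name of the matched subject of interest, or None."""
    face_embedding = face_embedding / np.linalg.norm(face_embedding)
    scores = watchlist @ face_embedding
    best = int(np.argmax(scores))
    if scores[best] >= MATCH_THRESHOLD:
        return WATCHLIST_NAMES[best]
    return None

# Simulate a shopper walking in; a real deployment would derive this
# embedding from a camera frame and a face embedding model.
entrant = watchlist[1] + 0.1 * rng.normal(size=128)   # noisy view of subject_B
match = screen_entrant(entrant)
if match is not None:
    print(f"ALERT: possible match with {match}, notify shop staff")
else:
    print("No watchlist match; no alert raised")
```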

Keep reading

Portland Goes Full China: Store Requires Looking at Camera to Enter

A convenience store in Portland, Oregon has implemented a digital face scan system that requires customers to look at a camera before they can enter.

“Please look at the camera for entry,” a computerized voice says.

“Facial recognition in use,” the sign under the camera reads.

China has hundreds facial recognition stores.

It’s convenient to pay for your purchase with just your face, until your social credit score doesn’t permit you to enter the store. https://t.co/aTYt8eXuus pic.twitter.com/5FyxsZcR07

— Songpinganq (@songpinganq) July 10, 2023

The door remains locked until the customer consents and looks up at the camera.

Social media users have reacted to the development by drawing comparisons to China, which regularly requires citizens to use facial recognition technology to access society. In communist China, citizens need to scan their face to buy products, and their social-credit scores get adjusted accordingly.

But with Democrat-run Portland having some of the highest crime rates in the United States, some stores have turned to the technology to save their businesses.

Keep reading

Consumer Group Warns Smartphone Facial Recognition Apps Are Vulnerable to Spoofing

Smartphone face biometrics from many leading brands are vulnerable to spoof attacks with 2D photographs, according to a new report from UK-based consumer testing and review group Which?, as reported by Yahoo Finance UK.

The group says the vulnerability is “unacceptable” and has “worrying implications” for users’ security.

On-device biometrics are used for device unlocking and local authentication, while know-your-customer (KYC) processes for customer onboarding and strong remote identity verification are typically carried out with server-side biometrics and other signals, with a layer of liveness or presentation attack detection.

The phones tested include Honor, Motorola, Nokia, Oppo, Samsung, Vivo and Xiaomi handsets. Apple’s 3D Face ID biometrics were not fooled by the photos. The devices tested range in price from £89.99 to nearly £1,000 (approximately US$112 to $1,244), but the majority of phones that failed the test are lower-cost or mid-range models.

Out of 48 new smartphone models tested, 60 percent were not vulnerable to spoofing with a photograph.
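The failure mode Which? describes comes down to systems that only check how well the presented face matches the enrolled one, with no check that a live person is in front of the camera. The sketch below shows that decision logic in the abstract: the match and liveness scores would come from hypothetical upstream models, and the thresholds are illustrative, not any vendor’s real values.

```python
# Minimal sketch of why a flat photo can fool 2D face unlock, and how a
# presentation attack detection (liveness) score changes the decision.
# Scores and thresholds are illustrative assumptions.

from typing import Optional

MATCH_THRESHOLD = 0.7     # similarity between the presented face and enrolment
LIVENESS_THRESHOLD = 0.5  # confidence that a live person, not a photo, is present

def unlock_decision(match_score: float, liveness_score: Optional[float]) -> bool:
    """Unlock only if the face matches AND, when available, liveness passes."""
    if match_score < MATCH_THRESHOLD:
        return False
    if liveness_score is None:
        # 2D-only pipelines with no liveness check: a good printed photo of
        # the owner can produce a high match score and still unlock the phone.
        return True
    return liveness_score >= LIVENESS_THRESHOLD

# A flat photograph of the owner: the match score is high either way.
print(unlock_decision(match_score=0.92, liveness_score=None))   # True  (spoofed)
print(unlock_decision(match_score=0.92, liveness_score=0.05))   # False (rejected)
```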

Keep reading