Consumer Group Warns Smartphone Facial Recognition Apps Are Vulnerable to Spoofing

Smartphone face biometrics from many leading brands are vulnerable to spoof attacks using 2D photographs, according to a new report from UK-based consumer testing and review group Which?, as covered by Yahoo Finance UK.

The group says the vulnerability is “unacceptable” and has “worrying implications” for users’ security.

On-device biometrics are used for device unlocking and local authentication, while KYC processes for customer onboarding and strong remote identity verification are typically carried out with server-side biometrics and other signals, plus a layer of liveness or presentation attack detection.
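
The distinction matters because a server-side check can reject a genuine-looking face match that fails liveness. A minimal sketch of that decision logic, with hypothetical function names and thresholds (no real vendor's API is assumed):

```python
# Illustrative sketch of a server-side verification decision: a remote
# KYC-style check combines a face-match score with a liveness /
# presentation-attack-detection (PAD) score, rather than trusting the
# device alone. Thresholds here are arbitrary examples.

def verify_identity(match_score: float, liveness_score: float,
                    match_threshold: float = 0.8,
                    liveness_threshold: float = 0.9) -> bool:
    """Accept only if the face matches the reference AND the capture
    passes liveness checks; failing either one rejects."""
    return (match_score >= match_threshold
            and liveness_score >= liveness_threshold)

# A genuine capture with a strong match passes...
assert verify_identity(0.93, 0.97)
# ...but a 2D photo that matches perfectly still fails on liveness.
assert not verify_identity(0.99, 0.12)
```

The key design point is that the two scores are conjunctive: a spoof photograph can maximize the match score but cannot (if PAD works) raise the liveness score.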

The phones tested include Honor, Motorola, Nokia, Oppo, Samsung, Vivo and Xiaomi handsets. Apple’s 3D Face ID biometrics were not fooled by the photos. The devices tested range in price from £89.99 to nearly £1,000 (approximately US$112 to $1,244), but the majority of phones that failed the test are lower-cost or mid-range models.

Of the 48 new smartphone models tested, 40 percent were vulnerable to spoofing with a photograph; the remaining 60 percent were not.

Keep reading

Smartphones With Popular Qualcomm Chip Secretly Share Private Information With US Chip-Maker

During our security research we found that smartphones with a Qualcomm chip secretly send personal data to Qualcomm. This data is sent without user consent, unencrypted, and even when using a Google-free Android distribution. This is possible because the proprietary Qualcomm software that provides hardware support also sends the data. Affected smartphones include the Sony Xperia XA2, and likely the Fairphone and many more Android phones that use popular Qualcomm chips.

The smartphone is a device we entrust with practically all of our secrets. After all, it is the most ubiquitous device we carry with us 24 hours a day. Both Apple, with its App Store, and Google, with its Play Store, spy on their paying customers. As a private alternative, some tech-savvy people install a Google-free version of Android on their ordinary smartphone. As an example, we analyzed such a setup on a Sony Xperia XA2 and found that it may not protect sufficiently, because proprietary vendor software, separate from the (open source) operating system, sends private information to the chip maker Qualcomm. This finding also applies to other smartphones with a Qualcomm chip, such as the Fairphone.

Keep reading

This Soft Robot Unfurls Inside the Skull 

An octopus-like soft robot can unfurl itself inside the skull on top of the brain, a new study finds. The novel gadget may lead to minimally invasive ways to investigate the brain and implant brain-computer interfaces, researchers say.

In order to analyze the brain after traumatic injuries, help treat disorders such as seizures, and embed brain-computer interfaces, scientists at times lay grids of electrodes onto the surface of the brain. These electrocorticography grids can capture higher-quality recordings of brain signals than electroencephalography data gathered by electrodes on the scalp, but are also less invasive than probes stuck into the brain.

However, placing electrocorticography grids onto the brain typically involves creating openings in the skull at least as large as these arrays, leaving holes up to 100 square centimeters. These surgical operations may result in severe complications, such as inflammation and scarring.

Now scientists have developed a new soft robot they can place into the skull through a tiny hole. In experiments on a minipig, they showed the device could unfold like a ship in a bottle to deploy an electrocorticography grid 4 centimeters wide, all of it fitting into a space only roughly 1 millimeter wide. This “enabled the implant to navigate through the narrow gap between the skull and the brain,” says study senior author Stéphanie Lacour, a neural engineer and director of the Federal Polytechnic School of Lausanne’s Neuro-X Institute in Switzerland.

Keep reading

Neighborhood Watch Out: Cops Are Incorporating Private Cameras Into Their Real-Time Surveillance Networks

Police have their sights set on every surveillance camera in every business, on every porch, in all the cities and counties of the country. Grocery store trips, walks down the street, and otherwise minding your own business when outside your home could soon come under the ever-present eye of the government. In a quiet but rapid expansion of law enforcement surveillance, U.S. cities are buying and promoting products from Georgia-based company Fusus in order to access on-demand, live video from public and private camera networks.

The company sells police a cloud-based platform for creating real-time crime centers and a streamlined way for officers to interface with their various surveillance streams, including predictive policing, gunshot detection, license plate readers, and drones. For the public, Fusus also sells hardware that can be added to private cameras to convert privately owned video into instantly accessible parts of the police surveillance network. In Atlanta, Memphis, Orlando, and dozens of other locations, police officers have been asking the public to buy into a Fusus-fueled surveillance system, at times sounding like eager pitchmen trying to convince people and businesses to trade away privacy for a false sense of security.

The model expands police access to personal information collected by private cameras that would otherwise require warrants and community conversation. Because these cameras are privately owned, police can enjoy their use without having to create and follow records retention and deletion policies.

The Electronic Frontier Foundation has been collecting and reviewing documents about cities’ uses of Fusus, which counts nearly 150 jurisdictions as customers. You can access these records on DocumentCloud. EFF also shared these documents with the Thomson Reuters Foundation, which published its report today.

Police surveillance threatens constitutionally protected activities. It gives police the ability to surreptitiously spy on and track people of no real or alleged criminal concern. It creates caches of sensitive, personal information that can be retained indefinitely. Fusus is compounding these issues by expanding police access to surveillance cameras and integrating the cameras with a number of other surveillance services. This increases the ways police are able to record, track, and marginalize communities.

Deciding whether to expand police video surveillance to every corner of our lives should never happen without strong community conversation, transparency, and real respect for procurement rules and the public’s liberty. Yet cities’ responses to public records requests reveal a lack of clear guidance on when live access can be utilized, with very few locations able to provide policies regarding appropriate and specific police use of the system.

Keep reading

King Charles III’s coronation is a surveillance nightmare

On Wednesday, London’s Metropolitan Police appeared to still be considering using its live facial recognition system during the coronation of the UK’s new king; only a short while later – in fact, the same day – the force confirmed that this would indeed be the case.

This form of mass surveillance will be deployed in central London during the ceremony and will mostly rely on technology provided by Hikvision, a company that is controversial because its technology has been used in labor camps in China.

Ahead of the confirmation of this news, UK civil liberties nonprofit Big Brother Watch said on Twitter the police were “testing public opinion” by making the announcement about the possible deployment of the tech.

“The Government’s decision to install 38 Hikvision cameras along the Coronation route shows a staggering lack of judgment, especially given that Hikvision is already banned from many Government sites. It is grossly inappropriate, deeply insensitive, and a stain on our country’s record that Chinese state-owned companies closely linked to grave human rights abuses will have their surveillance tech at the heart of this historic event,” Big Brother Watch said in a statement.

If that was the case, the “testing phase” was over quickly, as on Wednesday the London police website detailed all the actions they would be undertaking during the coronation.

Among those details was the statement that facial recognition would be used in central London. The police explained that the technology will rely on a “watch list” focused on persons whose presence “would raise public protection concerns.”

This “class” of citizen includes those with outstanding warrants against them, or those undergoing “relevant offender management programs.”
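
The Met's actual system is proprietary, but live facial recognition of this kind typically works by comparing an embedding of each detected face against stored embeddings of watch-list entries, alerting when similarity crosses a threshold. A minimal illustrative sketch (names and threshold are hypothetical):

```python
import math

# Illustrative watch-list matching: faces and watch-list entries are
# represented as embedding vectors; a cosine-similarity score above the
# threshold raises an alert. Real systems use learned embeddings of
# hundreds of dimensions; the threshold here is an arbitrary example.

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def check_against_watchlist(face_embedding, watchlist, threshold=0.85):
    """Return IDs of watch-list entries similar enough to the probe face."""
    return [person_id for person_id, emb in watchlist.items()
            if cosine_similarity(face_embedding, emb) >= threshold]
```

The civil-liberties concern follows directly from the mechanism: every passerby's face is embedded and scored, whether or not they are on the list.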

Keep reading

‘Godfather of AI’ quits Google — and says he regrets life’s work due to risks to humanity

A prominent artificial intelligence researcher known as the “Godfather of AI” has quit his job at Google – and says he now partly regrets his work advancing the burgeoning technology because of the risks it poses to society.

Dr. Geoffrey Hinton is a renowned computer scientist who is widely credited with laying the AI groundwork that eventually led to the creation of popular chatbots such as OpenAI’s ChatGPT and other advanced systems.

The 75-year-old told the New York Times that he left Google so that he can speak openly about the risks of unrestrained AI development – including the spread of misinformation, upheaval in the jobs market and other, more nefarious possibilities.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton said in an interview published on Monday.

“Look at how it was five years ago and how it is now,” Hinton added later in the interview. “Take the difference and propagate it forwards. That’s scary.”

Hinton fears that AI will only become more dangerous in the future — with “bad actors” potentially exploiting advanced systems “for bad things” that will be difficult to prevent.

Hinton informed Google of his plans to resign last month and personally spoke last Thursday with company CEO Sundar Pichai, according to the report. The computer scientist did not reveal what he and Pichai discussed during the phone call.

Keep reading

First Smart Gun With Fingerprint Unlocking Hits The Market

The first so-called “smart gun” that uses biometrics to unlock for shooting will hit the market at the end of the year.

Biofire Technologies announced this month that it is taking pre-orders for its home defense gun, which is intended to prevent unwanted access by children and criminals. This is either a big step forward in gun safety or a gimmick with unreliable technology, depending on who you ask.

Smart guns, otherwise known as personalized handguns, have been in development for many years. The CEO and Founder of Biofire Technologies, Kai Kloepfer, told The Epoch Times in an interview that this is the first “major innovation in how a handgun has been designed or manufactured in 50 years.”

Kloepfer, 26, has been working on designing a smart gun since he was a teenager. “This is a new option for gun owners to give them peace of mind that their children or criminals won’t get their hands on it,” he said.

The Biofire Smart Gun is a handgun that uses fingerprint and 3D facial recognition to unlock for firing. The company says unlocking works in the dark. The biometric data is stored on the gun in encrypted form, and the gun can store biometrics for up to five authorized users.

The Biofire gun has integrated infrared sensors in the grip to keep it armed while the user is holding it. As soon as the grip is released, the gun locks. It is powered by a rechargeable lithium-ion battery that Biofire says lasts several months with average use and can fire continuously for several hours. The firearm only comes in 9mm caliber, but buyers are given multiple choices for color and style, and left- or right-handed configurations.
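
The grip behavior described in the article amounts to a small state machine: armed only while an enrolled user's grip is continuously detected. A minimal sketch of that logic (Biofire's actual firmware is proprietary; names and structure here are illustrative):

```python
# Illustrative sketch of the grip-lock state machine described in the
# article: arm on a biometric match while gripped, lock immediately on
# release. The five-user cap mirrors the enrollment limit Biofire cites.

class SmartGunLock:
    MAX_USERS = 5  # per the article, up to five authorized users

    def __init__(self, authorized_ids):
        self.authorized = set(list(authorized_ids)[: self.MAX_USERS])
        self.armed = False

    def grip_detected(self, user_id):
        """Grip sensors report a held gun; arm only for enrolled users."""
        self.armed = user_id in self.authorized
        return self.armed

    def grip_released(self):
        """Per the article, the gun locks as soon as the grip is released."""
        self.armed = False
```

The design choice worth noting is fail-locked: any loss of grip contact, or any non-match, leaves the weapon in the locked state.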

Keep reading

Georgia National Guard Will Use Phone Location Tracking to Recruit High School Children

The Georgia Army National Guard plans to combine two deeply controversial practices — military recruiting at schools and location-based phone surveillance — to persuade teens to enlist, according to contract documents reviewed by The Intercept.

The federal contract materials outline plans by the Georgia Army National Guard to geofence 67 different public high schools throughout the state, targeting phones found within a one-mile boundary of their campuses with recruiting advertisements “with the intent of generating qualified leads of potential applicants for enlistment while also raising awareness of the Georgia Army National Guard.” Geofencing refers generally to the practice of drawing a virtual border around a real-world area and is often used in the context of surveillance-based advertising as well as more traditional law enforcement and intelligence surveillance. The Department of Defense expects interested vendors to deliver a minimum of 3.5 million ad views and 250,000 clicks, according to the contract paperwork.
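
Mechanically, a circular geofence of the kind described is just a distance test: a device's reported coordinates are compared against a center point, and ads are served if the device falls within the radius. A hedged sketch (coordinates are arbitrary examples, not real schools; ad platforms' internals differ):

```python
import math

# Illustrative one-mile geofence check using the haversine formula for
# great-circle distance. Real ad platforms also use polygons, dwell time,
# and device-ID matching; this shows only the core radius test.

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def inside_geofence(phone, center, radius_miles=1.0):
    """True if the phone's (lat, lon) lies within the fence radius."""
    return haversine_miles(*phone, *center) <= radius_miles

# Example center point (downtown Atlanta, for illustration only).
school = (33.7490, -84.3880)
```

A device reporting coordinates at the center point is trivially inside; one a full degree of latitude away (roughly 69 miles) is not, which is why a one-mile fence around each of 67 campuses can target students quite precisely.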

While the deadline for vendors attempting to win the contract was the end of this past February, no public winner has been announced.

Keep reading

Clearview Facial Recognition: A Perpetual Police Lineup

Clearview AI CEO Hoan Ton-That admitted that the company scraped 30 billion photos from Facebook and other social media platforms and used them in its massive facial recognition database accessible by law enforcement agencies across the U.S. Critics call the company’s database a “perpetual police lineup.” 

This is an example of the growing cooperation between private companies and government agencies in the ever-growing U.S. surveillance state.

The photos were collected from social media platforms without users’ permission or knowledge.

Clearview AI markets its facial recognition database as a tool allowing law enforcement to rapidly generate leads “to help identify suspects, witnesses and victims to close cases faster and keep communities safe.” According to Ton-That, law enforcement agencies across the U.S. have accessed the company’s database over 1 million times since 2017.

According to a CNN report last year, more than 3,100 U.S. agencies use Clearview AI, including the FBI and the Department of Homeland Security.

In a statement, Ton-That said, “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

While photo scraping might be legal, Facebook sent Clearview AI a cease and desist order in 2020 for violation of the platform’s terms of service. In an email to Insider, a Meta spokesperson said, “Clearview AI’s actions invade people’s privacy, which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services.”

Fight for the Future director of campaigns Caitlin Seeley George called Clearview “a total affront to peoples’ rights, full stop,” and said, “Police should not be able to use this tool.”

Keep reading