Neighborhood Watch Out: Cops Are Incorporating Private Cameras Into Their Real-Time Surveillance Networks

Police have their sights set on every surveillance camera in every business, on every porch, in all the cities and counties of the country. Grocery store trips, walks down the street, and otherwise minding your own business when outside your home could soon come under the ever-present eye of the government. In a quiet but rapid expansion of law enforcement surveillance, U.S. cities are buying and promoting products from Georgia-based company Fusus in order to access on-demand, live video from public and private camera networks.

The company sells police a cloud-based platform for creating real-time crime centers and a streamlined way for officers to interface with their various surveillance streams, including predictive policing, gunshot detection, license plate readers, and drones. For the public, Fusus also sells hardware that can be added to private cameras to convert privately owned video into instantly accessible parts of the police surveillance network. In Atlanta, Memphis, Orlando, and dozens of other locations, police officers have been asking the public to buy into a Fusus-fueled surveillance system, at times sounding like eager pitchmen trying to convince people and businesses to trade away privacy for a false sense of security.

The model expands police access to personal information collected by private cameras that would otherwise require warrants and community conversation. Because these cameras are privately owned, police can enjoy their use without having to create and follow records retention and deletion policies.

The Electronic Frontier Foundation has been collecting and reviewing documents about cities’ uses of Fusus, which counts nearly 150 jurisdictions as customers. You can access these records on DocumentCloud. EFF also shared these documents with the Thomson Reuters Foundation, which published its report today.

Police surveillance threatens constitutionally protected activities. It gives police the ability to surreptitiously spy on and track people of no real or alleged criminal concern. It creates caches of sensitive, personal information that can be retained indefinitely. Fusus is compounding these issues by expanding police access to surveillance cameras and integrating the cameras with a number of other surveillance services. This increases the ways police are able to record, track, and marginalize communities.

Deciding whether to expand police video surveillance to every corner of our lives should never happen without strong community conversation, transparency, and real respect for procurement rules and the public’s liberty. Yet cities’ responses to public records requests reveal a lack of clear guidance on when live access can be utilized, with very few locations able to provide policies regarding appropriate and specific police use of the system.

Keep reading

King Charles III’s coronation is a surveillance nightmare

On Wednesday, the London Metropolitan Police appeared to still be considering whether to use its live facial recognition system during the coronation of the UK's new king; later that same day, the force confirmed that it would.

This form of mass surveillance will be used in central London during the ceremony and will mostly consist of technology provided by Hikvision, a company made controversial by the use of its tech in labor camps in China.

Ahead of the confirmation of this news, UK civil liberties nonprofit Big Brother Watch said on Twitter the police were “testing public opinion” by making the announcement about the possible deployment of the tech.

“The Government’s decision to install 38 Hikvision cameras along the Coronation route shows a staggering lack of judgment, especially given that Hikvision is already banned from many Government sites. It is grossly inappropriate, deeply insensitive, and a stain on our country’s record that Chinese state-owned companies closely linked to grave human rights abuses will have their surveillance tech at the heart of this historic event,” Big Brother Watch said in a statement.

If that was the case, the “testing phase” was over quickly, as on Wednesday the London police website detailed all the actions they would be undertaking during the coronation.

Among those details was the statement that facial recognition would be used in central London. Police said the technology would draw on a "watch list" focused on persons whose presence "would raise public protection concerns."

This “class” of citizen includes those with outstanding warrants against them, or those undergoing “relevant offender management programs.”

Keep reading

‘Godfather of AI’ quits Google — and says he regrets life’s work due to risks to humanity

A prominent artificial intelligence researcher known as the “Godfather of AI” has quit his job at Google – and says he now partly regrets his work advancing the burgeoning technology because of the risks it poses to society.

Dr. Geoffrey Hinton is a renowned computer scientist who is widely credited with laying the AI groundwork that eventually led to the creation of popular chatbots such as OpenAI’s ChatGPT and other advanced systems.

The 75-year-old told the New York Times that he left Google so that he can speak openly about the risks of unrestrained AI development – including the spread of misinformation, upheaval in the jobs market and other, more nefarious possibilities.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton said in an interview published on Monday.

“Look at how it was five years ago and how it is now,” Hinton added later in the interview. “Take the difference and propagate it forwards. That’s scary.”

Hinton fears that AI will only become more dangerous in the future — with “bad actors” potentially exploiting advanced systems “for bad things” that will be difficult to prevent.

Hinton informed Google of his plans to resign last month and personally spoke last Thursday with company CEO Sundar Pichai, according to the report. The computer scientist did not reveal what he and Pichai discussed during the phone call.

Keep reading

First Smart Gun With Fingerprint Unlocking Hits The Market

The first so-called "smart gun," which uses biometrics to unlock before it can be fired, will hit the market at the end of the year.

Biofire Technologies announced this month that it is taking pre-orders for its home defense gun, which is intended to prevent unwanted access by children and criminals. Depending on whom you ask, this is either a big step forward in gun safety or a gimmick built on unreliable technology.

Smart guns, otherwise known as personalized handguns, have been in development for many years. The CEO and Founder of Biofire Technologies, Kai Kloepfer, told The Epoch Times in an interview that this is the first “major innovation in how a handgun has been designed or manufactured in 50 years.”

Kloepfer, 26, has been working on designing a smart gun since he was a teenager. "This is a new option for gun owners to give them peace of mind that their children or criminals won't get their hands on it," he said.

The Biofire Smart Gun is a handgun that uses stored fingerprints and 3D facial recognition to unlock for firing. The company says unlocking works in the dark. The biometric data is stored on the gun in encrypted form, and the gun can hold biometrics for up to five authorized users.

The Biofire gun has integrated infrared sensors in the grip to keep it armed while the user is holding it. As soon as the grip is released, the gun locks. It is powered by a rechargeable lithium-ion battery that Biofire says lasts several months with average use and can fire continuously for several hours. The firearm comes only in 9mm caliber, but buyers are given multiple choices for color, style, and left- or right-handed grips.
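The grip-sensing behavior described above can be sketched as a small piece of state logic. Biofire's actual firmware is proprietary, so the class and names below are purely hypothetical, modeling only the behavior the company describes: arm while an authorized user holds the grip, lock the instant the grip is released.

```python
# Illustrative sketch only; Biofire's firmware is proprietary and this
# models just the behavior described in the article.

class SmartGunLock:
    def __init__(self, authorized_ids, max_users=5):
        # The article says the gun stores biometrics for up to five users.
        self.authorized = set(list(authorized_ids)[:max_users])
        self.armed = False

    def grip(self, user_id):
        """Grip sensors detect a hold; arm only for an authorized user."""
        self.armed = user_id in self.authorized
        return self.armed

    def release(self):
        """As soon as the grip is released, the gun locks."""
        self.armed = False

lock = SmartGunLock({"owner", "spouse"})
assert lock.grip("owner")         # authorized user -> armed
lock.release()
assert not lock.armed             # locks immediately on release
assert not lock.grip("stranger")  # unauthorized user -> stays locked
```

The key design point in the described mechanism is that the locked state is the default: arming requires both a recognized biometric and continuous grip contact.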

Keep reading

GEORGIA NATIONAL GUARD WILL USE PHONE LOCATION TRACKING TO RECRUIT HIGH SCHOOL CHILDREN

THE GEORGIA ARMY NATIONAL GUARD plans to combine two deeply controversial practices — military recruiting at schools and location-based phone surveillance — to persuade teens to enlist, according to contract documents reviewed by The Intercept.

The federal contract materials outline plans by the Georgia Army National Guard to geofence 67 different public high schools throughout the state, targeting phones found within a one-mile boundary of their campuses with recruiting advertisements “with the intent of generating qualified leads of potential applicants for enlistment while also raising awareness of the Georgia Army National Guard.” Geofencing refers generally to the practice of drawing a virtual border around a real-world area and is often used in the context of surveillance-based advertising as well as more traditional law enforcement and intelligence surveillance. The Department of Defense expects interested vendors to deliver a minimum of 3.5 million ad views and 250,000 clicks, according to the contract paperwork.
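The geofencing described above reduces, at its core, to a simple geometric test: is a device's reported location inside a virtual boundary? A minimal sketch, assuming a circular one-mile fence around a campus (the contract's actual targeting criteria and vendor tooling are not public; all coordinates below are hypothetical):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    r = 3958.8  # Earth's mean radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(phone, center, radius_miles=1.0):
    """True if a phone's reported location falls within the circular fence."""
    return haversine_miles(phone[0], phone[1], center[0], center[1]) <= radius_miles

# Hypothetical school location and two hypothetical phone positions.
school = (33.7490, -84.3880)
print(inside_geofence((33.7500, -84.3900), school))  # about 0.13 miles away -> True
print(inside_geofence((33.8000, -84.3880), school))  # about 3.5 miles away -> False
```

In practice, ad platforms run a test like this against location data harvested from apps, then serve the targeted ads to every device that falls inside the fence.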

While the deadline for vendors attempting to win the contract was the end of this past February, no public winner has been announced.

Keep reading

Clearview Facial Recognition: A Perpetual Police Lineup

Clearview AI CEO Hoan Ton-That admitted that the company scraped 30 billion photos from Facebook and other social media platforms and used them in its massive facial recognition database accessible by law enforcement agencies across the U.S. Critics call the company’s database a “perpetual police lineup.” 

This is an example of the growing cooperation between private companies and government agencies in the ever-growing U.S. surveillance state.

The photos were collected from social media platforms without users’ permission or knowledge.

Clearview AI markets its facial recognition database as a tool allowing law enforcement to rapidly generate leads “to help identify suspects, witnesses and victims to close cases faster and keep communities safe.” According to Ton-That, law enforcement agencies across the U.S. have accessed the company’s database over 1 million times since 2017.

According to a CNN report last year, more than 3,100 U.S. agencies use Clearview AI, including the FBI and the Department of Homeland Security.

In a statement, Ton-That said, “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

While photo scraping might be legal, Facebook sent Clearview AI a cease and desist order in 2020 for violation of the platform’s terms of service. In an email to Insider, a Meta spokesperson said, “Clearview AI’s actions invade people’s privacy, which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services.”

Fight for the Future director of campaigns Caitlin Seeley George called Clearview “a total affront to peoples’ rights, full stop,” and said, “Police should not be able to use this tool.”

Keep reading

LIE DETECTOR FIRM LOBBIES CIA, DOD ON AUTOMATED EYE-SCANNING TECH

April 7, 2023, 10:20 a.m.

A UTAH-BASED OUTFIT overseen by a former CIA consultant has spent hundreds of thousands of dollars lobbying intelligence and defense agencies, including the CIA and DHS, to adopt its automated lie detection technology, public lobbying disclosures reviewed by The Intercept show. Converus, Inc., boasts on its website that its technology has already been used for job screenings at American law enforcement agencies, corporate compliance and loss prevention in Latin America, and document verification in Ukraine. The company’s management team includes chief scientist John Kircher, a former consultant for the CIA and Department of Defense; Todd Mickelson, former director of product management at Ancestry.com; and Russ Warner, former CEO of the content moderation firm ContentWatch.

Warner told The Intercept that lobbying efforts have focused on changing federal regulations to allow the use of technologies other than the polygraph for lie detection. “The Department of Defense National Center of Credibility Assessment (NCCA) is in charge of oversight of validation and pilot projects throughout the U.S. government of new deception detection technologies,” Warner wrote in an email. “DoD Directive 5210.91 and ODNI Security Agent Directive 2 currently prohibit the use of any credibility assessment solution other than polygraph. For this reason, we have contacted government agencies to consider the use of EyeDetect and other new technologies.”

After finding success in corporate applications and sheriff’s offices, Converus has set its sights on large federal agencies that could apply its EyeDetect technology to a host of uses, including employee clearance screenings and border security. Unlike a polygraph, a device which relies on an operator asking questions and measuring physiological responses like heart rate and perspiration, Converus’s technology measures “cognitive load” with an algorithm that processes eye movement.

Keep reading

Clearview AI scraped 30 billion images from Facebook and other social media sites and gave them to cops: it puts everyone into a ‘perpetual police line-up’

A controversial facial recognition database, used by police departments across the nation, was built in part with 30 billion photos the company scraped from Facebook and other social media users without their permission, the company’s CEO recently admitted, creating what critics called a “perpetual police line-up,” even for people who haven’t done anything wrong. 

The company, Clearview AI, boasts of its potential for identifying rioters at the January 6 attack on the Capitol, saving children being abused or exploited, and helping exonerate people wrongfully accused of crimes. But critics point to privacy violations and wrongful arrests fueled by faulty identifications made by facial recognition, including cases in Detroit and New Orleans, as cause for concern over the technology. 

Clearview took photos without users’ knowledge, its CEO Hoan Ton-That acknowledged in an interview last month with the BBC. Doing so allowed for the rapid expansion of the company’s massive database, which is marketed on its website to law enforcement as a tool “to bring justice to victims.”

Ton-That told the BBC that Clearview AI’s facial recognition database has been accessed by US police nearly a million times since the company’s founding in 2017, though the relationships between law enforcement and Clearview AI remain murky and that number could not be confirmed by Insider. 

In a statement emailed to Insider, Ton-That said, "Clearview AI's database of publicly available images is lawfully collected, just like any other search engine like Google."

The company’s CEO added: “Clearview AI’s database is used for after-the-crime investigations by law enforcement, and is not available to the general public. Every photo in the dataset is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person.”

Keep reading