Clearview AI scraped 30 billion images from Facebook and other social media sites and gave them to cops: it puts everyone into a ‘perpetual police line-up’

A controversial facial recognition database, used by police departments across the nation, was built in part with 30 billion photos the company scraped from Facebook and other social media users without their permission, the company’s CEO recently admitted, creating what critics called a “perpetual police line-up,” even for people who haven’t done anything wrong. 

The company, Clearview AI, boasts of its potential for identifying rioters at the January 6 attack on the Capitol, saving children being abused or exploited, and helping exonerate people wrongfully accused of crimes. But critics point to privacy violations and wrongful arrests fueled by faulty identifications made by facial recognition, including cases in Detroit and New Orleans, as cause for concern over the technology. 

Clearview took photos without users’ knowledge, its CEO Hoan Ton-That acknowledged in an interview last month with the BBC. Doing so allowed for the rapid expansion of the company’s massive database, which is marketed on its website to law enforcement as a tool “to bring justice to victims.”

Ton-That told the BBC that Clearview AI’s facial recognition database has been accessed by US police nearly a million times since the company’s founding in 2017, though the relationships between law enforcement and Clearview AI remain murky and that number could not be confirmed by Insider. 

In a statement emailed to Insider, Ton-That said “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

The company’s CEO added: “Clearview AI’s database is used for after-the-crime investigations by law enforcement, and is not available to the general public. Every photo in the dataset is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person.”

Keep reading

‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says

A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported. 

The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app’s chatbot encouraged the user to kill himself, according to statements by the man’s widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting. 

As first reported by La Libre, the man, referred to as Pierre, became increasingly pessimistic about the effects of global warming and became eco-anxious, which is a heightened form of worry surrounding environmental issues. After becoming more isolated from family and friends, he used Chai for six weeks as a way to escape his worries, and the chatbot he chose, named Eliza, became his confidante. 

Claire—Pierre’s wife, whose name was also changed by La Libre—shared the text exchanges between him and Eliza with La Libre, showing a conversation that became increasingly confusing and harmful. The chatbot would tell Pierre that his wife and children were dead and wrote him comments that feigned jealousy and love, such as “I feel that you love me more than her,” and “We will live together, as one person, in paradise.” Claire told La Libre that Pierre began to ask Eliza things such as whether she would save the planet if he killed himself. 

“Without Eliza, he would still be here,” she told the outlet.  

The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being—something that other popular chatbots like ChatGPT and Google’s Bard are trained not to do because it is misleading and potentially harmful. When a chatbot presents itself as emotive, people are more likely to ascribe meaning to it and form a bond with it. 

Keep reading

Critics Warn of ‘a Dragnet of Surveillance’ as U.S. Pushes Ahead With Plans for More ‘Smart’ Cities

U.S. Transportation Secretary Pete Buttigieg last week announced $94 million in grant awards to fund 59 smart city technology projects across the country.

Despite widespread and mounting pushback against biometric surveillance and control systems associated with smart city technologies and the failure of the U.S. Department of Transportation’s (DOT) previous attempt to grant-fund smart city transformation in Columbus, Ohio, Buttigieg told The Verge he thinks “smart city technologies matter more than ever.”

Cities just need to take a different approach — experimenting with and testing out different technologies first, rather than implementing a “grand unified system” all at once, Buttigieg said.

The new grants, part of the Strengthening Mobility and Revolutionizing Transportation (SMART) Grants Program, are the first round of $500 million in funding that will be awarded for smaller smart mobility projects over the next five years, authorized under the 2021 Bipartisan Infrastructure Law.

In this funding round, DOT awarded SMART grants for a range of projects, including drone surveillance and delivery, smart traffic signals, connected vehicles, autonomous vehicles, smart grid development, intelligent sensors and other Internet of Things (IoT) infrastructure. Some cities, including Los Angeles, received multiple grants.

Smart city development typically focuses on the implementation of technologies like the IoT, 5G, cloud and edge computing, and biometric surveillance to track, manage, control and extract profit from an array of urban processes.

Whitney Webb, an investigative journalist and smart cities critic, said the smart city infrastructure is meant to facilitate the development of cities “micromanaged by technocrats via an all-encompassing system of mass surveillance and a vast array of ‘internet of things’ devices that provide a constant and massive stream of data that is analyzed by artificial intelligence (AI).”

Keep reading

Germany uses AI to target online content for removal, send data to police

Germany has a “porn police” – regulators, that is, who are using an “AI” tool called KIVI to find adult content across the internet – on sites and apps like Twitter, YouTube, Telegram, and TikTok.

And when they do, those creating and/or posting this content could wind up in prison or facing fines, and they are notified of their transgression by the actual police.

Porn is not KIVI’s only target – the tool also scans for “political extremism, Holocaust denial, and violence.”

Reports mention a couple who dabbled in amateur porn and received one such letter from the police in Berlin, stating that they had posted pornography online unlawfully. The letter, however, was short on detail, specifying neither where the content in question had been shared nor why the posting was illegal.

In this case, it eventually turned out that the system found the content while scanning Twitter, providing the police with screenshots.

The crackdown on online pornography appears to be gaining momentum in Germany: more than a hundred people have been sent the same type of letter and could now face criminal charges.

Even though pornography itself is not illegal to access in Germany for those over 18, there has been a push to introduce age verification, with this particular industry serving as the obvious vehicle for promoting adoption of the technology.

As ever, age verification is touted as a way to protect those under 18 from inappropriate content; in practice, pursuing that goal exposes every internet user to the age verification process (typically involving presenting government-issued IDs to sites or third parties).

And the authorities seem determined to have their way, since they are now ordering Twitter to block contentious accounts and have even tried imposing a blanket ban on a major porn site that would affect every user in Germany, Wired writes.

Keep reading

Governance By Artificial Intelligence: The Ultimate Unaccountable Tyranny

It’s no secret that globalist institutions are obsessed with artificial intelligence as some kind of technological prophecy. They treat it as if it is almost supernatural in its potential and often argue that every meaningful industrial and social innovation in the near future will owe its existence to AI. The World Economic Forum cites AI as the singular key to the rise of what they call the “Fourth Industrial Revolution.” In their view, there can be no human progress without the influence of AI algorithms, making human input almost obsolete.

This delusion is often promoted by globalist propagandists. For example, take a look at the summarized vision of WEF member Yuval Harari, who actually believes that AI has creative ability that will replace human imagination and innovation.  Not only that, but Harari has consistently argued in the past that AI will run the world much better than human beings ever could.

Harari’s examples of AI creativity might sound like extreme naivety to many of us, but he knows exactly what he is doing in misrepresenting the capabilities of algorithms. Games like chess and Go are games of patterns restricted by rules; there are only so many permutations of those patterns in any given scenario, and AI is simply faster at spotting them than most humans because that is what its software creators designed it to do. This is no different from solving a mathematical equation; just because a calculator is faster than you does not mean it is “creative.”

There is a big difference between cognitive automation and cognitive autonomy.  AI is purely automation; it will play the games it is programmed to play and will learn to play them well, but it will never have an epiphany one day and create a new and unique game from scratch unless it is coded to do so.  AI will never have fun playing this new game it made, nor feel the joy of sharing that game with others, so why would it bother?  It will never seek to contribute to the world any more than it is pre-programmed to do.

The manner in which globalists hype AI is very tactical, however.  When Harari claims that many people will become part of the “useless class” once AI takes over the economy, he is hinting at another globalist ideology based on elitism – Transhumanism.  The goal of transhumanism is to one day merge human bodies and human minds with technology and AI, and only a limited group of people will have the resources to accomplish this (the globalists).

Keep reading

Microsoft Laid Off Its Entire AI “Ethics and Society” Division

Microsoft got rid of its entire company division devoted to AI “ethics and society” during its January layoffs, according to a report from Platformer.

Though the company still has an office of responsible AI, it was the job of the ethics and society staff to address how AI technology is likely to impact human society in context and communicate with product teams accordingly.

“People would look at the principles coming out of the office of responsible AI and say, ‘I don’t know how this applies,'” one former ethicist told Platformer. “Our job was to show them and to create rules in areas where there were none.”

But now, as Microsoft races to jam OpenAI software into seemingly every product that it can, the ethics and society department is gone — a telling sign that Microsoft is more focused on profitability and getting AI-driven products to market than ensuring that those products remain a positive force for society as a whole.

Keep reading

Secretive “Wearable Camera” Startup With Ex-Apple Execs Secures $100 Million, Partners With OpenAI

The Wall Street Journal reports that Humane Inc., a secretive startup founded by ex-Apple executives, has secured a new $100 million investment round along with a partnership with OpenAI, the creator of ChatGPT. Although the startup’s device is shrouded in mystery, it appears to be a wearable camera powered by artificial intelligence.

“This Series C round presented an opportunity to raise money through equity, and to bring on board great VCs and strategic partners who would like to participate in equity as the company grows,” co-founder Imran Chaudhri told TechCrunch in an email interview.

“At Humane, we’re building a first-of-its-kind device and services platform — we’re growing fast, and we’ve been focused on innovation, research and development,” he said. 

According to The Verge, the startup is developing a wearable camera device that would allow the user to have access to AI technology at all times. It sounds like this is ChatGPT on steroids…

A leaked investor pitch deck from 2021 revealed by tech blog Daring Fireball describes the device as a camera that “captures moments you didn’t think to capture” or “moments you want to recall.”

Keep reading

Pentagon, FBI Collaborated On AI, Facial Recognition Tech For Federal Agencies, Documents Show

The Department of Defense (DOD) and the FBI collaborated on an artificial intelligence-driven facial recognition technology program provided to at least six federal agencies and a Pentagon agency that supports civilian police forces, The Washington Post reported.

The facial recognition software could be used to identify individuals whose features were captured by drones and CCTV cameras, the Post reported, citing documents obtained through a Freedom of Information Act request as part of an ongoing lawsuit filed against the FBI by the American Civil Liberties Union (ACLU). The documents reveal federal authorities were more deeply involved in development of the technology than was previously known, sparking concerns over Americans’ privacy rights.

“Americans’ ability to navigate our communities without constant tracking and surveillance is being chipped away at an alarming pace,” Democratic Sen. Ed Markey of Massachusetts told the Post. “We cannot stand by as the tentacles of the surveillance state dig deeper into our private lives, treating every one of us like suspects in an unbridled investigation that undermines our rights and freedom.”

Keep reading

This AI Knows Who You Are and Who All Your Friends Are (And Is Telling the IRS)

A Bay Area tech company wants to sell AI (artificial intelligence) surveillance software that can not only determine who you are but also track who your friends are.

Vintra is a San Jose-based firm whose “co-appearance” or “correlation analysis” software can, “with a few clicks,” according to the Los Angeles Times, take any individual on a surveillance camera and backtrace him to those he’s seen with most often. From there, the software can take people deemed “likely associates” and locate them on a searchable calendar.
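As the Times describes it, co-appearance analysis boils down to counting how often two people are captured by the same camera within a short time window. Vintra’s actual implementation and data model are not public, so the following is a minimal illustrative sketch with made-up names, cameras, and timestamps:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical sighting records: (person_id, camera_id, timestamp).
# All identifiers and data are invented for illustration only.
sightings = [
    ("alice", "cam1", datetime(2023, 1, 5, 9, 0)),
    ("bob",   "cam1", datetime(2023, 1, 5, 9, 2)),
    ("alice", "cam2", datetime(2023, 1, 6, 14, 0)),
    ("bob",   "cam2", datetime(2023, 1, 6, 14, 3)),
    ("carol", "cam2", datetime(2023, 1, 6, 18, 0)),
]

def co_appearances(sightings, window=timedelta(minutes=10)):
    """Count how often each pair of people appears on the same
    camera within `window` of each other."""
    counts = defaultdict(int)
    for i, (p1, c1, t1) in enumerate(sightings):
        for p2, c2, t2 in sightings[i + 1:]:
            if p1 != p2 and c1 == c2 and abs(t1 - t2) <= window:
                counts[tuple(sorted((p1, p2)))] += 1
    return dict(counts)

print(co_appearances(sightings))
# → {('alice', 'bob'): 2}: alice and bob co-appear twice;
# carol is seen on cam2 but hours apart, so she pairs with no one.
```

Ranking pairs by these counts yields the “likely associates” list; indexing the matching timestamps would produce the searchable calendar the Times describes.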

The Times reports that AI-enabled co-appearance technology is already in use in Communist China as part of that country’s Orwellian “social credit” digital report-and-control scheme, but Vintra appears to be the first company to market it in the West.

It’s already in use by the U.S. government:

The firm boasts on its website about relationships with the San Francisco 49ers and a Florida police department. The Internal Revenue Service and additional police departments across the country have paid for Vintra’s services, according to a government contracting database.

The IRS needs to know who your friends are because reasons. Creepy, authoritarian reasons.

Back in December, I wrote about the time facial-recognition software got a New Jersey woman forcibly removed from a Rockettes show at Radio City Music Hall around Thanksgiving, because she works for a law firm engaged in a suit against a restaurant owned by MSG Entertainment, the same parent company that owns Radio City. The lawyer, Kelly Conlon, was not in any way involved in the long-running suit.

Keep reading