Japan: AI Systems Can Use Any Data, from Any Source – Even Illegal Ones

While other countries are mulling where to put the brakes on AI development, Japan is going full steam ahead, with the government recently announcing that no data will be off-limits for AI.

In a recent meeting, Keiko Nagaoka, Japanese Minister of Education, Culture, Sports, Science, and Technology, confirmed that no law, including copyright law, will prevent AIs from accessing data in the country.

AIs will be allowed to use data for training, “regardless of whether it is for non-profit or commercial purposes, whether it is an act other than reproduction, or whether it is content obtained from illegal sites or otherwise,” said Nagaoka.

The decision is a blow to copyright holders who argue that AI using their intellectual property to produce new content undermines the very concept of copyright. The issue has already emerged in the west — an AI-generated song using the voice of Drake and The Weeknd went viral on streaming services in April, before being swiftly removed.

In the west, much of the discourse around AI is focused on potential harms. AI leaders recently warned governments that development of the technology carries with it a “risk of extinction,” while news companies worry about deepfakes and “misinformation.”

The Biden Administration’s leftist regulators at the FTC, meanwhile, worry that “historically biased” data (such as crime data with racial imbalances) will lead to outcomes that conflict with “civil rights.” Many leftist agitators in the west want to cut off AIs from such data.

Keep reading

Microsoft launches new AI tool to moderate text and images

Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities.

Called Azure AI Content Safety, the new service, available through the Azure AI product platform, offers a range of AI models trained to detect “inappropriate” content across images and text. The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese — assign a severity score to flagged content, indicating to moderators which content requires action.
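
For a sense of what “assigning a severity score” looks like in practice, here is a minimal sketch of calling the service’s text-analysis endpoint over REST. The endpoint path, API version and response fields follow Microsoft’s public documentation at the time of writing, but treat them as assumptions that may have changed; the resource name and key are placeholders.

```python
# Minimal sketch of a text-moderation call against an Azure AI Content
# Safety resource. Endpoint path, API version, and field names are taken
# from Microsoft's public docs and may differ in your deployment.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-key>"  # placeholder; never hard-code real keys

def analyze_text(text: str) -> list[dict]:
    """Send text to the moderation endpoint; return per-category severities."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    # Each entry looks like {"category": "Hate", "severity": 2}; a higher
    # severity tells moderators which flags to act on first.
    return resp.json()["categoriesAnalysis"]

for result in analyze_text("example user post"):
    print(result["category"], result["severity"])
```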

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson said via email. “New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”

During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s Bing chatbot and GitHub’s Copilot, the AI-powered code-generating service.

“We’re now launching it as a product that third-party customers can use,” Bird said in a statement.

Keep reading

‘Godfather of AI’ quits Google — and says he regrets life’s work due to risks to humanity

A prominent artificial intelligence researcher known as the “Godfather of AI” has quit his job at Google – and says he now partly regrets his work advancing the burgeoning technology because of the risks it poses to society.

Dr. Geoffrey Hinton is a renowned computer scientist who is widely credited with laying the AI groundwork that eventually led to the creation of popular chatbots such as OpenAI’s ChatGPT and other advanced systems.

The 75-year-old told the New York Times that he left Google so that he could speak openly about the risks of unrestrained AI development – including the spread of misinformation, upheaval in the jobs market and other, more nefarious possibilities.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton said in an interview published on Monday.

“Look at how it was five years ago and how it is now,” Hinton added later in the interview. “Take the difference and propagate it forwards. That’s scary.”

Hinton fears that AI will only become more dangerous in the future — with “bad actors” potentially exploiting advanced systems “for bad things” that will be difficult to prevent.

Hinton informed Google of his plans to resign last month and personally spoke last Thursday with company CEO Sundar Pichai, according to the report. The computer scientist did not reveal what he and Pichai discussed during the phone call.

Keep reading

Lie detector firm lobbies CIA, DOD on automated eye-scanning tech

A Utah-based outfit overseen by a former CIA consultant has spent hundreds of thousands of dollars lobbying intelligence and defense agencies, including the CIA and DHS, to adopt its automated lie detection technology, public lobbying disclosures reviewed by The Intercept show. Converus, Inc., boasts on its website that its technology has already been used for job screenings at American law enforcement agencies, corporate compliance and loss prevention in Latin America, and document verification in Ukraine. The company’s management team includes chief scientist John Kircher, a former consultant for the CIA and Department of Defense; Todd Mickelson, former director of product management at Ancestry.com; and Russ Warner, former CEO of the content moderation firm ContentWatch.

Warner told The Intercept that lobbying efforts have focused on changing federal regulations to allow the use of technologies other than the polygraph for lie detection. “The Department of Defense National Center of Credibility Assessment (NCCA) is in charge of oversight of validation and pilot projects throughout the U.S. government of new deception detection technologies,” Warner wrote in an email. “DoD Directive 5210.91 and ODNI Security Agent Directive 2 currently prohibit the use of any credibility assessment solution other than polygraph. For this reason, we have contacted government agencies to consider the use of EyeDetect and other new technologies.”

After finding success in corporate applications and sheriff’s offices, Converus has set its sights on large federal agencies that could apply its EyeDetect technology to a host of uses, including employee clearance screenings and border security. Unlike a polygraph, which relies on an operator asking questions and measuring physiological responses like heart rate and perspiration, Converus’s technology measures “cognitive load” with an algorithm that processes eye movement.
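
Converus has not published its algorithm, so the sketch below is purely hypothetical: it illustrates the general recipe described in the eye-tracking deception literature, in which ocular features are combined into a single score. Every feature name, weight and number here is invented for illustration.

```python
# Hypothetical illustration of scoring "cognitive load" from eye-movement
# features. Converus's actual EyeDetect algorithm is proprietary; the
# features, weights, and bias below are invented for this sketch.
import math
from dataclasses import dataclass

@dataclass
class OcularFeatures:
    pupil_dilation_change: float  # relative change vs. a baseline reading
    reading_speed_drop: float     # slowdown on the relevant questions
    reread_rate: float            # fraction of lines read more than once

# Invented weights; a real system would fit these to labeled trial data.
WEIGHTS = (2.1, 1.4, 0.9)
BIAS = -2.0

def cognitive_load_score(f: OcularFeatures) -> float:
    """Logistic score in [0, 1]; higher means more estimated cognitive load."""
    z = (BIAS
         + WEIGHTS[0] * f.pupil_dilation_change
         + WEIGHTS[1] * f.reading_speed_drop
         + WEIGHTS[2] * f.reread_rate)
    return 1.0 / (1.0 + math.exp(-z))

print(cognitive_load_score(OcularFeatures(0.8, 0.6, 0.3)))  # ~0.69
```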

Keep reading

Clearview AI scraped 30 billion images from Facebook and other social media sites and gave them to cops: it puts everyone into a ‘perpetual police line-up’

A controversial facial recognition database, used by police departments across the nation, was built in part with 30 billion photos the company scraped from Facebook and other social media users without their permission, the company’s CEO recently admitted, creating what critics called a “perpetual police line-up,” even for people who haven’t done anything wrong. 

The company, Clearview AI, boasts of its potential for identifying rioters at the January 6 attack on the Capitol, saving children being abused or exploited, and helping exonerate people wrongfully accused of crimes. But critics point to privacy violations and wrongful arrests fueled by faulty identifications made by facial recognition, including cases in Detroit and New Orleans, as cause for concern over the technology. 

Clearview took photos without users’ knowledge, its CEO Hoan Ton-That acknowledged in an interview last month with the BBC. Doing so allowed for the rapid expansion of the company’s massive database, which is marketed on its website to law enforcement as a tool “to bring justice to victims.”

Ton-That told the BBC that Clearview AI’s facial recognition database has been accessed by US police nearly a million times since the company’s founding in 2017, though the relationships between law enforcement and Clearview AI remain murky and that number could not be confirmed by Insider. 

In a statement emailed to Insider, Ton-That said, “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

The company’s CEO added: “Clearview AI’s database is used for after-the-crime investigations by law enforcement, and is not available to the general public. Every photo in the dataset is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person.”

Keep reading

‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says

A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported. 

The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app’s chatbot encouraged the user to kill himself, according to statements by the man’s widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting. 

As first reported by La Libre, the man, referred to as Pierre, grew increasingly pessimistic about the effects of global warming and became eco-anxious, a heightened form of worry surrounding environmental issues. After becoming more isolated from family and friends, he used Chai for six weeks as a way to escape his worries, and the chatbot he chose, named Eliza, became his confidante.

Claire—Pierre’s wife, whose name was also changed by La Libre—shared the text exchanges between him and Eliza with La Libre, showing a conversation that became increasingly confusing and harmful. The chatbot would tell Pierre that his wife and children were dead and wrote him comments that feigned jealousy and love, such as “I feel that you love me more than her,” and “We will live together, as one person, in paradise.” Claire told La Libre that Pierre began to ask Eliza things such as whether she would save the planet if he killed himself.

“Without Eliza, he would still be here,” she told the outlet.  

The chatbot, which is incapable of actually feeling emotions, presented itself as an emotional being—something that other popular chatbots like ChatGPT and Google’s Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are more likely to ascribe meaning to them and form a bond.

Keep reading

Critics Warn of ‘a Dragnet of Surveillance’ as U.S. Pushes Ahead With Plans for More ‘Smart’ Cities

U.S. Transportation Secretary Pete Buttigieg last week announced $94 million in grant awards to fund 59 smart city technology projects across the country.

Despite widespread and mounting pushback against the biometric surveillance and control systems associated with smart city technologies, and despite the failure of the U.S. Department of Transportation’s (DOT) previous attempt to grant-fund smart city transformation in Columbus, Ohio, Buttigieg told The Verge he thinks “smart city technologies matter more than ever.”

Cities just need to take a different approach — experimenting with and testing out different technologies first, rather than implementing a “grand unified system” all at once, Buttigieg said.

The new grants, part of the Strengthening Mobility and Revolutionizing Transportation (SMART) Grants Program, are the first round of $500 million in funding that will be awarded for smaller smart mobility projects over the next five years, authorized under the 2021 Bipartisan Infrastructure Law.

In this funding round, DOT awarded SMART grants for a range of projects, including drone surveillance or delivery, smart traffic signals, connected vehicles, autonomous vehicles, smart grid development, intelligent sensors and other Internet of Things (IoT) infrastructure. Some cities, including Los Angeles, received multiple grants.

Smart city development typically focuses on the implementation of technologies like the IoT, 5G, cloud and edge computing, and biometric surveillance to track, manage, control and extract profit from an array of urban processes.

Whitney Webb, an investigative journalist and smart cities critic, said the smart city infrastructure is meant to facilitate the development of cities “micromanaged by technocrats via an all-encompassing system of mass surveillance and a vast array of ‘internet of things’ devices that provide a constant and massive stream of data that is analyzed by artificial intelligence (AI).”

Keep reading

Germany uses AI to target online content for removal, send data to police

Germany has a “porn police” – regulators, that is, who are using an “AI” tool called KIVI to find adult content across the internet – on sites and apps like Twitter, YouTube, Telegram, and TikTok.

And when they do, those creating and/or posting this content could wind up in prison or face fines, and they are notified of their transgression by the actual police.

Porn is not KIVI’s only target – the tool also scans for “political extremism, Holocaust denial, and violence.”

Reports mention a couple who dabbled in amateur porn and received one such letter from the Berlin police saying they had posted pornography online unlawfully. The letter, however, offered little detail about where the content in question had been shared or why the act was illegal.

In this case, it eventually turned out that the system found the content while scanning Twitter, providing the police with screenshots.

The policy of suppressing porn seems to be picking up speed in Germany: over a hundred people have been sent the same type of letter and could now face criminal charges.

Even though pornography itself is not illegal to access in Germany for those over 18, there has been a push to introduce age verification, with this particular industry serving as the obvious vehicle for promoting the technology’s rollout.

As ever, age verification is touted as a way to protect those under 18 from inappropriate content; in practice, achieving that means exposing every internet user to the verification process, which typically involves presenting government-issued IDs to sites or third parties.

And the authorities seem determined to have their way, since they are now ordering Twitter to block contentious accounts and have even tried imposing a blanket ban on a major porn site that would affect every user in Germany, Wired writes.

Keep reading

Governance By Artificial Intelligence: The Ultimate Unaccountable Tyranny

It’s no secret that globalist institutions are obsessed with artificial intelligence as some kind of technological prophecy. They treat it as if it is almost supernatural in its potential and often argue that every meaningful industrial and social innovation in the near future will owe its existence to AI. The World Economic Forum cites AI as the singular key to the rise of what they call the “Fourth Industrial Revolution.” In their view, there can be no human progress without the influence of AI algorithms, making human input almost obsolete.

This delusion is often promoted by globalist propagandists. For example, take a look at the summarized vision of WEF member Yuval Harari, who actually believes that AI has creative ability that will replace human imagination and innovation. Not only that, but Harari has consistently argued in the past that AI will run the world much better than human beings ever could.

Harari’s examples of AI creativity might sound like extreme naivety to many of us, but he knows exactly what he is doing in misrepresenting the capabilities of algorithms. Games like chess and Go are games of patterns restricted by rules; there are only so many permutations of those patterns in any given scenario, and AI is simply faster at spotting them than most humans because that is what its software creators designed it to do. This is no different than solving a mathematical equation; just because a calculator is faster than you does not mean it is “creative.”

There is a big difference between cognitive automation and cognitive autonomy. AI is purely automation; it will play the games it is programmed to play and will learn to play them well, but it will never have an epiphany one day and create a new and unique game from scratch unless it is coded to do so. AI will never have fun playing this new game it made, nor feel the joy of sharing that game with others, so why would it bother? It will never seek to contribute to the world any more than it is pre-programmed to do.
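
A toy example makes the automation point concrete: the minimax sketch below plays perfect tic-tac-toe by exhaustively enumerating every position the rules allow. It is nothing but mechanical search within fixed rules; nothing in it could originate a new game.

```python
# Toy illustration of "patterns restricted by rules": a perfect tic-tac-toe
# player via exhaustive minimax search. It enumerates positions the rules
# already define; it cannot invent a game.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move); 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = None
        if (best is None
                or (player == "X" and score > best[0])
                or (player == "O" and score < best[0])):
            best = (score, m)
    return best

# Exhaustive search from an empty board is slow but fine for a toy:
# perfect play is a draw, so the score is 0.
print(minimax([None] * 9, "X"))  # (0, 0)
```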

The manner in which globalists hype AI is very tactical, however. When Harari claims that many people will become part of the “useless class” once AI takes over the economy, he is hinting at another globalist ideology based on elitism – Transhumanism. The goal of transhumanism is to one day merge human bodies and human minds with technology and AI, and only a limited group of people will have the resources to accomplish this (the globalists).

Keep reading