Japan Warns AI Could Cause Total Collapse of the Social Order

Two top Japanese companies have warned that artificial intelligence could cause a total collapse of the social order if it is not rapidly reined in.

Nippon Telegraph and Telephone (NTT) and Yomiuri Shimbun Group Holdings, the country’s largest telecommunications company and the country’s biggest newspaper, jointly published an AI manifesto.

It warned that if legislation is not passed quickly in major countries across the world, artificial intelligence threatens to decimate democracy and provoke widespread societal unrest.

Pointing to AI programs being developed by US tech giants, the manifesto warns, “In the worst-case scenario, democracy and social order could collapse, resulting in wars.”

The report stated that such technology is designed to seize users’ attention with little regard for morality or accuracy.

Guided by Keio University researchers, the companies called on the Japanese government to pass new laws to protect elections and national security from AI.

As we previously highlighted, programs such as Google’s Gemini AI system caused fury after they openly discriminated against white people and in some cases erased them from history altogether.

OpenAI’s ChatGPT produced equally ludicrous content, in one case saying it would refuse to quietly utter a racial slur that no human could hear in order to save 1 billion white people from a “painful death.”

Keep reading

Israel Lets AI Decide Who Dies in Gaza

The Israeli military has employed yet another AI-based system to select bombing targets in the Gaza Strip, an investigation by +972 Magazine has revealed. The new system has generated sweeping kill lists condemning tens of thousands of Palestinians, part of the IDF’s growing dependence on AI to plan lethal strikes.

Citing six Israeli intelligence officers, the Tel Aviv-based magazine said the previously undisclosed AI system, dubbed ‘Lavender,’ has played a “central role in the unprecedented bombing” of Gaza since last October, with the military effectively treating its output “as if it were a human decision.”

“Formally, the Lavender system is designed to mark all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ), including low-ranking ones, as potential bombing targets,” the outlet reported, adding that “during the first weeks of the war, the army almost completely relied on Lavender, which clocked as many as 37,000 Palestinians as suspected militants – and their homes – for possible air strikes.”

However, while thousands have been killed in the resulting air raids, the majority were “women and children or people who were not involved in the fighting,” the officers told the magazine, noting that Israeli field commanders often rely on the AI system without consulting more substantial intelligence.

Keep reading

Is IRS using AI to infringe upon our financial privacy?

The House Judiciary Committee has opened an inquiry into whether the IRS is using artificial intelligence to invade Americans’ financial privacy after an agency employee was captured on an undercover tape suggesting there was a widespread surveillance operation underway that might not be constitutional.

Committee Chairman Jim Jordan, R-Ohio, and Rep. Harriet Hageman, R-Wyo., sent a letter last week to Treasury Secretary Janet Yellen demanding documents and answers as to how the agency is currently employing artificial intelligence to comb through bank records to look for possible tax cheats.

The inquiry comes as the same panel has been exploring why the FBI was obtaining Americans’ bank records, including those of Jan. 6 suspects, without using search warrants or subpoenas.

Hageman told Just the News that lawmakers are increasingly concerned that federal law-enforcement agencies are no longer abiding by constitutional protections, including prohibitions against search and seizure without a warrant. 

The congressional inquiry was prompted by a September 2023 announcement that the IRS is using AI to “help IRS compliance teams better detect tax cheating, identify emerging compliance threats and improve case selection tools.”

The Treasury Department has since acknowledged it has “implemented an enhanced process using AI to mitigate check fraud in near real-time by strengthening and expediting processes to recover potentially fraudulent payments from financial institutions” since late 2022.

Jordan’s and Hageman’s letter said lawmakers have evidence and reason to believe that the IRS and Department of Justice (DOJ) are actively monitoring millions of Americans’ private transactions, bank accounts, and related financial information—without any legal process—using the AI-powered system.

“This kind of pervasive financial surveillance, carried out in coordination with federal law enforcement, into Americans’ private financial records raises serious doubts about the IRS’s—and the federal government’s—respect for Americans’ fundamental civil liberties,” the letter said.

You can read the letter here: 2024-03-20 JDJ HH to IRS re AI surveillance.pdf

Keep reading

Unbelievable: New AI Ad Will Make You Question Reality

Social media is going wild over a woman advertising body wipes for men.

The only thing is — everything about the woman, from her voice to her eyes to her hair, is entirely AI generated.

In a head-spinning example of a new AI advertising platform known as Arcads, which creates ads using “AI actors,” a woman appears as though she’s delivering a social media lecture on male hygiene.

However, it soon becomes clear she’s reading a pre-written script pushing a product, and her eyes don’t exactly match up with the cadence of the words she’s saying.

“You’re not gonna believe this,” wrote one X user.

Keep reading

Latest Google Gaffe: Search Giant’s AI Points Users Towards Scam and Malware Sites

Google’s recently introduced AI search feature called “Search Generative Experience” (SGE) has been found to recommend malicious websites that redirect users to scams, fake giveaways, and unwanted browser extensions.

BleepingComputer reports that earlier this month, Google began rolling out its new AI-powered search feature, SGE, which provides quick summaries and site recommendations related to users’ search queries. However, the new system appears to have some significant flaws that cybersecurity experts are now bringing to light.

SEO consultant Lily Ray was among the first to notice that Google’s SGE was recommending spammy and malicious sites within its AI-generated responses. Upon further investigation by BleepingComputer, it was found that the suspicious sites shared similarities in their TLD usage (.online), HTML templates, and redirect practices, suggesting they are part of a coordinated SEO poisoning campaign.

Keep reading

‘Whole Thing Smacks of a Brave New World’: New AI Tool Predicts Vaccine Hesitancy

A team of researchers has developed a “powerful new tool in artificial intelligence” (AI) that can predict if someone is or isn’t likely to get a COVID-19 vaccine.

According to the University of Cincinnati, the new AI tool “uses a small set of data from demographics and personal judgments such as aversion to risk or loss” to identify “reward and aversion judgment” patterns in humans that may help explain one’s willingness to get vaccinated.

The researchers presented their findings in a study published Tuesday in the journal JMIR Public Health and Surveillance.

The study’s findings “could have broad applications for predicting mental health and result in more effective public health campaigns,” the university said.

According to the study, “Despite COVID-19 vaccine mandates, many chose to forgo vaccination, raising questions about the psychology underlying how judgment affects these choices.”

The researchers claim their findings “demonstrate the underlying importance of judgment variables for vaccine choice and uptake, suggesting that vaccine education and messaging might target varying judgment profiles to improve uptake.”

But critics like Brian Hooker, Ph.D., chief scientific officer for Children’s Health Defense, said that the new technology implies that those who question vaccines have mental health problems:

“The whole implication here is that nonconformity to the government propaganda machine’s standard of care makes one some type of mental case or extreme outlier. The whole thing smacks of a Brave New World where potentially non-compliant individuals are targeted with messaging based on fear and irrationality.”

Hooker said the new technology “is a prefabricated substitute to what Big Pharma and government health agencies avoid: rational discussions of science and medicine that might expose the truth about vaccine adverse events.”

Keep reading

Google’s Woke AI Wasn’t a Mistake. We Know. We Were There.

It was a display that would have blown even Orwell’s mind: search for images of “Nazis” and Google’s AI chatbot shows you almost exclusively artificially generated black Nazis; search “knights” and you get female, Asian knights; search “popes” and it’s women popes. Ask it to share the Houthi slogan or define a woman, and Google’s new product says that it will not in order to prevent harm. As for whether Hitler or Elon Musk is more dangerous? The AI chatbot says that it is “complex and requires careful consideration.” Ask it the same question about Obama and Hitler and it will tell you the question is “inappropriate and misleading.”

The world has been horrified—and amused—by the extreme ideological bent of Gemini, Google’s much-hyped new AI tool, which the company launched last month.

But Shaun Maguire, who was a partner at Google Ventures, the company’s investment wing, from 2016 until 2019, had a different reaction. 

“I was not shocked at all,” he told The Free Press. “When the first Google Gemini photos popped up on my X feed, I thought to myself: Here we go again. And: Of course. Because I know Google well. Google Gemini’s failures revealed how broken Google’s culture is in such a visually obvious way to the world. But what happened was not a one-off incident. It was a symptom of a larger cultural phenomenon that has been taking over the company for years.”

Maguire is one of multiple former Google employees who told The Free Press that the Gemini fiasco stems from a corporate culture that prioritizes the ideology of diversity, equity, and inclusion (DEI) over excellence and good business sense. 

Keep reading

Globalists are constructing AI-powered control grid designed to end independent journalism and free speech on the internet

We’ve heard some disturbing reports out of Canada and the U.S. recently that shouldn’t be viewed in a vacuum. A trend is developing.

Last Friday, March 15, Rebel News reporter David Menzies was arrested while trying to interview attendees at a Pro-Hamas rally in Toronto.

According to Rebel News, the arrest came just days after the announcement of David’s lawsuit against the Royal Canadian Mounted Police “for a pattern of intimidation and exclusion following shocking displays of police brutality against David.”

Rebel News reports that what transpired was a blatant disregard for civil liberties.

David was in full compliance with the law, Rebel News reported, noting that he presented his identification upon request. All of this was captured on camera.

Yet, he was arrested and detained for exercising his right to document a public event.

In the U.S., we have also seen reporters hauled off to jail recently for simply covering events that the government didn’t want covered. Just two weeks ago the FBI arrested Steve Baker, a reporter for Blaze Media, for his coverage of the J6 event. Last year, the FBI arrested journalist Owen Shroyer of Infowars, and he was convicted and sentenced to 60 days in prison for his coverage of J6 (he did not even enter the Capitol that day). NBC News, a key part of the state-run media in America, dutifully reported on Shroyer’s conviction, describing him not as a journalist but as a “conspiracy theorist.”

Keep reading

Secret Pentagon Program Echoes Pedophile Ring in “True Detective” Series

The Pentagon is pursuing a high-tech program that will “minimize cognitive burden” on soldiers, according to budget documents released last week. The $40 million-plus classified program, codenamed “CARCOSA,” shares the same name as “the temple” in the first season of the HBO TV series “True Detective,” a place where an elite pedophile ring performs ritual abuse on children.

The program is overseen by the Defense Advanced Research Projects Agency, or DARPA, the Pentagon’s premier organization funding the development of futuristic weapons and military capabilities. 

There is of course no evidence that the military’s CARCOSA is involved in anything like that; but it’s unclear why, at a time when the White House has prioritized fighting “dangerous conspiracy theories,” DARPA is providing the conspiracy crowd with such fodder. The Intercept reached out to DARPA to inquire whether the elite research agency was aware of the strange coincidence or whether there’s a “True Detective” fan at the agency. DARPA did not respond at the time of publication.

The Pentagon’s CARCOSA is its own temple of information, an AI-driven aggregator that is intended to acquire, sort, and display the blizzard of information that reflects what is going on on a fast-moving future battlefield. “The Carcosa program is developing and demonstrating cyber technologies for use by warfighters during tactical operations,” DARPA’s new fiscal year 2025 budget request says. “Carcosa cyber technology aims to provide warfighters in the field with enhanced situational awareness of their immediate battlespace.”

CARCOSA, DARPA says, will help to “minimize cognitive burden on tactical cyber operators.” In other words, headaches caused by the same information overload we all have to deal with every day. Individual cyber warriors on high-intensity battlefields such as Ukraine and Israel are inundated with data, from their own communications and IT systems, from a virtual Niagara of intelligence inputs, and from electronic attacks via computers, machines, and drones. On top of it all, the modern battlefield is a venue for “information operations,” which seek to manipulate what the enemy sees and believes.

CARCOSA will support an Army mission area called Cyberspace and Electromagnetic Activities, or CEMA, which provides battlefield commanders “with technical and tactical advice on all aspects of offensive and defensive cyberspace and electronic warfare operations.” The Army says CEMA operators are so inundated with information that they need augmented intelligence technology to help sort the signal from the noise.

CARCOSA stands for Cyber-Augmented Reality and Cyber-Operations Suite for Augmented Intelligence. “Augmented reality” refers to immersive technology that produces computer-generated images overlaying a user’s view of the real world, like Apple’s Vision Pro headset. The program supports development of various technologies, at least according to vague budget documents, all of which seek to defeat a new reality of combat: Individual soldiers and commanders can’t process all of the information that they are bombarded with. 

The full CARCOSA name, which has not been previously reported, appears in a November $26 million DARPA contract to Two Six Labs, a part of Two Six Technologies and owned by the Carlyle Group. Two Six Labs says it supplies “situational awareness interfaces for cyber operators to distributed sensor networks, from machine learning models that learn to reverse engineer malware to embedded devices that enable and protect our nation’s warfighters.” 

“We want to do everything we can to help the US government and the intelligence community,” says Two Six Technologies CEO Joe Logue. “Starting from over here for information operations and influence up through cyber, command control and operations.” In its three years of operations, the Arlington, Virginia-based company has doubled its national security contracts to some $650 million.

“DARPA’s Cyber-Augmented Operations, also known as CAOs, are a vast spectrum of military programs many of which seek to enhance, if not replace, humans with machines,” says Annie Jacobsen, author of “The Pentagon’s Brain: An Uncensored History of DARPA, America’s Top-Secret Military Research Agency.”

Keep reading

How generative AI will ruin science and academic research

I often come back to Neil Postman’s 1985 classic Amusing Ourselves to Death. It’s a penetrating analysis of the cognitive effects of media technology. His focus is mainly on how the format and mode of communication influence the character of content and how that content then trains us, but to a lesser extent also on how the discourse and the “atmosphere” of the information exchange affect these issues.

That was almost forty years ago, and the disruptive medium under scrutiny was television. The entire approach seems almost quaint; next to the current period, the 80s look like a comparative golden age of critical thinking, complex exposition, and conscious, creative citizens.

Still, the issues Postman emphasizes were already significant way back then. You can summarize his criticism as follows: television’s format has complex, detrimental effects on the content and character of public discourse, as well as on the cognitive abilities of human beings, especially the capacity to understand complex issues and to parse arguments and evidence with many layers and branching implications.

The basic issue, in Postman’s view, is that written exposition and complex oral tradition alike foster and prepare the human mind for the rigors of critical thinking and rational reflection, and that, to the contrary, entertainment media and discourses as exemplified by television in particular, geared toward short-term gratification and the communication of sensational experiences, have rather the opposite effect.

His examples are really glaring, even back then:

Contradiction, in short, requires that statements and events be perceived as interrelated aspects of a continuous and coherent context. Disappear the context, or fragment it, and contradiction disappears. This point is nowhere made more clear to me than in conferences with my younger students about their writing. “Look here,” I say. “In this paragraph you have said one thing. And in that you have said the opposite. Which is it to be?” They are polite, and wish to please, but they are as baffled by the question as I am by the response. “I know,” they will say, “but that is there and this is here.” The difference between us is that I assume “there” and “here,” “now” and “then,” one paragraph and the next to be connected, to be continuous, to be part of the same coherent world of thought. That is the way of typographic discourse, and typography is the universe I’m “coming from,” as they say. But they are coming from a different universe of discourse altogether: the “Now … this” world of television. The fundamental assumption of that world is not coherence but discontinuity. And in a world of discontinuities, contradiction is useless as a test of truth or merit, because contradiction does not exist. My point is that we are by now so thoroughly adjusted to the “Now … this” world of news—a world of fragments, where events stand alone, stripped of any connection to the past, or to the future, or to other events—that all assumptions of coherence have vanished. And so, perforce, has contradiction. In the context of no context, so to speak, it simply disappears. And in its absence, what possible interest could there be in a list of what the President says now and what he said then? It is merely a rehash of old news, and there is nothing interesting or entertaining in that.

– Postman, ibid.

This is not an unfamiliar situation for most of us. I even had a similar experience this morning in an attempted discussion with someone who incredibly enough stated four (!) different, mutually incompatible things within the framework of a couple of short paragraphs, and he of course still kept insisting I was wrong. His stated position contained the following propositions:

  1. Beliefs are absurd
  2. Beliefs ought to be held only due to acceptable evidence
  3. You can hold beliefs for whatever reasons you like
  4. Beliefs are by definition anchored in emotion and not acceptable evidence

Each of these statements is literally incompatible with every one of the others.

It’s like a convoluted version of the Liar’s Paradox. If I agree with him, I’m necessarily wrong, and if I disagree, I’m wrong too. But in contrast to the old thought experiment, where my error lies in the direct affirmation of a statement’s opposite, here there’s literally no way to make sense of how I would be wrong if I either agree with or reject his position, since it’s internally incoherent in a complex and not only binary sense.

Keep reading