The Military Dangers of AI Are Not Hallucinations

A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial, and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable “hallucinations,” resulting in potentially catastrophic outcomes. But there’s an even more dangerous scenario imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.

The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture. In the prophetic 1983 film WarGames, a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced “whopper”) nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The Terminator movie franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called “Skynet” that, like WOPR, was designed to control U.S. nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.

Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. In addition to developing a wide variety of “autonomous,” or robotic, combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called “robot generals.” In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America’s atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity’s demise.

Keep reading

U.S. Committee Examines Role of AI in Warfare

Richard Moore, chief of the United Kingdom’s Secret Intelligence Service (SIS), claimed in a rare public speech on Wednesday that “artificial intelligence [AI] will change the world of espionage, but it won’t replace the need for human spies,” while admitting that British spies are already using AI to disrupt the supply of weapons to Russia.  

According to AP News, in his speech Moore painted AI as a “potential asset and major threat” and called China the “single most important strategic focus” for SIS, commonly known as MI6. He added, “We will increasingly be tasked with obtaining intelligence on how hostile states are using AI in damaging, reckless and unethical ways.” 

Moore also said that “the unique characteristics of human agents in the right places will become still more significant,” highlighting spies’ ability to “influence decisions inside a government or terrorist group.”

While speaking to an audience at the British ambassador’s residence in Prague, Moore urged Russians who oppose the invasion of Ukraine to spy for Britain. “I invite them to do what others have already done this past 18 months and join hands with us,” he said, assuring prospective defectors that “their secrets will always be safe with us” and that “our door is always open.”  

While the MI6 chief spent more time talking about the Russia-Ukraine conflict, it was his comments on the West potentially “falling behind rivals in the AI race” that stood out. Moore declared: “Together with our allies, [SIS] intends to win the race to master the ethical and safe use of AI.”

Well aware of how hostile states are using AI, the House Armed Services Subcommittee on Cyber, Information Technologies, and Innovation heard testimony from AI experts at Tuesday’s hearing, “Man and Machine: Artificial Intelligence on the Battlefield.”

The subcommittee’s goal was to discuss “the barriers that prevent the Department of Defense [DOD] from adopting and deploying artificial intelligence (AI) effectively and safely, the Department’s role in AI adoption, and the risks to the Department from adversarial AI.” 

Alexandr Wang, founder and CEO of Scale AI, testified that during an investor trip to China, he witnessed first-hand the “progress that China was making toward developing computer vision technology and other forms of AI.” Wang was troubled at the time, “because this technology was also being used for domestic repression, such as persecuting the Uyghur population.” 

Keep reading

Japan To Deploy Pre-Crime Style “Behavior Detection” Technology

Japan’s National Police Agency has decided to adopt AI-enhanced pre-crime surveillance cameras to bolster the security measures surrounding VIPs.

This step comes as Japan commemorates the shocking assassination of former Prime Minister Shinzo Abe, and in response to the rising threats posed by what the government calls “lone offenders.”

The use of AI in law enforcement is becoming commonplace globally. A 2019 study by the Carnegie Endowment for International Peace revealed that 52 out of the 176 nations surveyed were incorporating AI tools into their policing strategies, Nikkei Asia reported.

Keep reading

MIT Makes Probability-Based Computing a Bit Brighter

In a noisy and imprecise world, the definitive 0s and 1s of today’s computers can get in the way of accurate answers to messy real-world problems. So says an emerging field of research pioneering a kind of computing called probabilistic computing. And now a team of researchers at MIT has demonstrated a new way of generating probabilistic bits (p-bits) at much higher rates, using photonics to harness random quantum oscillations in empty space.

The deterministic way in which conventional computers operate is not well suited to dealing with the uncertainty and randomness found in many physical processes and complex systems. Probabilistic computing promises to provide a more natural way to solve these kinds of problems by building processors out of components that behave randomly themselves.

The approach is particularly well suited to complicated optimization problems with many possible solutions or to doing machine learning on very large and incomplete datasets where uncertainty is an issue. Probabilistic computing could unlock new insights and findings in meteorology and climate simulations, for instance, or spam detection and counterterrorism software, or next-generation AI.
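
To make the idea concrete, here is a minimal software sketch, with every coupling and parameter chosen purely for illustration; it emulates p-bits in NumPy rather than modeling MIT’s photonic hardware. Each p-bit flips at random with a bias supplied by its neighbors, so the network as a whole settles into the low-energy answers of a tiny optimization problem (a three-node max-cut, phrased as an Ising model).

    # Software-emulated p-bits sampling a tiny Ising optimization problem.
    # Illustrative only -- this mimics the standard p-bit update rule,
    # not the photonic hardware described above.
    import numpy as np

    rng = np.random.default_rng(0)

    # Couplings for a 3-node antiferromagnetic triangle (max-cut):
    # negative J rewards neighboring p-bits taking opposite signs.
    J = np.array([[ 0., -1., -1.],
                  [-1.,  0., -1.],
                  [-1., -1.,  0.]])
    beta = 2.0                        # inverse "temperature": higher = less random
    m = rng.choice([-1, 1], size=3)   # random initial p-bit states
    counts = {}

    for _ in range(5000):
        i = rng.integers(3)           # pick one p-bit to update
        bias = beta * (J[i] @ m)      # input from its neighbors
        # p-bit rule: output +1 or -1 with a tanh-shaped bias,
        # i.e. m_i = sgn(tanh(I_i) - U) with U uniform on (-1, 1).
        m[i] = 1 if np.tanh(bias) > rng.uniform(-1, 1) else -1
        counts[tuple(m)] = counts.get(tuple(m), 0) + 1

    # The most-visited states are the degenerate max-cut optima:
    # any configuration in which the three p-bits do not all agree.
    for state, n in sorted(counts.items(), key=lambda kv: -kv[1])[:4]:
        print(state, n / 5000)

Because the answer is read out as a distribution over many random flips, generating p-bits faster, as the MIT team’s photonic approach does, directly means more samples, and therefore better answers, per second.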

Keep reading

ChatGPT’s Evil Twin “WormGPT” Is Silently Entering Emails And Raiding Banks

A bad actor has created a malicious copy of OpenAI’s ChatGPT, and its aim is to take your money.

The evil AI is called WormGPT, and it was created by a hacker for sophisticated email phishing attacks.

Cybersecurity firm SlashNext confirmed that the AI language bot had been created purely for malicious purposes.

The firm explained in a report:

Our team recently gained access to a tool known as ‘WormGPT’ through a prominent online forum that’s often associated with cybercrime.

This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities.

The cyber experts experimented with WormGPT to see just how dangerous it could be.

They asked it to create phishing and business email compromise (BEC) emails and found the results disturbing.

“The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.

“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” the experts wrote.

SlashNext says WormGPT is an example of the threat that generative AI language models pose.

Experts think the tool could be damaging even in the hands of a novice cybercriminal.

With AI like this out there, it’s best to be extra vigilant when it comes to checking your email inbox.

That especially applies to any email that asks for money, banking details, or other personal information.

Keep reading

CEO of Worldcoin Says “Something Like World ID Will Eventually Exist…Whether You Like It Or Not”

Right now, it’s about those who voluntarily surrender their biometric data and receive “small sums” of Worldcoin in return for signing up to the World ID scheme.

But if OpenAI CEO Sam Altman, who co-founded the Worldcoin project, has anything to say about how it develops, everyone who wants to use the internet will eventually be required to use World ID, or “something like it.”

And right now, it seems that people in several southern European countries, notably Spain and Portugal, are simply itching to give away their iris biometrics as proof of identity and of the right to a cryptocurrency wallet.

The signup process involves exposing your eyes to Worldcoin’s iris scanners, known as Orbs. If reports are to be believed, uptake in Spain, where the scheme first became available a year ago, is better than elsewhere: 150,000 participants in total, 20,000 new ones each day, and a number of additional Orb scanners slated for installation in Barcelona.

Portugal is not far behind, with 120,000 participants, and Germany is said to also be warming up to the project, ever since it started expanding two months ago.

All in all, some 2 million “biometric credentials” are now operated by Worldcoin. Why do people sign up for it?

“Something like World ID will eventually exist, meaning that you will need to verify [you are human] on the internet, whether you like it or not,” Worldcoin CEO Alex Blania said.

“Whether you like it or not” is hardly the kind of phrase people like to hear in connection with something like this, but that is how Blania chose to describe the future.

In that future, according to Blania, digital ID will be so prevalent that it will become inevitable: there will be no escaping having to verify one’s humanity (and likely quite a few other things) online, if one wants to be online at all.

And whether one “likes it or not.” Blania links this to “progress” in “AI” and predicts it could happen within as little as a couple of years.

Keep reading

The US Military Is Taking Generative AI Out for a Spin

Matthew Strohmeyer is sounding a little giddy. The US Air Force colonel has been running data-based exercises inside the US Defense Department for years. But for the first time, he has used a large language model to perform a military task.

“It was highly successful. It was very fast,” he tells me a couple of hours after giving the first prompts to the model. “We are learning that this is possible for us to do.”

Large language models, LLMs for short, are trained on huge swaths of internet data to predict and generate human-like responses to user prompts. They are what power generative AI tools such as OpenAI’s ChatGPT and Google’s Bard.
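
What “predict the next token” means in practice can be shown with a toy sketch; this deliberately simplified stand-in, built on a made-up three-sentence corpus, is not a real LLM, but the generation loop at its core is the same one that models like ChatGPT run with billion-parameter networks trained on trillions of tokens.

    # A toy next-word predictor: "train" by counting which word follows
    # which in a tiny corpus, then generate text by repeatedly sampling
    # a likely next word. Illustrative only -- real LLMs use neural
    # networks, subword tokens, and vastly more data.
    import random
    from collections import defaultdict

    corpus = ("request the latest readiness report . "
              "request the latest logistics summary . "
              "send the logistics summary to command .").split()

    # Count word -> next-word transitions.
    next_words = defaultdict(list)
    for word, nxt in zip(corpus, corpus[1:]):
        next_words[word].append(nxt)

    def generate(prompt: str, length: int = 8) -> str:
        word, out = prompt, [prompt]
        for _ in range(length):
            candidates = next_words.get(word)
            if not candidates:
                break
            word = random.choice(candidates)  # sample the next "token"
            out.append(word)
        return " ".join(out)

    print(generate("request"))  # e.g. "request the latest logistics summary ."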

Five such models are being put through their paces as part of a broader series of Defense Department experiments focused on developing data integration and digital platforms across the military. The exercises are run by the Pentagon’s digital and AI office and military top brass, with participation from US allies. The Pentagon won’t say which LLMs are in testing, though Scale AI, a San Francisco-based startup, says its new Donovan product is among the LLM platforms being tested.

The use of LLMs would represent a major shift for the military, where so little is digitized or connected. Currently, making a request for information to a specific part of the military can take several staffers hours or even days to complete, as they jump on phones or rush to make slide decks, Strohmeyer says.

In one test, one of the AI tools completed a request in 10 minutes.

“That doesn’t mean it’s ready for primetime right now. But we just did it live. We did it with secret-level data,” he says of the experiment, adding it could be deployed by the military in the very near term. 

Strohmeyer says they have fed the models classified operational information to inform sensitive questions. The long-term aim of such exercises is to update the US military so it can use AI-enabled data in decision-making, sensors, and ultimately firepower.

Dozens of companies, including Palantir Technologies Inc., co-founded by Peter Thiel, and Anduril Industries Inc. are developing AI-based decision platforms for the Pentagon.

Microsoft Corp. recently announced that users of the Azure Government cloud computing service could access AI models from OpenAI. The Defense Department is among Azure Government’s customers.

The military exercise, which runs until July 26, will also serve as a test of whether military officials can use LLMs to generate entirely new options they’ve never considered. 

For now, the US military team will experiment by asking LLMs for help planning the military’s response to an escalating global crisis that starts small and then shifts into the Indo-Pacific region.

Keep reading

Lab Administers A.I.-Designed Drug to First Patient

Hong Kong- and New York-based Insilico Medicine announced on Tuesday that a drug for treating idiopathic pulmonary fibrosis (IPF), designed by generative artificial intelligence (A.I.), has advanced to Phase 2 clinical trials, meaning the drug has been administered to its first human patient.

IPF is a chronic lung disease that makes it more difficult to breathe, starving the body of much-needed oxygen. IPF is currently regarded as incurable, but treatable. 

“Generative A.I.” refers to the class of artificial intelligence that can accept fairly broad commands from a human user and create a complex finished product. Such A.I. systems grow more powerful and useful as they “learn” by accumulating information. DALL-E, the computer art program that can fulfill instructions like “Show me what the Peanuts characters would look like if Picasso drew them,” is a popular example.

Creating a new medicine is a daunting task. The design stage includes a great deal of labor-intensive research that could hopefully be completed more quickly by A.I.

Keep reading

AI Artist Creates Satanic Panic About Hobby Lobby

People on social media are sharing pictures of what they think are Satanic-seeming displays in Hobby Lobby stores and vowing never to shop there again, much as many people refuse to drink Bud Light or shop at Target for bigoted reasons. Aside from the fact that Americans are currently eager to boycott any company that feigns tolerance toward marginalized people, there’s one big problem with these Hobby Lobby store pictures: They’re not real.

These pictures of Satanic merchandise on the shelves of Hobby Lobby were made by Jennifer Vinyard using the AI image-generation tool Midjourney. That didn’t stop people from credulously sharing the photos on Facebook and TikTok as if they were real and expressing their shock and horror that Hobby Lobby, which bills itself as a Christian company, was selling giant statues of Baphomet.

Vinyard, an Austin-area pharmacist in training, generated the pictures with Midjourney and posted them on June 5 to her personal Facebook page, to Reddit, and to an AI art group on Facebook. The public post in AI Art Universe went viral and, as of this writing, has been shared more than 6,000 times. The post gained more than 100 comments before the page shut them down.

Keep reading

The World’s Top H.P. Lovecraft Expert Weighs In on a Monstrous Viral Meme in the A.I. World

Artificial intelligence is scary to a lot of people, even within the tech world. Just look at how industry insiders have co-opted a tentacled monster called a shoggoth as a semi-tongue-in-cheek symbol for their rapidly advancing work.

But their online memes and references to that creature — which originated in influential late author H.P. Lovecraft’s novella “At the Mountains of Madness” — aren’t quite perfect, according to the world’s leading Lovecraft scholar, S.T. Joshi.

If anyone knows Lovecraft and his wretched menagerie, which includes the ever-popular Cthulhu, it’s Joshi. He’s edited reams of Lovecraft collections, contributed scores of essays about the author and written more than a dozen books about him, including the monumental two-part biography “I Am Providence.”

So, after The New York Times recently published a piece from tech columnist Kevin Roose explaining that the shoggoth had caught on as “the most important meme in A.I.,” CNBC reached out to Joshi to get his take — and find out what he thought Lovecraft would say about the squirmy homage from the tech world.

“While I’m sure Lovecraft would be grateful (and amused) by the application of his creation to AI, the parallels are not very exact,” Joshi wrote. “Or, I should say, it appears that AI creators aren’t entirely accurate in their understanding of the shoggoth.”

Keep reading