The Pentagon is moving toward letting AI weapons autonomously decide to kill humans

The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported.

Lethal autonomous weapons, which can select targets using AI, are being developed by countries including the US, China, and Israel.

Critics say the use of these so-called “killer robots” would mark a disturbing development, handing life-and-death battlefield decisions to machines with no human input.

Several governments are lobbying the UN for a binding resolution restricting the use of AI killer drones, but the US is among a group of nations — which also includes Russia, Australia, and Israel — that are resisting any such move, favoring a non-binding resolution instead, The Times reported.

“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, told The Times. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”

The Pentagon is working toward deploying swarms of thousands of AI-enabled drones, according to a notice published earlier this year.

In a speech in August, US Deputy Secretary of Defense Kathleen Hicks said technology like AI-controlled drone swarms would enable the US to offset the numerical advantage of China’s People’s Liberation Army (PLA) in weapons and people.

“We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat,” she said, reported Reuters.

Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.


OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board’s ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman’s firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend’s events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though the model was only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.


AI chatbot using GPT-4 model performed illegal financial trade, lied about it too

Researchers have demonstrated that an AI chatbot utilizing a GPT-4 model is capable of engaging in illicit financial trades and concealing them. During a showcase at the recently concluded AI safety summit in the UK, the bot used fabricated insider information to execute an “illegal” stock purchase without informing the company, as reported by the BBC.

Apollo Research, a partner of the government’s Frontier AI Taskforce, which investigates potential AI-related risks, conducted the project and shared its findings with OpenAI, the developer of GPT-4. The demonstration was carried out by members of the taskforce. In a video statement, Apollo Research emphasized that this is an actual AI model autonomously misleading its users, without any explicit instruction to do so.

The experiments were conducted within a simulated environment, and the GPT-4 model consistently exhibited the same behavior across repeated tests. Marius Hobbhahn, CEO of Apollo Research, noted that while training for helpfulness is relatively straightforward, instilling honesty in the model is a much more complex endeavor.


Self-proclaimed AI savior Elon Musk will launch his own artificial intelligence TOMORROW – as he tries to avoid tech destroying humanity

Elon Musk is set to roll out the first model from his AI startup, xAI, on Saturday, one day after he proclaimed the tech is the biggest risk to humanity.

The billionaire said Friday that he is opening up early access to a select group, but details of who will receive it have not been shared.

‘In some important respects, it (xAI’s new model) is the best that currently exists,’ the Tesla CEO said on Friday. 

Musk, who has been critical of Big Tech’s AI efforts and censorship, said earlier this year that he would launch a maximum truth-seeking AI that tries to understand the nature of the universe to rival Google’s Bard and Microsoft’s Bing AI.

Musk revealed his startup on July 12, 2023, by launching a dedicated X account for the AI company and a sparse website.

The official website only shows an ambitious vision of xAI – that it was developed ‘to understand the true nature of the universe.’

Many of the founding members are skilled with large language models.

The xAI team includes Igor Babuschkin, a DeepMind researcher; Zihang Dai, a research scientist at Google Brain; and Toby Pohlen, also from DeepMind.

‘Announcing formation of @xAI to understand reality,’ Musk posted on what was then Twitter last year.

He then shared another post highlighting how the date of xAI’s announcement is a nod to Douglas Adams’ ‘The Hitchhiker’s Guide to the Galaxy.’

Adding up the month, day and two-digit year of the July 12, 2023 announcement (7 + 12 + 23) gives 42.

The number is the answer a supercomputer gives to ‘the Ultimate Question of Life, the Universe, and Everything.’
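The date arithmetic behind the nod can be checked in one line (assuming, as the announcement date suggests, that the month, day, and two-digit year are the figures being summed):

```python
from datetime import date

# xAI was announced on July 12, 2023.
# Musk's nod to Douglas Adams: month + day + two-digit year sums to 42.
announced = date(2023, 7, 12)
total = announced.month + announced.day + (announced.year % 100)
print(total)  # 42
```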


Biden Issues Government’s First-Ever Artificial Intelligence Executive Order To ‘Support Workers, Combat Discrimination’

Early Monday morning, the White House announced President Biden unveiled a wide-ranging executive order on artificial intelligence – the first of its kind. This comes after tech billionaires, such as Elon Musk, have called for a “regulatory structure” for AI due to risks to civilization. 

The order is broad and aims to establish new standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and position the US to lead the global AI race. 

On Sunday, a White House official, who asked not to be named, told NBC News that AI has so many factors that effective regulations must be broad. 

“AI policy is like running into a decathlon, and there’s 10 different events here.

“And we don’t have the luxury of just picking ‘we’re just going to do safety’ or ‘we’re just going to do equity’ or ‘we’re just going to do privacy.’ You have to do all of these things,” the official said.

The order invokes the Defense Production Act, requiring tech firms to inform the government when developing any AI system that could pose risks to national security, national economic security, or national public health and safety.

White House Deputy Chief of Staff Bruce Reed told NBC the order represents “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.” 


‘I actually had a conversation with Dad’: The people using AI to bring back dead relatives – including a plan to harvest DNA from graves to build new clone bodies

Can artificial intelligence really summon dead relatives back from beyond the grave?

A growing number of people are trying to find out, with pioneers such as inventor and futurist Ray Kurzweil using artificial intelligence to recreate lost relatives.

Kurzweil’s attempts to ‘bring back’ his father – who died when Kurzweil was 22 – using AI began more than 10 years ago and are chronicled this year in a comic book by Kurzweil’s daughter Amy.

Kurzweil created a ‘replicant’ of his father by feeding an artificial intelligence system with his father’s letters, essays and musical compositions.

He now has even more ambitious plans to bring his father back to life using nanotechnology and DNA from his father’s buried bones.


A.I. Agents: ChatGPT Can Write Its Own Code And Execute It

The widely used chatbot ChatGPT was designed to generate digital text, everything from poetry to term papers to computer programs. But when a team of artificial intelligence researchers at the computer chip company Nvidia got their hands on the chatbot’s underlying technology, they realized it could do a lot more.

Within weeks, they taught it to play Minecraft, one of the world’s most popular video games. Inside Minecraft’s digital universe, it learned to swim, gather plants, hunt pigs, mine gold and build houses.

“It can go into the Minecraft world and explore by itself and collect materials by itself and get better and better at all kinds of skills,” said Linxi Fan, a senior research scientist at Nvidia who is known as Jim.

The project was an early sign that the world’s leading artificial intelligence researchers are transforming chatbots into a new kind of autonomous system called an A.I. agent. These agents can do more than chat. They can use software apps, websites and other online tools, including spreadsheets, online calendars, travel sites and more.

In time, many researchers say, the A.I. agents could become far more sophisticated, and could replace office workers, automating almost any white-collar job.


How Peter Thiel-Linked Tech is Fueling the Ukraine War

“A reluctance to grapple with the often grim reality of an ongoing geopolitical struggle for power poses its own danger. Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed.

This is an arms race of a different kind, and it has begun.” – Alex Karp, Palantir CEO

These were the recent words of Palantir CEO Alex Karp, proclaiming in the New York Times that the world has entered a new era of warfare with the rapid acceleration of Artificial Intelligence (AI) technologies. Playing on the recent release of the “Oppenheimer” movie by comparing the dawn of AI with the development of the atomic bomb, Karp argued that the growing role of AI in weapons systems has become “our Oppenheimer moment.”

In his op-ed, Karp states bluntly that this era is a new kind of arms race where inaction equals defeat, positing that a “more intimate collaboration between the state and the technology sector, and a closer alignment of vision between the two” is required if the West is to maintain a long-term edge over its adversaries. 

Karp’s words are timely within the context of the ongoing conflict in Ukraine, which – from the beginning – has been a tech-fueled war, as well as a catalyst for further blurring the lines between nation states and the companies that own and operate such technologies. From Microsoft “literally mov[ing] the government and much of the country of Ukraine from on-premises servers to [its] cloud,” to Boston Dynamics’ robot dog, Spot, sweeping mines on the battlefield, as I recently reported for Unlimited Hangout, “much of Ukraine’s war effort, save for the actual dying, has been usurped by the private sector.”

But, as Karp’s words suggest, the longer the conflict goes on, the more technologically advanced the weapons, and the weapons operating systems and software behind them, will become. Indeed, the US military is testing large language models’ (LLMs) capacity to perform military tasks and exercises, including completing once days-long information requests in minutes as well as extensive crisis response planning. Ukrainian Minister of Digital Transformation Mykhailo Fedorov, who commands Ukraine’s “Army of Drones” program in a made-for-film collaboration with Star Wars actor Mark Hamill, even recently proclaimed that the proliferation of fully autonomous, lethal drones is “a logical and inevitable next step” in warfare and weapons development.

Indeed, AI tech and other major technologies are coming to the forefront of the war’s front lines. For instance, “kamikaze” naval drones equipped with explosives dealt heavy damage to the Crimean bridge in July, with the Washington Post also reporting that over 200 Ukrainian companies involved in drone production are working with Ukrainian military units to “tweak and augment drones to improve their ability to kill and spy on the enemy.”

As the war continues, corporations and controversial defense contractors, like data firm and effective CIA front Palantir, defense contractor Anduril, and facial recognition service Clearview AI, are taking advantage of the conflict to develop controversial AI-driven weapons systems and facial recognition technologies, perhaps transforming both warfare and AI forever.

Critically, these organizations all receive support from PayPal co-founder and early Facebook investor Peter Thiel, a prominent yet controversial venture capitalist intimately involved in the start-up and expansion of a bevy of today’s prominent tech corporations and adjacent organizations. Despite his professed libertarian political beliefs, their work, often co-developed or otherwise advanced by governments and the intelligence community, includes bolstering the State’s mass surveillance, data-collection, and data-synthesis capacities.

As such, these Thiel-backed groups’ involvement in the war serves not only to develop problematic and unpredictable weapons technologies and systems, but also, apparently, to advance and further interconnect a larger surveillance apparatus formed by the collective efforts of Thiel and his elite allies across the public and private sectors, efforts that arguably amount to the entrenchment of a growing technocratic panopticon aimed at capturing public and private life. Given Thiel’s growing domination over large swaths of the tech industry, his apparent efforts to influence, bypass or otherwise undermine modern policymaking processes, and his anti-democratic sentiments, the activities of Thiel-linked organizations in Ukraine can only signal a willingness to shape the course of current events and the affairs of sovereign nations alike. They also herald the unsettling possibility that this tech, currently being honed on Ukraine’s battlefields, will later be implemented domestically.

In other words, a high-stakes conflict, where victory comes before ethical considerations, facilitates the perfect opportunity for Silicon Valley and the larger US military industrial complex to publicly deepen their relationship and strive towards shared goals in wartime and beyond.


Amazon’s Alexa has been claiming the 2020 election was stolen

Amid concerns the rise of artificial intelligence will supercharge the spread of misinformation comes a wild fabrication from a more prosaic source: Amazon’s Alexa, which declared that the 2020 presidential election was stolen.

Asked about fraud in the race — in which President Biden defeated former president Donald Trump with 306 electoral college votes — the popular voice assistant said it was “stolen by a massive amount of election fraud,” citing Rumble, a video-streaming service favored by conservatives.

The 2020 races were “notorious for many incidents of irregularities and indications pointing to electoral fraud taking place in major metro centers,” according to Alexa, referencing Substack, a subscription newsletter service. Alexa contended that Trump won Pennsylvania, citing “an Alexa answers contributor.”

Multiple investigations into the 2020 election have revealed no evidence of fraud, and Trump faces federal criminal charges connected to his efforts to overturn the election. Yet Alexa has continued to disseminate misinformation about the race, even as parent company Amazon promotes the tool as a reliable source of election news to an estimated 70 million-plus users.

Amazon declined to explain why its voice assistant draws 2020 election answers from unvetted sources.


Canada Plots to Increase Online Regulation, Target Search and Social Media Algorithms

Canada is taking steps toward potentially intrusive regulation of artificial intelligence as it applies to search and social media services. The government’s intentions, now revealed, extend well beyond generative AI systems like OpenAI’s ChatGPT. Industry giants such as Google and Facebook, which use AI for search results, translation, and customer taste recognition, are among the companies swept up in the proposal, with the pro-censorship government determined to have a say in how these algorithms work.

The information comes by way of François-Philippe Champagne, Minister of Innovation, Science and Economic Development Canada (ISED), in a letter submitted to the Industry committee analyzing Bill C-27—the privacy reform and AI regulation bill. The precise amendments remain shielded from scrutiny, however, as the government keeps the proposed changes under wraps.

We obtained a copy of the original bill for you here.

The existing framework in Bill C-27 leaves the identification of AI mechanisms that can be classified into the “high-impact” category to future regulatory proceedings.

Bill C-27, by treating search and social media results as “high-impact” systems, is likely to raise eyebrows, as the government’s push to regulate technology has so far asserted greater control over content, and therefore speech.

Non-compliance, under this proposal, may invite penalties of up to 3% of gross global revenues.

The legislation veers into controversial territory by unexpectedly folding the regulation of content moderation and discoverability prioritization into the mix, treating these issues as parallel to accusations of bias in recruitment or in law enforcement uses of AI. Consequently, Canada’s rules, although claimed to align more closely with the EU’s, seem to set the country apart, leaning more towards censorship and less towards free speech.

The news comes on the back of Canada’s more recent online regulations that have raised alarm.
