If AI Companies Are Trying To Build God, Shouldn’t They Get Our Permission First?

AI companies are on a mission to radically change our world. They’re working on building machines that could outstrip human intelligence and unleash a dramatic economic transformation on us all.

Sam Altman, the CEO of ChatGPT-maker OpenAI, has basically told us he’s trying to build a god — or “magic intelligence in the sky,” as he puts it. OpenAI’s official term for this is artificial general intelligence, or AGI. Altman says that AGI will not only “break capitalism” but also that it’s “probably the greatest threat to the continued existence of humanity.”

There’s a very natural question here: Did anyone actually ask for this kind of AI? By what right do a few powerful tech CEOs get to decide that our whole world should be turned upside down?

As I’ve written before, it’s clearly undemocratic that private companies are building tech that aims to totally change the world without seeking buy-in from the public. In fact, even leaders at the major companies are expressing unease about how undemocratic it is.

Jack Clark, the co-founder of the AI company Anthropic, told Vox last year that it’s “a real weird thing that this is not a government project.” He also wrote that there are several key things he’s “confused and uneasy” about, including, “How much permission do AI developers need to get from society before irrevocably changing society?” Clark continued:

Technologists have always had something of a libertarian streak, and this is perhaps best epitomized by the ‘social media’ and Uber et al era of the 2010s — vast, society-altering systems ranging from social networks to rideshare systems were deployed into the world and aggressively scaled with little regard to the societies they were influencing. This form of permissionless invention is basically the implicitly preferred form of development as epitomized by Silicon Valley and the general ‘move fast and break things’ philosophy of tech. Should the same be true of AI?

I’ve noticed that when anyone questions that norm of “permissionless invention,” a lot of tech enthusiasts push back. Their objections always seem to fall into one of three categories. Because this is such a perennial and important debate, it’s worth tackling each of them in turn — and why I think they’re wrong.

Keep reading

Invisible text that AI chatbots understand and humans can’t? Yep, it’s a thing.

What if there was a way to sneak malicious instructions into Claude, Copilot, or other top-name AI chatbots and get confidential data out of them by using characters large language models can recognize and their human users can’t? As it turns out, there was—and in some cases still is.

The invisible characters, the result of a quirk in the Unicode text encoding standard, create an ideal covert channel that can make it easier for attackers to conceal malicious payloads fed into an LLM. The hidden text can similarly obfuscate the exfiltration of passwords, financial information, or other secrets out of the same AI-powered bots. Because the hidden text can be combined with normal text, users can unwittingly paste it into prompts. The secret content can also be appended to visible text in chatbot output.

The result is a steganographic framework built into the most widely used text encoding channel.

“Mind-blowing”

“The fact that GPT 4.0 and Claude Opus were able to really understand those invisible tags was really mind-blowing to me and made the whole AI security space much more interesting,” Joseph Thacker, an independent researcher and AI engineer at Appomni, said in an interview. “The idea that they can be completely invisible in all browsers but still readable by large language models makes [attacks] much more feasible in just about every area.”

To demonstrate the utility of “ASCII smuggling”—the term used to describe the embedding of invisible characters mirroring those contained in the American Standard Code for Information Interchange—researcher and term creator Johann Rehberger created two proof-of-concept (POC) attacks earlier this year that used the technique in hacks against Microsoft 365 Copilot. The service allows Microsoft users to use Copilot to process emails, documents, or any other content connected to their accounts. Both attacks searched a user’s inbox for sensitive secrets—in one case, sales figures and, in the other, a one-time passcode.

When found, the attacks induced Copilot to express the secrets in invisible characters and append them to a URL, along with instructions for the user to visit the link. Because the confidential information isn’t visible, the link appeared benign, so many users would see little reason not to click on it as instructed by Copilot. And with that, the invisible string of non-renderable characters covertly conveyed the secret messages inside to Rehberger’s server. Microsoft introduced mitigations for the attack several months after Rehberger privately reported it. The POCs are nonetheless enlightening.

Keep reading

Open AI Wants to Build Data Centres That Would Consume More Electricity Per Year Than the Whole of the U.K.

Over the past few months, the newswires have been hot with stories about the large-scale data centres that will be required to meet the needs of the forthcoming revolution in Artificial Intelligence (AI). How much electricity will these new data centres consume and what does that mean for the electricity demand forecasts underpinning the plans for Net Zero?

Recent Date Centre Announcements

To give a flavour of the scale of data centre developments that are coming, it is helpful to look at recent announcements from large tech companies. Back in March, it was announced that Amazon had bought a 960MW data centre that is powered by an adjacent nuclear power station. In April, Mark Zuckerberg CEO of Meta that owns Facebook and Instagram said energy requirements may hold back the build out of AI data centres. He also talked about building data centres that would consume 1GW of power.

Last month, Oracle chairman Larry Ellison announced that Oracle was designing a data centre that would consume more than 1GW that would be powered by three small modular nuclear reactors (SMRs). Then Microsoft also got in on the act when it announced it had done a deal with U.S. utility Constellation to restart the 835MW Three Mile Island (TMI) Unit 1 nuclear power plant to power its data centres. Anxious not to be left out, Sundar Pichai, CEO of Google said they too were working on 1GW data centres and saw money being invested in SMRs.

Finally, Sam Altman of OpenAI, the creator of ChatGPT has trumped them all by pitching the idea of 5GW data centres to the White House. Altman has been heard talking of building five to seven of these leviathans.

Bloomberg, usually a driver of the Net Zero band wagon, has reluctantly admitted that there’s not enough clean energy – nuclear or otherwise – to satisfy AI’s voracious appetite and gas will have to fill the gap. America’s national security and energy security is eclipsing climate concerns.

This admission is important because it acknowledges that such important infrastructure cannot rely upon the vicissitudes of the weather. These data centres need a source of electricity that is reliable and always available. It would be absurd to think that a ChatGPT user will want to receive a message saying their request cannot be processed because it is not windy enough.

Keep reading

How the Army is using AI during Hurricane Helene relief

The Army’s 18th Airborne Corps is for the first time using a battlefield capability to map road closures, cellular outages, supply needs and other data in real time to help the Federal Emergency Management Agency and U.S. Northern Command help people whose homes and communities were battered by Hurricane Helene late last month.

The Army is using its Maven Smart System to provide responders with the information needed to make quick, on-the-ground decisions, such as where to send medical supplies or how many truckloads of water to take into certain storm-ravaged areas, defense officials told reporters Monday.

Weeks after the deadly hurricane tore a path from Florida’s Gulf Coast into the Appalachian Mountains, some residents in the southeast are still sifting through the wreckage caused by floods and landslides that destroyed entire towns.

More damage is feared as Hurricane Milton bears down on Florida this week as well.

Keep reading

The Big Tech Think Tank Campaigning to Censor Satire

The Brookings Institution, seems to believe it has solved the problem faced by those who would like to censor memes. The problem is that memes are a form of satire, and censoring them while claiming to be a democracy is a difficult task.

But now, senior Brookings Institution fellow Nicol Turner Lee and Isabella Panico Hernandez, a project assistant, have revealed their thinking: AI memes should be treated as election disinformation “manifested” through satire.

One could use a similar form of mental gymnastics to say that this kind of argument represents a call for censorship manifested through supposed concern about disinformation.

The Brookings, meanwhile, is not just any foot soldier in the “war on memes”: it is a powerful think tank funded by the likes of Amazon, Google, Meta, Microsoft, but also massive financial institutions like JPMorgan Chase (via its philanthropic foundation) and that of Mastercard, Impact Fund.

Brookings speaks about memes, particularly those AI-generated (adding some AI panic into the mix can only help the cause), as an extremely dangerous phenomenon hidden behind humor, and perceived as humor by pretty much everyone.

But the think tank, and others going after memes, present themselves as smarter and able to understand the true nature of this clearly humorous and often satirical imagery, which they say only “seem harmless” and “appear innocuous.”

Instead, the authors of the article say memes can influence how voters perceive candidates and other election-related information, “could potentially lead to violence” – and are “globally perceived” as being capable to “fuel extremist behavior” – which is in contrast to the US, supposedly because of the lack of appropriate regulation.

And so, less than a month before the presidential election, these according to the authors insidious messages use humor merely as a vehicle to spread dangerous influence, but are not properly tackled in the US.

Keep reading

Meta’s new “Movie Gen” AI system can deepfake video from a single photo

On Friday, Meta announced a preview of Movie Gen, a new suite of AI models designed to create and manipulate video, audio, and images, including creating a realistic video from a single photo of a person. The company claims the models outperform other video-synthesis models when evaluated by humans, pushing us closer to a future where anyone can synthesize a full video of any subject on demand.

The company does not yet have plans of when or how it will release these capabilities to the public, but Meta says Movie Gen is a tool that may allow people to “enhance their inherent creativity” rather than replace human artists and animators. The company envisions future applications such as easily creating and editing “day in the life” videos for social media platforms or generating personalized animated birthday greetings.

Movie Gen builds on Meta’s previous work in video synthesis, following 2022’s Make-A-Scene video generator and the Emu image-synthesis model. Using text prompts for guidance, this latest system can generate custom videos with sounds for the first time, edit and insert changes into existing videos, and transform images of people into realistic personalized videos.

Keep reading

Klaus Schwab Announces ‘Collaboration for the Intelligent Age’ will be Theme of Next WEF Meeting in Davos

World Economic Forum (WEF) founder Klaus Schwab announces that the theme for next year’s Annual Meeting in Davos will be “Collaboration for the Intelligent Age.”

Schwab made the announcement on the WEF Agenda blog on September 24, where he also declared, “We have already crossed the threshold into the Intelligent Age.

It is up to us to determine whether it will lead to a future of greater equality, sustainability and collaboration — or if it will deepen divides that already exist.”

Schwab may declare that we are now in the Intelligent Age, but what type of intelligence is he actually talking about?

As we shall see, this Intelligent Age is more about the dumbing of humanity and the rise of smart technologies for mass surveillance and censorship that limits our decision-making capabilities.

“The Intelligent Age is also transforming how we live. Cities are becoming smarter, with sensors and AI managing everything from traffic flow to energy usage. These smart cities, and the smart homes within them, are not just more efficient, they are designed to be more sustainable, reducing carbon emissions and improving quality of life”

Klaus Schwab, “The Intelligent Age: A time for cooperation,” September 2024

In his latest post on the WEF Agenda blog, Schwab lists several examples of how AI and automation is outperforming human capabilities.

Keep reading

AI’s Ominous Split Away From Human Thinking

AIs have a big problem with truth and correctness – and human thinking appears to be a big part of that problem. A new generation of AI is now starting to take a much more experimental approach that could catapult machine learning way past humans.

Remember Deepmind’s AlphaGo? It represented a fundamental breakthrough in AI development, because it was one of the first game-playing AIs that took no human instruction and read no rules.

Instead, it used a technique called self-play reinforcement learning (RL) to build up its own understanding of the game. Pure trial and error across millions, even billions of virtual games, starting out more or less randomly pulling whatever levers were available, and attempting to learn from the results.

Within two years of the start of the project in 2014, AlphaGo had beaten the European Go champion 5-0 – and by 2017 it had defeated the world’s #1 ranked human player.

At this point, Deepmind unleashed a similar AlphaZero model on the chess world, where models like Deep Blue, trained on human thinking, knowledge and rule sets, had been beating human grandmasters since the 90s. AlphaZero played 100 matches against the reigning AI champion, Stockfish, winning 28 and tying the rest.

Keep reading

Minnesota ‘Acting as a Ministry of Truth’ With Anti-Deep Fake Law, Says Lawsuit

A new lawsuit takes aim at a Minnesota law banning the “use of deep fake technology to influence an election.” The measure—enacted in 2023 and amended this year—makes it a crime to share AI-generated content if a person “knows or acts with reckless disregard about whether the item being disseminated is a deep fake” and the sharing is done without the depicted individual’s consent, intended to “injure a candidate or influence the result of an election,” and either within 90 days before a political party nominating convention or after the start of the absentee voting period prior to a presidential nomination primary, any state or local primary, or a general election.

Christopher Kohls, a content creator who goes by Mr. Reagan, and by Minnesota state Rep. Mary Franson (R–District 12B) argue that the law is an “impermissible and unreasonable restriction of protected speech.”

Violating Minnesota’s deep fake law is punishable by up to 90 days imprisonment and/or a fine of up to $1,000, with penalties increasing if the offender has a prior conviction within the past five years for the same thing or the deep fake is determined to have been shared with an “intent to cause violence or bodily harm.” The law also allows for the Minnesota attorney general, county or city attorneys, individuals depicted in the deep fake, or any candidate “who is injured or likely to be injured by dissemination” to sue for injunctive relief “against any person who is reasonably believed to be about to violate or who is in the course of violating” the law.

If a candidate for office is found guilty of violating this law, they must forfeit the nomination or office and are henceforth disqualified “from being appointed to that office or any other office for which the legislature may establish qualifications.”

There are obviously a host of constitutional problems with this measure, which defines “deep fake” very broadly: “any video recording, motion-picture film, sound recording, electronic image, or photograph, or any technological representation of speech or conduct substantially derivative thereof” that is realistic enough for a reasonable person to believe it depicts speech or conduct that did not occur and developed though “technical means” rather than “the ability of another individual to physically or verbally impersonate such individual.”

Keep reading

Judge blocks California deepfakes law that sparked Musk-Newsom row

A federal judge on Wednesday blocked a California measure restricting the use of digitally altered political “deepfakes” just two weeks after Gov. Gavin Newsom signed the bill into law.

The ruling is a blow to a push by the state’s leading Democrats to rein in misleading content on social media ahead of Election Day.

Chris Kohls, known as “Mr Reagan” on X, sued to prevent the state from enforcing the law after posting an AI-generated video of a Harris campaign ad on the social media site. He claimed the video was protected by the First Amendment because it was a parody.

The judge agreed.

“Most of [the law] acts as a hammer instead of a scalpel,” Senior U.S. District Judge John A. Mendez wrote, calling it “a blunt tool hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas.” He carved out an exception for a “not unduly burdensome” portion of the law that requires verbal disclosure of digitally altered content in audio-only recordings.

Theodore Frank, an attorney for Kohls, said in a statement they were “gratified that the district court agreed with our analysis.”

Keep reading