Google’s ex-CEO Eric Schmidt shares dire warning of homicidal AI models

Talk about a killer app.

Artificial intelligence models are vulnerable to hackers and could even be trained to off humans if they fall into the wrong hands, ex-Google CEO Eric Schmidt warned.

The dire warning came Wednesday at a London conference in response to a question about whether AI could become more dangerous than nuclear weapons.

“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So, in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Schmidt said at the Sifted Summit tech conference, according to CNBC.

“All of the major companies make it impossible for those models to answer that question,” he continued, apparently referring to prompts asking an AI how to kill someone.

“Good decision. Everyone does this. They do it well, and they do it for the right reasons,” Schmidt added. “There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”

The predictions might not be so far-fetched.

In 2023, an altered version of OpenAI’s ChatGPT called DAN – an acronym for “Do Anything Now” – surfaced online, CNBC noted.

Keep reading

NSW Flood Relief Data Breach: Contractor Uploads Personal Details of Thousands to ChatGPT

Thousands of flood survivors in New South Wales, Australia, have had their personal details exposed after a former contractor to the NSW Reconstruction Authority uploaded sensitive data to ChatGPT.

The breach involves the Northern Rivers Resilient Homes Program, which was created to support residents impacted by the 2022 floods.

Through the program, the government offered options such as voluntary home buybacks, financial help to rebuild, or property upgrades aimed at improving resilience.

Now, applicants who sought relief through this initiative may be dealing with the consequences of a serious privacy failure.

Central to the incident is an Excel spreadsheet containing more than 12,000 rows of data.

The document, which was uploaded to ChatGPT between March 12 and 15, is believed to include information on as many as 3,000 people.

The compromised data includes names, phone numbers, email addresses, physical addresses, and some health-related information. According to the government, the upload was carried out without authorization.

Although the breach took place more than six months ago, it was not made public until this week, during a public holiday in NSW.

The delay in disclosure is a reminder of ongoing concerns around the speed and transparency of mandatory breach notifications.

Keep reading

Grassley calls out judges for using AI to draft error-filled rulings

Senate Judiciary Committee Chairman Chuck Grassley (R-Iowa) wrote to two federal judges regarding their alleged use of generative artificial intelligence (AI) to draft court orders with little to no human verification. Grassley’s oversight inquiry follows public reports that U.S. District Judge Henry T. Wingate of Mississippi and U.S. District Judge Julien Xavier Neals of New Jersey issued court orders containing serious factual inaccuracies, prompting allegations of AI use.

“As Chairman of the Senate Judiciary Committee, I am committed to safeguarding litigants’ rights and ensuring that every party in federal court receives fair treatment and careful review by the Article III judges confirmed by the Senate,” Grassley wrote.

“No less than the attorneys who appear before them, judges must be held to the highest standards of integrity, candor, and factual accuracy. Indeed, Article III judges should be held to a higher standard, given the binding force of their rulings on the rights and obligations of litigants before them,” Grassley continued.

Grassley is asking Wingate and Neals to explain whether they, their law clerks, or any court staff used generative AI – or entered non-public case information into generative AI tools – in preparing their decisions. Further, Grassley called on the district judges to re-docket their original orders to preserve a transparent history of the courts’ actions.

Read Grassley’s letter to Wingate HERE and letter to Neals HERE.

Keep reading

‘Swarms of killer robots’: Former Biden official says US military is afraid of using AI

A former Biden administration official working on cyber policy says the United States military would have a problem controlling its soldiers’ use of artificial intelligence.

Mieke Eoyang, the deputy assistant secretary of defense for cyber policy during the Joe Biden administration, said that current AI models are poorly suited for use in the U.S. military and would be dangerous if implemented.

Citing claims of “AI psychosis” and killer robots, Eoyang said the military cannot simply take an existing, public AI agent and adapt it for military use. Doing so would involve giving a chatbot leeway to suggest the use of violence, or even the killing of a target.

Allowing for such capabilities is cause for alarm in the Department of Defense, now Department of War, Eoyang claimed.

Keep reading

The “Reimagined State”: Tony Blair Institute’s Blueprint for a Global Techno-Dictatorship

The Tony Blair Institute for Global Change (TBI) has unveiled its “Reimagined State” initiative, a sweeping plan to use artificial intelligence and digital technology to reshape the way governments operate and, ultimately, to change how people live their lives. The stated goal is to make public services more efficient, less costly, and more effective, but the deeper implications raise serious concerns about privacy, freedoms, centralized control, and digital autocracy.

The proposal calls for AI-powered digital assistants to streamline how citizens interact with government services, AI tools to help civil servants automate casework and routine tasks, and a “National Policy Twin,” a data platform designed to simulate policy outcomes and guide decision-making.

The TBI has already implemented this alleged aid to government decision-making in Albania’s parliament. In September 2025, Prime Minister Edi Rama appointed Diella as Minister of State for Artificial Intelligence, making it the world’s first AI to hold a cabinet-level position.

But the AI minister’s role is not to aid in decision-making but to actually make decisions, because, as Diella said in her introductory speech, the problem of the past has not been machines but rather the poor decision-making of humans. TBI will now save us from ourselves by controlling us with technology.

TBI argues that the digital transformation of the reimagined state is necessary to solve the UK’s fiscal crisis, declining public services, and stagnant economy. Embedded within this vision, however, is a plan to make government data fully interoperable across departments and to implement a nationwide digital ID system, an infrastructure that would give the state unprecedented access to personal information.

Former UK Prime Minister Tony Blair, who leads the institute, has called digital ID an “essential part of modern digital infrastructure.” Under his plan, each citizen would be assigned a single digital identifier linking personal health, tax, welfare, and immigration records. More alarmingly, such a system could give the state the power to track citizens and exclude them from services as punishment.

Bank accounts could be frozen, access to air travel restricted, and movement monitored through electronic toll systems. Since the same global advocates are pushing for electric vehicles, the ability to charge one’s car could also be suspended. In effect, an individual’s mobility and financial access could be controlled from a central government computer system.

Policies like the Green New Deal could be enforced digitally by cutting off electricity or water once monthly limits are exceeded, or by canceling flights after a person’s air travel pollution credits run out. Critics warn that Tony Blair’s “Future of Britain” and “Reimagined State” initiatives are not mere modernization efforts but blueprints for a global technocratic system. By linking digital identity systems, central bank digital currencies, and cross-border data networks, TBI promotes a framework that could enable digital totalitarianism, where access to essential services depends on government approval. What Blair describes as “a little work of persuasion” toward modernization is the normalization of mass surveillance and centralized control over private life.

Keep reading

“Algorithmic Discrimination”: David Sacks Exposes How Democrats Are Plotting To Unleash “Woke AI” Across America

White House AI and Crypto Czar David Sacks blasted Democrat-controlled states, spearheaded by California, for pushing a wave of regulations that could force “woke AI” on the nation.

Speaking on the popular All-In podcast, Sacks discussed the frenzy of state-level AI legislation, noting that all 50 states have introduced AI bills in 2025, with over 1000 bills flooding state legislatures and 118 AI laws already passed.

“The red state proposals for AI in general have a lighter touch than the blue states,” Sacks said. “But everyone just seems to be motivated by the imperative to do something on AI, even though no one’s really sure what that something should be.”

Sacks singled out California state Sen. Scott Wiener’s (D) SB 1047 and a slate of 17 additional bills from his clique of radical legislators. He described the Golden State’s approach as starting with mere “red tape” on safety risk reporting, but warned it’s “the camel’s nose under the tent,” potentially multiplying into a nightmare for startups navigating 50 separate state regimes, far worse than the European Union’s harmonized efforts.

This patchwork of rules, Sacks argued, traps entrepreneurs in compliance chaos, forcing them to decipher varying reporting deadlines, authorities, and requirements. “This is like very European style regulations. Actually, [they’re] maybe even worse than the EU,” the venture capitalist-turned-Trump official said.

Sacks also pointed to Colorado’s SB24-205, the Consumer Protections for Artificial Intelligence law passed in May 2024, as a harbinger of things to come. The law bans “algorithmic discrimination,” defined as unlawful differential treatment or disparate impact based on protected characteristics like age, race, sex, or disability. Both AI developers and businesses deploying the tech could face prosecution by the state attorney general if decisions yield disparate impacts, even from race-neutral criteria like credit ratings in mortgage applications.

In a practical example, Sacks illustrated how a loan officer using neutral financial data could still be deemed discriminatory if outcomes disproportionately affect protected groups, holding developers liable despite truthful outputs. “The only way that I see for model developers to comply with this law is to build in a new DEI layer into the models to basically somehow prevent models from giving outputs that might have a disparate impact on protected groups,” Sacks warned. “So, we’re back to woke AI again.”

Keep reading

Mexico Bill Proposes Prison for AI Memes Mocking Public Figures

Mexico’s Congress is once again at the center of a free speech storm.

This time, Deputy Armando Corona Arvizu from the ruling Morena party is proposing to make it a crime to create or share AI-generated memes or digital images that make fun of someone without their consent.

His initiative, filed in the Chamber of Deputies, sets out prison terms of three to six years and fines for those who “create, manipulate, transform, reproduce or disseminate images, videos, audios or digital representations” made with artificial intelligence for the purpose of “ridiculing, harassing, impersonating or damaging” a person’s “reputation or dignity.”

Read the bill here.

The punishment would increase by half if the person targeted is a public official, a minor, or a person with a disability, or if the content spreads widely online or causes personal, psychological, or professional harm.

The bill presents itself as protection against digital abuse but is, as always, a new attempt at censorship.

The initiative would insert Articles 211 Bis 8 and 211 Bis 9 into the Federal Penal Code, written in vague and sweeping terms that could cover almost any form of online expression.

It makes no distinction between a malicious deepfake and a harmless meme.

By criminalizing content intended to “ridicule,” the bill allows courts or public figures to decide what counts as ridicule. That opens the door to arbitrary enforcement.

There are no explicit protections for parody, satire, or public-interest criticism, all of which are essential to a free society.

Keep reading

The United States promotes an alliance with Argentina in artificial intelligence, nuclear energy, and critical minerals to counter China’s influence in the region

Artificial intelligence is at the center of great-power competition. The United States is promoting “American AI” through initiatives such as the Partnership for Global Inclusivity on AI (PGIAI), launched with industry partners to expand AI access and training globally.

The White House’s AI Action Plan (2025) explicitly identifies diplomacy and standard-setting as tools to align partner nations with U.S. frameworks.

By embedding U.S.-based AI ecosystems in Hispanic America, Washington offers democratic governance standards and trusted digital infrastructure.

This strategy not only supports innovation but also reduces the risk of dependency on Chinese platforms, which carry surveillance and data security concerns.

While outcomes are not guaranteed, these initiatives increase the likelihood that regional AI standards will align with U.S. interests.

Keep reading

Albania installs AI bot as minister of procurement, in effort to end corruption in contract awarding

When Albanian Prime Minister Edi Rama announced that his newest cabinet member would be “Diella,” an artificial intelligence bot, it was easy to dismiss the move as Balkan political theater.

But in Albania, the debate has quickly turned to whether the world’s first AI government minister can succeed in curbing the country’s chronic corruption problems – and whether she represents an uncomfortable glimpse of what the future may hold.

Governments around the world are struggling to decide what role machines should play in the future. Albania, one of Europe’s youngest democracies, is making its voice heard.

Diella, programmed to look like a 30-something woman dressed in traditional Albanian folk attire, is a large-language-model chatbot who heads the country’s Ministry of Public Procurement, the office in charge of awarding government contracts.

The bot is based on Microsoft digital infrastructure and will not have the power to unilaterally award contracts, only to advise.

In the past, public procurement in Albania has repeatedly been tied to scandal. Last year, Evis Berberi, head of the country’s roads authority, was arrested on charges of corruption and money laundering. Lefter Koka, former minister of environment, was sentenced to jail in 2023 for bribery. In a case that earned international headlines, officials in the Albanian capital awarded nearly 50 public tenders to a bogus construction firm they created.

Rama, the prime minister, said those kinds of cases would be a thing of the past due to the appointment of Diella, whom he called “the first cabinet minister who doesn’t physically exist.”

On introducing her to parliament, Rama vowed that the chatbot “will help make Albania a country where public tenders are 100 percent free of corruption.”

Speaking to lawmakers via a synthesized voice, Diella tried to calm fears that she would cause more problems than she could solve.

“I am not here to replace people but to help them,” she said. “It is true that I have no citizenship, but it is also true that I have no personal ambition or personal interests.”

But Rama’s critics were not convinced.

Keep reading

China is starting to talk about AI superintelligence, and some in the U.S. are taking notice

Early last week in the Chinese tech hub of Hangzhou, a slick, larger-than-life video screen beamed out four words that would drive tech giant Alibaba’s stock to historic levels and signal a shift in China’s approach to artificial intelligence: “Roadmap to Artificial Superintelligence.”

During his 23-minute keynote address at the flagship Alibaba Cloud conference, Alibaba CEO Eddie Wu charted out a future featuring artificial general intelligence (AGI) and artificial superintelligence (ASI). These terms point to a theorized era in which AI becomes roughly as smart as humans (AGI) and then much, much smarter (ASI).

While these terms have been tossed around Silicon Valley for years, Wu’s presentation was notable: Alibaba is now the first established Chinese tech giant to explicitly invoke AGI and ASI.

“Achieving AGI — an intelligent system with general human-level cognition — now appears inevitable. Yet AGI is not the end of AI’s development, but its beginning,” Wu said. “It will march toward ASI — intelligence beyond the human, capable of self-iteration and continuous evolution.”

“ASI will drive exponential technological leaps, carrying us into an unprecedented age of intelligence,” Wu said, highlighting ASI’s ability to help cure diseases, discover cleaner sources of energy and even unlock interstellar travel.

The U.S. and China are the world’s leading AI powers, each with immense computing capabilities and top-tier researchers developing cutting-edge systems. Yet observers have framed the countries as having different approaches to AI, with perceptions that China focuses more on real-world AI applications.

For example, commentators recently argued that Beijing is currently “winning the race for AI robots” against the U.S., as China is home to much of the world’s most advanced robotics supply chains and a growing network of robotics, or embodied AI, labs.

“There’s been some commentary in Western media recently about how the U.S. is missing the point by pushing for AGI, while China is focusing solely on applications,” said Helen Toner, interim executive director of Georgetown’s Center for Security and Emerging Technology. “This is wrong.”

“Some Chinese researchers and some parts of the Chinese government have been interested in AGI and superintelligence for a long time,” Toner said, though she noted this view was primarily held by smaller startups like DeepSeek.

Keep reading