White House crafting executive order to thwart state AI laws

An executive order that could soon be signed by President Donald Trump would thwart states’ artificial intelligence laws by launching legal challenges and withholding federal funding, according to a draft of the order obtained by CNBC on Wednesday.

The draft surfaced shortly after Trump publicly called for a single federal standard on AI “instead of a patchwork of 50 State Regulatory Regimes.”

The draft order would give Attorney General Pam Bondi 30 days to establish an “AI Litigation Task Force” whose sole task is to challenge state AI laws.

Those challenges would be issued “on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment,” the draft says.

The order also directs Commerce Secretary Howard Lutnick to notify states with contested AI laws that they are ineligible for funds under the federal Broadband Equity Access and Deployment program. BEAD is a more than $42 billion program that has allocated funding to all U.S. states and territories.

The order, which was first reported by The Information, is not yet finalized. A White House official told CNBC that any discussion around it is just speculation until it is officially announced.

As written, the EO would be a major win for the burgeoning AI industry, whose leaders — including Sam Altman’s OpenAI, venture capital firm Andreessen Horowitz and other Silicon Valley titans — oppose an inconsistent state-by-state policy approach.

It would be just as big a blow to state lawmakers across the country who have sought to pass bills that would place guardrails on the nascent technology.

Keep reading

Microsoft Adds AI to Windows Despite ‘Novel Security Risks’

Microsoft’s recent introduction of Copilot Actions, an experimental AI agent integrated into Windows, has sparked criticism from security experts who question the safety of pushing new features before fully understanding and containing their potential risks

Ars Technica reports that Microsoft unveiled Copilot Actions this week, a set of “experimental agentic features” that allow AI to perform various tasks such as organizing files, scheduling meetings, and sending emails. While the company touted the AI agent as an active digital collaborator that enhances efficiency and productivity, it also issued a warning about the security implications of enabling the feature.

Microsoft’s warning reads:

As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.

Security concerns stem from known defects inherent in most large language models (LLMs), including Copilot. Researchers have repeatedly demonstrated that LLMs can provide factually erroneous and illogical answers, a behavior known as “hallucinations.” This means users cannot fully trust the output of AI assistants like Copilot, Gemini, or Claude, and must independently verify the information.

Another significant issue with LLMs is their vulnerability to prompt injections. Hackers can exploit this flaw by planting malicious instructions in websites, resumes, and emails, which the AI eagerly follows without discerning between valid user prompts and untrusted, third-party content. These vulnerabilities can lead to data exfiltration, malicious code execution, and cryptocurrency theft.

Keep reading

Microsoft Warns Experimental Windows 11 AI Could Install Malware on Your Computer

Microsoft said in an update on Nov. 17 that Windows 11 users who utilize “agentic features” from its AI services should be cautious because the AI agents could potentially download and install malware.

In an alert, Microsoft warned that its AI models could “occasionally hallucinate” and introduce “novel security risks” such as malware because large language models, a type of AI that processes data and generates human-like text, are susceptible to cyberattacks.

“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA),” the warning stated. A prompt injection attack is a type of cyberattack where an attacker crafts an input to trick the AI into performing malicious actions.

Microsoft added that in the case of Windows 11’s “experimental” AI services, “malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”

The AI features are turned off by default and operate only after the user opts into them, the company said.

The agentic AI setting “can only be enabled by an administrator user of the device and once enabled, it’s enabled for all users on the device including other administrators and standard users,” Microsoft said of the AI services.

Keep reading

New U.N. Treaty Decriminalizes AI Child Sexual Abuse Images

Anew United Nations treaty against cybercrime expressly allows for child sexting and the production and dissemination of virtual child pornography. The new treaty will normalize pedophilia and turbocharge pedophilic sexual content online via artificial intelligence and sexbots.

Reports vary on the exact number of initial signatories on Oct. 25 — the U.N. reports 65 signatories, including the European Union, China, and Russia. It will enter into force after 40 countries ratify it.

The treaty has been criticized widely because it is seen as undermining privacy and human rights by requiring countries to cooperate with dictatorial regimes. But the bigger problem is that the new treaty expressly allows virtual child pornography and child sexting, which until now were deemed criminal child pornography under the Optional Protocol of the Convention on the Rights of the Child on the sale of children, child prostitution, and child pornography (“Optional Protocol”), a treaty developed, funded, and backed by the U.S. government for three decades. The way the new treaty undermines the Optional Protocol is quite disingenuous.

The new treaty expressly establishes that countries who ratify the treaty “may” decriminalize some materials until now considered illegal child pornography. According to article 14.2 of the new treaty, countries may opt to decriminalize the production, distribution, and possession of sexualized visual, written, or audio content depicting children, so long as they do not represent an “existing person” or do not “visually depict child sexual abuse or child sexual exploitation.” Articles 14.4 and 15.4 also give states the option to decriminalize sexting between children under all circumstances as well as sexting by a child to an adult when the child is above the age of consent and the sexting is done privately. In such cases, the treaty only requires prosecution where images are shared “non-consensually.”

Keep reading

Google boss says trillion-dollar AI investment boom has ‘elements of irrationality’

Every company would be affected if the AI bubble were to burst, the head of Google’s parent firm Alphabet has told the BBC.

Speaking exclusively to BBC News, Sundar Pichai said while the growth of artificial intelligence (AI) investment had been an “extraordinary moment”, there was some “irrationality” in the current AI boom.

It comes amid fears in Silicon Valley and beyond of a bubble as the value of AI tech companies has soared in recent months and companies spend big on the burgeoning industry.

Asked whether Google would be immune to the impact of the AI bubble bursting, Mr Pichai said the tech giant could weather that potential storm, but also issued a warning.

“I think no company is going to be immune, including us,” he said.

In a wide-ranging exclusive interview at Google’s California headquarters, he also addressed energy needs, slowing down climate targets, UK investment, the accuracy of his AI models, and the effect of the AI revolution on jobs.

The interview comes as scrutiny on the state of the AI market has never been more intense.

Alphabet shares have doubled in value in seven months to $3.5tn (£2.7tn) as markets have grown more confident in the search giant’s ability to fend off the threat from ChatGPT owner OpenAI.

A particular focus is Alphabet’s development of specialised superchips for AI that compete with Nvidia, run by Jensen Huang, which recently reached a world first $5tn valuation.

As valuations rise, some analysts have expressed scepticism about a complicated web of $1.4tn of deals being done around OpenAI, which is expected to have revenues this year of less than one thousandth of the planned investment.

It has raised fears stock markets are heading for a repeat of the dotcom boom and bust of the late 1990s. This saw the values of early internet companies surge amid a wave of optimism for what was then a new technology, before the bubble burst in early 2000 and many share prices collapsed.

This led to some companies going bust, resulting in job losses. A drop in share prices can also hit the value of people’s savings including their pension funds.

In comments echoing those made by US Federal Reserve chairman Alan Greenspan in 1996, warning of “irrational exuberance” in the market well ahead of the dotcom crash, Mr Pichai said the industry can “overshoot” in investment cycles like this.

“We can look back at the internet right now. There was clearly a lot of excess investment, but none of us would question whether the internet was profound,” he said.

“I expect AI to be the same. So I think it’s both rational and there are elements of irrationality through a moment like this.”

His comments follow a warning from Jamie Dimon, the boss of US bank JP Morgan, who told the BBC last month that investment in AI would pay off, but some of the money poured into the industry would “probably be lost”.

But Mr Pichai said Google’s unique model of owning its own “full stack” of technologies – from chips to YouTube data, to models and frontier science – meant it was in a better position to ride out any AI market turbulence.

Keep reading

Pennsylvania School District Using AI-Enabled Wi-Fi To Search Students For Firearms

A Pennsylvania school district is using artificial intelligence to keep guns off its campuses. But civil liberties advocates have warned that the technology could lead to mass surveillance and violation of constitutional rights.

The Chartiers Valley School District in Allegheny County has implemented AI that harnesses the district’s Wi-Fi signals to determine whether people are carrying weapons as they enter the schools.

The technology, called Wi-AI, was developed by CurvePoint of Pittsburgh. CurvePoint grew out of AI research at Carnegie Mellon University.

According to the companyWi-AI uses “spatial intelligence” to find weapons such as guns before they enter a school.

The AI system analyzes a space and detects where potential weapons are located by interpreting “how Wi-Fi signals reflect off people and objects.”

Once a possible weapon is found, security personnel, school administrators, or others can go to the location to determine whether there is actually a threat.

It is now in use at Chartiers Valley School District high school, middle school, and primary school campuses. CurvePoint CEO Skip Smith said that in a recent test, the system found a pistol hidden in a backpack. He said the technology has a 95 percent success rate, failing only 4 percent of its searches.

Smith said the Wi-AI does not carry the same privacy concerns of other security systems because it does not rely on facial recognition or biometric data.

“We don’t know it’s you,“ Smith told The Epoch Times. ”We have no biometric information about you. Our system just sees a big bag of salt water.”

Darren Mariano, president of the Chartiers Valley Board of School Directors, said the district is excited to be the first in the country to adopt the technology.

The safety of our students and staff is always our top priority,” he said in a statement. “We’re thrilled to be the first district in the nation to implement this groundbreaking technology.”

Keep reading

Google Sued For Allegedly Using Gemini AI Tool To Track Users’ Private Communications

Google LLC is accused in a civil lawsuit of using its artificial intelligence program Gemini to collect data on users’ private communications in Gmail as well as Google’s instant messaging and video conference programs.

Until around Oct. 10, the Gemini AI assistant required the user to deliberately opt into its feature. After that date, the feature was allegedly “secretly” turned on by Google for all its users’ Gmail, Chat, and Meet accounts by default, enabling AI to track its users’ private data in those platforms “without the users’ knowledge or consent,” according to the complaint filed Nov. 11 in federal court in San Jose.

The class action lawsuit was filed in the U.S. District Court for the Northern District of California, alleging that Google is violating the California Invasion of Privacy Act, a 1967 law that prohibits surreptitious wiretapping and recording of confidential communications without the consent of all parties involved.

Although Google provides a way for users to turn off the feature, it requires users to look for it in the privacy settings to deactivate it, despite never having agreed to it in the first place, the complaint said.

The AI feature is categorized in “Google Workspace smart features” in Google settings. Once turned on, it means the user consents to the program using “Workspace content and activity” across Workspace or in other Google products.

When the feature is turned on, Gemini can “scan, read, and analyze every email (and email attachment), message, and conversation on those services,” according to the complaint.

Technology writer Ruben Circelli wrote in a PCMag article that Gemini is “downright creepy” in diving deep into his personal history, analyzing 16 years’ worth of emails after he signed up for a more advanced pro feature.

In a series of tests by Circelli, Gemini told him one of his character flaws and even knew who his first crush was in elementary school.

“This invasion of privacy wasn’t just disconcerting, though; it was unexpected,” Circelli wrote.

“Google didn’t explain what this integration would do before I signed up for its AI Pro plan, nor did it give me a way to opt out at the start.”

The Epoch Times reached out to Google for comment, but did not receive an immediate response.

“We do not use your Workspace data to train or improve the underlying generative AI and large language models that power Gemini, Search, and other systems outside of Workspace without permission,” the company has stated.

Keep reading

Bio-Digital Vaccine Passports and ‘On Patient Medical Recordkeeping’

Did you know that the only safe medical data is data that is stored inside your own body?

I didn’t know that either until Nic Hulscher recently discovered some very interesting research papers about ‘On Patient Medical Recordkeeping’ technology.

The quote below is from an article that was published in PubMed six years ago, in December 2019: “Accurate medical recordkeeping is a major challenge in many low-resource settings where well-maintained centralized databases do not exist, contributing to 1.5 million vaccine-preventable deaths annually.”

It took humans several hundred years to figure out that we are not able to maintain accurate medical records, but now we finally know.

And it’s a lucky thing that we only figured this out now, because we are finally reaching the stage where we are able to reliably record medical data: by encoding them into every living human body – in particular data about received vaccines.

There’s even a cute – no, more than cute: a heart warming acronym for this brilliant new record keeping method: OPMR.

The following quote is from an article in ‘Nature Materials’ from February 2025:

“We developed a robust on-patient medical record-keeping (OPMR) technology using a dissolvable microneedle patch (MNP) that delivers a quantum dot (QD)-based near-infrared (NIR) fluorescent dye encapsulated in poly(methyl methacrylate) (PMMA) microparticles into the skin to encode medical information. This dye, once deposited into the dermis, is invisible to the naked eye, offering patient data privacy and anonymity, but provides discrete NIR signals that can be detected using a NIR imaging system.”

Isn’t it wonderful that we have found a way to not only make it impossible to lose medical records but to keep our medical records truly private and anonymous – and especially the number of vaccine microneedle patches we got administered? Nobody will ever know – except all the folks who detect the oh so discrete Near Infrared signals with the help of the NIR imaging system. And maybe it won’t be folks much longer who detect them but some friendly AI agent. Which makes it even more sublime.

We can also stop stressing about our medical records being unavailable when China or some other country cuts the subsea cables to crash the internet:

“By depositing the dye in a predefined pattern that correlates to a specific set of information, the technology can be imaged by healthcare workers to support next-dose decisions without requiring internet connectivity or the use of centralized databases.”

See? Internet connectivity is not required. Marvelous. Life-saving ‘next-dose decisions’ won’t be blocked ever – internet or not.

Keep reading

‘Spider robot’ promises to build entire houses in less than 24 hours

A spider-shaped construction robot named Charlotte is being developed in Sydney to 3D print a full-size house in roughly a day. Developers say the machine can autonomously form structural walls for about a 2,150-square-foot home using locally sourced materials.

Instead of cement and bricks delivered by truck, Charlotte compacts sand, soil, and clean waste into layered walls on-site. Researchers argue that this single-machine approach removes long supply chains and many high-carbon steps.

Charlotte, the construction robot

Charlotte is a mobile, legged system that pairs robotics with additive manufacturing, building objects layer by layer with a printer.

The prototype shown in Sydney is not a finished product, but its architecture offers a clear view of where construction automation is heading.

The work was led by Clyde Webster, founding director at Crest Robotics in Sydney. His work centers on agile field robots for construction tasks that are hard, repetitive, or risky. Momentum comes from the housing crunch and a push to cut carbon. 

“The building materials that we use today – even a simple brick has so many processes involved and some of them very – very carbon-intensive,” said Dr. Jan Golembiewski, co-founder of Earthbuilt Technology.

From soil to structure

Crest describes an undercarriage-mounted system that gathers sand, earth, and crushed brick, binds the mix in textiles, and then compacts it into successive layers. 

At the core is extrusion – pushing a material through a nozzle to form layers. That lets a robot produce continuous courses without mortar joints, guided by digital plans.

“It will work at the speed of over 100 bricklayers,” said Dr. Golembiewski. The team stresses speed as much as simplicity. 

Range of motion matters, too, since legs can step over uneven ground where wheeled rigs bog down. A compact, folding frame also makes transport easier, which is essential for remote sites.

Charlotte robots can cut carbon

Buildings use a lot of energy and materials. A UN Environment Programme report notes that in 2022 the sector accounted for 37 percent of energy- and process-related carbon dioxide emissions.

Cutting the most carbon-heavy steps in early stages can have outsized effects later. That is where embodied carbon, the total emissions from making and moving materials, becomes a key metric for builders and regulators.

Charlotte’s creators claim their method avoids cement entirely while turning clean waste into durable walls. If performance and safety data hold up, that would cut both cost and emissions on the same job.

There is a workforce angle as well. Automation that handles repetitive, high-risk tasks could reduce injuries while letting smaller crews do more skilled work.

Keep reading

Google Softens Planned Android Sideloading Ban but Keeps Developer ID Verification

Google is slightly relaxing its controversial new Android policy on sideloading, but the shift does little to change its overall direction.

The company confirmed that it will still move ahead with mandatory developer identity verification for nearly all apps while introducing a limited “advanced flow” that lets “experienced users” continue installing software from outside the Play Store.

According to Google, the new system will feature multiple security warnings meant to deter casual users from downloading unverified apps.

“It will include clear warnings to ensure users fully understand the risks involved, but ultimately, it puts the choice in their hands,” the company said.

The process is still being developed, with feedback now underway before finalization in the coming months.

The adjustment follows backlash from developers and Android fans who criticized Google’s original plan to block apps created by unverified developers starting next year.

The community argued that the move would effectively close off Android’s long-standing openness by removing the ability to install software freely.

Despite the new language, Google’s latest policy maintains the same structure.

Developer ID verification will still be required for nearly all app distribution.

Only students and hobbyists will be allowed to share apps with a limited number of devices without providing identification, and businesses deploying internal software will remain exempt.

For everyone else, verification and a $25 registration fee will be mandatory, including for apps distributed outside Google Play. Previously, there was no charge for independent distribution.

The rollout schedule remains the same. Developers who distribute apps outside the Play Store began receiving early-access invitations on November 3, while Play Store developers will get theirs starting November 25.

The early-access period runs through March 2026, after which the verification program will open to all developers. The rules take effect in Brazil, Indonesia, Singapore, and Thailand in September 2026, and globally in 2027.

Google maintains that the new requirements are about security, not control.

Keep reading