Microsoft Warns Experimental Windows 11 AI Could Install Malware on Your Computer

Microsoft said in an update on Nov. 17 that Windows 11 users who utilize “agentic features” from its AI services should be cautious because the AI agents could potentially download and install malware.

In an alert, Microsoft warned that its AI models could “occasionally hallucinate” and introduce “novel security risks” such as malware because large language models, a type of AI that processes data and generates human-like text, are susceptible to cyberattacks.

“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA),” the warning stated. A prompt injection attack is a type of cyberattack where an attacker crafts an input to trick the AI into performing malicious actions.

Microsoft added that in the case of Windows 11’s “experimental” AI services, “malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”

The AI features are turned off by default and operate only after the user opts into them, the company said.

The agentic AI setting “can only be enabled by an administrator user of the device and once enabled, it’s enabled for all users on the device including other administrators and standard users,” Microsoft said of the AI services.

Keep reading

New U.N. Treaty Decriminalizes AI Child Sexual Abuse Images

Anew United Nations treaty against cybercrime expressly allows for child sexting and the production and dissemination of virtual child pornography. The new treaty will normalize pedophilia and turbocharge pedophilic sexual content online via artificial intelligence and sexbots.

Reports vary on the exact number of initial signatories on Oct. 25 — the U.N. reports 65 signatories, including the European Union, China, and Russia. It will enter into force after 40 countries ratify it.

The treaty has been criticized widely because it is seen as undermining privacy and human rights by requiring countries to cooperate with dictatorial regimes. But the bigger problem is that the new treaty expressly allows virtual child pornography and child sexting, which until now were deemed criminal child pornography under the Optional Protocol of the Convention on the Rights of the Child on the sale of children, child prostitution, and child pornography (“Optional Protocol”), a treaty developed, funded, and backed by the U.S. government for three decades. The way the new treaty undermines the Optional Protocol is quite disingenuous.

The new treaty expressly establishes that countries who ratify the treaty “may” decriminalize some materials until now considered illegal child pornography. According to article 14.2 of the new treaty, countries may opt to decriminalize the production, distribution, and possession of sexualized visual, written, or audio content depicting children, so long as they do not represent an “existing person” or do not “visually depict child sexual abuse or child sexual exploitation.” Articles 14.4 and 15.4 also give states the option to decriminalize sexting between children under all circumstances as well as sexting by a child to an adult when the child is above the age of consent and the sexting is done privately. In such cases, the treaty only requires prosecution where images are shared “non-consensually.”

Keep reading

Google boss says trillion-dollar AI investment boom has ‘elements of irrationality’

Every company would be affected if the AI bubble were to burst, the head of Google’s parent firm Alphabet has told the BBC.

Speaking exclusively to BBC News, Sundar Pichai said while the growth of artificial intelligence (AI) investment had been an “extraordinary moment”, there was some “irrationality” in the current AI boom.

It comes amid fears in Silicon Valley and beyond of a bubble as the value of AI tech companies has soared in recent months and companies spend big on the burgeoning industry.

Asked whether Google would be immune to the impact of the AI bubble bursting, Mr Pichai said the tech giant could weather that potential storm, but also issued a warning.

“I think no company is going to be immune, including us,” he said.

In a wide-ranging exclusive interview at Google’s California headquarters, he also addressed energy needs, slowing down climate targets, UK investment, the accuracy of his AI models, and the effect of the AI revolution on jobs.

The interview comes as scrutiny on the state of the AI market has never been more intense.

Alphabet shares have doubled in value in seven months to $3.5tn (£2.7tn) as markets have grown more confident in the search giant’s ability to fend off the threat from ChatGPT owner OpenAI.

A particular focus is Alphabet’s development of specialised superchips for AI that compete with Nvidia, run by Jensen Huang, which recently reached a world first $5tn valuation.

As valuations rise, some analysts have expressed scepticism about a complicated web of $1.4tn of deals being done around OpenAI, which is expected to have revenues this year of less than one thousandth of the planned investment.

It has raised fears stock markets are heading for a repeat of the dotcom boom and bust of the late 1990s. This saw the values of early internet companies surge amid a wave of optimism for what was then a new technology, before the bubble burst in early 2000 and many share prices collapsed.

This led to some companies going bust, resulting in job losses. A drop in share prices can also hit the value of people’s savings including their pension funds.

In comments echoing those made by US Federal Reserve chairman Alan Greenspan in 1996, warning of “irrational exuberance” in the market well ahead of the dotcom crash, Mr Pichai said the industry can “overshoot” in investment cycles like this.

“We can look back at the internet right now. There was clearly a lot of excess investment, but none of us would question whether the internet was profound,” he said.

“I expect AI to be the same. So I think it’s both rational and there are elements of irrationality through a moment like this.”

His comments follow a warning from Jamie Dimon, the boss of US bank JP Morgan, who told the BBC last month that investment in AI would pay off, but some of the money poured into the industry would “probably be lost”.

But Mr Pichai said Google’s unique model of owning its own “full stack” of technologies – from chips to YouTube data, to models and frontier science – meant it was in a better position to ride out any AI market turbulence.

Keep reading

Pennsylvania School District Using AI-Enabled Wi-Fi To Search Students For Firearms

A Pennsylvania school district is using artificial intelligence to keep guns off its campuses. But civil liberties advocates have warned that the technology could lead to mass surveillance and violation of constitutional rights.

The Chartiers Valley School District in Allegheny County has implemented AI that harnesses the district’s Wi-Fi signals to determine whether people are carrying weapons as they enter the schools.

The technology, called Wi-AI, was developed by CurvePoint of Pittsburgh. CurvePoint grew out of AI research at Carnegie Mellon University.

According to the companyWi-AI uses “spatial intelligence” to find weapons such as guns before they enter a school.

The AI system analyzes a space and detects where potential weapons are located by interpreting “how Wi-Fi signals reflect off people and objects.”

Once a possible weapon is found, security personnel, school administrators, or others can go to the location to determine whether there is actually a threat.

It is now in use at Chartiers Valley School District high school, middle school, and primary school campuses. CurvePoint CEO Skip Smith said that in a recent test, the system found a pistol hidden in a backpack. He said the technology has a 95 percent success rate, failing only 4 percent of its searches.

Smith said the Wi-AI does not carry the same privacy concerns of other security systems because it does not rely on facial recognition or biometric data.

“We don’t know it’s you,“ Smith told The Epoch Times. ”We have no biometric information about you. Our system just sees a big bag of salt water.”

Darren Mariano, president of the Chartiers Valley Board of School Directors, said the district is excited to be the first in the country to adopt the technology.

The safety of our students and staff is always our top priority,” he said in a statement. “We’re thrilled to be the first district in the nation to implement this groundbreaking technology.”

Keep reading

Google Sued For Allegedly Using Gemini AI Tool To Track Users’ Private Communications

Google LLC is accused in a civil lawsuit of using its artificial intelligence program Gemini to collect data on users’ private communications in Gmail as well as Google’s instant messaging and video conference programs.

Until around Oct. 10, the Gemini AI assistant required the user to deliberately opt into its feature. After that date, the feature was allegedly “secretly” turned on by Google for all its users’ Gmail, Chat, and Meet accounts by default, enabling AI to track its users’ private data in those platforms “without the users’ knowledge or consent,” according to the complaint filed Nov. 11 in federal court in San Jose.

The class action lawsuit was filed in the U.S. District Court for the Northern District of California, alleging that Google is violating the California Invasion of Privacy Act, a 1967 law that prohibits surreptitious wiretapping and recording of confidential communications without the consent of all parties involved.

Although Google provides a way for users to turn off the feature, it requires users to look for it in the privacy settings to deactivate it, despite never having agreed to it in the first place, the complaint said.

The AI feature is categorized in “Google Workspace smart features” in Google settings. Once turned on, it means the user consents to the program using “Workspace content and activity” across Workspace or in other Google products.

When the feature is turned on, Gemini can “scan, read, and analyze every email (and email attachment), message, and conversation on those services,” according to the complaint.

Technology writer Ruben Circelli wrote in a PCMag article that Gemini is “downright creepy” in diving deep into his personal history, analyzing 16 years’ worth of emails after he signed up for a more advanced pro feature.

In a series of tests by Circelli, Gemini told him one of his character flaws and even knew who his first crush was in elementary school.

“This invasion of privacy wasn’t just disconcerting, though; it was unexpected,” Circelli wrote.

“Google didn’t explain what this integration would do before I signed up for its AI Pro plan, nor did it give me a way to opt out at the start.”

The Epoch Times reached out to Google for comment, but did not receive an immediate response.

“We do not use your Workspace data to train or improve the underlying generative AI and large language models that power Gemini, Search, and other systems outside of Workspace without permission,” the company has stated.

Keep reading

Russia’s AI Enabled Drones: The Next Evolution of Warfare

Russia is now fielding long-range, fiber-optic–controlled drones while simultaneously accelerating development of fully autonomous, AI-driven attack drones that mark a dramatic shift in its battlefield strategy. Ukrainian officials confirm that Moscow is deploying jam-proof fiber-optic FPV platforms with a 50-kilometer range, forcing Ukrainian units to cover supply routes with netting to protect vehicles from incoming strikes.

At the same time, Russia is rolling out a new generation of AI-enabled systems powered by smuggled NVIDIA Jetson processors, which were supposed to be blocked by sanctions but are now appearing inside multiple Russian drones.

Ukrainian drone specialist Serhii “Flash” Beskrestnov reports that Russia has adapted Ukrainian innovations for its V2U autonomous strike drone, while other captured systems, like the Shahed MS001 and Tyuvik, show that Russia is rapidly building an arsenal of “digital predators” capable of finding and attacking targets without human control.

The V2U remains Russia’s most advanced autonomous platform. First deployed in Ukraine’s Sumy region in February 2025, the drone was being used 30–50 times per day by mid-May across multiple fronts. It navigates by using machine vision to compare live imagery with digital terrain maps stored on a solid-state drive, upgraded to 128 GB in later versions. A 14-megapixel camera, laser rangefinder, and NVIDIA Jetson Orin processor enable the drone to identify targets, conduct terrain analysis, and strike without GPS or human commands.

Russian forces update its onboard code almost weekly, training the AI directly through battlefield experience. The drone’s AI is powerful enough to fly up and down roads searching for targets, though its limited ability to distinguish targets has led to mistakes, including striking civilian infrastructure such as a public toilet instead of a vehicle.

To extend range, Russian forces deploy large “mother drones” that transport smaller V2Us deep into contested airspace before releasing them for individual strikes. Russia is also experimenting with coordinated swarms of seven or eight V2Us, each painted with distinct wing colors to visually identify one another.

According to Ukrainian analysts, these drones can maintain formation, coordinate attack order, and perform anti-air evasive maneuvers if a member of the group is shot down. This behavior resembles early machine-driven swarm intelligence and may be adapted for Russia’s long-range Shahed drones, creating fully autonomous loitering-munitions swarms capable of saturating Ukrainian defenses.

Captured drones reveal the scale of Russia’s dependence on foreign components. The V2U relies on an NVIDIA Jetson Orin for AI processing, Intel wireless adapters, Sony optical sensors, and numerous Chinese-made motors, drives, and batteries. The MS001, an upgraded Shahed variant, also carries a Jetson Orin paired with a thermal imager and digital modem, allowing it to recognize ground objects and strike moving targets rather than simply flying to preset coordinates.

A third autonomous drone, the Tyuvik, resembles a miniature Shahed and is now reportedly in mass production. Tyuvik can locate and strike moving armored vehicles despite using inexpensive commercial hardware, suggesting it also depends on smuggled Western or Chinese AI processors.

Keep reading

Data centers encounter local pushback amid major growth

At least 16 data center projects, worth a combined $64 billion, have been blocked or delayed as local opposition mounts to the developments, according to a new study.

Research collected by Data Center Watch shows that residents and politicians across seven states have stopped or stalled the data center projects.

In Arizona’s West Valley, development company Tract withdrew plans for a $14 billion project after city officials declined to approve required rezoning. Tract eventually announced a similar project in Buckeye, Ariz., where the development is proceeding.

In Peculiar, Mo., and Chesterton, Ind., residents and local officials also blocked data center developments worth billions.

In total, the study found that six data center developments have been fully blocked since May 2024. The backlash has also delayed 10 other data centers, including two from Amazon.

Nine of the documented data center blockages and delays have occurred in Virginia, the world’s unofficial data center capital, according to the research firm.

The study’s authors also found growing bipartisan aversion to the behemoth data center projects: about 55 percent of Republicans and 45 percent of Democrats in districts with large data center projects have taken public positions against the developments, according to the study.

“This cross-party resistance defies expectations and marks a rare area of bipartisan alignment in infrastructure politics,” the authors wrote.

The report also found that data centers were becoming an intensifying issue in local politics. As energy costs soar and affordability takes center stage, it’s likely more candidates and elected officials will take sides on the projects.

Keep reading

CEO of Palantir Says He Spends a Large Amount of Time Talking to Nazis

While you were busy wasting your time listening to podcasts and doomscrolling on your phone, one of America’s leading AI overlords was educating himself by talking to Nazis.

This was the startling admission made by Alex Karp, cofounder and CEO of the software company Palantir, a company that’s come under increasingly heavy scrutiny for its growing role as a provider of AI-powered surveillance technology to the military and government.

In an interview with podcaster Molly O’Shea published this week, Karp, who has Jewish heritage, was discussing German culture and his time in the country before going on a tangent about how outrageous it is that people online “laud the Nazis.” Then he fessed up to something even more eyebrow-raising.

“I spend a lot of time talking to Nazis,” Karp said, implying that this is an ongoing pastime of his. “Like, real Nazis,” he emphasized.

Karp explained that it was his way of “understanding what made them tick,” before making an ironic observation.

“Part of the crazy thing about people who laud the Nazis nowadays is there’s not a single Nazi that would ever have included them in their movement and would have shipped them off to the camps quicker maybe than they shipped me off to the camps!” he chuckled.

He then pulled off the smoothest segue of all time.

“Uh, but, um, and uh, and it’s like, it’s uh but” — the interview mercifully jumps cuts —  “the thing that’s crazy unique about America,” Karp began to muse.

Beyond his role as Palantir’s head honcho, Karp is known for his philosophical ramblings, his “eccentric” personality, and his affinity for German culture. He has a PhD in philosophy from Goethe University Frankfurt, and draws on his background to defend Western values — in particular American ones — as especially good for the world.

This year, for instance, he published a book about how the US needed to embrace having the most technologically advanced weapons possible to preserve its dominance. An excerpt of that book was published online as an essay under the headline “We Need a New Manhattan Project.”

Keep reading

A MASSIVE 97% of Listeners Fooled: Can YOU Tell If This Hit Song Is Human… or AI?

In an era where the boundaries between the synthetic and the sentient blur with alarming rapidity, a sobering revelation has emerged from the sonic realm: humanity’s capacity to discern the hand of the artist from the algorithm has all but evaporated. 

A recent survey commissioned by the French streaming platform Deezer, polling 9,000 individuals across eight nations, laid bare this disquieting truth. 

Respondents were tasked with listening to two clips of music wholly conjured by artificial intelligence and one crafted by human hands; astonishingly, 97 percent failed to differentiate between them. 

Deezer’s chief executive, Alexis Lanternier, observed, “The survey results clearly show that people care about music and want to know if they’re listening to AI or human made tracks or not.” 

Keep reading

ChatGPT’s Use Of Song Lyrics Violates Copyright, Munich Court Finds

  • Judges found GEMA’s claims valid, ordering OpenAI to cease reproduction and provide damages and disclosure.
  • The court said GPT-4 and GPT-4o “memorized” lyrics, amounting to reproduction under EU copyright rules.
  • The decision, not yet final, could set a major European precedent on AI training data.

Germany’s national music rights organization secured a partial but decisive win against OpenAI after a Munich court ruled that ChatGPT’s underlying models unlawfully reproduced copyrighted German song lyrics.

The ruling orders OpenAI to cease reproduction, disclose relevant training details, and compensate rights holders.

It is not yet final, and OpenAI may appeal.

If upheld, the decision could reshape how AI companies source and license creative material in Europe, as regulators weigh broader obligations for model transparency and training-data provenance.

The case marks the first time a European court has found that a large language model violated copyright by memorizing protected works.

In its decision, the 42nd Civil Chamber of the Munich I Regional Court said that GPT-4 and GPT-4o contained “reproducible” lyrics from nine well-known songs, including Kristina Bach’s “Atemlos” and Rolf Zuckowski’s “Wie schön, dass du geboren bist.”

The court held that such memorization constitutes a “fixation” of the original works in the model’s parameters, satisfying the legal definition of reproduction under Article 2 of the EU InfoSoc Directive and Germany’s Copyright Act.

“At least in individual cases, when prompted accordingly, the model produces an output whose content is at least partially identical to content from the earlier training dataset,” a translated copy of the written judgement provided by the Munich court to Decrypt reads.

The model “generates a sequence of tokens that appears statistically plausible because, for example, it was contained in the training process in a particularly stable or frequently recurring form,” the court wrote, adding that because this “token sequence appeared on a large number of publicly accessible websites“ it meant that it was “included in the training dataset more than once.”

In the pleadings, GEMA argued that the model’s output lyrics were almost verbatim when prompted, proving that OpenAI’s systems had retained and reproduced the works.

OpenAI countered that its models do not store training data directly and that any output results from user prompts, not from deliberate copying.

The company also invoked text-and-data-mining exceptions, which allow temporary reproductions for analytical use.

“We disagree with the ruling and are considering next steps,” a spokesperson for OpenAI told Decrypt. “The decision is for a limited set of lyrics and does not impact the millions of people, businesses, and developers in Germany that use our technology every day.” 

Keep reading