Gmail Explainer: How to Stop Google AI from Snooping Through Your Emails

Google has quietly started accessing Gmail users’ private emails and attachments to train its AI models, requiring manual opt-out to avoid participation. To make the process even trickier, Gmail users have to opt out in two separate places for the change to work. Follow these steps to protect your privacy from Google’s invasive AI endeavors.

Malwarebytes reports that Google has recently implemented changes that enable Gmail to access all private messages and attachments for the purpose of training its AI models. This means that unless users take action to opt out, their emails could be analyzed to improve Google’s AI assistants, such as Smart Compose or AI-generated replies.

The motivation behind this change is Google’s push to enhance Gmail’s features with the company’s Gemini AI, aiming to help users write emails more efficiently and manage their inboxes more effectively. To accomplish this, Google is utilizing real email content, including attachments, to train and refine its AI models. These settings are now reportedly switched on by default, rather than requiring explicit opt-in consent.

As a result, if users do not manually disable these settings, their private messages may be used for AI training without their knowledge. While Google assures strong privacy measures are in place, such as data anonymization and security during the AI training process, those handling sensitive or confidential information may find little comfort in these promises.

To fully opt out of Gmail’s AI training, users must change settings in two separate locations. This article features a guide and images for opting out on desktop, but the selections are very similar if accessing Gmail via the mobile app.

Keep reading

WaPo Defends Data Centers—With Few Disclosures That Amazon Depends on Them

US electricity prices, you may have noticed, keep going up. And in some parts of the country, like here in the DC region, they’re soaring. In Virginia, for example, electricity rates are up 13% this year, an issue Democrats highlighted as they swept back into power in Richmond earlier this month.

Burgeoning electric bills also factored into Democrats’ November wins in New Jersey and Georgia. But let’s stick with Virginia for a moment, where energy-sucking data centers are so plentiful that if northern Virginia’s DC suburbs were to secede, the new country would have more data center capacity than China.

As a result of these data centers, this new country would likely suffer from crippling electric bills. “Wholesale electricity [now] costs as much as 267% more than it did five years ago in areas near data centers. That’s being passed on to customers,” read a recent Bloomberg subhead.

Keep reading

White House crafting executive order to thwart state AI laws

An executive order that could soon be signed by President Donald Trump would thwart states’ artificial intelligence laws by launching legal challenges and withholding federal funding, according to a draft of the order obtained by CNBC on Wednesday.

The draft surfaced shortly after Trump publicly called for a single federal standard on AI “instead of a patchwork of 50 State Regulatory Regimes.”

The draft order would give Attorney General Pam Bondi 30 days to establish an “AI Litigation Task Force” whose sole task is to challenge state AI laws.

Those challenges would be issued “on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment,” the draft says.

The order also directs Commerce Secretary Howard Lutnick to notify states with contested AI laws that they are ineligible for funds under the federal Broadband Equity Access and Deployment program. BEAD is a more than $42 billion program that has allocated funding to all U.S. states and territories.

The order, which was first reported by The Information, is not yet finalized. A White House official told CNBC that any discussion around it is just speculation until it is officially announced.

As written, the EO would be a major win for the burgeoning AI industry, whose leaders — including Sam Altman’s OpenAI, venture capital firm Andreessen Horowitz and other Silicon Valley titans — oppose an inconsistent state-by-state policy approach.

It would be just as big a blow to state lawmakers across the country who have sought to pass bills that would place guardrails on the nascent technology.

Keep reading

Microsoft Adds AI to Windows Despite ‘Novel Security Risks’

Microsoft’s recent introduction of Copilot Actions, an experimental AI agent integrated into Windows, has sparked criticism from security experts who question the safety of pushing new features before fully understanding and containing their potential risks

Ars Technica reports that Microsoft unveiled Copilot Actions this week, a set of “experimental agentic features” that allow AI to perform various tasks such as organizing files, scheduling meetings, and sending emails. While the company touted the AI agent as an active digital collaborator that enhances efficiency and productivity, it also issued a warning about the security implications of enabling the feature.

Microsoft’s warning reads:

As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.

Security concerns stem from known defects inherent in most large language models (LLMs), including Copilot. Researchers have repeatedly demonstrated that LLMs can provide factually erroneous and illogical answers, a behavior known as “hallucinations.” This means users cannot fully trust the output of AI assistants like Copilot, Gemini, or Claude, and must independently verify the information.

Another significant issue with LLMs is their vulnerability to prompt injections. Hackers can exploit this flaw by planting malicious instructions in websites, resumes, and emails, which the AI eagerly follows without discerning between valid user prompts and untrusted, third-party content. These vulnerabilities can lead to data exfiltration, malicious code execution, and cryptocurrency theft.

Keep reading

Microsoft Warns Experimental Windows 11 AI Could Install Malware on Your Computer

Microsoft said in an update on Nov. 17 that Windows 11 users who utilize “agentic features” from its AI services should be cautious because the AI agents could potentially download and install malware.

In an alert, Microsoft warned that its AI models could “occasionally hallucinate” and introduce “novel security risks” such as malware because large language models, a type of AI that processes data and generates human-like text, are susceptible to cyberattacks.

“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA),” the warning stated. A prompt injection attack is a type of cyberattack where an attacker crafts an input to trick the AI into performing malicious actions.

Microsoft added that in the case of Windows 11’s “experimental” AI services, “malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”

The AI features are turned off by default and operate only after the user opts into them, the company said.

The agentic AI setting “can only be enabled by an administrator user of the device and once enabled, it’s enabled for all users on the device including other administrators and standard users,” Microsoft said of the AI services.

Keep reading

New U.N. Treaty Decriminalizes AI Child Sexual Abuse Images

Anew United Nations treaty against cybercrime expressly allows for child sexting and the production and dissemination of virtual child pornography. The new treaty will normalize pedophilia and turbocharge pedophilic sexual content online via artificial intelligence and sexbots.

Reports vary on the exact number of initial signatories on Oct. 25 — the U.N. reports 65 signatories, including the European Union, China, and Russia. It will enter into force after 40 countries ratify it.

The treaty has been criticized widely because it is seen as undermining privacy and human rights by requiring countries to cooperate with dictatorial regimes. But the bigger problem is that the new treaty expressly allows virtual child pornography and child sexting, which until now were deemed criminal child pornography under the Optional Protocol of the Convention on the Rights of the Child on the sale of children, child prostitution, and child pornography (“Optional Protocol”), a treaty developed, funded, and backed by the U.S. government for three decades. The way the new treaty undermines the Optional Protocol is quite disingenuous.

The new treaty expressly establishes that countries who ratify the treaty “may” decriminalize some materials until now considered illegal child pornography. According to article 14.2 of the new treaty, countries may opt to decriminalize the production, distribution, and possession of sexualized visual, written, or audio content depicting children, so long as they do not represent an “existing person” or do not “visually depict child sexual abuse or child sexual exploitation.” Articles 14.4 and 15.4 also give states the option to decriminalize sexting between children under all circumstances as well as sexting by a child to an adult when the child is above the age of consent and the sexting is done privately. In such cases, the treaty only requires prosecution where images are shared “non-consensually.”

Keep reading

Google boss says trillion-dollar AI investment boom has ‘elements of irrationality’

Every company would be affected if the AI bubble were to burst, the head of Google’s parent firm Alphabet has told the BBC.

Speaking exclusively to BBC News, Sundar Pichai said while the growth of artificial intelligence (AI) investment had been an “extraordinary moment”, there was some “irrationality” in the current AI boom.

It comes amid fears in Silicon Valley and beyond of a bubble as the value of AI tech companies has soared in recent months and companies spend big on the burgeoning industry.

Asked whether Google would be immune to the impact of the AI bubble bursting, Mr Pichai said the tech giant could weather that potential storm, but also issued a warning.

“I think no company is going to be immune, including us,” he said.

In a wide-ranging exclusive interview at Google’s California headquarters, he also addressed energy needs, slowing down climate targets, UK investment, the accuracy of his AI models, and the effect of the AI revolution on jobs.

The interview comes as scrutiny on the state of the AI market has never been more intense.

Alphabet shares have doubled in value in seven months to $3.5tn (£2.7tn) as markets have grown more confident in the search giant’s ability to fend off the threat from ChatGPT owner OpenAI.

A particular focus is Alphabet’s development of specialised superchips for AI that compete with Nvidia, run by Jensen Huang, which recently reached a world first $5tn valuation.

As valuations rise, some analysts have expressed scepticism about a complicated web of $1.4tn of deals being done around OpenAI, which is expected to have revenues this year of less than one thousandth of the planned investment.

It has raised fears stock markets are heading for a repeat of the dotcom boom and bust of the late 1990s. This saw the values of early internet companies surge amid a wave of optimism for what was then a new technology, before the bubble burst in early 2000 and many share prices collapsed.

This led to some companies going bust, resulting in job losses. A drop in share prices can also hit the value of people’s savings including their pension funds.

In comments echoing those made by US Federal Reserve chairman Alan Greenspan in 1996, warning of “irrational exuberance” in the market well ahead of the dotcom crash, Mr Pichai said the industry can “overshoot” in investment cycles like this.

“We can look back at the internet right now. There was clearly a lot of excess investment, but none of us would question whether the internet was profound,” he said.

“I expect AI to be the same. So I think it’s both rational and there are elements of irrationality through a moment like this.”

His comments follow a warning from Jamie Dimon, the boss of US bank JP Morgan, who told the BBC last month that investment in AI would pay off, but some of the money poured into the industry would “probably be lost”.

But Mr Pichai said Google’s unique model of owning its own “full stack” of technologies – from chips to YouTube data, to models and frontier science – meant it was in a better position to ride out any AI market turbulence.

Keep reading

Pennsylvania School District Using AI-Enabled Wi-Fi To Search Students For Firearms

A Pennsylvania school district is using artificial intelligence to keep guns off its campuses. But civil liberties advocates have warned that the technology could lead to mass surveillance and violation of constitutional rights.

The Chartiers Valley School District in Allegheny County has implemented AI that harnesses the district’s Wi-Fi signals to determine whether people are carrying weapons as they enter the schools.

The technology, called Wi-AI, was developed by CurvePoint of Pittsburgh. CurvePoint grew out of AI research at Carnegie Mellon University.

According to the companyWi-AI uses “spatial intelligence” to find weapons such as guns before they enter a school.

The AI system analyzes a space and detects where potential weapons are located by interpreting “how Wi-Fi signals reflect off people and objects.”

Once a possible weapon is found, security personnel, school administrators, or others can go to the location to determine whether there is actually a threat.

It is now in use at Chartiers Valley School District high school, middle school, and primary school campuses. CurvePoint CEO Skip Smith said that in a recent test, the system found a pistol hidden in a backpack. He said the technology has a 95 percent success rate, failing only 4 percent of its searches.

Smith said the Wi-AI does not carry the same privacy concerns of other security systems because it does not rely on facial recognition or biometric data.

“We don’t know it’s you,“ Smith told The Epoch Times. ”We have no biometric information about you. Our system just sees a big bag of salt water.”

Darren Mariano, president of the Chartiers Valley Board of School Directors, said the district is excited to be the first in the country to adopt the technology.

The safety of our students and staff is always our top priority,” he said in a statement. “We’re thrilled to be the first district in the nation to implement this groundbreaking technology.”

Keep reading

Google Sued For Allegedly Using Gemini AI Tool To Track Users’ Private Communications

Google LLC is accused in a civil lawsuit of using its artificial intelligence program Gemini to collect data on users’ private communications in Gmail as well as Google’s instant messaging and video conference programs.

Until around Oct. 10, the Gemini AI assistant required the user to deliberately opt into its feature. After that date, the feature was allegedly “secretly” turned on by Google for all its users’ Gmail, Chat, and Meet accounts by default, enabling AI to track its users’ private data in those platforms “without the users’ knowledge or consent,” according to the complaint filed Nov. 11 in federal court in San Jose.

The class action lawsuit was filed in the U.S. District Court for the Northern District of California, alleging that Google is violating the California Invasion of Privacy Act, a 1967 law that prohibits surreptitious wiretapping and recording of confidential communications without the consent of all parties involved.

Although Google provides a way for users to turn off the feature, it requires users to look for it in the privacy settings to deactivate it, despite never having agreed to it in the first place, the complaint said.

The AI feature is categorized in “Google Workspace smart features” in Google settings. Once turned on, it means the user consents to the program using “Workspace content and activity” across Workspace or in other Google products.

When the feature is turned on, Gemini can “scan, read, and analyze every email (and email attachment), message, and conversation on those services,” according to the complaint.

Technology writer Ruben Circelli wrote in a PCMag article that Gemini is “downright creepy” in diving deep into his personal history, analyzing 16 years’ worth of emails after he signed up for a more advanced pro feature.

In a series of tests by Circelli, Gemini told him one of his character flaws and even knew who his first crush was in elementary school.

“This invasion of privacy wasn’t just disconcerting, though; it was unexpected,” Circelli wrote.

“Google didn’t explain what this integration would do before I signed up for its AI Pro plan, nor did it give me a way to opt out at the start.”

The Epoch Times reached out to Google for comment, but did not receive an immediate response.

“We do not use your Workspace data to train or improve the underlying generative AI and large language models that power Gemini, Search, and other systems outside of Workspace without permission,” the company has stated.

Keep reading

Russia’s AI Enabled Drones: The Next Evolution of Warfare

Russia is now fielding long-range, fiber-optic–controlled drones while simultaneously accelerating development of fully autonomous, AI-driven attack drones that mark a dramatic shift in its battlefield strategy. Ukrainian officials confirm that Moscow is deploying jam-proof fiber-optic FPV platforms with a 50-kilometer range, forcing Ukrainian units to cover supply routes with netting to protect vehicles from incoming strikes.

At the same time, Russia is rolling out a new generation of AI-enabled systems powered by smuggled NVIDIA Jetson processors, which were supposed to be blocked by sanctions but are now appearing inside multiple Russian drones.

Ukrainian drone specialist Serhii “Flash” Beskrestnov reports that Russia has adapted Ukrainian innovations for its V2U autonomous strike drone, while other captured systems, like the Shahed MS001 and Tyuvik, show that Russia is rapidly building an arsenal of “digital predators” capable of finding and attacking targets without human control.

The V2U remains Russia’s most advanced autonomous platform. First deployed in Ukraine’s Sumy region in February 2025, the drone was being used 30–50 times per day by mid-May across multiple fronts. It navigates by using machine vision to compare live imagery with digital terrain maps stored on a solid-state drive, upgraded to 128 GB in later versions. A 14-megapixel camera, laser rangefinder, and NVIDIA Jetson Orin processor enable the drone to identify targets, conduct terrain analysis, and strike without GPS or human commands.

Russian forces update its onboard code almost weekly, training the AI directly through battlefield experience. The drone’s AI is powerful enough to fly up and down roads searching for targets, though its limited ability to distinguish targets has led to mistakes, including striking civilian infrastructure such as a public toilet instead of a vehicle.

To extend range, Russian forces deploy large “mother drones” that transport smaller V2Us deep into contested airspace before releasing them for individual strikes. Russia is also experimenting with coordinated swarms of seven or eight V2Us, each painted with distinct wing colors to visually identify one another.

According to Ukrainian analysts, these drones can maintain formation, coordinate attack order, and perform anti-air evasive maneuvers if a member of the group is shot down. This behavior resembles early machine-driven swarm intelligence and may be adapted for Russia’s long-range Shahed drones, creating fully autonomous loitering-munitions swarms capable of saturating Ukrainian defenses.

Captured drones reveal the scale of Russia’s dependence on foreign components. The V2U relies on an NVIDIA Jetson Orin for AI processing, Intel wireless adapters, Sony optical sensors, and numerous Chinese-made motors, drives, and batteries. The MS001, an upgraded Shahed variant, also carries a Jetson Orin paired with a thermal imager and digital modem, allowing it to recognize ground objects and strike moving targets rather than simply flying to preset coordinates.

A third autonomous drone, the Tyuvik, resembles a miniature Shahed and is now reportedly in mass production. Tyuvik can locate and strike moving armored vehicles despite using inexpensive commercial hardware, suggesting it also depends on smuggled Western or Chinese AI processors.

Keep reading