Microsoft Adds AI to Windows Despite ‘Novel Security Risks’

Microsoft’s recent introduction of Copilot Actions, an experimental AI agent integrated into Windows, has sparked criticism from security experts who question the safety of pushing new features before fully understanding and containing their potential risks

Ars Technica reports that Microsoft unveiled Copilot Actions this week, a set of “experimental agentic features” that allow AI to perform various tasks such as organizing files, scheduling meetings, and sending emails. While the company touted the AI agent as an active digital collaborator that enhances efficiency and productivity, it also issued a warning about the security implications of enabling the feature.

Microsoft’s warning reads:

As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.

Security concerns stem from known defects inherent in most large language models (LLMs), including Copilot. Researchers have repeatedly demonstrated that LLMs can provide factually erroneous and illogical answers, a behavior known as “hallucinations.” This means users cannot fully trust the output of AI assistants like Copilot, Gemini, or Claude, and must independently verify the information.

Another significant issue with LLMs is their vulnerability to prompt injections. Hackers can exploit this flaw by planting malicious instructions in websites, resumes, and emails, which the AI eagerly follows without discerning between valid user prompts and untrusted, third-party content. These vulnerabilities can lead to data exfiltration, malicious code execution, and cryptocurrency theft.

Keep reading

Microsoft Warns Experimental Windows 11 AI Could Install Malware on Your Computer

Microsoft said in an update on Nov. 17 that Windows 11 users who utilize “agentic features” from its AI services should be cautious because the AI agents could potentially download and install malware.

In an alert, Microsoft warned that its AI models could “occasionally hallucinate” and introduce “novel security risks” such as malware because large language models, a type of AI that processes data and generates human-like text, are susceptible to cyberattacks.

“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA),” the warning stated. A prompt injection attack is a type of cyberattack where an attacker crafts an input to trick the AI into performing malicious actions.

Microsoft added that in the case of Windows 11’s “experimental” AI services, “malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”

The AI features are turned off by default and operate only after the user opts into them, the company said.

The agentic AI setting “can only be enabled by an administrator user of the device and once enabled, it’s enabled for all users on the device including other administrators and standard users,” Microsoft said of the AI services.

Keep reading

Huge Microsoft cloud crash leaves half the world without internet AGAIN

Microsoft‘s Azure, one of the world’s biggest cloud service providers, is suffering outages, triggering widespread internet disruptions across major companies. 

According to Downdetector, problems began around 11:30am ET, with reports surging from users who could not access cloud-connected services, websites or apps. 

The outage appears to be affecting dozens of platforms that rely on these cloud networks, including Microsoft 365, Xbox, Outlook, Starbucks, Costco and Kroger. 

Even popular developer and data tools like Blackbaud and Minecraft are showing connectivity issues. 

Downdetector has received nearly 20,000 issue reports from Azure users in the US. 

The Microsoft outage comes just days after Amazon Web Services disrupted ‘half the internet.’ 

The incidents have raised concerns about how much of the global online infrastructure depends on these two companies, which host everything from retail and entertainment platforms to business operations and cloud storage. 

Frustrated users have flooded social media to vent, with one post on X reading: ‘First AWS, now Azure goes down. I love it when big companies own half the internet!!!’ 

Keep reading

Microsoft: Sharp Increase in Ai-Aided Cyberattacks From Russia, China

Foreign adversaries are increasingly using artificial intelligence (AI) in their cyber influence campaigns, with operations picking up “aggressively” this year, Microsoft said on Oct. 16.

In July, Microsoft identified more than 200 instances of AI-generated content from nation-state adversaries, more than four times the number in July 2024, and more than 10 times the number in July 2023, the company’s annual Digital Defense Report shows.

AI can create increasingly convincing emails and generate digital clones of senior government officials or news anchors, according to the report. The sophistication of AI tools has made the operations “easier to scale, more effective, and harder to trace,” and it is becoming increasingly difficult to differentiate state- and non-state actors, the report stated.

For scammers, AI is making it easier to quickly create more convincing websites, profiles, emails, and IDs, the report said. Microsoft said it blocked 1.6 million fake account creation attempts per hour on the company’s platforms.

“Everyone—from industry to government—must be proactive to keep pace with increasingly sophisticated attackers and to ensure that defenders keep ahead of adversaries,” said Amy Hogan-Burney, Microsoft’s vice president for customer security and trust, who oversaw the report.

Keep reading

Microsoft OneDrive Limits How Often Users Can Restrict Facial Recognition Setting

Microsoft is quietly testing a new facial-recognition feature in OneDrive that automatically sorts photos based on who appears in them.

The experimental version is being rolled out to some early-access users, and it turns on by default while placing strict limits on how often it can be disabled.

Those with access will see a new privacy notice in the app stating: “OneDrive uses AI to recognize faces in your photos.”

Microsoft’s support page, which still labels the option as “coming soon,” explains that “Microsoft collects, uses, and stores facial scans and biometric information from your photos through the OneDrive app for facial grouping technologies.”

The company says this is intended to “help you quickly and easily organize photos of friends and family.”

Microsoft insists that the face groupings remain private even when users share albums. It also claims that “Microsoft does not use any of your facial scans and biometric information to train or improve the AI model overall.”

When asked why the system is enabled by default instead of requiring consent first, a company spokesperson told Slashdot that “Microsoft OneDrive inherits privacy features and settings from Microsoft 365 and SharePoint, where applicable.”

Keep reading

Big Tech is paying millions to train teachers on AI, in a push to bring chatbots into classrooms

On a scorching hot Saturday in San Antonio, dozens of teachers traded a day off for a glimpse of the future. The topic of the day’s workshop: enhancing instruction with artificial intelligence.

After marveling as AI graded classwork instantly and turned lesson plans into podcasts or online storybooks, one high school English teacher raised a concern that was on the minds of many: “Are we going to be replaced with AI?”

That remains to be seen. But for the nation’s 4 million teachers to stay relevant and help students use the technology wisely, teachers unions have forged an unlikely partnership with the world’s largest technology companies. The two groups don’t always see eye to eye but say they share a common goal: training the future workforce of America.

Microsoft, OpenAI and Anthropic are providing millions of dollars for AI training to the American Federation of Teachers, the country’s second-largest teachers union. In exchange, the tech companies have an opportunity to make inroads into schools and win over students in the race for AI dominance.

AFT President Randi Weingarten said skepticism guided her negotiations, but the tech industry has something schools lack: deep pockets.

“There is no one else who is helping us with this. That’s why we felt we needed to work with the largest corporations in the world,” Weingarten said. “We went to them — they didn’t come to us.”

Weingarten first met with Microsoft CEO Brad Smith in 2023 to discuss a partnership. She later reached out to OpenAI to pursue an “agnostic” approach that means any company’s AI tools could be used in a training session.

Under the arrangement announced in July, Microsoft is contributing $12.5 million to AFT over five years. OpenAI is providing $8 million in funding and $2 million in technical resources, and Anthropic has offered $500,000.

Keep reading

Trump Calls for Microsoft to ‘Terminate’ Former Biden Deputy AG Lisa Monaco

President Donald Trump called for Microsoft to “immediately terminate” former Biden Deputy Attorney General Lisa Monaco, whom the company hired to be the President of Global Affairs.

In a post on Truth Social, Trump labeled Monaco “Corrupt and Totally Trump Deranged,” and noted that she had served under former President Barack Obama, former President Joe Biden, and former Attorney General Merrick Garland, “Who were all the architects of the worst ever Deep State Conspiracies” against the United States.

Trump continued to note that his administration had “stripped her of all Security Clearances” and had taken away Monaco’s access to National Security Intelligence.

“Monaco has been shockingly hired as the President of Global Affairs for Microsoft, in a very senior role with access to Highly Sensitive Information,” Trump said. “Monaco’s having that kind of access is unacceptable, and cannot be allowed to stand.”

Keep reading

Microsoft reduces Israel’s access to cloud and AI products over reports of mass surveillance in Gaza

Microsoft said Thursday it had disabled services to a unit within the Israeli military after a company review had determined its artificial intelligence and cloud computing products were being used to help carry out mass surveillance of Palestinians.

The action comes after The Associated Press and The Guardian published reports earlier this year revealing how the Israeli Ministry of Defense had been using Microsoft’s Azure platform to aid in the war in Gaza and occupation of the West Bank. Brad Smith, Microsoft’s vice chair and president, wrote in a blog post that the company was taking steps to enforce compliance with its terms of service.

An AP investigation in February showed that the Israeli military’s use of Microsoft products skyrocketed after a deadly surprise attack by Hamas militants on Oct. 7, 2023. The AP’s report cited internal Microsoft data showing the Israelis were using gigabytes of cloud storage and massive amounts of AI-enabled language translation services.

The AP also reported that Israel’s military used Microsoft Azure to compile information gathered through mass surveillance, which it transcribes and translates, including phone calls and text messages. That intelligence is then cross-checked with Israel’s in-house AI systems for targeting airstrikes.

AP reported that internal Microsoft data showed multiple Azure subscriptions were tied to Unit 8200, an elite cyber warfare unit within the Israeli Army responsible for clandestine operations, collecting signal intelligence and surveillance.

Following AP’s report, Microsoft acknowledged in May that it had sold advanced AI and cloud computing services to the Israeli military during the Gaza war and aided in efforts to locate and rescue Israeli hostages. But the company said an internal review found “no evidence” its Azure platform was used to target or harm people.

The Guardian, working in partnership with the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, reported in August that the commander of Unit 8200 had met directly with Microsoft chairman and CEO Satya Nadella in 2021. The Israeli unit then used Microsoft products to aid in the development of an AI-powered mass surveillance system that was sweeping up, translating and analyzing millions of telephone calls per day made by Palestinian civilians. The report also revealed that data from the Israeli surveillance system was being stored at Microsoft cloud data centers in Europe.

Keep reading

Microsoft Word To Save New Files to the Cloud by Default

Microsoft is preparing to change how documents are saved in Word for Windows, shifting new file storage to the cloud by default.

Instead of asking users to activate AutoSave or select a cloud location manually, Word will now store all newly created documents directly in OneDrive or another designated cloud service automatically.

Raul Munoz, a product manager on Microsoft’s Office shared services and experiences team, described the change by saying, “We are modernizing the way files are created and stored in Word for Windows. Now you don’t have to worry about saving your documents: Anything new you create will be saved automatically to OneDrive or your preferred cloud destination.”

Currently being rolled out to Microsoft 365 Insiders, this new setup is presented as a way to prevent lost work and provide immediate access to files across mobile platforms and browsers.

However, for anyone working outside Microsoft’s cloud ecosystem, this change introduces additional steps to avoid online storage.

The update also comes with adjustments to how documents are named. Rather than appending sequential numbers to new files, Word will now assign file names based on the date of creation.

Users will have the option to set a preferred default save location or opt out of automatic cloud saves entirely, though doing so requires manual reconfiguration.

Microsoft has been steadily nudging its user base toward cloud reliance. AutoSave already defaults to cloud storage, and persistent prompts in Windows have encouraged, or pressured, users to turn on OneDrive backups.

These reminders have drawn complaints, especially from those who feel Microsoft is eroding straightforward local file access.

Keep reading

Microsoft Failed To Disclose Key Details About Use Of China-Based Engineers In U.S. Defense Work, Record Shows

Microsoft, as a provider of cloud services to the U.S. government, is required to regularly submit security plans to officials describing how the company will protect federal computer systems.

Yet in a 2025 submission to the Defense Department, the tech giant left out key details, including its use of employees based in China, the top cyber adversary of the U.S., to work on highly sensitive department systems, according to a copy obtained by ProPublica. In fact, the Microsoft plan viewed by ProPublica makes no reference to the company’s China-based operations or foreign engineers at all.

The document belies Microsoft’s repeated assertions that it disclosed the arrangement to the federal government, showing exactly what was left out as it sold its security plan to the Defense Department. The Pentagon has been investigating the use of foreign personnel by IT contractors in the wake of reporting by ProPublica last month that exposed Microsoft’s practice.

Our work detailed how Microsoft relies on “digital escorts” — U.S. personnel with security clearances — to supervise the foreign engineers who maintain the Defense Department’s cloud systems. The department requires that people handling sensitive data be U.S. citizens or permanent residents.

Microsoft’s security plan, dated Feb. 28 and submitted to the department’s IT agency, distinguishes between personnel who have undergone and passed background screenings to access its Azure Government cloud platform and those who have not. But it omits the fact that workers who have not been screened include non-U.S. citizens based in foreign countries. “Whenever non-screened personnel request access to Azure Government, an operator who has been screened and has access to Azure Government provides escorted access,” the company said in its plan.

The document also fails to disclose that the screened digital escorts can be contractors hired by a staffing company, not Microsoft employees. ProPublica found that escorts, in many cases former military personnel selected because they possess active security clearances, often lack the expertise needed to supervise engineers with far more advanced technical skills. Microsoft has told ProPublica that escorts “are provided specific training on protecting sensitive data” and preventing harm.

Microsoft’s reference to the escort model comes two-thirds of the way into the 125-page document, known as a “System Security Plan,” in several paragraphs under the heading “Escorted Access.” Government officials are supposed to evaluate these plans to determine whether the security measures disclosed in them are acceptable.

In interviews with ProPublica, Microsoft has maintained that it disclosed the digital escorting arrangement in the plan, and that the government approved it. But Defense Secretary Pete Hegseth and other government officials have expressed shock and outrage over the model, raising questions about what, exactly, the company disclosed as it sought to win and keep government cloud computing contracts.

Keep reading