Microsoft: Sharp Increase in AI-Aided Cyberattacks From Russia, China

Foreign adversaries are increasingly using artificial intelligence (AI) in their cyber influence campaigns, with operations picking up “aggressively” this year, Microsoft said on Oct. 16.

In July, Microsoft identified more than 200 instances of AI-generated content from nation-state adversaries, more than four times the number in July 2024, and more than 10 times the number in July 2023, the company’s annual Digital Defense Report shows.

AI can create increasingly convincing emails and generate digital clones of senior government officials or news anchors, according to the report. The sophistication of AI tools has made the operations “easier to scale, more effective, and harder to trace,” and it is becoming increasingly difficult to differentiate between state and non-state actors, the report stated.

For scammers, AI is making it easier to quickly create more convincing websites, profiles, emails, and IDs, the report said. Microsoft said it blocked 1.6 million fake account creation attempts per hour on the company’s platforms.

“Everyone—from industry to government—must be proactive to keep pace with increasingly sophisticated attackers and to ensure that defenders keep ahead of adversaries,” said Amy Hogan-Burney, Microsoft’s vice president for customer security and trust, who oversaw the report.

Keep reading

Microsoft OneDrive Limits How Often Users Can Restrict Facial Recognition Setting

Microsoft is quietly testing a new facial-recognition feature in OneDrive that automatically sorts photos based on who appears in them.

The experimental version is being rolled out to some early-access users, and it turns on by default while placing strict limits on how often it can be disabled.

Those with access will see a new privacy notice in the app stating: “OneDrive uses AI to recognize faces in your photos.”

Microsoft’s support page, which still labels the option as “coming soon,” explains that “Microsoft collects, uses, and stores facial scans and biometric information from your photos through the OneDrive app for facial grouping technologies.”

The company says this is intended to “help you quickly and easily organize photos of friends and family.”

Microsoft insists that the face groupings remain private even when users share albums. It also claims that “Microsoft does not use any of your facial scans and biometric information to train or improve the AI model overall.”

When asked why the system is enabled by default instead of requiring consent first, a company spokesperson told Slashdot that “Microsoft OneDrive inherits privacy features and settings from Microsoft 365 and SharePoint, where applicable.”

Keep reading

Big Tech is paying millions to train teachers on AI, in a push to bring chatbots into classrooms

On a scorching hot Saturday in San Antonio, dozens of teachers traded a day off for a glimpse of the future. The topic of the day’s workshop: enhancing instruction with artificial intelligence.

After marveling as AI graded classwork instantly and turned lesson plans into podcasts or online storybooks, one high school English teacher raised a concern that was on the minds of many: “Are we going to be replaced with AI?”

That remains to be seen. But for the nation’s 4 million teachers to stay relevant and help students use the technology wisely, teachers unions have forged an unlikely partnership with the world’s largest technology companies. The two groups don’t always see eye to eye but say they share a common goal: training the future workforce of America.

Microsoft, OpenAI and Anthropic are providing millions of dollars for AI training to the American Federation of Teachers, the country’s second-largest teachers union. In exchange, the tech companies have an opportunity to make inroads into schools and win over students in the race for AI dominance.

AFT President Randi Weingarten said skepticism guided her negotiations, but the tech industry has something schools lack: deep pockets.

“There is no one else who is helping us with this. That’s why we felt we needed to work with the largest corporations in the world,” Weingarten said. “We went to them — they didn’t come to us.”

Weingarten first met with Microsoft President Brad Smith in 2023 to discuss a partnership. She later reached out to OpenAI to pursue an “agnostic” approach that means any company’s AI tools could be used in a training session.

Under the arrangement announced in July, Microsoft is contributing $12.5 million to AFT over five years. OpenAI is providing $8 million in funding and $2 million in technical resources, and Anthropic has offered $500,000.

Keep reading

Trump Calls for Microsoft to ‘Terminate’ Former Biden Deputy AG Lisa Monaco

President Donald Trump called for Microsoft to “immediately terminate” former Biden Deputy Attorney General Lisa Monaco, whom the company hired to be the President of Global Affairs.

In a post on Truth Social, Trump labeled Monaco “Corrupt and Totally Trump Deranged,” and noted that she had served under former President Barack Obama, former President Joe Biden, and former Attorney General Merrick Garland, “Who were all the architects of the worst ever Deep State Conspiracies” against the United States.

Trump continued to note that his administration had “stripped her of all Security Clearances” and had taken away Monaco’s access to National Security Intelligence.

“Monaco has been shockingly hired as the President of Global Affairs for Microsoft, in a very senior role with access to Highly Sensitive Information,” Trump said. “Monaco’s having that kind of access is unacceptable, and cannot be allowed to stand.”

Keep reading

Microsoft reduces Israel’s access to cloud and AI products over reports of mass surveillance in Gaza

Microsoft said Thursday it had disabled services to a unit within the Israeli military after a company review had determined its artificial intelligence and cloud computing products were being used to help carry out mass surveillance of Palestinians.

The action comes after The Associated Press and The Guardian published reports earlier this year revealing how the Israeli Ministry of Defense had been using Microsoft’s Azure platform to aid in the war in Gaza and occupation of the West Bank. Brad Smith, Microsoft’s vice chair and president, wrote in a blog post that the company was taking steps to enforce compliance with its terms of service.

An AP investigation in February showed that the Israeli military’s use of Microsoft products skyrocketed after a deadly surprise attack by Hamas militants on Oct. 7, 2023. The AP’s report cited internal Microsoft data showing the Israelis were using gigabytes of cloud storage and massive amounts of AI-enabled language translation services.

The AP also reported that Israel’s military used Microsoft Azure to compile information gathered through mass surveillance, which it transcribes and translates, including phone calls and text messages. That intelligence is then cross-checked with Israel’s in-house AI systems for targeting airstrikes.

AP reported that internal Microsoft data showed multiple Azure subscriptions were tied to Unit 8200, an elite cyber warfare unit within the Israeli military responsible for clandestine operations, signals intelligence collection, and surveillance.

Following AP’s report, Microsoft acknowledged in May that it had sold advanced AI and cloud computing services to the Israeli military during the Gaza war and aided in efforts to locate and rescue Israeli hostages. But the company said an internal review found “no evidence” its Azure platform was used to target or harm people.

The Guardian, working in partnership with the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, reported in August that the commander of Unit 8200 had met directly with Microsoft chairman and CEO Satya Nadella in 2021. The Israeli unit then used Microsoft products to aid in the development of an AI-powered mass surveillance system that was sweeping up, translating and analyzing millions of telephone calls per day made by Palestinian civilians. The report also revealed that data from the Israeli surveillance system was being stored at Microsoft cloud data centers in Europe.

Keep reading

Microsoft Word To Save New Files to the Cloud by Default

Microsoft is preparing to change how documents are saved in Word for Windows, shifting new file storage to the cloud by default.

Instead of asking users to activate AutoSave or select a cloud location manually, Word will now store all newly created documents directly in OneDrive or another designated cloud service automatically.

Raul Munoz, a product manager on Microsoft’s Office shared services and experiences team, described the change by saying, “We are modernizing the way files are created and stored in Word for Windows. Now you don’t have to worry about saving your documents: Anything new you create will be saved automatically to OneDrive or your preferred cloud destination.”

Currently being rolled out to Microsoft 365 Insiders, this new setup is presented as a way to prevent lost work and provide immediate access to files across mobile platforms and browsers.

However, for anyone working outside Microsoft’s cloud ecosystem, this change introduces additional steps to avoid online storage.

The update also comes with adjustments to how documents are named. Rather than appending sequential numbers to new files, Word will now assign file names based on the date of creation.

Users will have the option to set a preferred default save location or opt out of automatic cloud saves entirely, though doing so requires manual reconfiguration.

Microsoft has been steadily nudging its user base toward cloud reliance. AutoSave already defaults to cloud storage, and persistent prompts in Windows have encouraged, or pressured, users to turn on OneDrive backups.

These reminders have drawn complaints, especially from those who feel Microsoft is eroding straightforward local file access.

Keep reading

Microsoft Failed To Disclose Key Details About Use Of China-Based Engineers In U.S. Defense Work, Record Shows

Microsoft, as a provider of cloud services to the U.S. government, is required to regularly submit security plans to officials describing how the company will protect federal computer systems.

Yet in a 2025 submission to the Defense Department, the tech giant left out key details, including its use of employees based in China, the top cyber adversary of the U.S., to work on highly sensitive department systems, according to a copy obtained by ProPublica. In fact, the Microsoft plan viewed by ProPublica makes no reference to the company’s China-based operations or foreign engineers at all.

The document belies Microsoft’s repeated assertions that it disclosed the arrangement to the federal government, showing exactly what was left out as it sold its security plan to the Defense Department. The Pentagon has been investigating the use of foreign personnel by IT contractors in the wake of reporting by ProPublica last month that exposed Microsoft’s practice.

Our work detailed how Microsoft relies on “digital escorts” — U.S. personnel with security clearances — to supervise the foreign engineers who maintain the Defense Department’s cloud systems. The department requires that people handling sensitive data be U.S. citizens or permanent residents.

Microsoft’s security plan, dated Feb. 28 and submitted to the department’s IT agency, distinguishes between personnel who have undergone and passed background screenings to access its Azure Government cloud platform and those who have not. But it omits the fact that workers who have not been screened include non-U.S. citizens based in foreign countries. “Whenever non-screened personnel request access to Azure Government, an operator who has been screened and has access to Azure Government provides escorted access,” the company said in its plan.

The document also fails to disclose that the screened digital escorts can be contractors hired by a staffing company, not Microsoft employees. ProPublica found that escorts, in many cases former military personnel selected because they possess active security clearances, often lack the expertise needed to supervise engineers with far more advanced technical skills. Microsoft has told ProPublica that escorts “are provided specific training on protecting sensitive data” and preventing harm.

Microsoft’s reference to the escort model comes two-thirds of the way into the 125-page document, known as a “System Security Plan,” in several paragraphs under the heading “Escorted Access.” Government officials are supposed to evaluate these plans to determine whether the security measures disclosed in them are acceptable.

In interviews with ProPublica, Microsoft has maintained that it disclosed the digital escorting arrangement in the plan, and that the government approved it. But Defense Secretary Pete Hegseth and other government officials have expressed shock and outrage over the model, raising questions about what, exactly, the company disclosed as it sought to win and keep government cloud computing contracts.

Keep reading

Microsoft AI chief says it’s ‘dangerous’ to study AI consciousness

AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn’t exactly make them conscious. It’s not like ChatGPT experiences sadness doing my tax return … right?

Well, a growing number of AI researchers at labs like Anthropic are asking when — if ever — AI models might develop subjective experiences similar to living beings, and if they do, what rights they should have.

The debate over whether AI models could one day be conscious — and merit legal safeguards — is dividing tech leaders. In Silicon Valley, this nascent field has become known as “AI welfare,” and if you think it’s a little out there, you’re not alone.

Microsoft’s CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is “both premature, and frankly dangerous.”

Suleyman says that by adding credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that we’re just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots.

Furthermore, Microsoft’s AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a “world already roiling with polarized arguments over identity and rights.”

Suleyman’s views may sound reasonable, but he’s at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic’s AI welfare program gave some of the company’s models a new feature: Claude can now end conversations with humans who are being “persistently harmful or abusive.”

Keep reading

AI Startup Backed by Microsoft Revealed to Be 700 Indian Employees Pretending to Be Chatbots

A once-hyped AI startup backed by Microsoft has filed for bankruptcy after it was revealed that its so-called artificial intelligence was actually hundreds of human workers in India pretending to be chatbots.

Builder.ai, a London-based company previously valued at $1.5 billion, marketed its platform as an AI-powered solution that made building apps as simple as ordering pizza. Its virtual assistant, “Natasha,” was supposed to generate software using artificial intelligence.

In reality, nearly 700 engineers in India were manually coding customer requests behind the scenes, the Times of India reported.

The ruse began to collapse in May when lender Viola Credit seized $37 million from the company’s accounts, uncovering that Builder.ai had inflated its 2024 revenue projections by 300%. An audit revealed the company generated just $50 million in revenue, far below the $220 million it claimed to investors.

A Wall Street Journal report from 2019 had already questioned Builder.ai’s AI claims, and a former executive sued the company that same year for allegedly misleading investors and overstating its technical capabilities.

Despite that, the company raised over $445 million from big names including Microsoft and the Qatar Investment Authority. Builder.ai’s collapse has triggered a federal investigation in the U.S., with prosecutors in New York requesting financial documents and customer records.

Founder Sachin Dev Duggal stepped down earlier this year and was replaced by Manpreet Ratia, who reportedly uncovered the company’s internal misrepresentations.

The company now owes millions to Amazon and Microsoft in cloud computing costs and has laid off around 1,000 employees. On LinkedIn, the company announced its entry into insolvency proceedings, citing “historic challenges and past decisions” that strained its finances.

The fallout is seen as one of the biggest failures of the post-ChatGPT AI investment boom and has renewed scrutiny of “AI washing”—the trend of rebranding manual services as artificial intelligence to secure funding.

Keep reading

Leaked documents reveal Microsoft provided Israel’s Unit 8200 with tools to spy on Palestinians

  • Microsoft provided its Azure cloud platform to Israel’s Unit 8200, enabling mass surveillance of Palestinian communications.
  • Unit 8200 used Azure data to identify bombing targets in Gaza, leading to civilian casualties.
  • Microsoft claims ignorance but refuses to terminate its contract with Israeli military intelligence.
  • Investigations reveal Microsoft profits from war crimes while publicly promoting ethical AI principles.
  • Critics warn unchecked surveillance turns corporations into silent partners in oppression and human rights violations.

Microsoft isn’t exactly known for being ethical, but a shocking new exposé has revealed just how deep their complicity in war crimes runs. The Big Tech firm has been secretly providing Israel’s elite military intelligence Unit 8200 with its Azure cloud platform since 2021, enabling the storage and analysis of massive troves of Palestinian communications data.

According to a damning investigation by +972 Magazine, Local Call, and The Guardian, Microsoft’s technology has been weaponized to surveil millions of daily phone calls from Palestinians in Gaza and the West Bank, with Unit 8200 sources confirming the data was used to identify bombing targets in densely populated civilian areas. While Microsoft feigns ignorance, claiming its CEO was unaware of the data’s lethal purpose, the tech giant’s actions reveal a disturbing pattern of prioritizing profits over human lives… even as Palestinian civilians pay the ultimate price.

Keep reading