Foreign adversaries are increasingly using artificial intelligence (AI) in their cyber influence campaigns, with operations picking up “aggressively” this year, Microsoft said on Oct. 16.
In July, Microsoft identified more than 200 instances of AI-generated content from nation-state adversaries, more than four times the number in July 2024, and more than 10 times the number in July 2023, the company’s annual Digital Defense Report shows.
AI can create increasingly convincing emails and generate digital clones of senior government officials or news anchors, according to the report. The sophistication of AI tools has made these operations “easier to scale, more effective, and harder to trace,” and distinguishing between state and non-state actors is becoming more difficult, the report stated.
For scammers, AI is making it easier to quickly create convincing websites, profiles, emails, and IDs, the report said. Microsoft said it blocked 1.6 million fake-account creation attempts per hour on the company’s platforms.
“Everyone—from industry to government—must be proactive to keep pace with increasingly sophisticated attackers and to ensure that defenders keep ahead of adversaries,” said Amy Hogan-Burney, Microsoft’s vice president for customer security and trust, who oversaw the report.