Radio trick secretly turns laptop into a spy speaker that talks through walls

Security researchers at the University of Florida and the University of Electro-Communications in Japan have revealed that modern digital microphones used in laptops and speakers can leak audio as electromagnetic signals.

This opens the door to a new form of wireless eavesdropping that requires no malware, no hacking, and not even physical access to your device.

If exploited at scale, this vulnerability could affect billions of devices worldwide, exposing private conversations to corporate spies and government surveillance.

How does this attack work?

Many modern devices, including laptops and smart speakers, rely on MEMS microphones: tiny components that convert sound into a stream of digital pulses. Those pulses still contain remnants of the original speech, and the circuitry carrying them gives off weak radio emissions, an invisible broadcast that an attacker can capture.
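To see why the pulse stream itself still carries speech, it helps to know that MEMS microphones typically output pulse-density modulation (PDM): a 1-bit stream whose density of high pulses tracks the audio waveform. The sketch below is illustrative only, not the researchers' code. It models a first-order sigma-delta modulator (the standard way PDM is generated) and shows that simply low-pass filtering the 1-bit stream, which is roughly what an FM receiver tuned to the leaked emission can end up doing, recovers the original tone.

```python
import math

def pdm_modulate(samples):
    """1-bit first-order sigma-delta (PDM) modulation of samples in [-1, 1]."""
    integ, bits = 0.0, []
    for x in samples:
        integ += x
        bit = 1.0 if integ >= 0 else -1.0  # quantize to a single bit
        integ -= bit                       # feed the error back
        bits.append(bit)
    return bits

def lowpass(bits, window=16):
    """Moving-average filter: recovers the audio hidden in the pulse density."""
    out, acc = [], 0.0
    for i, b in enumerate(bits):
        acc += b
        if i >= window:
            acc -= bits[i - window]
        out.append(acc / min(i + 1, window))
    return out

# A 100 Hz tone, heavily oversampled (real PDM mics clock at MHz rates;
# 48 kHz is used here just to keep the demo fast).
rate, tone = 48000, 100
audio = [0.8 * math.sin(2 * math.pi * tone * n / rate) for n in range(rate // 10)]
pdm = pdm_modulate(audio)        # the 1-bit stream that can radiate as RF
recovered = lowpass(pdm)         # filtering it brings the tone back
```

The point of the demo is that no decoding step is needed: the "digital" signal degrades gracefully back into analog audio, which is exactly what makes the electromagnetic leak intelligible.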

“With an FM radio receiver and a copper antenna, you can eavesdrop on these microphones. That’s how easy this can be,” said Sara Rampazzi, a professor of computer and information science and engineering at the University of Florida who co-authored the new study. “It costs maybe a hundred dollars, or even less.”

The experiment that proved it all

The researchers demonstrated the attack with an eerie proof of concept. A woman’s distorted voice emerged from the radio equipment as she spoke test sentences like “The birch canoe slid on the smooth planks” and “Glue the sheet to the dark blue background.” Each transmission penetrated concrete walls up to 10 inches thick.

Laptops proved to be the weakest link as their microphones are connected through long internal wires that act as antennas, amplifying the leaked signals.

Now comes the dangerous part: the microphone does not even need to be actively recording for the leak to happen. Simply having applications like Spotify, Amazon Music, or Google Drive open can cause the microphone to leak radio signals.

AI in the scenario

The researchers didn’t stop there. They processed the intercepted signals with AI speech-to-text tools from OpenAI and Microsoft, which cleaned up the audio and turned the recordings into clear, searchable text.

Remarkably, in tests, the attack recognized spoken digits with 94.2% accuracy from up to 2 meters away, even through a concrete wall, and kept the transcription error rate to 14%, making the majority of conversations understandable.
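That 14% figure is a word error rate (WER), the standard yardstick for speech transcription: the number of word insertions, deletions, and substitutions needed to turn the transcript into the true sentence, divided by the sentence length. A minimal sketch of how it is computed (the standard metric, not the researchers' code):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                     # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                     # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, if the intercepted audio of one of the test sentences came back with a single wrong word, `wer("the birch canoe slid on the smooth planks", "the birch canoe slid on the smooth blanks")` gives 1/8 = 0.125, roughly the 14% rate reported.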

Author: HP McLovincraft

Seeker of rabbit holes. Pessimist. Libertine. Contrarian. Your huckleberry. Possibly true tales of sanity-blasting horror also known as abject reality. Prepare yourself. Veteran of a thousand psychic wars. I have seen the fnords. Deplatformed on Tumblr and Twitter.
