UK plans to criminalize digitally putting someone’s face on nude body

The UK’s controversial Online Safety Bill contains a section that would make sharing “pornographic deepfakes” without consent a criminal offense in England and Wales.

The measure targets the rise in manipulated explicit images, in which a person’s face is digitally superimposed on another person’s body.

Under current legislation, prosecutors must prove that the images were shared in order to “cause distress.”

The proposed law, however, would not require the prosecution to prove intent to cause harm, potentially leaving the door open for images shared as a joke to be prosecuted.

According to the government, one in 14 people in England and Wales has been threatened with having their intimate images shared online. It added that there is global concern about deepfake technology being used to create fake pornographic images; one website that generates nude images from clothed photos received 38 million visits in 2021.

Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices

Imagine the following scenario. A phone rings. An office worker answers it and hears his boss, in a panic, tell him that she forgot to transfer money to the new contractor before she left for the day and needs him to do it. She gives him the wire transfer information, and with the money transferred, the crisis is averted.

The worker sits back in his chair, takes a deep breath, and watches as his boss walks in the door. The voice on the other end of the call was not his boss. In fact, it wasn’t even a human. The voice he heard was that of an audio deepfake, a machine-generated audio sample designed to sound exactly like his boss.

Attacks like this using recorded audio have already occurred, and conversational audio deepfakes might not be far off.

Deepfakes, both audio and video, have become possible only in recent years with the development of sophisticated machine learning technologies. They have brought with them a new level of uncertainty around digital media. To detect deepfakes, many researchers have turned to analyzing visual artifacts – minute glitches and inconsistencies – found in video deepfakes.
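The fluid-dynamics technique in the headline asks, in essence, whether a voice could have been produced by an anatomically plausible human vocal tract; the researchers estimate vocal-tract characteristics from the audio itself. As a much simpler illustration of that intuition (not the researchers’ pipeline), one can estimate formant frequencies with linear predictive coding and check the vocal-tract length they imply under a textbook uniform-tube model. The function names, LPC order, and plausibility bounds below are all illustrative assumptions:

```python
import numpy as np
import librosa

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C


def estimate_formants(y, sr, order=12):
    """Rough formant estimates from the roots of an LPC polynomial
    (a standard textbook method, applied crudely to the whole clip)."""
    a = librosa.lpc(y, order=order)
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]        # keep one of each conjugate pair
    freqs = np.sort(np.angle(roots) * sr / (2 * np.pi))
    return freqs[freqs > 90.0]               # drop near-DC roots


def implied_tract_length(f1):
    """Uniform closed-open tube model: F1 = c / (4L), so L = c / (4 * F1)."""
    return SPEED_OF_SOUND / (4.0 * f1)


def looks_humanly_plausible(wav_path):
    """Toy check: does the first formant imply a vocal tract anywhere
    near the typical adult human range (roughly 0.13-0.20 m)?"""
    y, sr = librosa.load(wav_path, sr=16000)
    formants = estimate_formants(y, sr)
    if len(formants) == 0:
        return False
    return 0.10 < implied_tract_length(formants[0]) < 0.25  # generous bounds
```

A real detector would work frame by frame over voiced speech and model the tract's full cross-sectional area, but even this crude check captures the idea: machine-generated audio is under no obligation to respect human anatomy.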

How to spot a deepfake? One simple trick is all you need

With criminals beginning to use deepfake video technology to spoof identities in live online job interviews, security researchers have highlighted one simple way to spot a deepfake: just ask the person to turn their face side-on.

The reason this works as a handy authentication check is that deepfake AI models, while good at recreating front-on views of a person’s face, struggle with side-on or profile views like the ones you might see in a mug shot.

Metaphysics.ai highlights the instability of recreating full 90° profile views in live deepfake videos, making the side profile check a simple and effective authentication procedure for companies conducting video-based online job interviews. 
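The side-on check is aimed at human interviewers, but the relevant moments in a recorded call can also be flagged automatically for later review. Below is a minimal sketch using MediaPipe’s FaceMesh, assuming landmark index 1 for the nose tip and 234/454 for the face edges; those indices, the yaw proxy, and the threshold are illustrative assumptions rather than a vetted protocol:

```python
import cv2
import mediapipe as mp

NOSE, EDGE_A, EDGE_B = 1, 234, 454  # assumed FaceMesh landmark indices


def yaw_proxy(landmarks):
    """Crude head-yaw estimate from the ratio of nose-to-face-edge
    distances: near 0 when frontal, approaching 1 near full profile."""
    nose_x = landmarks[NOSE].x
    a = abs(landmarks[EDGE_A].x - nose_x)
    b = abs(landmarks[EDGE_B].x - nose_x)
    if max(a, b) == 0:
        return 0.0
    return 1.0 - min(a, b) / max(a, b)


def flag_profile_frames(video_path, threshold=0.8):
    """Return indices of frames where the head is turned far enough
    sideways that live deepfake models tend to break down."""
    flagged = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                lm = result.multi_face_landmarks[0].landmark
                if yaw_proxy(lm) > threshold:
                    flagged.append(idx)
            idx += 1
    cap.release()
    return flagged
```

Frames flagged this way are exactly where a live deepfake is most likely to smear or collapse, so they are the ones worth scrutinizing frame by frame.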

Deepfakes, or synthetic AI-enabled recreations of audio, image, and video content of humans, have been on the radar as a potential identity threat for several years.

Deepfake used to attack activist couple shows new disinformation frontier

Oliver Taylor, a student at England’s University of Birmingham, is a twenty-something with brown eyes, light stubble, and a slightly stiff smile.

Online profiles describe him as a coffee lover and politics junkie who was raised in a traditional Jewish home. His half dozen freelance editorials and blog posts reveal an active interest in anti-Semitism and Jewish affairs, with bylines in the Jerusalem Post and the Times of Israel.

The catch? Oliver Taylor seems to be an elaborate fiction.

His university says it has no record of him. He has no obvious online footprint beyond an account on the question-and-answer site Quora, where he was active for two days in March. Two newspapers that published his work say they have tried and failed to confirm his identity. And experts in deceptive imagery used state-of-the-art forensic analysis programs to determine that Taylor’s profile photo is a hyper-realistic forgery – a “deepfake.”

Who is behind Taylor isn’t known to Reuters. Calls to the U.K. phone number he supplied to editors drew an automated error message and he didn’t respond to messages left at the Gmail address he used for correspondence.

Reuters was alerted to Taylor by London academic Mazen Masri, who drew international attention in late 2018 when he helped launch an Israeli lawsuit against the surveillance company NSO on behalf of alleged Mexican victims of the company’s phone hacking technology.

In an article in U.S. Jewish newspaper The Algemeiner, Taylor had accused Masri and his wife, Palestinian rights campaigner Ryvka Barnard, of being “known terrorist sympathizers.”

Masri and Barnard were taken aback by the allegation, which they deny. But they were also baffled as to why a university student would single them out. Masri said he pulled up Taylor’s profile photo. He couldn’t put his finger on it, he said, but something about the young man’s face “seemed off.”   

Six experts interviewed by Reuters say the image has the characteristics of a deepfake.

“The distortion and inconsistencies in the background are a tell-tale sign of a synthesized image, as are a few glitches around his neck and collar,” said digital image forensics pioneer Hany Farid, who teaches at the University of California, Berkeley.  

Artist Mario Klingemann, who regularly uses deepfakes in his work, said the photo “has all the hallmarks.”

“I’m 100 percent sure,” he said.
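Tells like the ones Farid describes can be partly quantified. One published line of research observes that GAN-generated images tend to leave periodic upsampling artifacts in the frequency domain, so their azimuthally averaged power spectra deviate from those of real photographs at high frequencies. The sketch below computes such a spectrum; it is a toy illustration of the idea, not the experts’ actual forensic tooling:

```python
import numpy as np
from PIL import Image


def radial_power_spectrum(path, size=256):
    """Azimuthally averaged power spectrum of a grayscale image.
    GAN-synthesized faces often show anomalous high-frequency energy."""
    img = Image.open(path).convert("L").resize((size, size))
    f = np.fft.fftshift(np.fft.fft2(np.asarray(img, dtype=np.float64)))
    power = np.abs(f) ** 2
    cy, cx = size // 2, size // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    # Mean power at each integer radius (i.e., spatial frequency)
    spectrum = np.bincount(r.ravel(), weights=power.ravel())
    spectrum /= np.bincount(r.ravel())
    return spectrum[: size // 2]
```

In practice a single spectrum proves little; forensic use means comparing the suspect image’s curve against distributions built from many known-real photographs.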
