Senators Call For Government Power To Hand Out Licenses To AI Companies, Curb “Deceptive” Election-Related AI and “Deepfakes”

This week, a Senate Judiciary hearing held by the Privacy, Technology and Law Subcommittee became the stage for bipartisan senators to unveil plans targeting what they describe as the looming threats of manipulative artificial intelligence, especially in elections. A framework proposed by Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) envisions a new government agency tasked with issuing licenses to entities working with AI systems.

Simultaneously, Senator Amy Klobuchar (D-Minn.) unveiled preliminary details of upcoming legislation, crafted in tandem with Hawley and with Senators Chris Coons (D-Del.) and Susan Collins (R-Maine). The proposal targets the use of AI technologies in the electoral process.

Apprehension regarding deceptive generative AI undermining democratic elections took center stage during the Senate hearing, with Klobuchar expressing a sense of urgency given the rapidly approaching electoral calendar.

Specifically, the newly minted legislation, dubbed the Protect Elections from Deceptive AI Act, is designed to clamp down on AI-assisted impersonation of federal candidates in campaign ads.

Drafted as an amendment to the Federal Election Campaign Act of 1971, the legislation provides targeted candidates with legal recourse in federal court to counter harmful AI-generated deceptive content.

Keep reading

Deepfake Fraud Surges More Than 1000%, Insiders Say It’s Just The Beginning

As the line between fact and fiction gets harder to distinguish, online criminals need just two hours to create a realistic, computer-generated “deepfake” product that can ruin someone’s life.

The surge in popularity of hyper-realistic photos, audio, and videos developed with artificial intelligence (AI)—commonly known as deepfakes—has become an internet sensation.

It’s also giving cyber villains an edge in the crime world.

Between 2022 and the first quarter of this year, deepfake use in fraud catapulted 1,200 percent in the United States alone.

Though it’s not just an American problem.

In the same analysis, deepfakes used for scam purposes exploded in Canada, Germany, and the United Kingdom. In the study, the United States accounted for 4.3 percent of global deepfake fraud cases.

Meanwhile, AI experts and cybercrime investigators say we’re just at the tip of the iceberg. The rabbit hole of deepfake fraud potential just keeps going.

“I believe the No. 1 incentive for cyber criminals to commit cybercrime is law enforcement and their inability to keep up,” Michael Roberts told The Epoch Times.

Mr. Roberts is a professional investigator and the founder of the pioneering company Rexxfield, which helps victims of web-based attacks.

He also started PICDO, a cyber crime disruption organization, and has run counter-hacking education for branches of the U.S. and Australian militaries as well as NATO.

Mr. Roberts said legal systems in the Western world are “hopelessly overwhelmed” by online fraud cases, many of which include deepfake attacks. Moreover, the cases that get investigated without hiring a private firm are cherry-picked.

“And even then, it [the case] doesn’t get resolved,” he said.

The market for deepfake detection was valued at $3.86 billion in 2020 and is expected to grow 42 percent annually through 2026, according to an HSRC report.
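Taken at face value, those HSRC figures imply a substantially larger market by the end of the forecast window. A quick back-of-the-envelope check (my own arithmetic, assuming the 42 percent rate compounds each year from the 2020 base, which the report may not intend literally):

```python
# Project the deepfake-detection market from the reported figures:
# $3.86B in 2020, growing ~42% per year through 2026 (six compounding years).
base_2020 = 3.86          # market value in billions of USD
annual_growth = 0.42      # 42 percent year-over-year growth
years = 6                 # 2020 -> 2026

projected_2026 = base_2020 * (1 + annual_growth) ** years
print(f"Projected 2026 market: ${projected_2026:.1f}B")  # roughly $31.6B
```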

Keep reading

PhD Student Uses Deepfake to Pass Popular Voice Authentication and Spoof Detection System

University of Waterloo (UW) cybersecurity PhD student Andre Kassis published his findings after gaining access to a biometrically protected account using deepfake AI-generated audio recordings.

A hacker can create a deepfake voice with five minutes of the target’s recorded voice, which can be taken from public posts on social media, the research shows. Open-source AI software available on GitHub can create deepfake audio capable of defeating voice authentication.

He used the deepfake to expose a weakness in the Amazon Connect voice authentication system, a UW release reveals. Four-second attacks on Connect had a 10 percent success rate, and attacks closer to 30 seconds were successful 40 percent of the time.

In response, Amazon added biometric anti-spoofing software that could detect digital markers in a voice recording, revealing whether it was made by a machine or a human. This worked until Kassis used free software to remove the digital markers from his deepfakes.

His method can bypass less sophisticated voice biometric authentication systems with a 99 percent success rate after six tries, according to the announcement.
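The 99-percent-after-six-tries figure implies a high per-attempt success rate. A small sketch (my own arithmetic, not from the announcement) inverts the cumulative probability, assuming each try succeeds independently with the same probability:

```python
# If six independent attempts succeed 99% of the time overall,
# solve 1 - (1 - p)**6 = 0.99 for the per-attempt success rate p.
cumulative = 0.99
tries = 6

per_attempt = 1 - (1 - cumulative) ** (1 / tries)
print(f"Implied per-attempt success rate: {per_attempt:.1%}")  # about 53.6%
```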

Keep reading

Scientists Implant Subjects With Fake Memories Using Deepfakes

Researchers have found that they can incept false memories by showing subjects deepfaked clips of movie remakes that were never actually produced.

As detailed in a recent paper published in the journal PLOS One, deepfaked clips of made-up movies were convincing enough to trick participants into believing they were real. Some went as far as to rank the fake movies, which included purported remakes of real films such as a rebooted “The Matrix” starring Will Smith, as better than the originals.

But the study did have an important caveat.

“However, deepfakes were no more effective than simple text descriptions at distorting memory,” the paper reads, suggesting that deepfakes aren’t entirely necessary to trick somebody into accepting a false memory.

“We shouldn’t jump to predictions of dystopian futures based on our fears around emerging technologies,” lead study author Gillian Murphy, a misinformation researcher at University College Cork in Ireland, told The Daily Beast. “Yes there are very real harms posed by deep fakes, but we should always gather evidence for those harms in the first instance, before rushing to solve problems we’ve just assumed might exist.”

Keep reading

The US military plans to use deep fakes and take over appliances for propaganda

Can you create cutting edge “deep fake” videos, spy on people using household appliances, and make massive data dragnets? If so, the Pentagon wants to hear from you so it can amp up its manipulation efforts. 

US Special Operations Command (US SOCOM) has issued proposal requests for a whole host of dodgy services, according to new documents obtained by The Intercept.

Specifically, the Pentagon is looking for “next generation capability to ‘takeover’ Internet of Things (IoT) devices in order to collect data and information from local populaces to enable a breakdown of what messaging might be popular and accepted through sifting of data once received.”

For what purpose? “This would enable MISO [Military Information Support Operations] to craft and promote messages that may be more readily received by the local populace in relevant peer/near peer environments,” according to the document.  

Keep reading

US Military Group Wants Weaponized Deepfakes, Better Biometric Tools

At least some in the U.S. military have heard enough about deepfakes and they want in.

Investigative-news publisher The Intercept has obtained a lengthy technology wish list that its editors believe was created by the U.S. Special Operations Command. Two items in the document are biometric in nature.

The command, most often referred to as SOCOM, performs the United States’ most secret and daring military missions. And officers want to add the ability to create and deploy deepfakes against those outside the country.

They also want to up their game when it comes to biometrically identifying individuals using, among other techniques, touchless fingerprint capture over long distances and in all environments. Officials also want rapid handheld DNA collection gear. These items appear in the document under 4.1.2.3 Biometrics.

In all cases, SOCOM wants to cut false positives and gain the ability to compare scanned biometrics against watch lists on handheld devices or remote databases. Those handhelds will need to perform all common biometric analyses, including DNA comparisons.

But the showstopper is the unit’s deepfake ambitions (at 4.3.1.4. Military Information Support Operations in the document). The leaders of many advanced economies, including various agency heads in the United States, have publicly stated their wariness of deepfakes.

(Three years ago, a NATO panel dismissed concerns about deepfakes. Even last year, there were those telling people not to worry.)

Keep reading

UK plans to criminalize digitally putting someone’s face on nude body

The UK’s controversial Online Safety Bill has a section that would make sharing “pornographic deepfakes” without consent a criminal offense in England and Wales. This would involve digitally putting someone’s face on a naked body.

The bill seeks to address the rise in manipulated explicit images, where a person’s face is superimposed on another person’s body.

Current legislation requires proving that the images were shared to “cause distress.”

However, the proposed law does not require the prosecution to prove that someone intended to cause harm, potentially leaving the door open for jokes to be prosecuted.

According to the government, one in 14 people in England and Wales has been threatened with having their intimate images shared online. It added that there is global concern about deepfake technology being used to create fake pornographic images. A website that creates nudes from clothed photos had 38 million visits in 2021.

Keep reading

Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices

Imagine the following scenario. A phone rings. An office worker answers it and hears his boss, in a panic, tell him that she forgot to transfer money to the new contractor before she left for the day and needs him to do it. She gives him the wire transfer information, and with the money transferred, the crisis has been averted.

The worker sits back in his chair, takes a deep breath, and watches as his boss walks in the door. The voice on the other end of the call was not his boss. In fact, it wasn’t even a human. The voice he heard was that of an audio deepfake, a machine-generated audio sample designed to sound exactly like his boss.

Attacks like this using recorded audio have already occurred, and conversational audio deepfakes might not be far off.

Deepfakes, both audio and video, have been possible only with the development of sophisticated machine learning technologies in recent years. Deepfakes have brought with them a new level of uncertainty around digital media. To detect deepfakes, many researchers have turned to analyzing visual artifacts – minute glitches and inconsistencies – found in video deepfakes.

Keep reading

How to spot a deepfake? One simple trick is all you need

With criminals beginning to use deepfake video technology to spoof an identity in live online job interviews, security researchers have highlighted one simple way to spot a deepfake: just ask the person to turn their face sideways on. 

The reason for this as a potential handy authentication check is that deepfake AI models, while good at recreating front-on views of a person’s face, aren’t good at doing side-on or profile views like the ones you might see in a mug shot. 

Metaphysics.ai highlights the instability of recreating full 90° profile views in live deepfake videos, making the side profile check a simple and effective authentication procedure for companies conducting video-based online job interviews. 

Deepfakes, or synthetic AI-enabled recreations of audio, image, and video content of humans, have been on the radar as a potential identity threat for several years.

Keep reading

Deepfake used to attack activist couple shows new disinformation frontier

Oliver Taylor, a student at England’s University of Birmingham, is a twenty-something with brown eyes, light stubble, and a slightly stiff smile.

Online profiles describe him as a coffee lover and politics junkie who was raised in a traditional Jewish home. His half dozen freelance editorials and blog posts reveal an active interest in anti-Semitism and Jewish affairs, with bylines in the Jerusalem Post and the Times of Israel.

The catch? Oliver Taylor seems to be an elaborate fiction.

His university says it has no record of him. He has no obvious online footprint beyond an account on the question-and-answer site Quora, where he was active for two days in March. Two newspapers that published his work say they have tried and failed to confirm his identity. And experts in deceptive imagery used state-of-the-art forensic analysis programs to determine that Taylor’s profile photo is a hyper-realistic forgery – a “deepfake.”

Who is behind Taylor isn’t known to Reuters. Calls to the U.K. phone number he supplied to editors drew an automated error message and he didn’t respond to messages left at the Gmail address he used for correspondence.

Reuters was alerted to Taylor by London academic Mazen Masri, who drew international attention in late 2018 when he helped launch an Israeli lawsuit against the surveillance company NSO on behalf of alleged Mexican victims of the company’s phone hacking technology.

In an article in U.S. Jewish newspaper The Algemeiner, Taylor had accused Masri and his wife, Palestinian rights campaigner Ryvka Barnard, of being “known terrorist sympathizers.”

Masri and Barnard were taken aback by the allegation, which they deny. But they were also baffled as to why a university student would single them out. Masri said he pulled up Taylor’s profile photo. He couldn’t put his finger on it, he said, but something about the young man’s face “seemed off.”   

Six experts interviewed by Reuters say the image has the characteristics of a deepfake.

“The distortion and inconsistencies in the background are a tell-tale sign of a synthesized image, as are a few glitches around his neck and collar,” said digital image forensics pioneer Hany Farid, who teaches at the University of California, Berkeley.  

Artist Mario Klingemann, who regularly uses deepfakes in his work, said the photo “has all the hallmarks.”

“I’m 100 percent sure,” he said.

Keep reading