Congress pushes bill to let Americans SUE if fake porn images of them are published after Taylor Swift deep fake scandal

A group of lawmakers is stepping in to try to take down Taylor Swift ‘deep fake’ perpetrators with a bill that would allow Americans to sue if fake porn images of them are published.

Pop star Taylor Swift became the latest target of nonconsensual deepfakes after sexually explicit images of her generated by artificial intelligence flooded the internet this week.

The dozens of graphic images showed Swift in a series of sexual acts while dressed in Kansas City Chiefs memorabilia, after she became a regular at football games in support of her boyfriend Travis Kelce.

Swift is now considering legal action against the deepfake porn website that posted the images amid calls from fans and even the White House for legislative action to combat the growing issue. 

Lawmakers decided to step in to combat the rise of nonconsensual deepfakes with a new bill that allows victims to take action against fake porn made in their likeness.

The DEFIANCE Act of 2024 was introduced by Senate Judiciary Committee Chairman Dick Durbin, D-Ill., Ranking Member Lindsey Graham, R-S.C., Senator Josh Hawley, R-Mo., and Senator Amy Klobuchar, D-Minn.

Keep reading

Congress Is Trying to Stop AI Nudes and Deepfake Scams Because Celebrities Are Mad

If you’ve been on TikTok lately, you may have noticed weird videos of celebrities promoting extremely shady products, such as a robotic-sounding Taylor Swift promising viewers a free cookware set. All of these videos are scams created with generative AI—the latest example of how the technology is being used to create disturbing virtual clones of people without their consent.

Needless to say, this kind of thing has pissed off a lot of famous people. And now, Congress is proposing new legislation that aims to combat AI deepfakes—specifically when it comes to things like fake celebrity endorsements and non-consensual AI-generated nudes, which have become a problem online and in high schools. Despite the surging popularity of websites and apps designed to generate deepfakes, there’s no comprehensive law on the books banning the creation of nonconsensual AI-generated images.

The new bill, called the No AI FRAUD Act and introduced by Rep. María Elvira Salazar (R-FL) and Rep. Madeleine Dean (D-PA), would establish legal definitions for “likeness and voice rights,” effectively banning the use of AI deepfakes to nonconsensually mimic another person, living or dead. The draft bill proclaims that “every individual has a property right in their own likeness and voice,” and cites several recent incidents where people have been turned into weird AI robots. It specifically mentions recent viral videos that featured AI-generated songs mimicking the voices of pop artists like Justin Bieber, Bad Bunny, Drake, and The Weeknd.

Keep reading

Sharing deepfake porn could lead to lengthy prison time under proposed law

The US seems to be getting serious about criminalizing deepfake pornography after teen boys at a New Jersey high school used AI image generators to create and share non-consensual fake nude images of female classmates last October.

On Tuesday, Rep. Joseph Morelle (D-NY) announced that he has re-introduced the “Preventing Deepfakes of Intimate Images Act,” which seeks to “prohibit the non-consensual disclosure of digitally altered intimate images.” Under the proposed law, anyone sharing deepfake pornography without an individual’s consent risks damages that could go as high as $150,000 and imprisonment of up to 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.

The hope is that steep penalties will deter companies and individuals from allowing the disturbing images to spread. The bill creates a criminal offense for sharing deepfake pornography “with the intent to harass, annoy, threaten, alarm, or cause substantial harm to the finances or reputation of the depicted individual” or with “reckless disregard” or “actual knowledge” that images will harm the individual depicted. It also provides a path for victims to sue offenders in civil court.

Rep. Tom Kean (R-NJ), who co-sponsored the bill, said that “proper guardrails and transparency are essential for fostering a sense of responsibility among AI companies and individuals using AI.”

“Try to imagine the horror of receiving intimate images looking exactly like you—or your daughter, or your wife, or your sister—and you can’t prove it’s not,” Morelle said. “Deepfake pornography is sexual exploitation, it’s abusive, and I’m astounded it is not already a federal crime.”

Joining Morelle in pushing to criminalize deepfake pornography were Dorota and Francesca Mani, who have spent the past two months meeting with lawmakers, The Wall Street Journal reported. The mother and daughter experienced the horror Morelle described firsthand when the New Jersey high school confirmed that 14-year-old Francesca was among the students targeted last year.

“What happened to me and my classmates was not cool, and there’s no way I’m just going to shrug and let it slide,” Francesca said. “I’m here, standing up and shouting for change, fighting for laws, so no one else has to feel as lost and powerless as I did on October 20th.”

Morelle’s office told Ars that “advocacy from partners like the Mani family” is “critical to bringing attention to this issue” and getting the proposed law “to the floor for a vote.”

Morelle introduced the bill in December 2022, but it failed to pass that year or in 2023. He re-introduced it in 2024 after seemingly gaining more support during a House Oversight subcommittee hearing on “Advances in Deepfake Technology” last November.

At that hearing, many lawmakers warned of the dangers of AI-generated deepfakes, citing a study from the Dutch AI company Sensity, which found that 96 percent of deepfakes online are deepfake porn—the majority of which targets women.

But lawmakers also made clear that it’s currently hard to detect AI-generated images and distinguish them from real images.

According to a hearing transcript posted by the nonprofit news organization Tech Policy Press, David Doermann—currently interim chair of the University at Buffalo’s computer science and engineering department and former program manager at the Defense Advanced Research Projects Agency (DARPA)—told lawmakers that DARPA was already working on advanced deepfake detection tools but still had more work to do.

To support laws like Morelle’s, lawmakers have called for more funding for DARPA and the National Science Foundation to aid in ongoing efforts to create effective detection tools. At the same time, President Joe Biden—through a sweeping AI executive order—has pushed for solutions like watermarking deepfakes. Biden’s executive order also instructed the Department of Commerce to establish “standards and best practices for detecting AI-generated content and authenticating official content.”

Keep reading

Deepfake Society: 74% of Americans Can’t Tell What’s Real or Fake Online Anymore

Americans believe only 37 percent of the content they see on social media is “real,” or free of edits, filters, and Photoshop. Between AI and “deepfake” videos, a survey of 2,000 adults, split evenly by generation, reveals that almost three-quarters (74%) say they can’t tell what’s real or fake anymore.

Americans are wary of both targeted ads (14%) and influencer content (18%), but a little more than half (52%) find themselves equally likely to question the legitimacy of either one. This goes beyond social media and what’s happening online. The survey finds that while 41 percent have more difficulty determining if an item they’re looking to purchase online is “real” or “a dupe,” another 36 percent find shopping in person to be just as challenging.

While the average respondent will spend about 15 minutes determining if an item is “real,” meaning a genuine model or a knockoff, millennials take it a step further and will spend upwards of 20 minutes trying to decide.

Conducted by OnePoll on behalf of De Beers Group, the survey reveals that Americans already own a plethora of both real and fake products.

Keep reading

Senators Call For Government Power To Hand Out Licenses To AI Companies, Curb “Deceptive” Election-Related AI and “Deepfakes”

This week, a Senate Judiciary hearing held under the umbrella of the Privacy, Technology and Law Subcommittee became the stage for bipartisan senators to unveil plans aimed at the allegedly looming threats of manipulative artificial intelligence, especially in the realm of elections. A framework proposed by Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) foresees a new government agency tasked with issuing licenses to entities working with AI systems.

Simultaneously, Senator Amy Klobuchar (D-Minn.) unveiled preliminary details of upcoming legislation, crafted in tandem with Hawley, along with Senators Chris Coons (D-Del.), and Susan Collins (R-Maine). This new proposal targets the prospects of AI technologies pervading the electoral process.

Apprehension regarding deceptive generative AI undermining democratic elections took center stage during the Senate hearing, with Klobuchar expressing a sense of urgency given the rapidly approaching electoral calendar.

Specifically, the newly minted legislation, coined the Protect Elections from Deceptive AI Act, is envisioned to clamp down on AI-assisted impersonation of federal political aspirants in campaign ads.

Rendered as an amendment to the Federal Election Campaign Act of 1971, this legislation provides a legal recourse in federal court for targeted candidates to counter harmful AI-generated deceptive content.

Keep reading

Deepfake Fraud Surges More Than 1000%, Insiders Say It’s Just The Beginning

As the line between fact and fiction gets harder to distinguish, online criminals need just two hours to create a realistic, computer-generated “deepfake” product that can ruin someone’s life.

The surge in popularity of hyper-realistic photos, audio, and videos developed with artificial intelligence (AI)—commonly known as deepfakes—has become an internet sensation.

It’s also giving cyber villains an edge in the crime world.

Between 2022 and the first quarter of this year, deepfake use in fraud catapulted 1,200 percent in the United States alone.

Though it’s not just an American problem.

In the same analysis, deepfakes used for scam purposes exploded in Canada, Germany, and the United Kingdom. In the study, the United States accounted for 4.3 percent of global deepfake fraud cases.

Meanwhile, AI experts and cybercrime investigators say we’re just at the tip of the iceberg. The rabbit hole of deepfake fraud potential just keeps going.

“I believe the No. 1 incentive for cyber criminals to commit cybercrime is law enforcement and their inability to keep up,” Michael Roberts told The Epoch Times.

Mr. Roberts is a professional investigator and the founder of the pioneering company Rexxfield, which helps victims of web-based attacks.

He also started PICDO, a cyber crime disruption organization, and has run counter-hacking education for branches of the U.S. and Australian militaries as well as NATO.

Mr. Roberts said legal systems in the Western world are “hopelessly overwhelmed” by online fraud cases, many of which include deepfake attacks. Moreover, the cases that get investigated without hiring a private firm are cherry-picked.

“And even then, it [the case] doesn’t get resolved,” he said.

The market for deepfake detection was valued at $3.86 billion in 2020 and is expected to grow 42 percent annually through 2026, according to an HSRC report.
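Taken at face value, those figures imply a steep curve; here is a back-of-the-envelope sketch of the implied projection, under the illustrative assumption that the 42 percent rate compounds annually from the 2020 baseline:

```python
# Hypothetical projection: compound the reported 2020 market value
# ($3.86 billion) at the reported 42% annual growth rate through 2026.
# Simple annual compounding is an assumption, not a claim from the report.
base_value_bn = 3.86   # market value in 2020, billions of USD
growth_rate = 0.42     # reported annual growth rate
years = 2026 - 2020

projected_bn = base_value_bn * (1 + growth_rate) ** years
print(f"Implied 2026 market size: ~${projected_bn:.1f} billion")
```

Under those assumptions, the detection market would reach roughly $31–32 billion by 2026.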

Keep reading

PhD Student Uses Deepfake to Pass Popular Voice Authentication and Spoof Detection System

University of Waterloo (UW) cybersecurity PhD student Andre Kassis published his findings after using deepfake AI-generated audio recordings to gain access to an account protected with voice biometrics.

A hacker can create a deepfake voice from five minutes of the target’s recorded voice, which can be taken from public posts on social media, the research shows. Open-source AI software available on GitHub can create deepfake audio capable of defeating voice authentication.

He used the deepfake to expose a weakness in the Amazon Connect voice authentication system, a UW release reveals. Four-second attacks on Connect had a 10 percent success rate, and attacks closer to 30 seconds were successful 40 percent of the time.

In response, the company added biometric anti-spoofing software that could find digital markers on a voice recording, revealing if it was made by a machine or human. This worked until Kassis used free software to remove the digital markers from his deepfakes.

His method can bypass less sophisticated voice biometric authentication systems with a 99 percent success rate after six tries, according to the announcement.
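For a rough sense of what a 99 percent success rate after six tries implies per attempt, here is a minimal calculation assuming independent attempts with a constant per-try success probability (an assumption the announcement does not spell out):

```python
# If six independent tries succeed at least once 99% of the time,
# solve 1 - (1 - p)**6 = 0.99 for the per-try success probability p.
# The independence assumption is illustrative, not from the study.
overall = 0.99  # reported success rate within six tries
tries = 6

per_try = 1 - (1 - overall) ** (1 / tries)
print(f"Implied per-attempt success rate: {per_try:.0%}")  # roughly 54%
```

In other words, even each individual spoofing attempt would succeed more often than not under this reading.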

Keep reading

SCIENTISTS IMPLANT SUBJECTS WITH FAKE MEMORIES USING DEEPFAKES

Researchers have found that they can incept false memories by showing subjects deepfaked clips of movie remakes that were never actually produced.

As detailed in a recent paper published in the journal PLOS One, deepfaked clips of made-up movies were convincing enough to trick participants into believing they were real. Some went as far as to rank the fake movies, which included purported remakes of real films such as a rebooted “The Matrix” starring Will Smith, as better than the originals.

But the study did have an important caveat.

“However, deepfakes were no more effective than simple text descriptions at distorting memory,” the paper reads, suggesting that deepfakes aren’t entirely necessary to trick somebody into accepting a false memory.

“We shouldn’t jump to predictions of dystopian futures based on our fears around emerging technologies,” lead study author Gillian Murphy, a misinformation researcher at University College Cork in Ireland, told The Daily Beast. “Yes there are very real harms posed by deep fakes, but we should always gather evidence for those harms in the first instance, before rushing to solve problems we’ve just assumed might exist.”

Keep reading

The US military plans to use deep fakes and take over appliances for propaganda

Can you create cutting-edge “deep fake” videos, spy on people using household appliances, and make massive data dragnets? If so, the Pentagon wants to hear from you so it can amp up its manipulation efforts.

US Special Operations Command (US SOCOM) has issued proposal requests for a whole host of dodgy services, according to new documents obtained by The Intercept.

Specifically, the Pentagon is looking for “next generation capability to ‘takeover’ Internet of Things (IoT) devices in order to collect data and information from local populaces to enable a breakdown of what messaging might be popular and accepted through sifting of data once received.”

For what purpose? “This would enable MISO [Military Information Support Operations] to craft and promote messages that may be more readily received by the local populace in relevant peer/near peer environments,” according to the document.  

Keep reading

US Military Group Wants Weaponized Deepfakes, Better Biometric Tools

At least some in the U.S. military have heard enough about deepfakes and they want in.

Investigative-news publisher The Intercept has gotten hold of a lengthy technology wish list that its editors believe was created by the U.S. Special Operations Command. Two items in the document are biometric in nature.

The command, most often referred to as SOCOM, performs the United States’ most secret and daring military missions. And officers want to add the ability to create and deploy deepfakes against those outside the country.

They also want to up their game when it comes to biometrically identifying individuals using, among other techniques, touchless fingerprint capture over long distances and in all environments. Officials also want rapid handheld DNA collection gear. This can be found in the document under 4.1.2.3, Biometrics.

In all cases, SOCOM wants to cut false positives and to be able to compare scanned biometrics against watch lists on handheld devices or remote databases. Those handhelds will need to perform all common biometric analyses, including DNA comparisons.

But the showstopper is the unit’s deepfake ambitions (at 4.3.1.4. Military Information Support Operations in the document). The leaders of many advanced economies, including various agency heads in the United States, have publicly stated their wariness of deepfakes.

(Three years ago, a NATO panel dismissed concerns about deepfakes. Even last year, there were those telling people not to worry.)

Keep reading