What The Rise Of AI Videos Means For The Future

OpenAI, a company backed by Microsoft, is one of the world’s leading artificial intelligence research organizations paving the way for AI advancement across multiple technology sectors.

Best known for creating ChatGPT, the company debuted previews of its new prompt-based AI video generation software, Sora, on February 15th. The text-to-video tool allows users to input a text prompt; the software then uses generative AI to create a video based on that input.

While Sora isn’t the first generative AI video tool of its kind, industry insiders have called its crispness and detail, as well as its ability to generate complex scenes and longer videos than previous models (at present up to sixty seconds in length), a “significant leap” for the development of the technology.

On its website, OpenAI states, “We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction.” The statement accompanies several brief examples of videos generated with the software without any further modification.

The videos span a variety of scenes, including photorealistic clips of puppies playing in the snow, historic footage of California during the gold rush, a robot living in a cyberpunk world, woolly mammoths in the snow, an animation of a cute fluffy creature playing with fairies in an enchanted forest, and various cityscapes along with other animations.


Audio Cloning Can Take Over a Phone Call in Real Time Without the Speakers Knowing

Generative AI could be listening to your phone calls and hijacking them with fake biometric audio for fraud or manipulation purposes, according to new research published by Security Intelligence. In the wake of a Hong Kong fraud case that saw an employee transfer US$25 million in funds to five bank accounts after a virtual meeting with what turned out to be audio-video deepfakes of senior management, the biometrics and digital identity world is on high alert, and the threats are growing more sophisticated by the day.

A blog post by Chenta Lee, chief architect of threat intelligence at IBM Security, breaks down how researchers from IBM X-Force successfully intercepted and covertly hijacked a live conversation, using an LLM to understand the conversation and manipulate it for malicious purposes – without the speakers knowing it was happening.

“Alarmingly,” writes Lee, “it was fairly easy to construct this highly intrusive capability, creating a significant concern about its use by an attacker driven by monetary incentives and limited to no lawful boundary.”


Congress pushes bill to let Americans SUE if fake porn images of them are published after Taylor Swift deep fake scandal

A group of lawmakers is stepping in to try to take down Taylor Swift ‘deep fake’ perpetrators with a bill that would allow Americans to sue if fake porn images of them are published.

Popstar Taylor Swift became the latest target of nonconsensual deepfakes after sexually explicit images of her generated by artificial intelligence flooded the internet this week.

The dozens of graphic images showed Swift in a series of sexual acts while dressed in Kansas City Chiefs memorabilia; she had become a regular at the team’s football games in support of her boyfriend Travis Kelce.

Swift is now considering legal action against the deepfake porn website that posted the images amid calls from fans and even the White House for legislative action to combat the growing issue. 

Lawmakers decided to step in to combat the rise of nonconsensual deepfakes with a new bill that allows victims to take action against fake porn made in their likeness.

The DEFIANCE Act of 2024 was introduced by Senate Judiciary Committee Chairman Dick Durbin, D-Ill., Ranking Member Lindsey Graham, R-S.C., Senator Josh Hawley, R-Mo., and Senator Amy Klobuchar, D-Minn.


Congress Is Trying to Stop AI Nudes and Deepfake Scams Because Celebrities Are Mad

If you’ve been on TikTok lately, you may have noticed weird videos of celebrities promoting extremely shady products, such as a robotic-sounding Taylor Swift promising viewers a free cookware set. All of these videos are scams created with generative AI—the latest example of how the technology is being used to create disturbing virtual clones of people without their consent.

Needless to say, this kind of thing has pissed off a lot of famous people. And now, Congress is proposing new legislation that aims to combat AI deepfakes—specifically when it comes to things like fake celebrity endorsements and non-consensual AI-generated nudes, which have become a problem online and in high schools. Despite the surging popularity of websites and apps designed to generate deepfakes, there’s no comprehensive law on the books banning the creation of AI images. 

The new bill, called the No AI FRAUD Act and introduced by Rep. María Elvira Salazar (R-FL) and Rep. Madeleine Dean (D-PA), would establish legal definitions for “likeness and voice rights,” effectively banning the use of AI deepfakes to nonconsensually mimic another person, living or dead. The draft bill proclaims that “every individual has a property right in their own likeness and voice,” and cites several recent incidents where people have been turned into weird AI robots. It specifically mentions recent viral videos that featured AI-generated songs mimicking the voices of pop artists like Justin Bieber, Bad Bunny, Drake, and The Weeknd.


Sharing deepfake porn could lead to lengthy prison time under proposed law

The US seems to be getting serious about criminalizing deepfake pornography after teen boys at a New Jersey high school used AI image generators to create and share non-consensual fake nude images of female classmates last October.

On Tuesday, Rep. Joseph Morelle (D-NY) announced that he has re-introduced the “Preventing Deepfakes of Intimate Images Act,” which seeks to “prohibit the non-consensual disclosure of digitally altered intimate images.” Under the proposed law, anyone sharing deepfake pornography without an individual’s consent risks damages that could go as high as $150,000 and imprisonment of up to 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.

The hope is that steep penalties will deter companies and individuals from allowing the disturbing images to be spread. It creates a criminal offense for sharing deepfake pornography “with the intent to harass, annoy, threaten, alarm, or cause substantial harm to the finances or reputation of the depicted individual” or with “reckless disregard” or “actual knowledge” that images will harm the individual depicted. It also provides a path for victims to sue offenders in civil court.

Rep. Tom Kean (R-NJ), who co-sponsored the bill, said that “proper guardrails and transparency are essential for fostering a sense of responsibility among AI companies and individuals using AI.”

“Try to imagine the horror of receiving intimate images looking exactly like you—or your daughter, or your wife, or your sister—and you can’t prove it’s not,” Morelle said. “Deepfake pornography is sexual exploitation, it’s abusive, and I’m astounded it is not already a federal crime.”

Joining Morelle in pushing to criminalize deepfake pornography were Dorota and Francesca Mani, who have spent the past two months meeting with lawmakers, The Wall Street Journal reported. The mother and daughter experienced the horror Morelle described firsthand when the New Jersey high school confirmed that 14-year-old Francesca was among the students targeted last year.

“What happened to me and my classmates was not cool, and there’s no way I’m just going to shrug and let it slide,” Francesca said. “I’m here, standing up and shouting for change, fighting for laws, so no one else has to feel as lost and powerless as I did on October 20th.”

Morelle’s office told Ars that “advocacy from partners like the Mani family” is “critical to bringing attention to this issue” and getting the proposed law “to the floor for a vote.”

Morelle introduced the law in December 2022, but it failed to pass that year or in 2023. He’s re-introducing the law in 2024 after seemingly gaining more support during a House Oversight subcommittee hearing on “Advances in Deepfake Technology” last November.

At that hearing, many lawmakers warned of the dangers of AI-generated deepfakes, citing a study from the Dutch AI company Sensity, which found that 96 percent of deepfakes online are deepfake porn—the majority of which targets women.

But lawmakers also made clear that it’s currently hard to detect AI-generated images and distinguish them from real images.

According to a hearing transcript posted by the nonprofit news organization Tech Policy Press, David Doermann—currently interim chair of the University at Buffalo’s computer science and engineering department and former program manager at the Defense Advanced Research Projects Agency (DARPA)—told lawmakers that DARPA was already working on advanced deepfake detection tools but still had more work to do.

To support laws like Morelle’s, lawmakers have called for more funding for DARPA and the National Science Foundation to aid in ongoing efforts to create effective detection tools. At the same time, President Joe Biden—through a sweeping AI executive order—has pushed for solutions like watermarking deepfakes. Biden’s executive order also instructed the Department of Commerce to establish “standards and best practices for detecting AI-generated content and authenticating official content.”


Deepfake Society: 74% of Americans Can’t Tell What’s Real or Fake Online Anymore

Americans believe only 37 percent of the content they see on social media is “real,” or free of edits, filters, and Photoshop. Between AI and “deepfake” videos, a survey of 2,000 adults split evenly by generation reveals that almost three-quarters (74%) can’t tell what’s real or fake anymore.

Americans are wary of both targeted ads (14%) and influencer content (18%), but a little more than half (52%) find themselves equally likely to question the legitimacy of either one. This goes beyond social media and what’s happening online. The survey finds that while 41 percent have more difficulty determining if an item they’re looking to purchase online is “real” or “a dupe,” another 36 percent find shopping in person to be just as challenging.

While the average respondent will spend about 15 minutes determining if an item is “real,” meaning a genuine model or a knockoff, millennials take it a step further and will spend upwards of 20 minutes trying to decide.

The survey, conducted by OnePoll on behalf of De Beers Group, reveals that Americans already own a plethora of both real and fake products.


Senators Call For Government Power To Hand Out Licenses To AI Companies, Curb “Deceptive” Election-Related AI and “Deepfakes”

This week, a Senate Judiciary hearing held by the Privacy, Technology and Law Subcommittee became the stage for bipartisan senators to divulge plans focusing on the allegedly looming threats of manipulative artificial intelligence, especially in the realm of elections. A framework proposed by Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) foresees a new government agency tasked with issuing licenses to entities working with AI systems.

Simultaneously, Senator Amy Klobuchar (D-Minn.) unveiled preliminary details of upcoming legislation, crafted in tandem with Hawley, along with Senators Chris Coons (D-Del.), and Susan Collins (R-Maine). This new proposal targets the prospects of AI technologies pervading the electoral process.

Apprehension regarding deceptive generative AI undermining democratic elections took center stage during the Senate hearing, with Klobuchar expressing a sense of urgency given the rapidly approaching electoral calendar.

Specifically, the newly minted legislation, coined the Protect Elections from Deceptive AI Act, is envisioned to clamp down on AI-assisted impersonation of federal political aspirants in campaign ads.

Rendered as an amendment to the Federal Election Campaign Act of 1971, this legislation provides a legal recourse in federal court for targeted candidates to counter harmful AI-generated deceptive content.


Deepfake Fraud Surges More Than 1000%, Insiders Say It’s Just The Beginning

As the line between fact and fiction gets harder to distinguish, online criminals need just two hours to create a realistic, computer-generated “deepfake” product that can ruin someone’s life.

The surge in popularity of hyper-realistic photos, audio, and videos developed with artificial intelligence (AI)—commonly known as deepfakes—has become an internet sensation.

It’s also giving cyber villains an edge in the crime world.

Between 2022 and the first quarter of this year, deepfake use in fraud catapulted 1,200 percent in the United States alone.

Though it’s not just an American problem.

In the same analysis, deepfakes used for scam purposes also exploded in Canada, Germany, and the United Kingdom, with the United States accounting for 4.3 percent of global deepfake fraud cases.

Meanwhile, AI experts and cybercrime investigators say we’re just at the tip of the iceberg. The rabbit hole of deepfake fraud potential just keeps going.

“I believe the No. 1 incentive for cyber criminals to commit cybercrime is law enforcement and their inability to keep up,” Michael Roberts told The Epoch Times.

Mr. Roberts is a professional investigator and the founder of the pioneering company Rexxfield, which helps victims of web-based attacks.

He also started PICDO, a cyber crime disruption organization, and has run counter-hacking education for branches of the U.S. and Australian militaries as well as NATO.

Mr. Roberts said legal systems in the Western world are “hopelessly overwhelmed” by online fraud cases, many of which include deepfake attacks. Moreover, the cases that get investigated without hiring a private firm are cherry-picked.

“And even then, it [the case] doesn’t get resolved,” he said.

The market for deepfake detection was valued at $3.86 billion in 2020 and is expected to grow 42 percent annually through 2026, according to an HSRC report.


PhD Student Uses Deepfake to Pass Popular Voice Authentication and Spoof Detection System

University of Waterloo (UW) cybersecurity PhD student Andre Kassis published his findings after gaining access to an account protected with voice biometrics by using deepfake AI-generated audio recordings.

A hacker can create a deepfake voice from just five minutes of the target’s recorded speech, which can be taken from public posts on social media, the research shows. Open-source AI software available on GitHub can create deepfake audio capable of defeating voice authentication.

He used the deepfake to expose a weakness in the Amazon Connect voice authentication system, a UW release reveals. Four-second attacks on Connect had a 10 percent success rate, and attacks closer to 30 seconds were successful 40 percent of the time.

In response, the company added biometric anti-spoofing software that could find digital markers on a voice recording, revealing whether it was made by a machine or a human. This worked until Kassis used free software to remove the digital markers from his deepfakes.

His method can bypass less sophisticated voice biometric authentication systems with a 99 percent success rate after six tries, according to the announcement.


Scientists Implant Subjects With Fake Memories Using Deepfakes

Researchers have found that they can incept false memories by showing subjects deepfaked clips of movie remakes that were never actually produced.

As detailed in a recent paper published in the journal PLOS One, deepfaked clips of made-up movies were convincing enough to trick participants into believing they were real. Some went as far as to rank the fake movies, which included purported remakes of real movies such as a rebooted “The Matrix” starring Will Smith, as better than the originals.

But the study did have an important caveat.

“However, deepfakes were no more effective than simple text descriptions at distorting memory,” the paper reads, suggesting that deepfakes aren’t entirely necessary to trick somebody into accepting a false memory.

“We shouldn’t jump to predictions of dystopian futures based on our fears around emerging technologies,” lead study author Gillian Murphy, a misinformation researcher at University College Cork in Ireland, told The Daily Beast. “Yes there are very real harms posed by deep fakes, but we should always gather evidence for those harms in the first instance, before rushing to solve problems we’ve just assumed might exist.”
