Teen Dies by Suicide After Being Targeted in AI-Generated ‘Sextortion’ Scheme

A 16-year-old Kentucky boy reportedly died by suicide shortly after he was blackmailed with AI-generated nude images in an increasingly common scheme known as “sextortion.”

Elijah Heacock of Glasgow, Kentucky, received a text message containing an AI-generated nude photo of himself and a demand that he pay $3,000 to prevent the image from being sent to family and friends, according to a report by KFDA.

On February 28, shortly after receiving the message, the teen died from a self-inflicted gunshot wound.

Elijah’s parents, John Burnett and Shannon Heacock, told CBS that they didn’t have a solid understanding of the circumstances that led to their son’s death until they found the messages on his phone.

Heacock said she now believes her son was a victim of a sextortion scheme.

“Sextortion is a form of child sexual exploitation where children are threatened or blackmailed, most often with the possibility of sharing with the public nude or sexual images of them, by a person who demands additional sexual content, sexual activity or money from the child,” the National Center for Missing and Exploited Children (NCMEC) explains.

“This crime may happen when a child has shared an image with someone they thought they knew or trusted, but in many cases they are targeted by an individual they met online who obtained a sexual image from the child through deceit, coercion, or some other method,” the NCMEC continued.

“In many cases, the blackmailers may have stolen or taken images of another person and they are communicating through a fake account,” the organization added.

Elijah’s parents said they had never heard of sextortion until law enforcement began investigating their son’s death.

“The people that are after our children are well organized,” Burnett said. “They are well financed, and they are relentless. They don’t need the photos to be real, they can generate whatever they want, and then they use it to blackmail the child.”

NCMEC says sextortion schemes have skyrocketed: the organization has received more than 500,000 reports of sextortion against minors in the last year alone.

Since 2021, at least 20 young people have died by suicide after falling victim to sextortion scams, according to the FBI.

Keep reading

Welcome to the age of paranoia as deepfakes and scams abound

These days, when Nicole Yelland receives a meeting request from someone she doesn’t already know, she conducts a multistep background check before deciding whether to accept. Yelland, who works in public relations for a Detroit-based nonprofit, says she’ll run the person’s information through Spokeo, a personal data aggregator that she pays a monthly subscription fee to use. If the contact claims to speak Spanish, Yelland says, she will casually test their ability to understand and translate trickier phrases. If something doesn’t quite seem right, she’ll ask the person to join a Microsoft Teams call—with their camera on.

If Yelland sounds paranoid, that’s because she is. In January, before she started her current nonprofit role, Yelland says, she got roped into an elaborate scam targeting job seekers. “Now, I do the whole verification rigamarole any time someone reaches out to me,” she tells WIRED.

Digital imposter scams aren’t new; messaging platforms, social media sites, and dating apps have long been rife with fakery. In a time when remote work and distributed teams have become commonplace, professional communications channels are no longer safe, either. The same artificial intelligence tools that tech companies promise will boost worker productivity are also making it easier for criminals and fraudsters to construct fake personas in seconds.

On LinkedIn, it can be hard to distinguish a slightly touched-up headshot of a real person from a too-polished, AI-generated facsimile. Deepfake videos are getting so good that longtime email scammers are pivoting to impersonating people on live video calls. According to the US Federal Trade Commission, reports of job and employment-related scams nearly tripled from 2020 to 2024, and actual losses from those scams grew from $90 million to $500 million.

Keep reading

House passes “Take it Down Act,” sending revenge porn bill backed by Melania Trump to president’s desk

The House passed a bipartisan bill Monday that makes it a federal crime to post real or fake sexually explicit imagery of a person online without their consent, sending the legislation, backed by first lady Melania Trump, to the president’s desk.

The bill, known as the “Take It Down Act,” cleared the lower chamber in a 409-2 vote. The two “no” votes came from Republicans. The Senate unanimously passed the measure in February. 

The legislation requires social media companies and other websites to remove images and videos, including deepfakes generated by artificial intelligence, within 48 hours after a victim’s request. 

“If you’re a victim of revenge porn or AI-generated explicit imagery, your life changes forever,” Sen. Ted Cruz, a Texas Republican, said at a March 3 roundtable promoting the bill. 

Cruz, who introduced the bill, recalled the experience of a teenage victim, Elliston Berry, whose classmate used an app to create explicit images of her and then sent them to her classmates. Berry’s mother had tried unsuccessfully to get Snapchat to remove the images for months before she contacted Cruz’s office for help. 

“It should not take a sitting senator or sitting member of Congress picking up the phone to get a picture down or video down,” Cruz said. 

The first lady, who rarely appears in public, attended the March discussion at the U.S. Capitol to advocate for the bill’s passage in the House. 

“It’s heartbreaking to witness young teens, especially girls, grappling with the overwhelming challenges posed by malicious online content like deep fakes,” she said. “This toxic environment can be severely damaging.” 

The first lady applauded Congress after its passage and said the bipartisan vote made a “powerful statement that we stand united in protecting the dignity, privacy, and safety of our children.” 

“I am thankful to the Members of Congress — both in the House and Senate — who voted to protect the well-being of our youth,” she said in a statement. 

According to the FBI, an alarming number of extortion cases in recent years have ended in the victim’s suicide. Lawmakers said they hope the bill will save lives by providing recourse for victims.

Keep reading

Congress Takes Another Step Toward Enabling Broad Internet Censorship

The House Energy and Commerce Committee on Tuesday advanced the TAKE IT DOWN Act (S. 146), a bill that seeks to speed up the removal of certain kinds of troubling online content. While the bill is meant to address a serious problem—the distribution of non-consensual intimate imagery (NCII)—the notice-and-takedown system it creates is an open invitation for powerful people to pressure websites into removing content they dislike.

As we’ve written before, while protecting victims of these heinous privacy invasions is a legitimate goal, good intentions alone are not enough to make good policy. 

This bill mandates a notice-and-takedown system that threatens free expression, user privacy, and due process, without meaningfully addressing the problem it claims to solve. The “takedown” provision applies to a much broader category of content—potentially any images involving intimate or sexual content at all—than the narrower NCII definitions found elsewhere in the bill. The bill contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. 

The legislation’s 48-hour takedown deadline means that online service providers, particularly smaller ones, will have to comply quickly to avoid legal risks. That time crunch will make it impossible for services to verify the content is in fact NCII. Instead, services will rely on automated filters—infamously blunt tools that frequently flag legal content, from fair-use commentary to news reporting.

Keep reading

Meta’s new “Movie Gen” AI system can deepfake video from a single photo

On Friday, Meta announced a preview of Movie Gen, a new suite of AI models designed to create and manipulate video, audio, and images, including creating a realistic video from a single photo of a person. The company claims the models outperform other video-synthesis models when evaluated by humans, pushing us closer to a future where anyone can synthesize a full video of any subject on demand.

The company has not yet said when or how it will release these capabilities to the public, but Meta says Movie Gen is a tool that may allow people to “enhance their inherent creativity” rather than replace human artists and animators. The company envisions future applications such as easily creating and editing “day in the life” videos for social media platforms or generating personalized animated birthday greetings.

Movie Gen builds on Meta’s previous work in video synthesis, following 2022’s Make-A-Scene video generator and the Emu image-synthesis model. Using text prompts for guidance, this latest system can generate custom videos with sounds for the first time, edit and insert changes into existing videos, and transform images of people into realistic personalized videos.

Keep reading

Minnesota ‘Acting as a Ministry of Truth’ With Anti-Deep Fake Law, Says Lawsuit

A new lawsuit takes aim at a Minnesota law banning the “use of deep fake technology to influence an election.” The measure—enacted in 2023 and amended this year—makes it a crime to share AI-generated content if a person “knows or acts with reckless disregard about whether the item being disseminated is a deep fake” and the sharing is done without the depicted individual’s consent and with intent to “injure a candidate or influence the result of an election.” The ban applies within 90 days before a political party nominating convention, or after the start of the absentee voting period prior to a presidential nomination primary, any state or local primary, or a general election.

Christopher Kohls, a content creator who goes by Mr. Reagan, and Minnesota state Rep. Mary Franson (R–District 12B) argue that the law is an “impermissible and unreasonable restriction of protected speech.”

Violating Minnesota’s deep fake law is punishable by up to 90 days’ imprisonment and/or a fine of up to $1,000, with penalties increasing if the offender has a prior conviction for the same offense within the past five years or the deep fake is determined to have been shared with an “intent to cause violence or bodily harm.” The law also allows the Minnesota attorney general, county or city attorneys, individuals depicted in the deep fake, or any candidate “who is injured or likely to be injured by dissemination” to sue for injunctive relief “against any person who is reasonably believed to be about to violate or who is in the course of violating” the law.

If a candidate for office is found guilty of violating this law, they must forfeit the nomination or office and are henceforth disqualified “from being appointed to that office or any other office for which the legislature may establish qualifications.”

There are obviously a host of constitutional problems with this measure, which defines “deep fake” very broadly: “any video recording, motion-picture film, sound recording, electronic image, or photograph, or any technological representation of speech or conduct substantially derivative thereof” that is realistic enough for a reasonable person to believe it depicts speech or conduct that did not occur and was developed through “technical means” rather than “the ability of another individual to physically or verbally impersonate such individual.”

Keep reading

Judge blocks California deepfakes law that sparked Musk-Newsom row

A federal judge on Wednesday blocked a California measure restricting the use of digitally altered political “deepfakes” just two weeks after Gov. Gavin Newsom signed the bill into law.

The ruling is a blow to a push by the state’s leading Democrats to rein in misleading content on social media ahead of Election Day.

Chris Kohls, known as “Mr Reagan” on X, sued to prevent the state from enforcing the law after posting an AI-generated video of a Harris campaign ad on the social media site. He claimed the video was protected by the First Amendment because it was a parody.

The judge agreed.

“Most of [the law] acts as a hammer instead of a scalpel,” Senior U.S. District Judge John A. Mendez wrote, calling it “a blunt tool [that] hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas.” He carved out an exception for a “not unduly burdensome” portion of the law that requires verbal disclosure of digitally altered content in audio-only recordings.

Theodore Frank, an attorney for Kohls, said in a statement they were “gratified that the district court agreed with our analysis.”

Keep reading

The Babylon Bee Strikes Back: Lawsuit Takes on California’s Anti-Satire Laws

In a world where politicians crave safe spaces from jokes, California’s latest move to suppress satire might just take the cake. California has decided that it’s time to put an end to all that pesky “political humor.” Yes, the state that brought us Hollywood is now terrified of a few biting punchlines, and naturally, satire site The Babylon Bee and outspoken attorney Kelly Chang Rickert are not having it.

The champion of online irreverence has just slapped the State of California with a lawsuit that reads less like legalese and more like a desperate plea for common sense. They’re arguing, quite reasonably, that California’s new laws—AB 2839 and AB 2655—are a massive overreach, a heavy-handed attempt to quash their First Amendment rights and kill the punchline before it even has a chance to land.

We obtained a copy of the lawsuit for you here.

Keep reading

Gavin Newsom Just Signed an AI-Deepfake Ban That Could Impact Free Speech. This Is the First Lawsuit Against It.

We had been waiting for California Governor Gavin Newsom to sign a controversial new bill limiting speech around AI “deepfake” content that many believe could impact memes. Now that Newsom has signed it, the first of several lawsuits against the censorial legislation has been filed.

Christopher Kohls, an online content creator known for his political satire, has filed a lawsuit against the state, challenging recent legislation aimed at regulating AI-generated content. The lawsuit seeks to strike down two California laws which Kohls claims infringe upon First Amendment rights.

Read the lawsuit here.

The controversy began when Kohls posted a satirical video parodying Vice President Kamala Harris, which quickly caught the attention of California Governor Gavin Newsom. Following the video’s viral spread, largely due to a share by Elon Musk, Newsom criticized the content, leading to the swift passage of AB 2655, the “Defending Democracy from Deepfake Deception Act of 2024,” and AB 2839, related to “Elections: deceptive media in advertisements.”

Keep reading

California’s New AI Law Proposals Could Impact Memes

California’s state legislature has passed several bills related to “AI,” including a ban on deepfakes “around elections.”

The lawmakers squeezed these bills in during the last week of the legislature’s current session, and it is now up to Governor Gavin Newsom (who has called for such laws) to sign or veto them by the end of this month.

One of the likely future laws is the Defending Democracy from Deepfake Deception Act of 2024, which aims to regulate how sites, apps, and social media (defined for the purposes of the legislation as large online platforms) should deal with content the bill considers “materially deceptive related to elections in California.”

Namely, the bill wants such content blocked during “specified” periods – 120 days before and 60 days after an election. Campaigns will also have to disclose if their ads contain AI-altered content.

Now comes the hard part: what qualifies as deceptive enough to be blocked in order to “defend democracy from deepfakes”? The bill’s “definition” is so broad it could be stretched all the way to banning memes.

For example, who’s to say whether satirical content that shows a candidate “saying something (they) did not do or say” is “reasonably likely” to harm that candidate’s reputation or electoral prospects? And who’s to judge what “reasonably likely” means? Yet the bill uses these terms, and there’s more.

Also outlawed would be content showing an election official “doing or saying something in connection with the performance of their elections-related duties that the elections official did not do or say and that is reasonably likely to falsely undermine confidence in the outcome of one or more election contests.”

If the bill is signed into law by September 30, that time frame means it would comprehensively cover not only the current campaign but also the period after it.

Keep reading