House passes “Take It Down Act,” sending revenge porn bill backed by Melania Trump to president’s desk

The House passed a bipartisan bill Monday that makes it a federal crime to post real and fake sexually explicit imagery of a person online without their consent, sending the legislation, which was backed by first lady Melania Trump, to the president’s desk. 

The bill, known as the “Take It Down Act,” cleared the lower chamber in a 409-2 vote. The two “no” votes came from Republicans. The Senate unanimously passed the measure in February. 

The legislation requires social media companies and other websites to remove images and videos, including deepfakes generated by artificial intelligence, within 48 hours after a victim’s request. 

“If you’re a victim of revenge porn or AI-generated explicit imagery, your life changes forever,” Sen. Ted Cruz, a Texas Republican, said at a March 3 roundtable promoting the bill. 

Cruz, who introduced the bill, recalled the experience of a teenage victim, Elliston Berry, whose classmate used an app to create explicit images of her and then sent them to her classmates. Berry’s mother had tried unsuccessfully to get Snapchat to remove the images for months before she contacted Cruz’s office for help. 

“It should not take a sitting senator or sitting member of Congress picking up the phone to get a picture down or video down,” Cruz said. 

The first lady, who rarely appears in public, attended the March discussion at the U.S. Capitol to advocate for the bill’s passage in the House. 

“It’s heartbreaking to witness young teens, especially girls, grappling with the overwhelming challenges posed by malicious online content like deep fakes,” she said. “This toxic environment can be severely damaging.” 

The first lady applauded Congress after its passage and said the bipartisan vote made a “powerful statement that we stand united in protecting the dignity, privacy, and safety of our children.” 

“I am thankful to the Members of Congress — both in the House and Senate — who voted to protect the well-being of our youth,” she said in a statement. 

According to the FBI, recent years have seen an alarming number of extortion cases involving such imagery that have ended in the victim’s suicide. Lawmakers said they hope the bill will save lives by providing recourse for victims. 

Keep reading

Congress Takes Another Step Toward Enabling Broad Internet Censorship

The House Energy and Commerce Committee on Tuesday advanced the TAKE IT DOWN Act (S. 146), a bill that seeks to speed up the removal of certain kinds of troubling online content. While the bill is meant to address a serious problem—the distribution of non-consensual intimate imagery (NCII)—the notice-and-takedown system it creates is an open invitation for powerful people to pressure websites into removing content they dislike. 

As we’ve written before, while protecting victims of these heinous privacy invasions is a legitimate goal, good intentions alone are not enough to make good policy. 

This bill mandates a notice-and-takedown system that threatens free expression, user privacy, and due process, without meaningfully addressing the problem it claims to solve. The “takedown” provision applies to a much broader category of content—potentially any images involving intimate or sexual content at all—than the narrower NCII definitions found elsewhere in the bill. The bill contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. 

The legislation’s 48-hour takedown deadline means that online service providers, particularly smaller ones, will have to comply quickly to avoid legal risks. That time crunch will make it impossible for services to verify the content is in fact NCII. Instead, services will rely on automated filters—infamously blunt tools that frequently flag legal content, from fair-use commentary to news reporting.

Keep reading

Meta’s new “Movie Gen” AI system can deepfake video from a single photo

On Friday, Meta announced a preview of Movie Gen, a new suite of AI models designed to create and manipulate video, audio, and images, including creating a realistic video from a single photo of a person. The company claims the models outperform other video-synthesis models when evaluated by humans, pushing us closer to a future where anyone can synthesize a full video of any subject on demand.

The company has not yet said when or how it will release these capabilities to the public, but Meta describes Movie Gen as a tool that may allow people to “enhance their inherent creativity” rather than replace human artists and animators. The company envisions future applications such as easily creating and editing “day in the life” videos for social media platforms or generating personalized animated birthday greetings.

Movie Gen builds on Meta’s previous work in video synthesis, following 2022’s Make-A-Scene video generator and the Emu image-synthesis model. Using text prompts for guidance, this latest system can generate custom videos with sounds for the first time, edit and insert changes into existing videos, and transform images of people into realistic personalized videos.

Keep reading

Minnesota ‘Acting as a Ministry of Truth’ With Anti-Deep Fake Law, Says Lawsuit

A new lawsuit takes aim at a Minnesota law banning the “use of deep fake technology to influence an election.” The measure, enacted in 2023 and amended this year, makes it a crime to share AI-generated content if a person “knows or acts with reckless disregard about whether the item being disseminated is a deep fake” and the sharing meets three conditions: it is done without the depicted individual’s consent; it is intended to “injure a candidate or influence the result of an election”; and it occurs either within 90 days before a political party nominating convention or after the start of the absentee voting period prior to a presidential nomination primary, any state or local primary, or a general election.

Christopher Kohls, a content creator who goes by Mr. Reagan, and Minnesota state Rep. Mary Franson (R–District 12B) argue that the law is an “impermissible and unreasonable restriction of protected speech.”

Violating Minnesota’s deep fake law is punishable by up to 90 days’ imprisonment and/or a fine of up to $1,000, with penalties increasing if the offender has a prior conviction for the same offense within the past five years or the deep fake is determined to have been shared with an “intent to cause violence or bodily harm.” The law also allows the Minnesota attorney general, county or city attorneys, individuals depicted in the deep fake, or any candidate “who is injured or likely to be injured by dissemination” to sue for injunctive relief “against any person who is reasonably believed to be about to violate or who is in the course of violating” the law.

If a candidate for office is found guilty of violating this law, they must forfeit the nomination or office and are henceforth disqualified “from being appointed to that office or any other office for which the legislature may establish qualifications.”

There are obviously a host of constitutional problems with this measure, which defines “deep fake” very broadly: “any video recording, motion-picture film, sound recording, electronic image, or photograph, or any technological representation of speech or conduct substantially derivative thereof” that is realistic enough for a reasonable person to believe it depicts speech or conduct that did not occur and developed through “technical means” rather than “the ability of another individual to physically or verbally impersonate such individual.”

Keep reading

Judge blocks California deepfakes law that sparked Musk-Newsom row

A federal judge on Wednesday blocked a California measure restricting the use of digitally altered political “deepfakes” just two weeks after Gov. Gavin Newsom signed the bill into law.

The ruling is a blow to a push by the state’s leading Democrats to rein in misleading content on social media ahead of Election Day.

Chris Kohls, known as “Mr Reagan” on X, sued to prevent the state from enforcing the law after posting an AI-generated video of a Harris campaign ad on the social media site. He claimed the video was protected by the First Amendment because it was a parody.

The judge agreed.

“Most of [the law] acts as a hammer instead of a scalpel,” Senior U.S. District Judge John A. Mendez wrote, calling it “a blunt tool [that] hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas.” He carved out an exception for a “not unduly burdensome” portion of the law that requires verbal disclosure of digitally altered content in audio-only recordings.

Theodore Frank, an attorney for Kohls, said in a statement they were “gratified that the district court agreed with our analysis.”

Keep reading

The Babylon Bee Strikes Back: Lawsuit Takes on California’s Anti-Satire Laws

In a world where politicians crave safe spaces from jokes, California’s latest move to suppress satire might just take the cake. California has decided that it’s time to put an end to all that pesky “political humor.” Yes, the state that brought us Hollywood is now terrified of a few biting punchlines, and naturally, satire site The Babylon Bee and outspoken attorney Kelly Chang Rickert are not having it.

The champion of online irreverence has just slapped the State of California with a lawsuit that reads less like legalese and more like a desperate plea for common sense. They’re arguing, quite reasonably, that California’s new laws—AB 2839 and AB 2655—are a massive overreach, a heavy-handed attempt to quash their First Amendment rights and kill the punchline before it even has a chance to land.

We obtained a copy of the lawsuit for you here.

Keep reading

Gavin Newsom Just Signed an AI-Deepfake Ban That Could Impact Free Speech. This Is the First Lawsuit Against It.

We were waiting for California Governor Gavin Newsom to sign on the dotted line of a controversial new bill limiting speech around AI “deepfake” content, which many believe could impact memes. Now that Newsom has signed the bill, the first of several expected lawsuits against the censorial bills has been filed.

Christopher Kohls, an online content creator known for his political satire, has filed a lawsuit against the state, challenging recent legislation aimed at regulating AI-generated content. The lawsuit seeks to strike down two California laws which Kohls claims infringe upon First Amendment rights.

Read the lawsuit here.

The controversy began when Kohls posted a satirical video parodying Vice President Kamala Harris, which quickly caught the attention of California Governor Gavin Newsom. Following the video’s viral spread, largely due to a share by Elon Musk, Governor Newsom criticized the content, leading to the swift passage of AB 2655, the “Defending Democracy from Deepfake Deception Act of 2024,” and AB 2839, related to “Elections: deceptive media in advertisements.”

Keep reading

California’s New AI Law Proposals Could Impact Memes

California’s state legislature has passed several bills related to “AI,” including a ban on deepfakes “around elections.”

The lawmakers squeezed these bills in during the last week of the current sessions of the state Senate and Assembly, and it is now up to Governor Gavin Newsom (who has called for such laws) to sign or veto them by the end of this month.

One of the likely future laws is the Defending Democracy from Deepfake Deception Act of 2024, which aims to regulate how sites, apps, and social media (defined for the purposes of the legislation as large online platforms) should deal with content that the bill considers to be “materially deceptive related to elections in California.”

Namely, the bill wants such content blocked, specifying that this refers to “specified” periods – 120 days before and 60 days after an election. And campaigns will have to disclose if their ads contain AI-altered content.

Now comes the hard part: what qualifies for blocking as deceptive, in order to “defend democracy from deepfakes”? The “definition” is broad enough that it could be interpreted all the way to banning memes.

For example, who’s to say whether satirical content that shows a candidate “saying something (they) did not do or say” is “reasonably likely” to harm the reputation or prospects of that candidate? And who’s to judge what “reasonably likely” means? Yet the bill uses these terms, and there’s more.

Also outlawed would be content showing an election official “doing or saying something in connection with the performance of their elections-related duties that the elections official did not do or say and that is reasonably likely to falsely undermine confidence in the outcome of one or more election contests.”

If the bill is signed into law by September 30, given the time frame, it would comprehensively cover not only the current campaign but also the period after it.

Keep reading

Newsom Threatens Laws Against Deepfakes After Kamala Harris Parody Video Goes Viral

In a clash between Gov. Gavin Newsom and tech magnate Elon Musk, the California governor announced his intention to sign a law targeting what he calls the misuse of AI in political advertising.

This decision escalates the ongoing dispute between the two influential figures. Newsom criticized a parody video shared by Musk, which seemed to showcase a campaign ad for Vice President Kamala Harris with a synthetic voiceover, by posting, “Manipulating a voice in an ‘ad’ like this one should be illegal. I’ll be signing a bill in a matter of weeks to make sure it is.”

Musk retorted sharply on social media, emphasizing the legality of parody in the United States.

Keep reading

Election 2024: Arizona and Michigan Train Clerks To Report AI Deepfakes To Law Enforcement

The AI panic (and specifically, the deepfakes panic) is playing a prominent role in this US election campaign, with the states of Arizona and Michigan introducing a scheme to train election clerks to identify content branded as such.

Arizona Secretary of State Adrian Fontes and his Michigan and Minnesota counterparts, Jocelyn Benson and Steve Simon, all three Democrats, are among those pushing an initiative called the Artificial Intelligence Task Force, launched by the NewDEAL Forum.

NewDEAL Forum is a Washington-based NGO whose board is populated by Democrat-associated figures and which says it set out to “defend democracy” by developing tools and methods that help election officials and voters not only identify but also flag “malicious AI-generated activity” like deepfakes and “misinformation.”

Arizona and Michigan are considered swing states, and there the effort takes the form of tabletop exercises that teach participants how to inform law enforcement and first responders about flagged content.

That’s not the only recently launched “project”: there’s also Democracy Docket, a liberal voting rights and media platform, which quotes Jocelyn Benson as saying that Michigan now has a law making “knowingly distributing materially-deceptive deep fakes” a felony.

But this applies only if the activity is seen as intended to harm a candidate’s reputation or chances of success, the Michigan secretary of state explained. It wasn’t immediately clear, however, how transparent and precise the rules for determining the intent behind a deep fake are.

If applied arbitrarily, such legislation could catch a lot of things in its net – like satire and parody.

And that is not an insignificant distinction when it comes to AI, and deepfakes for that matter, since both have been around for a while – the latter notably in the entertainment industry.

Keep reading