Gavin Newsom Just Signed an AI-Deepfake Ban That Could Impact Free Speech. This Is the First Lawsuit Against It.

We were waiting for California Governor Gavin Newsom to sign on the dotted line of a controversial new bill limiting speech around AI “deepfake” content, legislation that many believe could impact memes. Now that Newsom has signed the bill, the first of several lawsuits against the censorial laws has been filed.

Christopher Kohls, an online content creator known for his political satire, has filed a lawsuit against the state, challenging recent legislation aimed at regulating AI-generated content. The lawsuit seeks to strike down two California laws which Kohls claims infringe upon First Amendment rights.

Read the lawsuit here.

The controversy began when Kohls posted a satirical video parodying Vice President Kamala Harris, which quickly caught the attention of California Governor Gavin Newsom. Following the video’s viral spread, largely due to a share by Elon Musk, Governor Newsom criticized the content, leading to the swift passage of AB 2655, the “Defending Democracy from Deepfake Deception Act of 2024,” and AB 2839, related to “Elections: deceptive media in advertisements.”

Keep reading

California’s New AI Law Proposals Could Impact Memes

California’s state legislature has passed several bills related to “AI,” including a ban on deepfakes “around elections.”

The lawmakers squeezed these bills in during the final week of the current sessions of the state Senate and Assembly, and it is now up to Governor Gavin Newsom (who has called for such laws) to sign or veto them by the end of this month.

One of the likely future laws is the Defending Democracy from Deepfake Deception Act of 2024, which aims to regulate how sites, apps, and social media (defined for the purposes of the legislation as large online platforms) should deal with content that the bill considers to be “materially deceptive related to elections in California.”

Namely, the bill wants such content blocked during “specified” periods: 120 days before and 60 days after an election. Campaigns will also have to disclose whether their ads contain AI-altered content.

Now comes the hard part: what qualifies as deceptive content that must be blocked in order to “defend democracy from deepfakes”? The bill’s “definition” is broad enough that it could be interpreted all the way to banning memes.

For example, who is to say whether satirical content showing a candidate “saying something (they) did not do or say” is “reasonably likely” to harm the reputation or electoral prospects of a candidate? And who is to judge what “reasonably likely” means? Yet the bill uses these terms, and there’s more.

Also outlawed would be content showing an election official “doing or saying something in connection with the performance of their elections-related duties that the elections official did not do or say and that is reasonably likely to falsely undermine confidence in the outcome of one or more election contests.”

If the bill is signed into law by September 30, its time frame means it would comprehensively cover not only the current campaign but also the period after it.

Keep reading

Newsom Threatens Laws Against Deepfakes After Kamala Harris Parody Video Goes Viral

In a clash between Gov. Gavin Newsom and tech magnate Elon Musk, the California governor announced his intention to sign a law targeting what he calls the misuse of AI in political advertising.

This decision escalates the ongoing dispute between the two influential figures. Newsom criticized a parody video shared by Musk, which seemed to showcase a campaign ad for Vice President Kamala Harris with a synthetic voiceover, by posting, “Manipulating a voice in an ‘ad’ like this one should be illegal. I’ll be signing a bill in a matter of weeks to make sure it is.”

Musk retorted sharply on social media, emphasizing the legality of parody in the United States.

Keep reading

Election 2024: Arizona and Michigan Train Clerks To Report AI Deepfakes To Law Enforcement

The AI panic (and specifically, the deepfake panic) is playing a prominent role in this US election campaign, with the states of Arizona and Michigan introducing a scheme to train election clerks to identify content branded as deepfakes.

Arizona Secretary of State Adrian Fontes and his Michigan and Minnesota counterparts, Jocelyn Benson and Steve Simon, all three Democrats, are among those pushing an initiative called the Artificial Intelligence Task Force, launched by the NewDEAL Forum.

NewDEAL Forum is a Washington-based NGO whose board is populated by Democrat-associated figures, and which states it set out to “defend democracy” by developing tools and methods to help election officials and voters not only identify but also flag “malicious AI-generated activity” like deepfakes and “misinformation.”

Arizona and Michigan are considered swing states, and there the effort is taking the form of tabletop exercises that teach participants how to report flagged content to law enforcement and first responders.

That’s not the only recently launched “project”: the liberal voting rights and media platform Democracy Docket quotes Jocelyn Benson as saying that Michigan now has a law making “knowingly distributing materially-deceptive deep fakes” a felony.

But this applies only if the activity is seen as intended to harm a candidate’s reputation or chances of success, the Michigan secretary of state explained. It was not immediately clear, however, how transparent and precise the rules for determining the intent behind a deepfake are.

If applied arbitrarily, such legislation could catch a lot of things in its net – like satire and parody.

And that distinction is not insignificant when talking about AI, and deepfakes for that matter, since both have been around for a while, the latter notably in the entertainment industry.

Keep reading

Anti-Deepfake Porn Bill Unanimously Passes the Senate

The anti-deepfake porn bill, otherwise known as the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, passed the U.S. Senate unanimously on Thursday.

Introduced by Sen. Dick Durbin (D-IL) and Sen. Lindsey Graham (R-SC), and sponsored in the U.S. House by Rep. Alexandria Ocasio-Cortez (D-NY), the bill would essentially allow victims of deepfake porn to “sue anyone who knowingly creates, receives, or distributes non-consensual, sexually explicit content made with artificial intelligence,” per Gizmodo. Victims would have a 10-year statute of limitations.

“The bill is one of the first to address the damages caused by AI, which is currently a self-regulated industry,” the outlet noted. “It doesn’t allow for criminal prosecution, but hopefully, it’s a first step towards more federal oversight.”

Alexandria Ocasio-Cortez said she pushed the bill after falling victim to deepfake porn online, a problem that has become significant in recent years.

Keep reading

Trump Blasts Media, Biden Camp As “Shameless Liars” For Claiming Videos Are Fake

In a speech in Wisconsin on Tuesday, Donald Trump called out the Biden campaign and its media mouthpieces as “shameless liars” for suggesting that footage of Biden wandering around looking lost at the G7 summit, and being led off a stage by Obama at a fundraiser, was somehow faked.

“They have a candidate that has no clue, doesn’t know where he is, and all they can do is lie. Joe Biden is humiliating our country on the world stage,” Trump told the crowd, adding, “they say the videos of crooked Joe shuffling around are cheap fakes,” referring to the recent footage.

Speaking further about Biden’s odd behavior, Trump asserted, “At the G7 in Europe, he had to be rescued by other world leaders, they had to pull him back, then he bizarrely put his face on the Pope’s forehead. What was that all about? That was weird.”

“Did you see the Pope? The Pope is like, ‘What’s happening?’ The Pope didn’t really know what was happening. He said this is strange,” Trump hilariously continued.

Keep reading

Biden wants U.S. government to scan all images on your phone to comply with new AI rules

To supposedly stop people from exchanging non-consensual artificial intelligence (AI) images of a sexual nature, President Biden wants to probe everyone’s smartphones as part of a sweeping surveillance effort.

A press release from the White House explains the Biden regime’s desire for the tech and financial industries to take charge in stopping the creation and spread of abusive sexual imagery created with AI.

According to Biden’s handlers, “mobile operating system developers could enable technical protections to better protect content stored on digital devices and to prevent image sharing without consent.”

The plan is to have mobile operating systems such as Android and iOS automatically scan and analyze people’s private photos to determine which ones are sexual or non-consensual. Users would not have the ability to keep any of their images private from government spooks.

It might sound like a good thing until you recognize the privacy implications of such an arrangement. Do we the people really want to allow the government direct access to our photos?

Beyond the search and analysis framework, the Biden regime also wants mobile app stores like Apple’s App Store and Google Play to “commit to instituting requirements for app developers to prevent the creation of non-consensual images.”


Keep reading

Biden’s Bold Move to Combat AI Abuse Stirs Surveillance and Censorship Fears

The Biden administration is pushing for sweeping measures to combat the proliferation of nonconsensual sexual AI-generated images, including controversial proposals that could lead to extensive on-device surveillance and control of the types of images generated. In a White House press release, President Joe Biden’s administration outlined demands for the tech industry and financial institutions to curb the creation and distribution of abusive sexual images made with artificial intelligence (AI).

A key focus of these measures is the use of on-device technology to prevent the sharing of nonconsensual sexual images. The administration stated that “mobile operating system developers could enable technical protections to better protect content stored on digital devices and to prevent image sharing without consent.”

This proposal implies that mobile operating systems would need to scan and analyze images directly on users’ devices to determine if they are sexual or non-consensual. The implications of such surveillance raise significant privacy concerns, as it involves monitoring and analyzing private content stored on personal devices.

Additionally, the administration is calling on mobile app stores to “commit to instituting requirements for app developers to prevent the creation of non-consensual images.” This broad mandate would require a wide range of apps, including image editing and drawing apps, to scan and monitor user activities on devices, analyze what art they’re creating and block the creation of certain kinds of content. Once this technology of on-device monitoring becomes normalized, this level of scrutiny could extend beyond the initial intent, potentially leading to censorship of other types of content that the administration finds objectionable.

The administration’s call to action extends to various sectors, including AI developers, payment processors, financial institutions, cloud computing providers, search engines, and mobile app store gatekeepers like Apple and Google. By encouraging cooperation from these entities, the White House hopes to curb the creation, spread, and monetization of nonconsensual AI images.

Keep reading

Open Source Tools for Fighting Disinformation

Deepfakes and disinformation have the ability to move financial markets, influence public opinion, and scam businesses and individuals out of millions of dollars. The Semantic Forensics (SemaFor) program is a DARPA-funded initiative to create comprehensive forensic technologies to help mitigate online threats perpetrated via synthetic and manipulated media. Over the last eight years, Kitware has helped DARPA create a powerful set of tools to analyze whether media has been artificially generated or manipulated. Kitware and DARPA are now bringing those tools out of the lab to defend digital authenticity in the real world.

Kitware has a history of building various image and video forensics algorithms to defend against disinformation by detecting various types of manipulations, beginning with DARPA’s Media Forensics (MediFor) program. Building on this foundation, our team expanded its focus to include multimodal analysis of text, audio, and video under the SemaFor program. For additional information about Kitware’s contributions to SemaFor, check out the “Voices from DARPA” podcast episode, “Demystifying Deepfakes,” where Arslan Basharat, assistant director of computer vision at Kitware, is a guest speaker.

Keep reading

Black athletic director of Baltimore high school arrested for creating AI deepfake of white principal to stage race hoax

A Maryland school athletic director was arrested at BWI-Marshall Airport after investigators found he had allegedly spread an AI-generated impersonation of the Pikesville High School principal that framed the principal as racist. Dazhon Darien, 31, faces multiple charges over the deepfake, including stalking, theft, disruption of school operations, and retaliation against a witness.

The police investigation began in January, when a voice recording alleged to be of school principal Eric Eiswert began making the rounds. After the recording spread on social media, Eiswert was temporarily removed from his position as head of the school, which received numerous phone calls, social media comments, and messages over it, according to The Baltimore Sun.

Keep reading