Newsom Threatens Laws Against Deepfakes After Kamala Harris Parody Video Goes Viral

In a clash between Gov. Gavin Newsom and tech magnate Elon Musk, the California governor announced his intention to sign a law targeting what he calls the misuse of AI in political advertising.

This decision escalates the ongoing dispute between the two influential figures. Newsom criticized a parody video shared by Musk, which seemed to showcase a campaign ad for Vice President Kamala Harris with a synthetic voiceover, by posting, “Manipulating a voice in an ‘ad’ like this one should be illegal. I’ll be signing a bill in a matter of weeks to make sure it is.”

Musk retorted sharply on social media, emphasizing the legality of parody in the United States.


Election 2024: Arizona and Michigan Train Clerks To Report AI Deepfakes To Law Enforcement

The AI (and specifically, deepfake) panic is playing a prominent role in this US election campaign, with the states of Arizona and Michigan introducing a scheme to train election clerks in identifying such content.

Arizona Secretary of State Adrian Fontes and his Michigan and Minnesota counterparts, Jocelyn Benson and Steve Simon, all three Democrats, are among those pushing an initiative called the Artificial Intelligence Task Force, launched by the NewDEAL Forum.

NewDEAL Forum is a Washington-based NGO whose board is populated by Democrat-associated figures, and which says it set out to “defend democracy” by developing tools and methods to help election officials and voters not only identify but also flag “malicious AI-generated activity” like deepfakes and “misinformation.”

Arizona and Michigan are considered swing states, and there the effort takes the form of tabletop exercises that teach participants how to report flagged content to law enforcement and first responders.

That’s not the only recently launched “project”: the liberal voting rights and media platform Democracy Docket quotes Jocelyn Benson as saying that Michigan now has a law making “knowingly distributing materially-deceptive deep fakes” a felony.

But this applies only if the activity is seen as intending to harm a candidate’s reputation or chances of success, the Michigan secretary of state explained. However, it wasn’t immediately clear how transparent and precise the rules for determining the intent behind a deepfake are.

If applied arbitrarily, such legislation could catch a lot of things in its net – like satire and parody.

And it’s not an insignificant distinction when talking about AI, and deepfakes for that matter, since both have been around for a while, the latter notably in the entertainment industry.


Anti-Deepfake Porn Bill Unanimously Passes the Senate

The anti-deepfake porn bill, formally the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, passed the U.S. Senate unanimously on Thursday with 100 votes.

Introduced by Sen. Dick Durbin (D-IL) and Sen. Lindsey Graham (R-SC), and sponsored in the U.S. House by Rep. Alexandria Ocasio-Cortez (D-NY), the bill would essentially allow victims of deepfake porn to “sue anyone who knowingly creates, receives, or distributes non-consensual, sexually explicit content made with artificial intelligence,” per Gizmodo. Victims will have a 10-year statute of limitations.

“The bill is one of the first to address the damages caused by AI, which is currently a self-regulated industry,” the outlet noted. “It doesn’t allow for criminal prosecution, but hopefully, it’s a first step towards more federal oversight.”

Alexandria Ocasio-Cortez said she pushed the bill after falling victim to deepfake porn online, which has become a significant problem in recent years.


Trump Blasts Media, Biden Camp As “Shameless Liars” For Claiming Videos Are Fake

In a speech in Wisconsin Tuesday, Donald Trump called out the Biden campaign and its media mouthpieces as “shameless liars” for suggesting that footage of him wandering around looking lost at the G7 summit, and of him having to be led off a stage by Obama at a fundraiser, was somehow faked.

“They have a candidate that has no clue, doesn’t know where he is, and all they can do is lie. Joe Biden is humiliating our country on the world stage,” Trump told the crowd, adding, “they say the videos of crooked Joe shuffling around are cheap fakes,” referring to the recent footage.

Further speaking about Biden’s odd behaviour, Trump asserted “At the G7 in Europe, he had to be rescued by other world leaders, they had to pull him back, then he bizarrely put his face on the Pope’s forehead. What was that all about? That was weird.”

“Did you see the Pope? The Pope is like, ‘What’s happening?’ The Pope didn’t really know what was happening. He said this is strange,” Trump hilariously continued.


Biden wants U.S. government to scan all images on your phone to comply with new AI rules

To supposedly stop people from exchanging non-consensual artificial intelligence (AI) images of a sexual nature, President Biden wants to probe everyone’s smartphones as part of a sweeping surveillance effort.

A press release from the White House explains the Biden regime’s desire for the tech and financial industries to take charge in stopping the creation and spread of abusive sexual imagery created with AI.

According to Biden’s handlers, “mobile operating system developers could enable technical protections to better protect content stored on digital devices and to prevent image sharing without consent.”

The plan is to have mobile operating systems such as Android and iOS automatically scan and analyze people’s private photos to determine which ones are sexual or non-consensual. Users would not have the ability to keep any of their images private from government spooks.

It might sound like a good thing until you recognize the privacy implications of such an arrangement. Do we the people really want to allow the government direct access to our photos?

Beyond the search and analysis framework, the Biden regime also wants mobile app stores like Apple’s App Store and Google Play to “commit to instituting requirements for app developers to prevent the creation of non-consensual images.”



Biden’s Bold Move to Combat AI Abuse Stirs Surveillance and Censorship Fears

The Biden administration is pushing for sweeping measures to combat the proliferation of nonconsensual sexual AI-generated images, including controversial proposals that could lead to extensive on-device surveillance and control of the types of images generated. In a White House press release, President Joe Biden’s administration outlined demands for the tech industry and financial institutions to curb the creation and distribution of abusive sexual images made with artificial intelligence (AI).

A key focus of these measures is the use of on-device technology to prevent the sharing of nonconsensual sexual images. The administration stated that “mobile operating system developers could enable technical protections to better protect content stored on digital devices and to prevent image sharing without consent.”

This proposal implies that mobile operating systems would need to scan and analyze images directly on users’ devices to determine if they are sexual or non-consensual. The implications of such surveillance raise significant privacy concerns, as it involves monitoring and analyzing private content stored on personal devices.

Additionally, the administration is calling on mobile app stores to “commit to instituting requirements for app developers to prevent the creation of non-consensual images.” This broad mandate would require a wide range of apps, including image editing and drawing apps, to scan and monitor user activities on devices, analyze what art they’re creating and block the creation of certain kinds of content. Once this technology of on-device monitoring becomes normalized, this level of scrutiny could extend beyond the initial intent, potentially leading to censorship of other types of content that the administration finds objectionable.

The administration’s call to action extends to various sectors, including AI developers, payment processors, financial institutions, cloud computing providers, search engines, and mobile app store gatekeepers like Apple and Google. By encouraging cooperation from these entities, the White House hopes to curb the creation, spread, and monetization of nonconsensual AI images.


Open Source Tools for Fighting Disinformation

Deepfakes and disinformation have the ability to move financial markets, influence public opinion, and scam businesses and individuals out of millions of dollars. The Semantic Forensics program (SemaFor) is a DARPA-funded initiative to create comprehensive forensic technologies to help mitigate online threats perpetrated via synthetic and manipulated media. Over the last eight years, Kitware has helped DARPA create a powerful set of tools to analyze whether media has been artificially generated or manipulated. Kitware and DARPA are now bringing those tools out of the lab to defend digital authenticity in the real world.

Kitware has a history of building various image and video forensics algorithms to defend against disinformation by detecting various types of manipulations, beginning with DARPA’s Media Forensics (MediFor) program. Building on this foundation, our team expanded its focus to include multimodal analysis of text, audio, and video under the SemaFor program. For additional information about Kitware’s contributions to SemaFor, check out the “Voices from DARPA” podcast episode, “Demystifying Deepfakes,” where Arslan Basharat, assistant director of computer vision at Kitware, is a guest speaker.


Black athletic director of Baltimore high school arrested for creating AI deepfake of white principal to stage race hoax

A Maryland school athletic director was arrested at BWI-Marshall Airport after it was discovered that he had allegedly spread an AI-generated impersonation of the Pikesville High School principal that framed the principal as racist. Dhazon Darien, 31, faces numerous charges over the deepfake, including stalking, theft, disruption of school operations, and retaliation against a witness.

The police investigation began in January when a voice recording alleged to be of school principal Eric Eiswert began making the rounds. Eiswert was temporarily removed from his position as head of the school, and after the recording spread on social media the school received many phone calls, comments, and messages about it, according to The Baltimore Sun.


BBC Tries To Frame AI Parody Images as a “Disinformation” Scoop

The BBC used to produce some of the best comedy series – but now many of the “comedians” seem to have moved to the news division.

To make things more perplexing, they appear not to understand humor, or not to want to. Now they (pretend to?) mistake content produced by obviously parody accounts for the real thing, and (pretend to?) take it seriously.

So now, critically minded observers are not laughing with the BBC but at it, especially when the BBC comes out with a deceptive podcast episode based on parody content falsely framed as “serious news.”

The target: the highly likely US presidential candidate Donald Trump.

It’s one thing to not like Trump, but is it worth, so to speak, “burning your own house down?” In this case, what remains of the BBC’s once commanding credibility?

And yet the situation is actually no laughing matter in terms of free speech and a free press. Not content with turning out biased and clickbaity material these last years, BBC News would clearly like to establish itself as an arbiter of truth for other media to follow. Absurd, or what?


Meta’s AI Watermarking Plan is Flimsy, At Best

In the past few months, we’ve seen a deepfake robocall of Joe Biden encouraging New Hampshire voters to “save your vote for the November election” and a fake endorsement of Donald Trump from Taylor Swift. It’s clear that 2024 will mark the first “AI election” in United States history.

With many advocates calling for safeguards against AI’s potential harms to our democracy, Meta (the parent company of Facebook and Instagram) proudly announced last month that it will label AI-generated content that was created using the most popular generative AI tools. The company said it’s “building industry-leading tools that can identify invisible markers at scale—specifically, the ‘AI generated’ information in the C2PA and IPTC technical standards.”

Unfortunately, social media companies will not solve the problem of deepfakes on social media this year with this approach. Indeed, this new effort will do very little to tackle the problem of AI-generated material polluting the election environment.

The most obvious weakness is that Meta’s system will only work if the bad actors creating deepfakes use tools that already put watermarks—that is, hidden or visible information about the origin of digital content—into their images. Unsecured “open-source” generative AI tools mostly don’t produce watermarks at all. (We use the term unsecured and put “open-source” in quotes to denote that many such tools don’t meet traditional definitions of open-source software, but still pose a threat because their underlying code or model weights have been made publicly available.) If new versions of these unsecured tools are released that do contain watermarks, the old tools will still be available and able to produce watermark-free content, including personalized and highly persuasive disinformation and nonconsensual deepfake pornography.

We are also concerned that bad actors can easily circumvent Meta’s labeling regimen even if they are using the AI tools that Meta says will be covered, which include products from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. Given that it takes about two seconds to remove a watermark from an image produced using the current C2PA watermarking standard that these companies have implemented, Meta’s promise to label AI-generated images falls flat.
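The fragility described above is easy to make concrete: C2PA “Content Credentials” travel as ordinary metadata inside the image file (in JPEGs, as JUMBF data carried in APP11 segments), so any rewrite of the file that simply doesn’t copy those segments silently discards the provenance label. Below is a minimal sketch under that assumption; the helper function and the toy byte string are hypothetical illustrations, not part of any real C2PA tool.

```python
# Sketch: C2PA provenance data rides along as JPEG metadata in APP11
# (0xFFEB) segments. Rewriting the file without copying those segments
# removes the "watermark". Helper and toy bytes are illustrative only.

def strip_app11(jpeg: bytes) -> bytes:
    """Return a copy of the JPEG with every APP11 segment removed."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(jpeg[:2])
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:              # start of scan: image data follows,
            out += jpeg[i:]             # copy the remainder verbatim
            return bytes(out)
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xEB:              # keep every segment except APP11
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    out += jpeg[i:]
    return bytes(out)

# Toy JPEG: SOI, one APP0 segment, one APP11 segment standing in for a
# C2PA manifest, then the scan header and compressed data.
toy = (b"\xff\xd8"
       + b"\xff\xe0\x00\x04AB"
       + b"\xff\xeb\x00\x06c2pa"
       + b"\xff\xda\x00\x04CD" + b"...\xff\xd9")
cleaned = strip_app11(toy)
```

In practice it is even simpler than this: screenshotting the image, or re-encoding it with any tool that does not preserve metadata, has the same effect, which is why metadata-based watermarks are so easy to defeat.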
