Anti-Deepfake Porn Bill Unanimously Passes the Senate

The anti-deepfake porn bill, formally known as the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, passed the U.S. Senate unanimously on Thursday with 100 votes.

Introduced by Sen. Dick Durbin (D-IL) and Sen. Lindsey Graham (R-SC), and sponsored in the U.S. House by Rep. Alexandria Ocasio-Cortez (D-NY), the bill would essentially allow victims of deepfake porn to “sue anyone who knowingly creates, receives, or distributes non-consensual, sexually explicit content made with artificial intelligence,” per Gizmodo. Victims would have a 10-year statute of limitations.

“The bill is one of the first to address the damages caused by AI, which is currently a self-regulated industry,” the outlet noted. “It doesn’t allow for criminal prosecution, but hopefully, it’s a first step towards more federal oversight.”

Ocasio-Cortez said she pushed the bill after falling victim to deepfake porn online, which has become a significant problem in recent years.

Keep reading

Trump Blasts Media, Biden Camp As “Shameless Liars” For Claiming Videos Are Fake

In a speech in Wisconsin Tuesday, Donald Trump called out the Biden campaign and its media mouthpieces as “shameless liars” for suggesting that footage of Biden wandering around looking lost at the G7 summit and having to be led off a stage by Obama at a fundraiser was somehow faked.

“They have a candidate that has no clue, doesn’t know where he is, and all they can do is lie. Joe Biden is humiliating our country on the world stage,” Trump told the crowd, adding, “they say the videos of crooked Joe shuffling around are cheap fakes,” referring to the recent footage.

Speaking further about Biden’s odd behavior, Trump asserted, “At the G7 in Europe, he had to be rescued by other world leaders, they had to pull him back, then he bizarrely put his face on the Pope’s forehead. What was that all about? That was weird.”

“Did you see the Pope? The Pope is like, ‘What’s happening?’ The Pope didn’t really know what was happening. He said this is strange,” Trump hilariously continued.

Keep reading

Biden wants U.S. government to scan all images on your phone to comply with new AI rules

To supposedly stop people from exchanging non-consensual artificial intelligence (AI) images of a sexual nature, President Biden wants to probe everyone’s smartphones as part of a sweeping surveillance effort.

A press release from the White House explains the Biden regime’s desire for the tech and financial industries to take charge in stopping the creation and spread of abusive sexual imagery created with AI.

According to Biden’s handlers, “mobile operating system developers could enable technical protections to better protect content stored on digital devices and to prevent image sharing without consent.”

The plan is to have mobile operating systems such as Android and iOS automatically scan and analyze people’s private photos to determine which ones are sexual or non-consensual. Users would not have the ability to keep any of their images private from government spooks.

It might sound like a good thing until you recognize the privacy implications of such an arrangement. Do we the people really want to allow the government direct access to our photos?

Beyond the search and analysis framework, the Biden regime also wants mobile app stores like Apple’s App Store and Google Play to “commit to instituting requirements for app developers to prevent the creation of non-consensual images.”


Keep reading

Biden’s Bold Move to Combat AI Abuse Stirs Surveillance and Censorship Fears

The Biden administration is pushing for sweeping measures to combat the proliferation of nonconsensual sexual AI-generated images, including controversial proposals that could lead to extensive on-device surveillance and control of the types of images generated. In a White House press release, President Joe Biden’s administration outlined demands for the tech industry and financial institutions to curb the creation and distribution of abusive sexual images made with artificial intelligence (AI).

A key focus of these measures is the use of on-device technology to prevent the sharing of nonconsensual sexual images. The administration stated that “mobile operating system developers could enable technical protections to better protect content stored on digital devices and to prevent image sharing without consent.”

This proposal implies that mobile operating systems would need to scan and analyze images directly on users’ devices to determine if they are sexual or non-consensual. The implications of such surveillance raise significant privacy concerns, as it involves monitoring and analyzing private content stored on personal devices.
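To make the privacy stakes concrete, here is a minimal sketch of what such on-device scanning could look like: a classifier sweeps a local photo library and flags anything it scores above a threshold. Everything here is an assumption for illustration – the model name, the threshold, and the flagging policy – and none of it reflects anything Apple, Google, or the White House has actually specified.

```python
# Minimal sketch of on-device photo scanning. The model name, threshold, and
# flagging policy are illustrative assumptions, not any vendor's actual design.
from pathlib import Path

from transformers import pipeline  # pip install transformers torch pillow

# A publicly available image classifier used purely as a stand-in for whatever
# proprietary model an OS vendor might ship on-device.
classifier = pipeline("image-classification", model="Falconsai/nsfw_image_detection")

def scan_photo_library(photo_dir: str, threshold: float = 0.9) -> list[str]:
    """Return paths of images the classifier flags above the given threshold."""
    flagged = []
    for path in Path(photo_dir).glob("*.jpg"):
        results = classifier(str(path))  # list of {"label": ..., "score": ...}
        top = max(results, key=lambda r: r["score"])
        if top["label"].lower() == "nsfw" and top["score"] >= threshold:
            flagged.append(str(path))
    return flagged

if __name__ == "__main__":
    print(scan_photo_library("/path/to/photos"))  # placeholder directory
```

Trivial as the loop looks, it generalizes to every photo on a device, which is exactly why the proposal is a privacy question rather than a purely technical one.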

Additionally, the administration is calling on mobile app stores to “commit to instituting requirements for app developers to prevent the creation of non-consensual images.” This broad mandate would require a wide range of apps, including image editing and drawing apps, to monitor user activity on the device, analyze what users are creating, and block certain kinds of content. Once on-device monitoring becomes normalized, this level of scrutiny could extend beyond its initial intent, potentially leading to censorship of other types of content that the administration finds objectionable.

The administration’s call to action extends to various sectors, including AI developers, payment processors, financial institutions, cloud computing providers, search engines, and mobile app store gatekeepers like Apple and Google. By encouraging cooperation from these entities, the White House hopes to curb the creation, spread, and monetization of nonconsensual AI images.

Keep reading

Open Source Tools for Fighting Disinformation

Deepfakes and disinformation have the ability to move financial markets, influence public opinion, and scam businesses and individuals out of millions of dollars. The Semantic Forensics program (SemaFor) is a DARPA-funded initiative to create comprehensive forensic technologies to help mitigate online threats perpetrated via synthetic and manipulated media. Over the last eight years, Kitware has helped DARPA create a powerful set of tools to analyze whether media has been artificially generated or manipulated. Kitware and DARPA are now bringing those tools out of the lab to defend digital authenticity in the real world.

Kitware has a history of building image and video forensics algorithms to defend against disinformation by detecting various types of manipulation, beginning with DARPA’s Media Forensics (MediFor) program. Building on this foundation, our team expanded its focus to include multimodal analysis of text, audio, and video under the SemaFor program. For additional information about Kitware’s contributions to SemaFor, check out the “Voices from DARPA” podcast episode, “Demystifying Deepfakes,” where Arslan Basharat, assistant director of computer vision at Kitware, is a guest speaker.
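For a flavor of the simpler end of the forensics toolbox, here is a minimal sketch of error level analysis (ELA), a classic heuristic that highlights regions of a JPEG which re-compress differently from their surroundings and can hint at splicing or local edits. To be clear, this is a generic illustration, not one of Kitware's SemaFor tools; the file names and quality setting are placeholders.

```python
# Minimal error level analysis (ELA) sketch – a classic, simple forensic
# heuristic shown purely for illustration; it is not one of the SemaFor tools.
import io

from PIL import Image, ImageChops, ImageEnhance  # pip install pillow

def error_level_analysis(path: str, quality: int = 90, scale: float = 15.0) -> Image.Image:
    """Re-save the image at a known JPEG quality and amplify the difference.

    Regions that were pasted in or locally edited often re-compress
    differently, so they tend to stand out as brighter areas.
    """
    original = Image.open(path).convert("RGB")

    # Round-trip the image through JPEG at a fixed quality.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf).convert("RGB")

    # Amplify the per-pixel difference so it is visible to the eye.
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(scale)

if __name__ == "__main__":
    error_level_analysis("suspect.jpg").save("suspect_ela.png")  # placeholder file names
```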

Keep reading

Black athletic director of Baltimore high school arrested for creating AI deepfake of white principal to stage race hoax

A Maryland school athletic director was arrested at BWI-Marshall Airport after investigators concluded he had allegedly spread an AI-generated impersonation of the Pikesville High School principal that framed the principal as racist. Dhazon Darien, 31, faces multiple charges over the deepfake, including stalking, theft, disruption of school operations, and retaliation against a witness.

The police investigation began in January, when a voice recording alleged to be of school principal Eric Eiswert began making the rounds. After the recording spread on social media, Eiswert was temporarily removed from his position as head of the school, and the school was inundated with phone calls, comments, and messages over the recording, according to The Baltimore Sun.

Keep reading

BBC Tries To Frame AI Parody Images as a “Disinformation” Scoop

The BBC used to produce some of the best comedy series – but now many of the “comedians” seem to have moved to the news division.

To make things more perplexing, they appear not to – or not to want to – understand humor. Now they fail (or pretend?) to recognize content produced by obvious parody accounts, and take it (or pretend to take it?) seriously.

So now, critically minded observers are not laughing with the BBC, but at them, especially when the BBC comes out with a deceptive podcast episode (based on parody content, falsely framed as “serious news”).

The target: the all-but-certain US presidential candidate, Donald Trump.

It’s one thing to not like Trump – but is it worth, so to speak, “burning your own house down?” In this case, what remains of BBC’s once commanding credibility?

And yet the situation is actually no laughing matter in terms of free speech and a free press. Not content with turning out biased and clickbaity content these last years, BBC News would clearly like to establish itself as an arbiter of truth for other media to follow. Absurd, or what?

Keep reading

Meta’s AI Watermarking Plan is Flimsy, At Best

In the past few months, we’ve seen a deepfake robocall of Joe Biden encouraging New Hampshire voters to “save your vote for the November election” and a fake endorsement of Donald Trump from Taylor Swift. It’s clear that 2024 will mark the first “AI election” in United States history.

With many advocates calling for safeguards against AI’s potential harms to our democracy, Meta (the parent company of Facebook and Instagram) proudly announced last month that it will label AI-generated content that was created using the most popular generative AI tools. The company said it’s “building industry-leading tools that can identify invisible markers at scale—specifically, the ‘AI generated’ information in the C2PA and IPTC technical standards.”

Unfortunately, social media companies will not solve the problem of deepfakes on social media this year with this approach. Indeed, this new effort will do very little to tackle the problem of AI-generated material polluting the election environment.

The most obvious weakness is that Meta’s system will only work if the bad actors creating deepfakes use tools that already put watermarks—that is, hidden or visible information about the origin of digital content—into their images. Unsecured “open-source” generative AI tools mostly don’t produce watermarks at all. (We use the term unsecured and put “open-source” in quotes to denote that many such tools don’t meet traditional definitions of open-source software, but still pose a threat because their underlying code or model weights have been made publicly available.) If new versions of these unsecured tools are released that do contain watermarks, the old tools will still be available and able to produce watermark-free content, including personalized and highly persuasive disinformation and nonconsensual deepfake pornography.

We are also concerned that bad actors can easily circumvent Meta’s labeling regimen even if they are using the AI tools that Meta says will be covered, which include products from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. Given that it takes about two seconds to remove a watermark from an image produced using the current C2PA watermarking standard that these companies have implemented, Meta’s promise to label AI-generated images falls flat.
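For a concrete sense of what Meta’s “invisible markers” amount to, here is a minimal sketch of the kind of provenance check a labeling pipeline might perform, using a crude byte-level scan as a stand-in for a real C2PA manifest parser; the marker strings and file name are illustrative assumptions, not Meta’s actual implementation. The weakness described above falls out immediately: the markers live in metadata, so images from unsecured generators never carry them, and a simple re-encode or screenshot silently discards them.

```python
# Minimal sketch of a metadata-based provenance check. A real system would
# parse and cryptographically verify C2PA manifests; this byte scan is only
# a stand-in to show where the "AI generated" markers live.
from pathlib import Path

# Illustrative marker strings (assumptions, not an exhaustive or official list):
#  - "c2pa" appears inside C2PA/JUMBF manifest boxes embedded in the file
#  - the IPTC DigitalSourceType URI labels algorithmically generated media
MARKERS = [
    b"c2pa",
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
]

def has_provenance_markers(path: str) -> bool:
    """Return True if any known provenance marker string appears in the file."""
    data = Path(path).read_bytes()
    return any(marker in data for marker in MARKERS)

if __name__ == "__main__":
    # A negative result proves nothing: unsecured generators add no markers,
    # and re-encoding or screenshotting an image silently strips them.
    print(has_provenance_markers("example.jpg"))  # hypothetical file name
```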

Keep reading

What The Rise Of AI Videos Means For The Future

OpenAI, a company backed by Microsoft, is one of the world’s leading artificial intelligence research organizations paving the way for AI advancement across multiple technology sectors.

Best known for the creation of ChatGPT, the company debuted previews of its new prompt-based AI video generation software, Sora, on February 15th. The text-to-video tool lets users enter a text prompt, from which the software generates a video using generative AI.

While Sora isn’t the first of its kind as far as generative AI video tools are concerned, industry insiders have noted its crispness and detail, as well as its ability to generate complex scenes and longer videos than previous models – at present up to sixty seconds in length – as a “significant leap” for the development of the technology.

On its website, OpenAI states: “We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction,” while providing several brief examples of videos generated with the software, without any further modification.

The videos span a variety of scenes, including photorealistic clips of puppies playing in the snow, historic footage of California during the gold rush, a robot living in a cyberpunk world, woolly mammoths in the snow, an animation of a cute fluffy creature playing with fairies in an enchanted forest, and various cityscapes along with other animations.

Keep reading

Audio Cloning Can Take Over a Phone Call in Real Time Without the Speakers Knowing

Generative AI could be listening to your phone calls and hijacking them with fake biometric audio for fraud or manipulation purposes, according to new research published by Security Intelligence. In the wake of a Hong Kong fraud case that saw an employee transfer US$25 million in funds to five bank accounts after a virtual meeting with what turned out to be audio-video deepfakes of senior management, the biometrics and digital identity world is on high alert, and the threats are growing more sophisticated by the day.

A blog post by Chenta Lee, chief architect of threat intelligence at IBM Security, breaks down how researchers from IBM X-Force successfully intercepted and covertly hijacked a live conversation by using an LLM to understand the conversation and manipulate it for malicious purposes – without the speakers knowing it was happening.

“Alarmingly,” writes Lee, “it was fairly easy to construct this highly intrusive capability, creating a significant concern about its use by an attacker driven by monetary incentives and limited to no lawful boundary.”

Keep reading