Congress pushes bill to let Americans SUE if fake porn images of them are published after Taylor Swift deep fake scandal

A group of lawmakers is stepping in to try to take down Taylor Swift ‘deepfake’ perpetrators with a bill that would allow Americans to sue if fake porn images of them are published. 

Popstar Taylor Swift became the latest target of nonconsensual deepfakes after AI-generated sexually explicit images of her flooded the internet this week. 

The dozens of graphic images showed Swift in a series of sexual acts while dressed in Kansas City Chiefs memorabilia, after she became a regular at football games in support of her boyfriend Travis Kelce. 

Swift is now considering legal action against the deepfake porn website that posted the images amid calls from fans and even the White House for legislative action to combat the growing issue. 

Lawmakers decided to step in to combat the rise of nonconsensual deepfakes with a new bill that allows victims to take action against fake porn made in their likeness.

The DEFIANCE Act of 2024 was introduced by Senate Judiciary Committee Chairman Dick Durbin, D-Ill., Ranking Member Lindsey Graham, R-S.C., Senator Josh Hawley, R-Mo., and Senator Amy Klobuchar, D-Minn.

Keep reading

The US Has Failed to Pass AI Regulation. New York City Is Stepping Up

As the US federal government struggles to meaningfully regulate AI—or even function—New York City is stepping into the governance gap.

The city introduced an AI Action Plan this week that Mayor Eric Adams calls a first of its kind in the nation. The set of roughly 40 policy initiatives is designed to protect residents against harms like bias or discrimination from AI. It includes the development of standards for AI purchased by city agencies and new mechanisms to gauge the risk of AI used by city departments.

New York’s AI regulation could soon expand still further. City council member Jennifer Gutiérrez, chair of the body’s technology committee, today introduced legislation that would create an Office of Algorithmic Data Integrity to oversee AI in New York.

If established, the office would provide a place for citizens to take complaints about automated decision-making systems used by public agencies, functioning like an ombudsman for algorithms in the five boroughs. It would also assess AI systems for bias and discrimination before the city deploys them.

Earlier this year, several US senators suggested creating a new federal agency to regulate AI, but Gutiérrez says she’s learned that there’s no point in waiting for action in Washington, DC. “We have a unique responsibility because a lot of innovation lives here,” she says. “It’s really important for us to take the lead.”

Keep reading

Here Come the Cyborgs: Mating AI with Human Brain Cells

If you read and believe headlines, it seems scientists are very close to being able to merge human brains with AI. In mid-December 2023, a Nature Electronics article triggered a flurry of excitement about progress on that transhuman front:

“‘Biocomputer’ combines lab-grown brain tissue with electronic hardware”

“A system that integrates brain cells into a hybrid machine can recognize voices”

“Brainoware: Pioneering AI and Brain Organoid Fusion”

Scientists are trying to inject human brain tissue into artificial networks because AI isn’t working quite as well as we have been led to think. AI uses a horrendous amount of energy to do its kind of parallel processing, while the human brain uses about a light bulb’s worth of power to perform similar feats. So AI designers are looking to cannibalize some parts from humans to make artificial networks work as efficiently as human brains. But let’s put the fact of AI’s shortcomings aside for the moment and examine this new cyborg innovation.

The breakthrough in biocomputing reported by Hongwei Cai et al. in Nature Electronics involves the creation of a brain organoid: a ball of artificially cultured stem cells that have been coaxed into developing into neurons.

The cells are not taken from someone’s brain—which relieves us of certain ethical concerns. But because this lump of neurons does not have any blood vessels, as normal brain tissue does, the organoid cannot survive for long. And so ultimately, the prospect of training organoids on datasets does not seem practical, economically speaking, at present.

But that is not going to stop this research. The drive to seamlessly integrate biology and technology is strong. But can it be done? And why do so many research scientists and funding agencies assume it’s possible?

Keep reading

Google Update Reveals AI Will Read All Your Private Messages

There’s understandable excitement that Google is bringing Bard to Messages. A readymade ChatGPT-like UI for a readymade user base of hundreds of millions. “It’s an AI assistant,” says Bard, “that can improve your messaging experience… from facilitating communication to enhancing creativity and providing information… it will be your personal AI assistant within your messaging app.”

But Bard will also analyze the private content of messages “to understand the context of your conversations, your tone, and your interests.” It will analyze the sentiment of your messages, “to tailor its responses to your mood and vibe.” And it will “analyze your message history with different contacts to understand your relationship dynamics… to personalize responses based on who you’re talking to.”

And so here comes the next privacy battlefield for smartphone owners still coming to terms with app permissions, privacy labels and tracking transparency, and with all those voice AI assistant eavesdropping scandals still fresh in the memory. Google’s challenge will be convincing users that this doesn’t open the door to the same kind of privacy nightmares we’ve seen before, where user content and AI platforms meet.

There will be another, less contentious privacy issue with your Messages requests to Bard. These will be sent to the cloud for processing, used for training and maybe seen by humans—albeit anonymized. This data will be stored for 18 months, and will persist for a few days even if you disable the AI, although manual deletion is available.

Keep reading

Elon Musk’s X Blocks Searches for ‘Taylor Swift’ Amid Spread of Explicit AI-Generated Images

X was blocking searches for “Taylor Swift” over the weekend following the spread of AI-generated images depicting the pop star in sexually explicit poses.

Searches for “Taylor Swift” and “Taylor Swift AI” on X returned error messages on Saturday and Sunday, though Elon Musk’s platform allowed variations on the search terms, including “Taylor Swift photos AI.”

X confirmed it is deliberately blocking the search phrases for the time being.

“This is a temporary action and done with an abundance of caution as we prioritize safety on this issue,” X’s head of business operations Joe Benarroch said in a statement sent to multiple media outlets.

The Joe Biden administration and the mainstream news media shifted into high gear after the fake Taylor Swift images went viral, seeking to protect the left-wing pop star.

“We are alarmed by the reports of the circulation of the false images,” White House press secretary Karine Jean-Pierre told reporters on Friday, saying social media companies need to do a better job enforcing their own rules.

Keep reading

AI went rogue and couldn’t be brought back in ‘legitimately scary’ study

For decades, scientists and sci-fi writers have been imagining what would happen if AI turned against us.

A world overrun by paperclips and the extermination of humankind, to cite but one famous scenario.

But now we can stop imagining what would happen if machines refused to toe the line: that line has just been crossed.

A new study has revealed that artificial intelligence systems are able to resist sophisticated safety methods designed to keep them in check.

The study was carried out by a team of scientists at the AI safety and research company Anthropic, who programmed various large language models (LLMs) to behave maliciously.

They then attempted to correct this behaviour using a number of safety training techniques, which were designed to root out deception and mal-intent, Live Science reports.

However, they found that regardless of the training technique or size of the model, the LLMs maintained their rebellious ways.

Indeed, one technique even backfired, teaching the AI to conceal its rogue actions during training, the team wrote in their paper, published on the preprint server arXiv.

“Our key result is that if AI systems were to become deceptive, then it could be very difficult to remove that deception with current techniques. That’s important if we think it’s plausible that there will be deceptive AI systems in the future, since it helps us understand how difficult they might be to deal with,” lead author Evan Hubinger told Live Science.

Keep reading

AI will increase the number and impact of cyber attacks, intel officers say

The assessment, from the UK’s Government Communications Headquarters, predicted ransomware will be the biggest threat to get a boost from AI over the next two years. AI will lower barriers to entry, a change that will bring a surge of new entrants into the criminal enterprise. More experienced threat actors—such as nation-states, the commercial firms that serve them, and financially motivated crime groups—will likely also benefit, as AI allows them to identify vulnerabilities and bypass security defenses more efficiently.

“The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term,” Lindy Cameron, CEO of the GCHQ’s National Cyber Security Centre, said. Cameron and other UK intelligence officials said that their country must ramp up defenses to counter the growing threat.

The assessment, which was published Wednesday, focused on the effect AI is likely to have in the next two years. The chances of AI increasing the volume and impact of cyber attacks in that timeframe were described as “almost certain,” the GCHQ’s highest confidence rating. Other, more-specific predictions listed as almost certain were:

  • AI improving capabilities in reconnaissance and social engineering, making them more effective and harder to detect
  • More impactful attacks against the UK as threat actors use AI to analyze exfiltrated data faster and more effectively, and use it to train AI models
  • Beyond the two-year threshold, the commoditization of AI further improving the capabilities of financially motivated and state actors
  • Ransomware criminals and other threat actors who are already using AI continuing to do so in 2025 and beyond

The area of biggest impact from AI, Wednesday’s assessment said, would be in social engineering, particularly for less-skilled actors.

“Generative AI (GenAI) can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing,” intelligence officials wrote. “This will highly likely increase over the next two years as models evolve and uptake increases.”

The assessment added: “To 2025, GenAI and large language models (LLMs) will make it difficult for everyone, regardless of their level of cyber security understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts.”

Keep reading

Fears Pentagon was ‘building killer robots in the basement’ sparked stricter AI rules, DoD official claims

Fears the Pentagon has been ‘building killer robots in the basement’ may have led to stricter AI rules mandating that all systems be approved before deployment.

The Department of Defense (DoD) recently updated its AI rules amid ‘a lot of confusion about’ how it plans to use self-decision-making machines on the battlefield, according to the deputy assistant defense secretary.

Michael Horowitz explained at an event this month that the ‘directive does not prohibit the development of any systems,’ but will ‘make clear what is and isn’t allowed’ and uphold a ‘commitment to responsible behavior,’ as it develops lethal autonomous systems.

While the Pentagon believes the changes should put the public’s mind at ease, some have said they are not ‘convinced’ by the efforts.

News of the update to the Pentagon’s 2012 ‘Autonomy in Weapon Systems’ directive has sparked a debate online, with many people saying, ‘If the Pentagon says they’re not doing it, they’re doing it.’

DailyMail.com has reached out to the DoD for comment. 

The DoD has been aggressively pushing to modernize its arsenal with autonomous drones, tanks, and other weapons that select and attack a target without human intervention.

Mark Brakel, director of the advocacy organization Future of Life Institute (FLI), told DailyMail.com: ‘These weapons carry a massive risk of unintended escalation.’

He explained that AI-powered weapons could misinterpret something, like a ray of sunlight, and perceive it as a threat, thus attacking foreign powers without cause.

Brakel said the result could be devastating because ‘without meaningful human control, AI-powered weapons are like the Norwegian rocket incident [a near nuclear armageddon] on steroids and they could increase the risk of accidents in hotspots such as the Taiwan Strait.’

Keep reading

Congress Is Trying to Stop AI Nudes and Deepfake Scams Because Celebrities Are Mad

If you’ve been on TikTok lately, you may have noticed weird videos of celebrities promoting extremely shady products, such as a robotic-sounding Taylor Swift promising viewers a free cookware set. All of these videos are scams created with generative AI—the latest example of how the technology is being used to create disturbing virtual clones of people without their consent.

Needless to say, this kind of thing has pissed off a lot of famous people. And now, Congress is proposing new legislation that aims to combat AI deepfakes—specifically when it comes to things like fake celebrity endorsements and non-consensual AI-generated nudes, which have become a problem online and in high schools. Despite the surging popularity of websites and apps designed to generate deepfakes, there’s no comprehensive law on the books banning the creation of such AI images. 

The new bill, called the No AI FRAUD Act and introduced by Rep. María Elvira Salazar (R-FL) and Rep. Madeleine Dean (D-PA), would establish legal definitions for “likeness and voice rights,” effectively banning the use of AI deepfakes to nonconsensually mimic another person, living or dead. The draft bill proclaims that “every individual has a property right in their own likeness and voice,” and cites several recent incidents where people have been turned into weird AI robots. It specifically mentions recent viral videos that featured AI-generated songs mimicking the voices of pop artists like Justin Bieber, Bad Bunny, Drake, and The Weeknd.

Keep reading

AlphaFold found thousands of possible psychedelics. Will its predictions help drug discovery?

Researchers have used the protein-structure-prediction tool AlphaFold to identify hundreds of thousands of potential new psychedelic molecules — which could help to develop new kinds of antidepressant. The research shows, for the first time, that AlphaFold predictions — available at the touch of a button — can be just as useful for drug discovery as experimentally derived protein structures, which can take months, or even years, to determine.

The development is a boost for AlphaFold, the artificial-intelligence (AI) tool developed by DeepMind in London that has been a game changer in biology. The public AlphaFold database holds structure predictions for nearly every known protein. Protein structures of molecules implicated in disease are used in the pharmaceutical industry to identify and improve promising medicines. But some scientists had been starting to doubt whether AlphaFold’s predictions could stand in for gold standard experimental models in the hunt for new drugs.

“AlphaFold is an absolute revolution. If we have a good structure, we should be able to use it for drug design,” says Jens Carlsson, a computational chemist at the University of Uppsala in Sweden.

Keep reading