Denmark Is Fighting AI by Giving Citizens Copyright to Their Own Faces

Your image, your voice, and your essence as a human being could be gobbled up and regurgitated by AI. The clock is ticking: soon, control over your image and representation could be completely out of your hands.

To tip the scales back in favor of those who wish to remain in firm control of their image, Denmark has put forth a proposal that would give every one of its citizens legal grounds to go after someone who uses their image without their consent.

This specifically covers deepfakes, those videos of a person’s face or body that have been digitally altered so they appear to be someone else.

The Scandinavian nation has put forth a proposal to amend its copyright laws so that everyone owns the rights to their own face, their own voice, and their body. Current laws aren’t quite up to snuff when it comes to protecting people from having their likenesses twisted and contorted.


Denmark Plans Sweeping Ban on Online Deepfakes to Combat “Misinformation”

Denmark is preparing legislation that would outlaw the sharing of deepfake content online, a move that could open the door to unprecedented restrictions on digital expression.

Deepfakes, which can involve photos, videos, or audio recordings manipulated by artificial intelligence, are designed to convincingly fabricate actions or statements that never occurred.

While governments cite misinformation concerns, broad bans risk stifling creativity, political commentary, and legitimate speech.

The Danish Ministry of Culture announced Thursday that lawmakers from many parties are backing the effort to clamp down on the distribution of AI-generated imitations of people’s appearances or voices.

The forthcoming proposal, according to officials, aims to block the spread of deepfakes by making it illegal to share such material. Culture Minister Jakob Engel-Schmidt argued that “it was high time that we now create a safeguard against the spread of misinformation and at the same time send a clear signal to the tech giants.”

But these assurances do little to address the chilling effect such measures could have on free expression.

Authorities describe the planned rules as among the most comprehensive attempts yet to confront deepfakes and their potential to mislead the public.

The United States last year introduced legislation criminalizing the non-consensual sharing of intimate deepfakes, while South Korea has imposed tougher punishments for similar offenses and tightened regulations on social media platforms.


Florida Police: Christian School Teacher May Have Used Student Images to Create AI Child Porn

A sixth-grade teacher in Central Florida was arrested this week on a host of charges for possessing child pornography, apparently created with online AI technology and possibly using student photos from his Christian school.

State Attorney General James Uthmeier’s office charged David McKeown of Holly Hill with 19 enhanced felony counts of possession of child sexual abuse material and six counts of possession of animal pornography, according to a statement released by the office.

McKeown was arrested Friday by the Holly Hill Police Department at his home in Volusia County. He was a sixth-grade teacher at United Brethren in Christ (UBIC) Academy, a school affiliated with the UBIC church.

Holly Hill Police Department’s investigation alleges that McKeown shared and downloaded child sexual abuse images via Discord, an online chat service, while at school and connected to the school’s Wi-Fi network.

Some 30 images were allegedly shared, including six files depicting McKeown sexually abusing animals, the Florida Department of Law Enforcement (FDLE) reported.

Uthmeier said in the statement:

As a teacher, parents trusted Mr. McKeown to impart knowledge to their children. Instead, he spent parts of the school day sending and receiving child sex abuse material and providing other pedophiles with UBIC Academy students’ personal information. What he did is beyond betrayal — it’s devastating and sick.

The investigation was launched early this month after authorities received a tip from the National Center for Missing and Exploited Children, which monitors the internet for exploitative content involving minors, Orlando’s Fox 35 reported.

The news outlet also reported authorities believe McKeown used AI technology to create the pornographic images and may have used photos of real children, perhaps his own students. The investigation is continuing.

Detectives seized a number of devices from the teacher’s home in Holly Hill and from the school. He was booked into the Volusia County jail and a judge denied him the possibility of bond.

If convicted, he faces up to 315 years in prison, officials said.


The AI Slop Fight Between Iran and Israel

As Israel and Iran trade blows in a quickly escalating conflict that risks engulfing the rest of the region and drawing the U.S. into a more direct confrontation with Iran, social media is being flooded with AI-generated media that claims to show the devastation but is fake.

The fake videos and images show how generative AI has already become a staple of modern conflict. On one end, AI-generated content of unknown origin is filling the void created by state-sanctioned media blackouts with misinformation, and on the other end, the leaders of these countries are sharing AI-generated slop to spread the oldest forms of xenophobia and propaganda.

If you want to follow a war as it’s happening, it’s easier than ever. Telegram channels post live streams of bombing raids as they happen and much of the footage trickles up to X, TikTok, and other social media platforms. There’s more footage of conflict than there’s ever been, but a lot of it is fake.

A few days ago, Iranian news outlets reported that Iran’s military had shot down three F-35s. Israel denied it happened. As the claim spread, so did supposed images of the downed jet. In one, a massive version of the jet smolders on the ground next to a town. The cockpit dwarfs the nearby buildings and tiny people mill around the downed jet like Lilliputians surrounding Gulliver.

It’s a fake, an obvious one, but thousands of people shared it online. Another image of the supposedly downed jet showed it crashed in a field somewhere in the middle of the night. Its wings were gone and its afterburner still glowed hot. This was also a fake.


Teen Dies by Suicide After Being Targeted in AI-Generated ‘Sextortion’ Scheme

A 16-year-old Kentucky boy reportedly committed suicide shortly after he was blackmailed with AI-generated nude images, an increasingly common scheme known as “sextortion.”

Elijah Heacock of Glasgow, Kentucky, received a text including an AI-generated nude photo depicting himself and a demand that he pay $3,000 to prevent the image from being sent to family and friends, according to a report by KFDA.

On February 28, shortly after receiving the message, the teen died from a self-inflicted gunshot wound.

Elijah’s parents, John Burnett and Shannon Heacock, told CBS that they didn’t have a solid understanding of the circumstances that led to their son’s death until they found the messages on his phone.

Heacock said she now believes her son was a victim of a sextortion scheme.

“Sextortion is a form of child sexual exploitation where children are threatened or blackmailed, most often with the possibility of sharing with the public nude or sexual images of them, by a person who demands additional sexual content, sexual activity or money from the child,” the National Center for Missing and Exploited Children (NCMEC) explains.

“This crime may happen when a child has shared an image with someone they thought they knew or trusted, but in many cases they are targeted by an individual they met online who obtained a sexual image from the child through deceit, coercion, or some other method,” the NCMEC continued.

“In many cases, the blackmailers may have stolen or taken images of another person and they are communicating through a fake account,” the organization added.

Elijah’s parents said they had never heard of sextortion until law enforcement began investigating their son’s death.

“The people that are after our children are well organized,” Burnett said. “They are well financed, and they are relentless. They don’t need the photos to be real, they can generate whatever they want, and then they use it to blackmail the child.”

NCMEC says sextortion schemes have skyrocketed; the organization has received more than 500,000 reports of sextortion against minors in the last year alone.

Since 2021, at least 20 young people have committed suicide as a result of becoming victims of sextortion scams, according to the FBI.


Welcome to the age of paranoia as deepfakes and scams abound

These days, when Nicole Yelland receives a meeting request from someone she doesn’t already know, she conducts a multistep background check before deciding whether to accept. Yelland, who works in public relations for a Detroit-based nonprofit, says she’ll run the person’s information through Spokeo, a personal data aggregator that she pays a monthly subscription fee to use. If the contact claims to speak Spanish, Yelland says, she will casually test their ability to understand and translate trickier phrases. If something doesn’t quite seem right, she’ll ask the person to join a Microsoft Teams call—with their camera on.

If Yelland sounds paranoid, that’s because she is. In January, before she started her current nonprofit role, Yelland says, she got roped into an elaborate scam targeting job seekers. “Now, I do the whole verification rigamarole any time someone reaches out to me,” she tells WIRED.

Digital imposter scams aren’t new; messaging platforms, social media sites, and dating apps have long been rife with fakery. In a time when remote work and distributed teams have become commonplace, professional communications channels are no longer safe, either. The same artificial intelligence tools that tech companies promise will boost worker productivity are also making it easier for criminals and fraudsters to construct fake personas in seconds.

On LinkedIn, it can be hard to distinguish a slightly touched-up headshot of a real person from a too-polished, AI-generated facsimile. Deepfake videos are getting so good that longtime email scammers are pivoting to impersonating people on live video calls. According to the US Federal Trade Commission, reports of job- and employment-related scams nearly tripled from 2020 to 2024, and actual losses from those scams increased from $90 million to $500 million.


House passes “Take it Down Act,” sending revenge porn bill backed by Melania Trump to president’s desk

The House passed a bipartisan bill Monday that makes it a federal crime to post real and fake sexually explicit imagery online of a person without their consent, sending the legislation that was backed by first lady Melania Trump to the president’s desk. 

The bill, known as the “Take It Down Act,” cleared the lower chamber in a 409-2 vote. The two “no” votes came from Republicans. The Senate unanimously passed the measure in February. 

The legislation requires social media companies and other websites to remove images and videos, including deepfakes generated by artificial intelligence, within 48 hours after a victim’s request. 

“If you’re a victim of revenge porn or AI-generated explicit imagery, your life changes forever,” Sen. Ted Cruz, a Texas Republican, said at a March 3 roundtable promoting the bill. 

Cruz, who introduced the bill, recalled the experience of a teenage victim, Elliston Berry, whose classmate used an app to create explicit images of her and then sent them to her classmates. Berry’s mother had tried unsuccessfully to get Snapchat to remove the images for months before she contacted Cruz’s office for help. 

“It should not take a sitting senator or sitting member of Congress picking up the phone to get a picture down or video down,” Cruz said. 

The first lady, who rarely appears in public, attended the March discussion at the U.S. Capitol to advocate for the bill’s passage in the House. 

“It’s heartbreaking to witness young teens, especially girls, grappling with the overwhelming challenges posed by malicious online content like deep fakes,” she said. “This toxic environment can be severely damaging.” 

The first lady applauded Congress after its passage and said the bipartisan vote made a “powerful statement that we stand united in protecting the dignity, privacy, and safety of our children.” 

“I am thankful to the Members of Congress — both in the House and Senate — who voted to protect the well-being of our youth,” she said in a statement. 

According to the FBI, an alarming number of extortion cases in recent years have ended in the victim’s suicide. Lawmakers said they hope the bill will save lives by providing recourse for victims.


Congress Takes Another Step Toward Enabling Broad Internet Censorship

The House Energy and Commerce Committee on Tuesday advanced the TAKE IT DOWN Act (S. 146), a bill that seeks to speed up the removal of certain kinds of troubling online content. While the bill is meant to address a serious problem—the distribution of non-consensual intimate imagery (NCII)—the notice-and-takedown system it creates is an open invitation for powerful people to pressure websites into removing content they dislike.

As we’ve written before, while protecting victims of these heinous privacy invasions is a legitimate goal, good intentions alone are not enough to make good policy. 

This bill mandates a notice-and-takedown system that threatens free expression, user privacy, and due process, without meaningfully addressing the problem it claims to solve. The “takedown” provision applies to a much broader category of content—potentially any images involving intimate or sexual content at all—than the narrower NCII definitions found elsewhere in the bill. The bill contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. 

The legislation’s 48-hour takedown deadline means that online service providers, particularly smaller ones, will have to comply quickly to avoid legal risks. That time crunch will make it impossible for services to verify the content is in fact NCII. Instead, services will rely on automated filters—infamously blunt tools that frequently flag legal content, from fair-use commentary to news reporting.


Meta’s new “Movie Gen” AI system can deepfake video from a single photo

On Friday, Meta announced a preview of Movie Gen, a new suite of AI models designed to create and manipulate video, audio, and images, including creating a realistic video from a single photo of a person. The company claims the models outperform other video-synthesis models when evaluated by humans, pushing us closer to a future where anyone can synthesize a full video of any subject on demand.

The company has not yet said when or how it will release these capabilities to the public, but Meta says Movie Gen is a tool that may allow people to “enhance their inherent creativity” rather than replace human artists and animators. The company envisions future applications such as easily creating and editing “day in the life” videos for social media platforms or generating personalized animated birthday greetings.

Movie Gen builds on Meta’s previous work in video synthesis, following 2022’s Make-A-Scene video generator and the Emu image-synthesis model. Using text prompts for guidance, this latest system can generate custom videos with sounds for the first time, edit and insert changes into existing videos, and transform images of people into realistic personalized videos.


Minnesota ‘Acting as a Ministry of Truth’ With Anti-Deep Fake Law, Says Lawsuit

A new lawsuit takes aim at a Minnesota law banning the “use of deep fake technology to influence an election.” The measure—enacted in 2023 and amended this year—makes it a crime to share AI-generated content if a person “knows or acts with reckless disregard about whether the item being disseminated is a deep fake” and the sharing meets three conditions: it is done without the depicted individual’s consent; it is intended to “injure a candidate or influence the result of an election”; and it occurs either within 90 days before a political party nominating convention or after the start of the absentee voting period prior to a presidential nomination primary, any state or local primary, or a general election.

Christopher Kohls, a content creator who goes by Mr. Reagan, and Minnesota state Rep. Mary Franson (R–District 12B) argue that the law is an “impermissible and unreasonable restriction of protected speech.”

Violating Minnesota’s deep fake law is punishable by up to 90 days imprisonment and/or a fine of up to $1,000, with penalties increasing if the offender has a prior conviction within the past five years for the same thing or the deep fake is determined to have been shared with an “intent to cause violence or bodily harm.” The law also allows for the Minnesota attorney general, county or city attorneys, individuals depicted in the deep fake, or any candidate “who is injured or likely to be injured by dissemination” to sue for injunctive relief “against any person who is reasonably believed to be about to violate or who is in the course of violating” the law.

If a candidate for office is found guilty of violating this law, they must forfeit the nomination or office and are henceforth disqualified “from being appointed to that office or any other office for which the legislature may establish qualifications.”

There are obviously a host of constitutional problems with this measure, which defines “deep fake” very broadly: “any video recording, motion-picture film, sound recording, electronic image, or photograph, or any technological representation of speech or conduct substantially derivative thereof” that is realistic enough for a reasonable person to believe it depicts speech or conduct that did not occur and developed through “technical means” rather than “the ability of another individual to physically or verbally impersonate such individual.”
