Canada To Revive Online Censorship Targeting “Harmful” Content, “Hate” Speech, and Deepfakes

A renewed censorship effort is taking shape in Canada as the federal government pushes ahead with a controversial bill targeting what it labels “harmful online content.”

Framed as a safeguard against exploitation and “hate,” the proposed legislation mirrors the widely criticized Bill C-36, which was abandoned after concerns about its vague language and expansive reach.

Bill C-63 would establish a powerful new Digital Safety Commission tasked with pressuring platforms to restrict user content.

If passed, the law would compel tech companies to remove flagged material, such as intimate images shared without consent or child sexual abuse content, within 24 hours.

It would also give both the poster and the complainant a chance to respond, but the final decision would ultimately fall to a state-backed regulator.

Heritage Minister Steven Guilbeault attempted to justify the new push during a House of Commons committee meeting, stating the bill aims to remove “clearly harmful content” and is “designed to comply with the Charter of Rights and Freedoms.”

He added, “Online safety is certainly about protecting kids, but it’s obviously more than that.”

Beyond images and exploitation, the bill includes a broader mandate to police expression.

It would toughen Criminal Code penalties for so-called “hate propaganda,” including a life sentence for promoting genocide, create a new “hate crime” offense, and let judges issue “peace bonds” restricting someone’s freedom based on a prediction of possible future hate-based offenses.

On top of that, the proposal seeks to amend the Canadian Human Rights Act, allowing individuals to file complaints over online speech that meets a definition of “detestation or vilification,” as outlined by past Supreme Court decisions.

Keep reading

Senator Amy Klobuchar Can’t Take A Joke, Demands Censorship Law

Senator Amy Klobuchar has acknowledged what opponents of her legislation have been warning all along.

In a recent New York Times opinion piece, she confirmed that her proposed NO FAKES Act would be used to censor AI-generated parody.

Her target is a meme video that pokes fun at her reaction to an American Eagle jeans advertisement featuring actress Sydney Sweeney.

Rather than brush off the obvious satire, Klobuchar doubled down on the need to suppress it. “As anyone would, I wanted the video taken down or at least labeled ‘digitally altered content,’” she wrote.

She applauded TikTok for removing the clip, praised Meta for tagging it, and expressed frustration that X would not help her attach a Community Note.

This public complaint confirms that the NO FAKES Act, Senate Bill 1367, is not just about preventing identity theft or stopping fraud. Klobuchar is one of the bill’s lead authors, and she is openly calling for legal tools to remove content that ridicules her.

The bill gives individuals the right to sue over the creation or distribution of “unauthorized digital replicas.”

It also places heavy compliance burdens on platforms, which would face steep fines for failing to remove flagged content quickly or for not implementing policies to suspend repeat offenders.

While the bill claims to allow space for parody, satire, and documentaries, Klobuchar’s statements make it clear that those exemptions offer little practical protection.

The parody video in question shows an AI-generated version of Klobuchar speaking at a fake Senate hearing, ranting about Democrats needing more visibility in advertising. The fictional version of the senator says, “If Republicans are going to have beautiful girls with perfect titties…we want ugly, fat bitches wearing pink wigs and long-ass fake nails being loud and twerking on top of a cop car at a Waffle House ‘cause they didn’t get extra ketchup.”

Keep reading

Democrats Can’t Take A Joke, So They’re Trying To Outlaw Free Speech

Sen. Amy Klobuchar, D-Minn., wants to make one thing perfectly clear: She has never said Sydney Sweeney has “perfect [breasts].” Nor has she accused her fellow Democrats of being “too fat to wear jeans or too ugly to go outside.”

The Minnesota leftist attempted to clear the air earlier this week in a New York Times opinion piece headlined, “Amy Klobuchar: What I Didn’t Say About Sydney Sweeney.” 

Klobuchar wrote that she is the victim of a hoax, a “realistic deepfake.” Some trickster apparently put together and pushed out an AI-generated video in which Klobuchar appears to make (hilariously) outrageous comments about Sweeney’s American Eagle jeans ad — after liberals charged that the commercial is racist and an endorsement of eugenics. 

‘Party of Ugly People’

The doctored Klobuchar appears to be speaking at a Senate committee hearing, where she demands Democrats receive “representation.” Of course, the satirical video has gone viral.

“If Republicans are going to have beautiful girls with perfect ti**ies in their ads, we want ads for Democrats, too, you know?” the fake Klobuchar asserts in the video. “We want ugly, fat bitches wearing pink wigs and long-ass fake nails being loud and twerking on top of a cop car at a Waffle House ‘cause they didn’t get extra ketchup.”

“Just because we’re the party of ugly people doesn’t mean we can’t be featured in ads, okay?” the AI Amy implores. “And I know most of us are too fat to wear jeans or too ugly to go outside, but we want representation.” 

She appears — and sounds — so sincere. But Klobuchar wants you to know it certainly was not her saying such “vulgar and absurd” things. That’s why she’s urging Congress to pass laws banning such AI videos, a push as absurd as social justice warriors calling American Eagle white supremacists for paying a blue-jeans-clad, beautiful actress to say she has great jeans.

Any such law would certainly and rightly be challenged in court. 

Keep reading

Judge Strikes Down California Deepfake Censorship Law

California’s attempt to regulate political speech on major social media platforms has been blocked in federal court, with a judge ruling the state’s latest “deepfake” law clashes with protections already established by Congress.

Assembly Bill 2655 attempted to compel certain large platforms to track down and delete “materially deceptive content” about candidates, election officials, and officeholders.

Supporters described it as a safeguard against manipulated media. The companies targeted, including X and Rumble, argued it was an attempt to turn them into agents of government censorship.

Senior US District Judge John Mendez sided with the platforms and did not even need to reach the constitutional free speech questions to strike down the measure.

He found the federal Communications Decency Act (CDA) already shields online services from punishment over third-party content.

“No parts of this statute are severable because the whole statute is preempted,” Mendez said in court. “No parts of A.B. 2655 can be salvaged.”

The ruling applies to the companies in the lawsuit, and his earlier order freezing enforcement of the law remains in effect statewide until he issues a formal opinion.

For Mendez, the law punished companies for doing something they are “clearly protected by [the CDA] from doing.”

The court also cast doubt on another state law, Assembly Bill 2839, which prohibits false or misleading digital communications aimed at election workers, officials, voting equipment, or candidates in the months leading up to an election. That measure is also on hold, and Mendez signaled he doubts it will survive judicial review.

“Anybody can sue,” he said. “I can sue. If I see the video, under this law, I can sue.” He warned that such a rule chills protected speech and noted the state had not shown it was using the least speech-restrictive approach possible.

Keep reading

Denmark Is Fighting AI by Giving Citizens Copyright to Their Own Faces

Your image, your voice, and your essence as a human being could be gobbled up and regurgitated by AI. The clock is ticking until control over your image and representation is completely out of your hands.

To tip the scales back in favor of those who wish to remain in firm control of their image, Denmark has put forth a proposal that would give every one of its citizens legal grounds to go after someone who uses their image without their consent.

This specifically covers deepfakes, those videos of a person’s face or body that have been digitally altered so they appear to be someone else.

The Scandinavian nation has put forth a proposal to amend its copyright laws so that everyone owns the rights to their own face, their own voice, and their body. Current laws aren’t quite up to snuff when it comes to protecting people from having their likenesses twisted and contorted.

Keep reading

Denmark Plans Sweeping Ban on Online Deepfakes to Combat “Misinformation”

Denmark is preparing legislation that would outlaw the sharing of deepfake content online, a move that could open the door to unprecedented restrictions on digital expression.

Deepfakes, which can involve photos, videos, or audio recordings manipulated by artificial intelligence, are designed to convincingly fabricate actions or statements that never occurred.

While governments cite misinformation concerns, broad bans risk stifling creativity, political commentary, and legitimate speech.

The Danish Ministry of Culture announced Thursday that lawmakers from many parties are backing the effort to clamp down on the distribution of AI-generated imitations of people’s appearances or voices.

The forthcoming proposal, according to officials, aims to block the spread of deepfakes by making it illegal to share such material. Culture Minister Jakob Engel-Schmidt argued that “it was high time that we now create a safeguard against the spread of misinformation and at the same time send a clear signal to the tech giants.”

But these assurances do little to address the chilling effect such measures could have on free expression.

Authorities describe the planned rules as among the most comprehensive attempts yet to confront deepfakes and their potential to mislead the public.

The United States last year introduced legislation criminalizing the non-consensual sharing of intimate deepfakes, while South Korea has imposed tougher punishments for similar offenses and tightened regulations on social media platforms.

Keep reading

Florida Police: Christian School Teacher May Have Used Student Images to Create AI Child Porn

A sixth-grade teacher in Central Florida was arrested this week on a host of charges for possessing child pornography, apparently created with online AI technology and possibly using student photos from his Christian school.

State Attorney General James Uthmeier’s office charged David McKeown of Holly Hill with 19 enhanced felony counts of possession of child sexual abuse material and six counts of possession of animal pornography, according to a statement released by the office.

McKeown was arrested Friday by the Holly Hill Police Department at his home in Volusia County. He was a sixth-grade teacher at United Brethren in Christ (UBIC) Academy, a school affiliated with the UBIC church.

Holly Hill Police Department’s investigation alleges that McKeown shared and downloaded images depicting child sexual abuse via Discord, an online chat service, while at school and connected to the school’s Wi-Fi network.

Some 30 images were allegedly shared, including six files depicting McKeown sexually abusing animals, the Florida Department of Law Enforcement (FDLE) reported.

Uthmeier said in the statement:

As a teacher, parents trusted Mr. McKeown to impart knowledge to their children. Instead, he spent parts of the school day sending and receiving child sex abuse material and providing other pedophiles with UBIC Academy students’ personal information. What he did is beyond betrayal — it’s devastating and sick.

The investigation was launched early this month after authorities received a tip from the National Center for Missing and Exploited Children, which tracks the internet for exploitative content involving minors, Orlando’s Fox 35 reported.

The news outlet also reported authorities believe McKeown used AI technology to create the pornographic images and may have used photos of real children, perhaps his own students. The investigation is continuing.

Detectives seized a number of devices from the teacher’s home in Holly Hill and from the school. He was booked into the Volusia County jail, and a judge denied him bond.

If convicted, he faces up to 315 years in prison, officials said.

Keep reading

The AI Slop Fight Between Iran and Israel

As Israel and Iran trade blows in a quickly escalating conflict that risks engulfing the rest of the region and drawing Iran and the U.S. into more direct confrontation, social media is being flooded with AI-generated media that claims to show the devastation but is fake.

The fake videos and images show how generative AI has already become a staple of modern conflict. On one end, AI-generated content of unknown origin is filling the void created by state-sanctioned media blackouts with misinformation, and on the other end, the leaders of these countries are sharing AI-generated slop to spread the oldest forms of xenophobia and propaganda.

If you want to follow a war as it’s happening, it’s easier than ever. Telegram channels post live streams of bombing raids as they happen and much of the footage trickles up to X, TikTok, and other social media platforms. There’s more footage of conflict than there’s ever been, but a lot of it is fake.

A few days ago, Iranian news outlets reported that Iran’s military had shot down three F-35s. Israel denied it happened. As the claim spread, so did supposed images of the downed jet. In one, a massive version of the jet smolders on the ground next to a town. The cockpit dwarfs the nearby buildings, and tiny people mill around the downed jet like Lilliputians surrounding Gulliver.

It’s a fake, an obvious one, but thousands of people shared it online. Another image of the supposedly downed jet showed it crashed in a field somewhere in the middle of the night. Its wings were gone and its afterburner still glowed hot. This was also a fake.

Keep reading

Teen Dies by Suicide After Being Targeted in AI-Generated ‘Sextortion’ Scheme

A 16-year-old Kentucky boy reportedly committed suicide shortly after he was blackmailed with AI-generated nude images, an increasingly common scheme known as “sextortion.”

Elijah Heacock of Glasgow, Kentucky, received a text including an AI-generated nude photo depicting himself and a demand that he pay $3,000 to prevent the image from being sent to family and friends, according to a report by KFDA.

On February 28, shortly after receiving the message, the teen died from a self-inflicted gunshot wound.

Elijah’s parents, John Burnett and Shannon Heacock, told CBS that they didn’t have a solid understanding of the circumstances that led to their son’s death until they found the messages on his phone.

Heacock said she now believes her son was a victim of a sextortion scheme.

“Sextortion is a form of child sexual exploitation where children are threatened or blackmailed, most often with the possibility of sharing with the public a nude or sexual image of them, by a person who demands additional sexual content, sexual activity or money from the child,” the National Center for Missing and Exploited Children (NCMEC) explains.

“This crime may happen when a child has shared an image with someone they thought they knew or trusted, but in many cases they are targeted by an individual they met online who obtained a sexual image from the child through deceit, coercion, or some other method,” the NCMEC continued.

“In many cases, the blackmailers may have stolen or taken images of another person and they are communicating through a fake account,” the organization added.

Elijah’s parents said they had never heard of sextortion until law enforcement began investigating their son’s death.

“The people that are after our children are well organized,” Burnett said. “They are well financed, and they are relentless. They don’t need the photos to be real, they can generate whatever they want, and then they use it to blackmail the child.”

NCMEC says sextortion schemes have skyrocketed; the organization has received more than 500,000 reports of sextortion against minors in the last year alone.

Since 2021, at least 20 young people have committed suicide as a result of becoming victims of sextortion scams, according to the FBI.

Keep reading

Welcome to the Age of Paranoia as Deepfakes and Scams Abound

These days, when Nicole Yelland receives a meeting request from someone she doesn’t already know, she conducts a multistep background check before deciding whether to accept. Yelland, who works in public relations for a Detroit-based nonprofit, says she’ll run the person’s information through Spokeo, a personal data aggregator that she pays a monthly subscription fee to use. If the contact claims to speak Spanish, Yelland says, she will casually test their ability to understand and translate trickier phrases. If something doesn’t quite seem right, she’ll ask the person to join a Microsoft Teams call—with their camera on.

If Yelland sounds paranoid, that’s because she is. In January, before she started her current nonprofit role, Yelland says, she got roped into an elaborate scam targeting job seekers. “Now, I do the whole verification rigamarole any time someone reaches out to me,” she tells WIRED.

Digital imposter scams aren’t new; messaging platforms, social media sites, and dating apps have long been rife with fakery. In a time when remote work and distributed teams have become commonplace, professional communications channels are no longer safe, either. The same artificial intelligence tools that tech companies promise will boost worker productivity are also making it easier for criminals and fraudsters to construct fake personas in seconds.

On LinkedIn, it can be hard to distinguish a slightly touched-up headshot of a real person from a too-polished, AI-generated facsimile. Deepfake videos are getting so good that longtime email scammers are pivoting to impersonating people on live video calls. According to the US Federal Trade Commission, reports of job and employment-related scams nearly tripled from 2020 to 2024, and actual losses from those scams increased from $90 million to $500 million.

Keep reading