Macron accuses US of ‘intimidation’ against EU

US visa restrictions against several senior EU officials amount to “intimidation and coercion” aimed at undermining the bloc’s digital policies and sovereignty, French President Emmanuel Macron has said.

On Tuesday, the administration of US President Donald Trump announced new sanctions targeting Thierry Breton, the former European Commissioner for Internal Market appointed by Macron himself, and four other officials over what it described as “efforts to coerce American platforms to punish American viewpoints they oppose.”

At the core of the dispute are the EU’s Digital Markets Act and Digital Services Act, which impose strict competition and transparency obligations on large online platforms. Given that most such firms – including Microsoft, Google, Meta, and Amazon – are headquartered in the US, American officials have argued the framework is discriminatory. Breton in particular played a pivotal role in establishing the EU’s digital rulebook.

Keep reading

Victoria Moves to Force Online Platforms to ID Users and Expand State Powers to Curb “Hate Speech”

Victoria is preparing to introduce some of the most far-reaching online censorship and surveillance powers ever proposed in an Australian state, following the Bondi Beach terror attack.

Premier Jacinta Allan’s new five-point plan, presented as a response to antisemitism, includes measures that would compel social media platforms to identify users accused of “hate speech” and make companies legally liable if they cannot.

Presented as a defense against hate, the plan’s mechanisms cut directly into long-standing principles of privacy and freedom of expression. It positions anonymity online as a form of protection for “cowards,” creating a precedent for government-mandated identity disclosure that could chill lawful speech and dissent.

During her announcement, Premier Allan said:

“That’s why Victoria will spearhead new laws to hold social media companies and their anonymous users to account – and we’ll commission a respected jurist to unlock the legislative path forward.”

Under the proposal, if a user accused of “vilification” cannot be identified, the platform itself could be held responsible for damages. This effectively converts private platforms into instruments of state enforcement, obligating them to expose user data or face financial risk.

The Premier also announced plans to accelerate the introduction of the Justice Legislation Amendment (Anti-vilification and Social Cohesion) Act 2024, which had been due to take effect in mid-2026. It will now be brought forward to April 2026.

The law allows individuals to sue others for public conduct, including online speech, that a “reasonable person” might find “hateful, contemptuous, reviling or severely ridiculing” toward someone with a protected attribute. These protected categories include religion, race, sex, gender identity, sexual orientation, and disability, among others.

This framework gives the state and private citizens broad interpretive power to determine what speech is “hateful.” As many civil liberties experts note, such wording opens the door to legal action based on subjective offense rather than clear, objective harm.

Keep reading

Google and Substack Warn Britain Is Building a Censorship Machine

Major American companies and commentators, including Google and Substack CEO Chris Best, have condemned the United Kingdom’s censorship law, the Online Safety Act (OSA), describing it as a measure that risks censoring lawful speech while failing to make the internet safer for children.

They argue that the law normalizes digital surveillance, restricts open debate, and complicates how global platforms operate in the UK.

Their objections surfaced through The Telegraph, which published essays from Best and from Heritage Foundation researchers John Peluso and Miles Pollard, alongside new reporting on Google’s formal response to an Ofcom consultation.

That consultation, focused on how tech firms should prevent “potentially illegal” material from spreading online, closed in October, with Ofcom releasing the submissions in December.

Google’s filing accused the regulator of promoting rules that would “undermine users’ rights to freedom of expression” by encouraging pre-emptive content suppression.

Ofcom rejected this view, insisting that “nothing in our proposals would require sites and apps to take down legal content.” Yet Google was hardly alone in raising alarms: other American companies and trade groups submitted responses voicing comparable fears about the Act’s scope and implications.

Chris Best wrote that his company initially set out to comply with the new law but quickly discovered it to be far more intrusive than expected. “What I’ve learned is that, in practice, it pushes toward something much darker: a system of mass political censorship unlike anywhere else in the western world,” he said.

Best describes how the OSA effectively forces platforms to classify and filter speech on a constant basis, anticipating what regulators might later deem harmful.

Compliance, he explained, requires “armies of human moderators or AI” to scan journalism, commentary, and even satire for potential risk.

The process, he continued, doesn’t simply remove content but “gates it” behind identity checks or age-verification hurdles that often involve facial scans or ID uploads.

“These measures don’t technically block the content,” Best said, “but they gate it behind steps that prove a hassle at best, and an invasion of privacy at worst.” He warned that this structure discourages readers, reduces visibility for writers, and weakens open cultural exchange.

Best, who emphasized Substack’s commitment to press freedom, said the OSA misdiagnoses the problem of online harm by targeting speech rather than prosecuting actual abuse or criminal behavior.

Keep reading

Irony Alert: Google Suddenly Champions Free Speech As UK Crushes Online Expression

In a stunning reversal, Google has slammed the UK for threatening to stifle free speech through its aggressive online regulations. This from the company infamous for its own censorship crusades against conservative voices and inconvenient truths. If even Google is raising the alarm, you know the situation in Britain has hit rock bottom.

The move signals a broader culture shift in Big Tech, where woke agendas are crumbling under pressure from free speech advocates. It’s no coincidence this comes after Elon Musk turned Twitter into X, a platform where ideas flow without the heavy hand of ideological gatekeepers.

Google, which has demonetized, shadow-banned, and outright censored content that doesn’t align with leftist narratives, now positions itself as a defender of open discourse, accusing Britain of threatening to stifle free speech in an escalation of US opposition to online safety rules.

Keep reading

Bipartisan Bill Seeks to Repeal Section 230, Endangering Online Free Speech

A proposal in the US Senate titled the Sunset Section 230 Act seeks to dismantle one of the core protections that has shaped the modern internet.

Put forward by Senator Lindsey Graham with bipartisan backing from Senators Dick Durbin, Josh Hawley, Amy Klobuchar, and Richard Blumenthal, the bill would repeal Section 230 of the Communications Act of 1934, a provision that has, for nearly thirty years, shielded online platforms from liability for the actions of their users.

We obtained a copy of the bill for you here.

Under the plan, Section 230 would be fully repealed two years after the bill’s passage.

This short transition period would force websites, social platforms, and hosting services to rethink how they handle public interaction.

The current statute stops courts from holding online platforms legally responsible as the publishers of material shared by their users.

Its protection has been instrumental in allowing everything from local discussion boards to global platforms such as YouTube and Wikipedia to operate without being sued over every user comment or upload.

The legislation’s text removes Section 230 entirely and makes “conforming amendments” across multiple federal laws.

“I am extremely pleased that there is such wide and deep bipartisan support for repealing Section 230, which protects social media companies from being sued by the people whose lives they destroy. Giant social media platforms are unregulated, immune from lawsuits, and are making billions of dollars in advertising revenue off some of the most unsavory content and criminal activity imaginable,” said Senator Graham.

“It is past time to allow those who have been harmed by these behemoths to have their day in court.”

Keep reading

In rare public comments, career DOJ officials offer chilling warnings about online network 764

In striking and chilling terms, several career Justice Department officials on Thursday offered dire warnings about the online extremist network “764,” whose young followers around the world use popular social media platforms to target, groom and push vulnerable teens into harming themselves and others.

“I don’t think Stephen King is dark enough to come up with some of the stuff that these kids are coming up with,” said Justin Sher, a trial attorney with the Justice Department’s National Security Division.

“It is as serious a threat as you can imagine,” Sher’s Justice Department colleague James Donnelly said. “[And] they’re trying to metastasize the evil.”

Their comments came during a panel about 764 hosted by George Washington University’s Program on Extremism. It was a rare public appearance for two career prosecutors who the panel’s moderator described as “the point people” on 764 within the department’s National Security Division.

Sher and Donnelly both noted that 764 members are increasingly trying to push victims to take deadly actions, including suicide or school shootings and other mass-casualty attacks.

As ABC News has previously reported, 764 members find vulnerable victims on popular online platforms, elicit private information and intimate sexual images from them, and then use that sensitive material to threaten and blackmail victims into mutilating themselves, harming others, or taking other violent action — all while streaming it on social media so others can watch and then disseminate recordings of it.

“For them, content is currency,” Sher said. “So they are building their content inventory … and putting it out there to build their status within these groups.”

Keep reading

US Suspends $41 Billion Tech Deal with UK over Online Censorship Laws

The great transatlantic tech romance has hit the skids. What was sold as a landmark agreement binding Silicon Valley brains to British ambition has been shoved into neutral, all because Britain decided it quite fancies telling American machines what they are allowed to say.

Washington has now suspended the much-trumpeted US-UK technology agreement, a decision driven by mounting alarm over Britain’s new censorship law, the Online Safety Act.

The idea that a British regulator might fine or muzzle American firms has landed in Washington like a dropped wrench.

One participant in the talks put it bluntly, telling The Telegraph, “Americans went into this deal thinking Britain were going to back off regulating American tech firms but realized it was going to restrict the speech of American chatbots.”

The Online Safety Act gives Britain the power to fine companies it believes are enabling “harmful” or “hateful” speech, concepts elastic enough to stretch around just about anything if you pull hard enough.

The communications regulator Ofcom has not been shy about using these powers.

Enforcement notices have already landed on the desks of major American firms, even when their servers, staff, and coffee machines are nowhere near Britain.

From Washington’s perspective, this looks less like safety and more like Britain peering over the Atlantic with a ruler, ready to rap American knuckles.

The White House had been keen on the £31 billion ($41 billion) Tech Prosperity Deal, seeing it as a front door to closer ties on AI research and digital trade.

Instead, officials began to see the Online Safety Act as a mechanism for deciding what American platforms, and their algorithms, are allowed to say. Chatbots like ChatGPT or Elon Musk’s Grok suddenly looked like potential defendants in a British courtroom, accused of wrongthink.

Keep reading

UK Parliament Rejects Petition to Repeal Online Censorship Law, Calls for Expanded Censorship

This week in the UK, Parliament held a debate in response to a public petition that gathered hundreds of thousands of signatures calling for the repeal of the Online Safety Act (OSA).

It was a rare opportunity for elected officials to prove they still listen to their constituents.

Instead, the overwhelming message from MPs was clear: thanks for your concern, but we’d actually like even more control over what you can do online.

One by one, MPs stood up not to defend free expression, or question whether one of the most radical internet control laws in modern British history might have gone too far, but to argue that it hadn’t gone far enough.

“It’s Not Censorship, It’s Responsibility” (Apparently)

Lizzi Collinge, Labour MP for Morecambe and Lunesdale, insisted the OSA “is not about controlling speech.” She claimed it was about giving the online world the same “safety features” as the offline one.

This was a recurring theme throughout the debate: reassure the public that speech isn’t being restricted while calling for more mechanisms to restrict it.

Ian Murray, Minister for Digital Government and Data, also insisted the OSA protects freedom of expression. According to him, there’s no contradiction in saying people can speak freely, as long as they’re age-verified, avoid VPNs, and don’t say anything that might be flagged by a government regulator.

It’s a neat trick. Say you support free speech, then build an entire law designed to monitor, filter, and police it.

VPNs in the Firing Line

There is a growing fixation inside government with VPNs. These are basic privacy tools used by millions of people every day, often to protect their data. But several MPs, including Jim McMahon, Julia Lopez, and Ian Murray, suggested VPNs should be subject to age verification or regulatory restrictions.

It’s unclear whether these MPs understand how VPNs work or if they simply dislike the idea of anyone browsing the internet without supervision.

Either way, the intent is clear. The government wants fewer ways for people to browse anonymously.

Keep reading

Porn Sites Must Block VPNs To Comply With Indiana’s Age-Verification Law, State Suggests in New Lawsuit

Indiana Attorney General Todd Rokita is suing dozens of porn websites, claiming that they are in violation of the state’s age-verification law and seeking “injunctive relief, civil penalties, and recovery of costs incurred to investigate and maintain the action.”

Last year, Indiana Senate Bill 17 mandated that websites featuring “material harmful to minors” must verify that visitors are age 18 or above. Rather than start checking IDs, Aylo—the parent company of Pornhub and an array of other adult websites—responded by blocking access for Indiana residents.

Now, Indiana says this is not good enough. To successfully comply, Pornhub and other Aylo platforms (which include Brazzers, Youporn, and Redtube, among others) must also block virtual private networks and other tools that allow internet users to mask their IP addresses, the state suggests.

This is an insane—and frighteningly dystopian—interpretation of the law.

Broad Anti-Privacy Logic

In a section of the suit detailing how Aylo allegedly violated the age-check law, Indiana notes that last July, “an investigator employed by the Office of the Indiana Attorney General (‘OAG Investigator’) accessed Pornhub.com from Indiana using a Virtual Private Network (VPN) with a Chicago, Illinois IP address.”

“Defendants have not implemented any reasonable form of age verification on its website Pornhub.com,” the suit states. It goes on to detail how Indiana investigators also accessed Brazzers.com, Faketaxi.com, Spicevids.com, and other adult websites using a VPN.
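Indiana’s demand makes sense only in light of how IP-based geoblocking works: the server can act on nothing but the connecting address, and a VPN substitutes the exit node’s address for the visitor’s own. The following is a minimal, hypothetical sketch of that failure mode; the lookup table, function names, and example addresses (drawn from documentation ranges) are illustrative assumptions, not any site’s real implementation.

```python
# Toy mapping of IP prefixes to regions. Real sites consult commercial
# geo-IP databases, but the logic reduces to the same kind of lookup.
GEOIP_PREFIXES = {
    "198.51.100.": "Indiana",   # visitor's true ISP range (example block)
    "203.0.113.": "Illinois",   # VPN provider's Chicago exit node (example block)
}

def region_for(ip: str) -> str:
    """Return the region a naive geo-IP lookup would report for this address."""
    for prefix, region in GEOIP_PREFIXES.items():
        if ip.startswith(prefix):
            return region
    return "unknown"

def is_blocked(ip: str, blocked_regions=frozenset({"Indiana"})) -> bool:
    """Block the request only if the *observed* IP geolocates to a blocked region."""
    return region_for(ip) in blocked_regions

# Direct connection: the site sees the Indiana ISP address and blocks it.
assert is_blocked("198.51.100.7")

# Through a VPN: the site only ever sees the Chicago exit node's address,
# so the geoblock passes. The user's real location never reaches the server,
# which is why "block VPNs too" amounts to demanding sites defeat the tool itself.
assert not is_blocked("203.0.113.42")
```

The asymmetry is the point of the lawsuit: a site can reliably act only on the address it observes, so enforcing the law against VPN users would require identifying and blocking VPN providers’ address ranges wholesale.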

Keep reading

Privacy For The Powerful, Surveillance For The Rest: EU’s Proposed Tech Regulation Goes Too Far

Last month, we lamented California’s Frontier AI Act of 2025. The Act favors compliance over risk management, while shielding bureaucrats and lawmakers from responsibility. Mostly, it imposes top-down regulatory norms, instead of letting civil society and industry experts experiment and develop ethical standards from the bottom up.

Perhaps we could dismiss the Act as just another example of California’s penchant for intervention. But some American politicians and regulators are already calling for the Act to be a “template for harmonizing federal and state oversight.” The other source for that template would be the European Union (EU), so it’s worth keeping an eye on the regulations spewed out of Brussels.

The EU is already way ahead of California in imposing troubling, top-down regulation. Indeed, the EU Artificial Intelligence Act of 2024 follows the EU’s overall precautionary principle. As the EU Parliament’s internal think tank explains, “the precautionary principle enables decision-makers to adopt precautionary measures when scientific evidence about an environmental or human health hazard is uncertain and the stakes are high.” The precautionary principle gives immense power to the EU when it comes to regulating in the face of uncertainty — rather than allowing for experimentation with the guardrails of fines and tort law (as in the US). It stifles ethical learning and innovation.

Because of the precautionary principle and associated regulation, the EU economy suffers from greater market concentration, higher regulatory compliance costs, and diminished innovation — compared to an environment that allows for experimentation and sensible risk management. It is small wonder that only four of the world’s top 50 tech companies are European.

From Stifled Innovation to Stifled Privacy

Along with the precautionary principle, the second driving force behind EU regulation is the advancement of rights, but rights cherry-picked from the EU Charter of Fundamental Rights in ways that often conflict with one another. For example, the EU’s General Data Protection Regulation (GDPR) of 2016 was imposed with the idea of protecting a fundamental right to personal data protection (this is technically separate from the right to privacy, and gives the EU much more power to intervene — but that is the stuff of academic journals). The GDPR ended up curtailing the right to economic freedom.

This time, fundamental rights are being deployed to justify the EU’s fight against child sexual abuse. We all love fundamental rights, and we all hate child abuse. But, over the years, fundamental rights have been deployed as a blunt and powerful weapon to expand the EU’s regulatory powers. The proposed Child Sex Abuse regulation (CSA) is no exception. What is exceptional is the extent of the intrusion: the EU is proposing to monitor communications among European citizens, treating them all as potential threats rather than as speakers whose communications enjoy a prima facie right to privacy.

As of 26 November 2025, the EU bureaucratic machine has been negotiating the details of the CSA. In the latest draft, mandatory scanning of private communications has thankfully been removed, at least formally. But there is a catch. Providers of hosting and interpersonal communication services must identify, analyze, and assess how their services might be used for online child sexual abuse, and then take “all reasonable mitigation measures.” Faced with such an open-ended mandate and the threat of liability, many providers may conclude that the safest — and most legally prudent — way to show they have complied with the EU regulation is to deploy large-scale scanning of private communications.

The draft CSA insists that mitigation measures should, where possible, be limited to specific parts of the service or specific groups of users. But the incentive structure points in one direction. Widespread monitoring may end up as the only viable option for regulatory compliance. What is presented as voluntary today risks becoming a de facto obligation tomorrow.

In the words of Peter Hummelgaard, the Danish Minister of Justice: “Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse. This is completely unacceptable.” No one disputes the gravity or turpitude of the problem. And yet, under this narrative, the telecommunications industry and European citizens are expected to absorb dangerous risk-mitigation measures that are likely to involve lost privacy for citizens and widespread monitoring powers for the state.

The cost, we are told, is nothing compared to the benefit.

After all, who wouldn’t want to fight child sexual abuse? It’s high time to take a deep breath. Child abusers should be punished severely. This does not dispense a free society from respecting other core values.

But, wait. There’s more…

Keep reading