Online Safety Act: Ofcom’s super-complaints scheme will be used to censor “emerging online harms”

Introduction

Ofcom is a public corporation accountable to the UK Parliament and the UK’s regulator for communications services.  It regulates broadband, home phone and mobile services as well as TV, radio and video-on-demand services, oversees the universal postal service, and manages the airwaves used by wireless devices such as cordless phones, walkie-talkies and even some car keys and doorbells.

With the introduction of the Online Safety Act (“the Act”) in 2023, Ofcom became the regulator for online safety.  The aim is for Ofcom to become the online safety regulator not only in the UK but globally.

The Act gives Ofcom the powers they need to take appropriate action against all companies in scope, no matter where they are based, where services have relevant links with the UK. This means services with a significant number of UK users or where UK users are a target market, as well as other services which have in-scope content that presents a risk of significant harm to people in the UK.
– Online Safety Act: explainer, Department for Science, Innovation & Technology, UK Government, 24 April 2025

What are Super-Complaints?

A super-complaint is a formal mechanism established under the Online Safety Act that allows designated organisations to raise concerns about systemic issues affecting users of regulated online services.  Unlike individual complaints, super-complaints focus on broad, emerging trends or widespread problems across multiple platforms, rather than isolated incidents. 

Eligible entities, such as consumer protection bodies or organisations representing users, can submit these complaints to trigger a specific regulatory response from Ofcom.  The super-complaints regime is designed to complement Ofcom’s own research and horizon scanning, a forward-looking assessment of emerging risks and trends in digital services.

“The super-complaints regime, in line with the Act and recent Government regulations … will enable eligible entities to raise systemic issues that arise across services or, in exceptional circumstances, on one service, to our attention. We expect to consult on draft guidance for potential super-complainants in September 2025 and publish our final guidance in early 2026,” Ofcom’s roadmap to regulation of “online safety” states.

From November 2023 to 11 January 2024, the Government ran a public consultation on the eligible entity criteria and procedure for super-complaints.  The Government’s response to the consultation was published on 9 June 2025; the Ministerial Foreword by Baroness Maggie Jones stated:

The objective of the super-complaints regime is to ensure that eligible entities can make complaints to Ofcom, as regulator, to make them aware of existing or emerging online harms. This will also support Ofcom’s horizon scanning function, supporting Ofcom in taking an agile approach to regulating online harms.

Once approved by both Houses of Parliament, the super-complaints regime will come into force on 31 December 2025.
– Ministerial Foreword, Consultation Outcome: Super-complaints eligible entity criteria and procedural requirements: government response, UK Government, 9 June 2025

Keep reading

Epstein’s Inbox Lays Out Gift Networks, PR Tactics, And Strange Habits

Nearly two weeks ago the House Oversight Committee released a trove of emails related to Jeffrey Epstein.

Today Bloomberg revealed it has obtained over 18,000 more – and dedicated a fleet of journalists to sift through them – in what we imagine was an effort to find dirt on President Donald Trump.

And while mentions of Trump are scant, the emails reveal a vast network of gifts spanning Epstein victims, recruiters, and associates.

Trump Stuff

Donald Trump is mentioned a few times in the cache: he appears alongside Epstein and Maxwell at Mar-a-Lago in 2000, and at a 2003 dinner, described by New York Magazine, that Maxwell arranged at Epstein’s townhouse with “barely clad models.” In a Sept. 14, 2006 email, Maxwell sends Epstein a 51-name VIP list that includes Trump, to which Epstein replies “Remove trump”; the list’s purpose is unclear. On Aug. 23, 2007, Maxwell writes to Epstein that reporters likely “went to donald trump” as the investigation into Epstein’s sex crimes intensified.

And there’s one message recounting Trump and Epstein’s real-estate rivalry over Abe Gosman’s former mansion (which Trump ultimately bought). 

The correspondence, most active from 2005 to 2008, includes a 2007 accountant’s spreadsheet itemizing nearly 2,000 gifts, purchases and payments totaling about $1.8 million. Many entries bear Maxwell’s initials, “GM,” indicating she helped arrange them. The records log intended recipients ranging from political aides and financiers to assistants and women who later identified as victims. The spreadsheet does not confirm whether gifts were actually delivered or accepted.

The emails also show Maxwell’s role was broader than she has publicly claimed.

Keep reading

Cybersecurity Experts Warn EU Against Chat Control 2.0 Regulation Ahead of Key Votes

A group of more than 500 experts in cybersecurity, cryptography, and computer science from 34 countries has issued a clear warning against the European Union’s proposed Chat Control 2.0 regulation.

In a joint open letter, the signatories describe the plan as “technically infeasible” and caution that it would open the door to “unprecedented capabilities for surveillance, control, and censorship.”

We obtained a copy of the open letter for you here.

Their statement arrives just days ahead of a critical European Council meeting on September 12, with a final vote set for October 14 that will determine whether the regulation moves forward.

The proposed law would compel messaging apps, email platforms, cloud services, and even providers of end-to-end encrypted communication to scan all user content automatically. This would apply to texts, images, and videos, whether or not there is any suspicion of wrongdoing.

According to the researchers, such detection systems cannot coexist with secure communication. “On‑device detection, regardless of its technical implementation, inherently undermines the protections that end‑to‑end encryption is designed to guarantee.”

By forcing companies to monitor encrypted content, the regulation would introduce security weaknesses that could be exploited by malicious actors and hostile governments.

The scientists also emphasize the inaccuracy of the proposed approach. They argue that large-scale scanning systems produce unacceptable error rates and could generate enormous numbers of false reports.

Keep reading

The Supreme Court Fight That Could Decide Who Gets to Stay Online

A lineup of civil liberties organizations, technology companies, and internet freedom advocates has filed briefs supporting Cox Communications in a closely watched Supreme Court case that could dramatically alter how internet service providers respond to copyright complaints.

The case, Cox Communications v. Sony Music Entertainment, asks whether ISPs must terminate internet service to users accused, but not convicted, of piracy.

The Supreme Court’s upcoming decision is one of the most consequential internet-related cases in years, not just for copyright law, but for the future of how people access the internet in the United States.

At stake is a fundamental question: can internet service providers be held liable and forced to cut off internet access to users simply because they’ve been accused of copyright infringement, without any judicial process or proof?

Those standing with Cox include the Electronic Frontier Foundation (EFF), American Civil Liberties Union (ACLU), Google, Amazon, Microsoft, Mozilla, Pinterest, and X.

Kiwi Farms founder and president of the U.S. Internet Preservation Society (USIPS), Joshua Moon, also filed a brief “in support of neither party,” opposing internet cutoffs and arguing that contributory copyright infringement is a judge-made doctrine without a statutory basis, one that has become unworkable and unfair because courts have never clearly defined its “knowledge” standard.

This vagueness, combined with the rise of the DMCA’s notice-and-takedown regime, has chilled free speech and fair use, expanded copyright monopolies, and produced abusive practices.

Cox argues that the Fourth Circuit’s ruling, which held it liable for contributory infringement and ordered a new damages trial, created an untenable standard that would force ISPs to police user activity under threat of billion-dollar judgments.

The Electronic Frontier Foundation, American Library Association, and other groups filed a joint amicus brief sharply criticizing the Fourth Circuit’s interpretation and urging the Court to reject its liability framework.

They argue that the ruling could lead to widespread loss of internet access based on unproven accusations, with disproportionate consequences for ordinary people.

The brief stresses the importance of online access in modern life, stating: “Internet access is essential to participation in economic, cultural, and social activity.”

The groups caution that adopting a liability standard based on mere knowledge, rather than intent or inducement, would cause serious collateral harm. The brief warns that imposing such rules on ISPs would inevitably result in users being disconnected because someone else in their household or workplace was accused of infringement.

They further argue that the Fourth Circuit’s decision improperly expands secondary copyright liability without congressional authorization, threatening to undermine constitutional protections tied to access, communication, and expression.

Even the US Department of Justice weighed in, siding with Cox on key legal questions. The DOJ told the Court that the Fourth Circuit’s approach “cannot be reconciled with this Court’s precedent” and warned against punishing providers who may simply be indifferent but not complicit.

In its brief, the DOJ stressed: “The evidence demonstrated at most that Cox was indifferent to its subscribers’ infringement, not that Cox intended to participate in that infringement or wished to bring it about.”

On the issue of willful infringement, the DOJ added that “willfulness in the civil context generally requires knowledge or reckless disregard of the fact that one’s own conduct is unlawful,” not just knowledge of someone else’s actions.

A coalition of major tech companies (Google, Amazon, Microsoft, Mozilla, and Pinterest) also submitted a unified brief opposing the lower court’s interpretation of contributory liability. They argue that the DMCA already outlines specific safe harbor rules and that the Fourth Circuit’s ruling improperly weaponizes the absence of safe harbor protection.

“The Fourth Circuit’s ruling erroneously turns Congress’s DMCA safe harbors into a liability-creating mechanism,” the companies stated.

They argued that liability should only attach to those engaged in “conscious, culpable conduct substantially assisting the primary wrongdoer.”

The brief makes clear that a finding of willfulness demands more than simple awareness: “Willfulness turns on the defendant’s mental state regarding its own conduct.”

USIPS criticized the legal foundation of the lower court’s ruling as illegitimate and warned that fear of liability is driving ISPs toward censorship.

Keep reading

Undersea Cable Cuts Kill Internet To Parts Of Asia, Mideast

Undersea internet cables in the Red Sea have been cut, disrupting internet access to parts of Asia and the Middle East. The cause of the cuts wasn’t immediately clear, though China does have a shiny new deep-sea cable cutter (which we’re sure a bunch of other countries have too). 

The Associated Press (via NBC) seems to think (‘there has been concern’) that Houthi rebels from Yemen have been targeting the cables, which sounds absurd – though parts of the Red Sea are only around 100 m (330 ft) deep. 

While the Houthis might not have submarines, undersea robots, or the ability to hit the deepest parts of the Red Sea, it’s possible to inflict damage on subsea cables without the backing of a major navy.

In March 2013, three divers were arrested by the Egyptian Navy off the coast of Alexandria after cutting the SeaMeWe-4 cable by detonating underwater explosives. Internet speeds reportedly fell around 60 percent after the incident. A motive wasn’t revealed and it’s unclear if they were charged and/or sentenced for the damage.

In 2007, it was reported that police had seized more than 500km of telecom cable taken by fishing vessels to sell for scrap – including an 11km segment identified as belonging to the SeaMeWe-3 cable. –Data Center Dynamics

So, who knows – but the AP (deep state) spends considerable ink on the Houthis.

Keep reading

Age Verification Company Exposes User Data, Reinforcing Privacy Fears Over Digital ID Systems

A company tasked with confirming users’ ages before they access adult content may be compromising their privacy by leaking detailed browsing data, according to a report by the nonprofit AI Forensics.

The group’s investigation highlights serious flaws in how some sites are complying with growing online age-check requirements, raising new concerns about surveillance and data exposure under the guise of protecting children.

France’s law requires that users’ identities remain concealed, not just from adult websites, but from the age verification services themselves.

Known as “double anonymity,” this standard is meant to ensure that those performing the verification process have no knowledge of which websites users are visiting or what content they attempt to access.
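The double-anonymity property can be sketched in miniature. What follows is a hypothetical illustration, not AgeGO’s or any real provider’s implementation: the function names are invented, and the HMAC-based signature is a dependency-free stand-in for the asymmetric, offline-verifiable tokens a real scheme would use.

```python
import hmac
import hashlib
import secrets

# Held only by the age-check service (illustrative; a real scheme would use
# an asymmetric key pair so sites can validate offline with the public key).
VERIFIER_KEY = b"verifier-signing-key"

def issue_age_token() -> dict:
    """The verifier confirms the user's age out of band, then issues a signed,
    single-use token. The token carries no identity and no destination site,
    so the verifier never learns which website (or URL) it will be used on."""
    nonce = secrets.token_hex(16)
    sig = hmac.new(VERIFIER_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return {"nonce": nonce, "sig": sig}

def validate_token(token: dict) -> bool:
    """The site checks the token's signature. Nothing in the token identifies
    the user, and the verifier is not contacted at redemption time."""
    expected = hmac.new(VERIFIER_KEY, token["nonce"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

token = issue_age_token()
print(validate_token(token))                       # a valid token passes
print(validate_token({**token, "sig": "0" * 64}))  # a forged one does not
```

The key property is structural: the verifier signs before knowing where the token will be redeemed, and the site validates without learning who the user is. A system that instead reports the visited URL back to the verifier, as AgeGO reportedly does, breaks exactly this separation.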

But AI Forensics found that AgeGO, one of the verification systems in active use, doesn’t meet those expectations.

Instead, AgeGO’s system reportedly transmits precise details about the user’s activity, including the URL of the video being viewed and the name of the website.

Keep reading

Josh Hawley Proposes AI Regulations, Section 230 Repeal, and Digital ID Checks for Chatbots

Senator Josh Hawley (R-Mo.) is pushing for broad new regulations on artificial intelligence, including age verification for chatbot access, data ownership rights, and the full repeal of Section 230 of the Communications Decency Act.

While the proposals are framed as efforts to curb corporate overreach in the tech industry, they have ignited concern among digital rights advocates who warn that such measures could undermine online privacy and freedom of expression.

At the National Conservatism Conference, Hawley accused AI developers of building their systems by collecting and using copyrighted material without permission. “The AI large language models [LLMs] have already trained on enough copyrighted works to fill the Library of Congress 22 times over,” he said.

“Let me just put a finer point on that — AI’s LLMs have ingested every published work in every language known to man already.” He claimed that creators were neither consulted nor compensated.

In July, Hawley introduced the AI Accountability and Personal Data Protection Act, which would allow individuals to sue companies that use personal data without consent and would establish property rights over certain categories of digital information.

However, two key components of Hawley’s platform are raising some alarm. His call to repeal Section 230 has been criticized for potentially damaging the open internet.

Section 230 currently shields online platforms from legal liability for content created by users. Without it, many sites could be forced to preemptively remove user content to avoid legal risk, resulting in widespread over-moderation and the silencing of lawful speech.

Keep reading

Google ordered to pay over $425 million in damages for smartphone privacy violations

Tech giant Google has been ordered to pay over $425 million for improperly snooping on smartphone users’ data and invading their privacy from 2016 to 2024.

“It’s a violation of public trust,” said attorney and political analyst Madeline Summerville.

The class action lawsuit, initially filed in 2020, accused the company of collecting data from 98 million devices whose users had turned off a tracking feature in their Google accounts.

“Even though I’ve shut off all the different apparatuses that would keep Google from monitoring me, they’re still doing it because they were doing it through third-party apps,” Summerville said.

The jury found Google spied on users in violation of California privacy laws, but Google denied improperly accessing devices. A Google spokesperson told Reuters that the decision misunderstands how its products work and that the company plans to appeal: “Our privacy tools give people control over their data, and when they turn off personalization, we honor that choice.”

Keep reading

ICE Reactivates Contract With Israeli-linked Spyware Firm Paragon

U.S. Immigration and Customs Enforcement (ICE) has reactivated a $2 million spyware contract with Paragon Solutions, an Israeli-founded firm now owned by a U.S. private equity group. The move lifts a Biden-era freeze and signals a deeper embrace of invasive surveillance tools in domestic immigration enforcement.

It is also only the latest sign of how far the federal government’s surveillance apparatus has grown under the banner of “immigration enforcement.” ICE has become one of its most powerful nodes — a conduit through which cutting-edge spyware, data analytics, and AI-driven tools are deployed inside U.S. borders.

Contract Reborn

On September 1, journalist Jack Poulson, citing the official procurement note, reported that ICE quietly lifted a stop-work order on the Paragon contract. The order had been in place since October 2024, after the Biden administration paused the deal under Executive Order 14093. That order barred agencies from buying foreign spyware tied to human rights abuses.

Paragon

Paragon is an Israeli spyware company founded in 2019 by veterans of Israel’s cyberwarfare Unit 8200, the rough equivalent of the U.S. National Security Agency (NSA). Among its early backers is former Prime Minister Ehud Barak, a longtime political heavyweight and known associate of Jeffrey Epstein. From the start, it marketed itself as the “ethical” alternative to Pegasus, another notorious piece of Israeli spyware.

Citizen Lab reports that by 2021 Paragon had launched a U.S. subsidiary and staffed it with former CIA, Air Force, and defense contractor officials. That gave it a foothold in Washington. Within two years, ICE had signed a $2 million contract for its spyware; U.S. Special Operations Command disclosed more than $11 million in related purchases.

In late 2024, ownership shifted. All shares in Paragon Israel were transferred to Paragon Parent Inc., a new Delaware corporation. The deal, reportedly led by Florida-based private equity firm AE Industrial Partners, was valued at $500 million up front, with another $400 million tied to performance goals. Soon after, Paragon was folded into REDLattice, a Virginia contractor already known for offensive cyber tools. U.S. Securities and Exchange Commission (SEC) filings show REDLattice’s parent company then added ex-CIA and U.S. Army chiefs to its board.

Once Paragon became “American-owned,” ICE lifted the freeze on its spyware contract. In effect, the U.S. government blocked the deal when the company was Israeli but allowed it once Americans — many with intelligence and military ties — took control. The spyware itself did not change, only the ownership structure, and it is far from clear how much influence Israeli intelligence veterans still wield inside the company.

Graphite

Graphite is Paragon’s flagship spyware. Unlike Pegasus, which can take full control of a phone, Graphite focuses on breaking into encrypted messaging apps. It can pull data from WhatsApp, Signal, and iMessage without seizing the entire device.

Investigators have shown that Graphite often relies on “zero-click” exploits. These attacks require no action from the target. Once inside, the spyware extracts texts, call logs, photos, videos, and even microphone input. All of it is sent to remote servers controlled by the operator. Citizen Lab’s forensic report from this June confirmed the tool had been deployed against journalists in Europe. Their devices were fully updated yet still compromised until Apple patched the flaw in iOS 18.3.1.

This technical profile explains why Graphite is so attractive to governments. It is stealthy, precise, and hard to detect. But its use has raised alarms well beyond Israel and the United States.

Keep reading

Macron’s Global Censorship Push Exposed: Leaked Files Reveal France’s Covert Speech Control Campaign

As European leaders push to shape global speech rules under the guise of trade policy, new internal records reveal that the French government quietly built a system to enforce censorship worldwide.

Leaked internal communications from Twitter, now known as X, expose a sophisticated campaign led by President Emmanuel Macron and aided by state-aligned organizations to pressure the platform into suppressing speech far beyond what French law requires.

While publicly promoting values like free expression, France’s leadership was privately demanding crackdowns on political content, anonymous users, and anything that veered from government-approved narratives.

The latest TWITTER FILES – FRANCE, published by Public and worth reading in full, documents how Paris pioneered the modern censorship-by-proxy model, using lawsuits, coordinated NGO pressure, and personal outreach at the highest levels to mold a global moderation regime in France’s image.

One of the more revealing moments in the documents comes from October 2020, when Twitter’s Public Policy Director in France noted unusual persistence from the Élysée Palace.

“President Macron’s team has been asking me (again!) Jack [Dorsey]’s number because the President wants to text him some supporting words re our new policies and functionalities on Election integrity,” the message read.

The only issue? Dorsey didn’t hand out his number, even to heads of state. Staff reminded Macron’s team that a direct message would be more appropriate, though they acknowledged the President didn’t use Twitter personally. Alternatives like Signal, Telegram, and even iMessage were considered.

Keep reading