New York Wants Online Digital ID Rules for Social Media Feeds Under “SAFE For Kids Act”

New York is advancing a set of proposed regulations that would require social media platforms to verify users’ ages before granting access to algorithm-driven feeds or allowing nighttime alerts.

Attorney General Letitia James introduced the draft rules on Monday, tied to the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, which was signed into law last year by Governor Kathy Hochul.

Presented as part of an effort to reduce mental health harms linked to social media, the law would compel platforms to restrict algorithmic content for anyone under 18, or anyone who hasn’t completed an age verification process, effectively introducing digital ID checks as a condition of accessing online platforms.

In those cases, users would be limited to seeing content in chronological order from accounts they already follow.
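As an illustration only, here is a minimal Python sketch of what such a restricted feed could look like in practice. The `Post` fields and follow list are hypothetical, not drawn from the law's text; the point is simply that the feed is filtered to followed accounts and ordered by time, with no engagement-based ranking:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    text: str

def restricted_feed(posts, followed):
    """Return only posts from followed accounts, newest first.
    No personalization or engagement-based ranking is applied, mirroring
    the restriction the SAFE For Kids Act would place on unverified users."""
    visible = [p for p in posts if p.author in followed]
    return sorted(visible, key=lambda p: p.created_at, reverse=True)

posts = [
    Post("alice", datetime(2025, 1, 2), "hello"),
    Post("bob", datetime(2025, 1, 3), "news"),
    Post("mallory", datetime(2025, 1, 4), "viral clip"),
]
feed = restricted_feed(posts, followed={"alice", "bob"})
```

The post from the unfollowed account never appears, regardless of how much engagement it attracts.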

Platforms would also be barred from sending notifications between 12 a.m. and 6 a.m. to those users.

The rules give companies some flexibility in how they confirm a user’s age, as long as the method is considered effective and designed to protect personal data.

Acceptable alternatives to submitting a government ID include facial analysis that estimates age. Any identifying information collected during verification must be deleted “immediately,” according to the proposal.

For minors to access personalized algorithmic feeds, parental permission would be required.

That too involves a verification step, with the same data-deletion requirements in place once the process is complete.

The SAFE For Kids Act targets platforms where user-generated content is central and where at least 20 percent of time spent involves engagement with feeds tailored to user behavior or device data.

Keep reading

Bipartisan Push in Congress to Weaken Section 230, Expand Online Surveillance, and Increase Platform Liability

During this week’s testimony before both chambers of Congress, FBI Director Kash Patel and several lawmakers made a concerted push to weaken protections for online platforms, advance surveillance partnerships, and promote government intervention in digital speech spaces.

The hearings revealed a rare bipartisan consensus around dismantling Section 230 and tightening control over how people interact and communicate online.

In the Senate, Republican Senator Lindsey Graham opened his questioning by linking online platforms to the assassination of Charlie Kirk, then repeatedly pressed Patel on whether the internet was a breeding ground for radicalization and crime.

Throughout their exchange, Graham blurred the lines between criminal behavior, such as grooming or inciting violence, and broad categories like bullying.

“Is there any law that can shut down one of these sites? For bullying children or allowing sexual predators on the site,” Graham asked.

He repeatedly implied that websites hosting objectionable content should be held legally responsible, asking, “Would you advocate a sunsetting of Section 230 to bring more liability to the companies who send this stuff out?”

Patel replied, “I’ve advocated for that for years.”

Section 230 of the Communications Decency Act is a legal provision that protects online platforms from being held liable for content posted by their users.

Keep reading

LaLiga’s Anti-Piracy Crackdown Triggers Widespread Internet Disruptions Across Spain

LaLiga, Spain’s top football league, is facing a firestorm of criticism after boasting about a staggering 142% increase in anti-piracy takedown notices in early 2025 while simultaneously causing extensive collateral damage across the internet.

As the 2025/2026 season began on August 15, LaLiga ramped up its enforcement strategy, triggering widespread outages for entirely lawful websites, services, and platforms.

These disruptions are tied to a controversial anti-piracy scheme operated in partnership with telecom giant Telefónica.

The initiative, which enjoys judicial backing in Spain, allows LaLiga to instruct major internet service providers, including Movistar, Vodafone, Orange, and DIGI, to block IP addresses suspected of hosting unauthorized streams.

The fallout is that entire chunks of the internet go dark for Spanish users, often during match broadcasts.

LaLiga doesn’t target specific infringing content. Instead, it flags entire IP ranges, many of which are shared by thousands of unrelated domains.

When one site is accused of hosting pirated material, everyone else sharing that IP address gets swept up in the block.
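The mechanics of that collateral damage are simple to sketch. In this toy Python example (the domains and IP addresses are invented), many unrelated sites share one hosting IP, so blocking the IP accused of piracy takes them all down:

```python
# Hypothetical hosting map: many unrelated domains share one CDN IP,
# as is common with shared hosting and content delivery networks.
hosting = {
    "pirate-stream.example": "203.0.113.10",
    "small-shop.example": "203.0.113.10",
    "fonts.example": "203.0.113.10",
    "unrelated.example": "198.51.100.7",
}

def blocked_domains(hosting, blocked_ips):
    """Return every domain made unreachable by an IP-level block."""
    return sorted(d for d, ip in hosting.items() if ip in blocked_ips)

# Blocking the single IP accused of hosting a pirate stream
# also silences every legitimate site behind it.
collateral = blocked_domains(hosting, {"203.0.113.10"})
```

A domain-level or URL-level block would hit only the accused site; an IP-level block cannot distinguish between tenants of the same address.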

The result is a digital dragnet that has ensnared companies as diverse as Amazon, Cloudflare, GitHub, Twitch, and even Google Fonts.

TorrentFreak has documented repeated weekly blocks of platforms like Vercel since early 2025, while Catalonia’s own .cat domain registry has also reported service disruptions.

The issue became so disruptive that iXsystems, the team behind TrueNAS, a widely used open-source NAS operating system, was forced to shift its distribution model entirely. After its CDN IPs were repeatedly blocked in Spain, making critical security updates inaccessible to users, the developers resorted to distributing their software via BitTorrent.

TrueNAS noted that the blocks cause “significant collateral damage” to legitimate services “which have nothing to do with football piracy.” Its BitTorrent workaround not only bypasses the censorship but hands the bandwidth burden back to the same ISPs complicit in the blocking.

Keep reading

Mullvad Introduces QUIC-Based WireGuard Obfuscation to Bypass Censorship and VPN Blocks

Mullvad has begun rolling out a new feature that hides WireGuard connections inside QUIC traffic, a technique designed to help users slip past aggressive censorship systems.

By making VPN traffic look more like ordinary encrypted browsing, the update gives people in tightly controlled regions, including Russia and China, a better chance of maintaining stable access to the internet.

It also helps with accessing websites that are increasingly trying to ban VPNs.

The addition comes as Mullvad prepares to move away from OpenVPN, which it will no longer support starting January 2026.

With that change on the horizon, the company is putting its weight behind WireGuard while also making sure it remains usable in countries where standard WireGuard connections are heavily throttled or blocked.

QUIC itself is not new. Originally created by Google and now the backbone of HTTP/3, the protocol is prized for its speed, ability to handle multiple streams of data at once, and resilience against network issues.

Services like YouTube already rely on it, making QUIC traffic extremely common. Mullvad takes advantage of that by wrapping WireGuard’s UDP packets inside QUIC, effectively disguising VPN usage as something indistinguishable from normal web activity.

To make this possible, Mullvad has turned to MASQUE, a standard that allows UDP traffic to be tunneled through HTTP/3 connections.

The result is traffic that appears identical to everyday browsing, far harder for censors to single out and shut down.
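The general encapsulation idea can be sketched in a few lines. To be clear, this toy Python example is not Mullvad's implementation and not real QUIC or MASQUE framing (which involves TLS handshakes, HTTP/3 capsules, and DATAGRAM frames); it only shows the wrap/unwrap symmetry of carrying one protocol's packets inside another's:

```python
import struct

def encapsulate(wg_packet: bytes) -> bytes:
    """Toy stand-in for MASQUE-style tunneling: prefix the inner
    WireGuard UDP payload with a 2-byte length field so it can ride
    inside an outer stream that, in the real system, would be
    encrypted QUIC traffic indistinguishable from HTTP/3 browsing."""
    return struct.pack("!H", len(wg_packet)) + wg_packet

def decapsulate(datagram: bytes) -> bytes:
    """Recover the original WireGuard packet from the outer framing."""
    (length,) = struct.unpack("!H", datagram[:2])
    return datagram[2:2 + length]

# A placeholder inner packet; real WireGuard handshake messages
# have their own binary format.
inner = b"\x01\x00\x00\x00" + b"wireguard-handshake"
assert decapsulate(encapsulate(inner)) == inner
```

The censor sees only the outer layer; because QUIC is already ubiquitous thanks to HTTP/3, blocking it wholesale would break much of the ordinary web.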

The feature is included in Mullvad’s desktop apps for Windows and macOS beginning with version 2025.9.

Users can activate it in the VPN settings, though if multiple connection attempts fail, the client will automatically switch over to QUIC on its own. Support for Android and iOS devices is also planned.

Different VPN companies are taking different routes to achieve similar goals. Proton VPN relies on its Stealth protocol, which disguises WireGuard traffic inside TLS.

Keep reading

Online Safety Act: Ofcom’s super-complaints scheme will be used to censor “emerging online harms”

Introduction

Ofcom is a public corporation accountable to the UK Parliament.  It is the UK’s regulator for communications services.  It regulates the UK’s broadband, home phone and mobile services, TV, radio and video on demand services, oversees the universal postal service and looks after the airwaves used by wireless devices such as cordless phones, walkie-talkies and even some car keys and doorbells.

With the introduction of the Online Safety Act (“the Act”) in 2023, Ofcom became the regulator for online safety.  The aim is that Ofcom will become the online regulator not only in the UK but globally.

“The Act gives Ofcom the powers they need to take appropriate action against all companies in scope, no matter where they are based, where services have relevant links with the UK. This means services with a significant number of UK users or where UK users are a target market, as well as other services which have in-scope content that presents a risk of significant harm to people in the UK.” –Online Safety Act: explainer, Department for Science, Innovation & Technology, UK Government, 24 April 2025

What are Super-Complaints?

A super-complaint is a formal mechanism established under the Online Safety Act that allows designated organisations to raise concerns about systemic issues affecting users of regulated online services.  Unlike individual complaints, super-complaints focus on broad, emerging trends or widespread problems across multiple platforms, rather than isolated incidents. 

Eligible entities, such as consumer protection bodies or organisations representing users, can submit these complaints to trigger a specific regulatory response from Ofcom.  The super-complaints regime is designed to complement Ofcom’s own research and horizon scanning, a forward-looking assessment of emerging risks and trends in digital services.

“The super-complaints regime, in line with the Act and recent Government regulations … will enable eligible entities to raise systemic issues that arise across services or, in exceptional circumstances, on one service, to our attention. We expect to consult on draft guidance for potential super-complainants in September 2025 and publish our final guidance in early 2026,” Ofcom’s roadmap to regulation of “online safety” states.

From November 2023 to 11 January 2024, the Government ran a public consultation on the eligible entity criteria and procedure for super-complaints.  The Government’s response to the consultation was published on 9 June 2025; the Ministerial Foreword by Baroness Maggie Jones stated:

The objective of the super-complaints regime is to ensure that eligible entities can make complaints to Ofcom, as regulator, to make them aware of existing or emerging online harms. This will also support Ofcom’s horizon scanning function, supporting Ofcom in taking an agile approach to regulating online harms.

Once approved by both Houses of Parliament, the super-complaints regime will come into force on 31 December 2025. –Ministerial Foreword, Consultation Outcome: Super-complaints eligible entity criteria and procedural requirements: government response, UK Government, 9 June 2025

Keep reading

Epstein’s Inbox Lays Out Gift Networks, PR Tactics, And Strange Habits

Nearly two weeks ago the House Oversight Committee released a trove of emails related to Jeffrey Epstein.

Today Bloomberg revealed it has obtained more than 18,000 more, and has dedicated a fleet of journalists to sifting through them, in what we imagine was an effort to find dirt on President Donald Trump.

And while mentions of Trump are scant, the emails reveal a vast network of gifts spanning Epstein victims, recruiters, and associates.

Trump Stuff

Donald Trump is mentioned a few times in the cache. He appears alongside Epstein and Maxwell at Mar-a-Lago in 2000, and at a 2003 dinner, described by New York Magazine, that Maxwell arranged at Epstein’s townhouse with “barely clad models.” In a Sept. 14, 2006 email, Maxwell sends Epstein a 51-name VIP list that includes Trump; Epstein replies “Remove trump,” and the list’s purpose is unclear. On Aug. 23, 2007, Maxwell writes to Epstein that reporters likely “went to donald trump” as the investigation into Epstein’s sex crimes intensified.

And there’s one message recounting Trump and Epstein’s real-estate rivalry over Abe Gosman’s former mansion (which Trump ultimately bought). 

The correspondence, most active from 2005 to 2008, includes a 2007 accountant’s spreadsheet itemizing nearly 2,000 gifts, purchases and payments totaling about $1.8 million. Many entries bear Maxwell’s initials, “GM,” indicating she helped arrange them. The records log intended recipients ranging from political aides and financiers to assistants and women who later identified as victims. The spreadsheet does not confirm whether gifts were actually delivered or accepted.

The emails also show Maxwell’s role was broader than she has publicly claimed.

Keep reading

Cybersecurity Experts Warn EU Against Chat Control 2.0 Regulation Ahead of Key Votes

A group of more than 500 experts in cybersecurity, cryptography, and computer science from 34 countries has issued a clear warning against the European Union’s proposed Chat Control 2.0 regulation.

In a joint open letter, the signatories describe the plan as “technically infeasible” and caution that it would open the door to “unprecedented capabilities for surveillance, control, and censorship.”

We obtained a copy of the open letter for you here.

Their statement arrives just days ahead of a critical European Council meeting on September 12, with a final vote set for October 14 that will determine whether the regulation moves forward.

The proposed law would compel messaging apps, email platforms, cloud services, and even providers of end-to-end encrypted communication to scan all user content automatically. This would apply to texts, images, and videos, whether or not there is any suspicion of wrongdoing.

According to the researchers, such detection systems cannot coexist with secure communication. “On-device detection, regardless of its technical implementation, inherently undermines the protections that end-to-end encryption is designed to guarantee.”

By forcing companies to monitor encrypted content, the regulation would introduce security weaknesses that could be exploited by malicious actors and hostile governments.

The scientists also emphasize the inaccuracy of the proposed approach. They argue that large-scale scanning systems produce unacceptable error rates and could generate enormous numbers of false reports.

Keep reading

The Supreme Court Fight That Could Decide Who Gets to Stay Online

A lineup of civil liberties organizations, technology companies, and internet freedom advocates has filed briefs supporting Cox Communications in a closely watched Supreme Court case that could dramatically alter how internet service providers respond to copyright complaints.

The case, Cox Communications v. Sony Music Entertainment, asks whether ISPs must terminate internet service to users accused, but not convicted, of piracy.

The Supreme Court’s upcoming decision is one of the most consequential internet-related cases in years, not just for copyright law, but for the future of how people access the internet in the United States.

At stake is a fundamental question: can internet service providers be held liable and forced to cut off internet access to users simply because they’ve been accused of copyright infringement, without any judicial process or proof?

Those standing with Cox include the Electronic Frontier Foundation (EFF), American Civil Liberties Union (ACLU), Google, Amazon, Microsoft, Mozilla, Pinterest, and X.

Kiwi Farms founder and president of the U.S. Internet Preservation Society (USIPS), Joshua Moon, also filed a brief “in support of neither party” that nonetheless opposes internet cutoffs, arguing that contributory copyright infringement is a judge-made doctrine without statutory basis that has become unworkable and unfair because courts have never clearly defined its “knowledge” standard.

This vagueness, combined with the rise of the DMCA’s notice-and-takedown regime, has chilled free speech and fair use, expanded copyright monopolies, and produced abusive practices.

Cox argues that the Fourth Circuit’s ruling, which held it liable for contributory infringement and ordered a new damages trial, created an untenable standard that would force ISPs to police user activity under threat of billion-dollar judgments.

In a joint amicus brief, the Electronic Frontier Foundation, the American Library Association, and other groups sharply criticized the Fourth Circuit’s interpretation and urged the Court to reject its liability framework.

They argue that the ruling could lead to widespread loss of internet access based on unproven accusations, with disproportionate consequences for ordinary people.

The brief stresses the importance of online access in modern life, stating: “Internet access is essential to participation in economic, cultural, and social activity.”

The groups caution that adopting a liability standard based on mere knowledge, rather than intent or inducement, would cause serious collateral harm. The brief warns that imposing such rules on ISPs would inevitably result in users being disconnected because someone else in their household or workplace was accused of infringement.

They further argue that the Fourth Circuit’s decision improperly expands secondary copyright liability without congressional authorization, threatening to undermine constitutional protections tied to access, communication, and expression.

Even the US Department of Justice weighed in, siding with Cox on key legal questions. The DOJ told the Court that the Fourth Circuit’s approach “cannot be reconciled with this Court’s precedent” and warned against punishing providers who may simply be indifferent but not complicit.

In its brief, the DOJ stressed: “The evidence demonstrated at most that Cox was indifferent to its subscribers’ infringement, not that Cox intended to participate in that infringement or wished to bring it about.”

On the issue of willful infringement, the DOJ added that “willfulness in the civil context generally requires knowledge or reckless disregard of the fact that one’s own conduct is unlawful,” not just knowledge of someone else’s actions.

A coalition of major tech companies, Google, Amazon, Microsoft, Mozilla, and Pinterest, also submitted a unified brief opposing the lower court’s interpretation of contributory liability. They argue that the DMCA already outlines specific safe harbor rules, and the Fourth Circuit’s ruling improperly weaponizes the absence of safe harbor protection.

“The Fourth Circuit’s ruling erroneously turns Congress’s DMCA safe harbors into a liability-creating mechanism,” the companies stated.

They argued that liability should only attach to those engaged in “conscious, culpable conduct substantially assisting the primary wrongdoer.”

The brief makes clear that a finding of willfulness demands more than simple awareness: “Willfulness turns on the defendant’s mental state regarding its own conduct.”

USIPS criticized the legal foundation of the lower court’s ruling as illegitimate and warned that fear of liability is driving ISPs toward censorship.

Keep reading

Undersea Cable Cuts Kill Internet To Parts Of Asia, Mideast

Undersea internet cables in the Red Sea have been cut, disrupting internet access to parts of Asia and the Middle East. The cause of the cuts wasn’t immediately clear, though China does have a shiny new deep-sea cable cutter (which we’re sure a bunch of countries have too).

Associated Press (via NBC) seems to think (“there has been concern”) that Houthi rebels from Yemen have been targeting the cables, which sounds absurd, though parts of the Red Sea are only about 100 m (330 ft) deep.

While the Houthis might not have submarines, undersea robots, or the ability to hit the deepest parts of the Red Sea, it’s possible to inflict damage on subsea cables without the backing of a major navy.

In March 2013, three divers were arrested by the Egyptian Navy off the coast of Alexandria after cutting the SeaMeWe-4 cable by detonating underwater explosives. Internet speeds reportedly fell around 60 percent after the incident. A motive wasn’t revealed and it’s unclear if they were charged and/or sentenced for the damage.

In 2007, it was reported that police had seized more than 500km of telecom cable taken by fishing vessels to sell for scrap – including an 11km segment identified as belonging to the SeaMeWe-3 cable. –Data Center Dynamics

So, who knows, but AP (deep state) spends considerable ink on the Houthis.

Keep reading

Age Verification Company Exposes User Data, Reinforcing Privacy Fears Over Digital ID Systems

A company tasked with confirming users’ ages before they access adult content may be compromising their privacy by leaking detailed browsing data, according to a report by the nonprofit AI Forensics.

The group’s investigation highlights serious flaws in how some sites are complying with growing online age-check requirements, raising new concerns about surveillance and data exposure under the guise of protecting children.

France’s law requires that users’ identities remain concealed not just from adult websites, but from the age verification services themselves.

Known as “double anonymity,” this standard is meant to ensure that those performing the verification process have no knowledge of which websites users are visiting or what content they attempt to access.
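The split that “double anonymity” demands can be sketched in miniature. In this illustrative Python example (not AgeGO's system, and a deliberate simplification: real schemes use blind signatures or privacy-pass-style tokens so the site can verify offline), the age-check provider signs a one-time token that carries no user identity and no site name, and the adult site learns only that the bearer passed a check:

```python
import hashlib
import hmac
import os
import secrets

VERIFIER_KEY = os.urandom(32)  # held only by the age-check provider

def issue_token() -> bytes:
    """Age-check provider: after verifying the user is an adult, sign a
    one-time token. Note what is *absent*: no user identity, and no
    record of which site the token will be spent on."""
    nonce = secrets.token_bytes(16)
    sig = hmac.new(VERIFIER_KEY, b"over-18:" + nonce, hashlib.sha256).digest()
    return nonce + sig

def verifier_check(token: bytes) -> bool:
    """Validate a token's signature without learning who presented it."""
    nonce, sig = token[:16], token[16:]
    expected = hmac.new(VERIFIER_KEY, b"over-18:" + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

def site_accepts(token: bytes, verify) -> bool:
    """Adult site: learns only 'this bearer passed an age check'."""
    return verify(token)

token = issue_token()
assert site_accepts(token, verifier_check)
```

Leaking the viewed URL back to the verifier, as AI Forensics reports AgeGO does, breaks exactly the property this separation is supposed to guarantee.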

But AI Forensics found that AgeGO, one of the verification systems in active use, doesn’t meet those expectations.

Instead, AgeGO’s system reportedly transmits precise details about the user’s activity, including the URL of the video being viewed and the name of the website.

Keep reading