Judge Blocks Virginia’s One-Hour Social Media Limit for Minors as Unconstitutional

A federal judge has blocked Virginia’s attempt to limit minors to one hour of social media per day, ruling the law violates the First Amendment. The decision is a significant check on a growing wave of state legislation that treats time spent reading, watching, and communicating online as something the government can ration.

Judge Patricia Tolliver Giles issued the preliminary injunction Friday, finding that Virginia “does not have the legal authority to block minors’ access to constitutionally protected speech until their parents give their consent by overriding a government-imposed default limit.”

We obtained a copy of the opinion for you here.

The ruling halts enforcement of Senate Bill 854, which carried fines of $7,500 per violation and required platforms to use “commercially reasonable methods” to verify user ages.

The law’s problem wasn’t just the one-hour cap. It was how the cap worked. The state set the default, and parents could ask to change it. That structure puts the government, not families, in control of baseline access to speech. Parental consent here overrides a government restriction that shouldn’t exist in the first place.

Giles found the law over-inclusive in a way that illustrates exactly how blunt these restrictions are. “A minor would be barred from watching an online church service if it exceeded an hour on YouTube,” she wrote, “yet, that same minor is allowed to watch provider-selected religious programming exceeding an hour in length on a streaming platform.”

The law doesn’t regulate harm. It regulates platforms, which means it catches protected speech indiscriminately.

NetChoice, the trade association whose members include Meta, YouTube, Snap, Reddit, and TikTok, sued to stop the law. In November, NetChoice argued that “Virginia has with one broad stroke restricted access to valuable sources for speaking and listening, learning about current events and otherwise exploring the vast realms of human thought and knowledge.” The judge agreed they had standing to pursue a permanent block and found they were likely to succeed on the merits.

Virginia’s attorney general is defending the law, joined by counterparts from 29 other states of both parties. A spokesperson said: “We look forward to continuing to enforce laws that empower parents to protect their children from the proven harms that can come through social media.” The new Democratic attorney general, Jay Jones, who took office in January, had announced he intended to fully enforce the law, which was signed by Republican Governor Glenn Youngkin.


X Cracks Down on AI-Generated War Propaganda: No More Cashing In on Fake Footage Without Labels

In an effort to protect truthful information during global conflicts, X has rolled out strict new rules targeting creators who peddle AI-generated videos of war without clear disclosures. This comes as pro-Iran propagandists flooded the platform with fabricated clips designed to sow chaos.

The policy shift, effective immediately, focuses on X’s Creator Revenue Sharing program. Creators posting AI-made content depicting armed conflicts must include a clear label or face penalties that hit where it hurts: their wallets.

According to details from X’s head of product, Nikita Bier, the platform is clamping down hard. “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” Bier stated.

Bier elaborated that X will use its Community Notes system—crowdsourced fact-checking that empowers users over elite moderators—along with post metadata to detect undeclared AI content. The rules don’t ban AI videos outright; they just demand transparency via X’s built-in “Made with AI” label option.

Violators get a 90-day suspension from earning ad revenue on their posts. A second offense leads to a permanent ban from the program. This targets those exploiting wars for profit, without stifling creative expression.

The update follows a surge in deceptive content amid the Iran-Israel clash. Pro-Iran accounts have pushed AI fakes, like one claiming Iranian missiles sank the USS Abraham Lincoln. “Iran’s IRGC claims to have struck USS Abraham Lincoln with ballistic missiles. LIE,” CENTCOM posted. “The Lincoln was not hit. The missiles launched didn’t even come close.”


Britain and Europe are struggling economically; their response? Regulate the world

It used to be said that the sun never set on the British Empire, so far-flung were its possessions. Britain has long since retreated from most of those territories, most recently, and controversially, in its attempt to relinquish control of the Chagos Islands. Yet even as it sheds physical dominion, Britain appears increasingly eager to export something else: its laws and regulations. 

In that project, it is joined enthusiastically by its former partners in the European Union. If the Old World has one major export left, it is bureaucracy.

The most obvious current target is X, Elon Musk’s platform, and its Grok AI tool. Some users of questionable taste quickly discovered that Grok could be used to generate deepfake images of celebrities in revealing attire. More seriously, it was alleged that the technology had been used to generate sexualised images of children. In response, last month the UK’s communications regulator, Ofcom, opened a formal investigation under the Online Safety Act, citing potential failures to prevent illegal content. The possible penalties are severe, ranging from multi-million-pound fines, based on the company’s global revenue, to a complete ban on the platform in the UK.

Senior British officials were quick to escalate the rhetoric. Prime Minister Keir Starmer and Technology Secretary Liz Kendall publicly condemned X and emphasised that all options, including nationwide blocking, were on the table. The message was unmistakable: compliance would be enforced, one way or another.

Two days later, X announced new restrictions to prevent Grok from editing images of real people into revealing scenarios and to introduce geo-blocking in jurisdictions where such content is illegal. Ofcom described these changes as “welcome” but insufficient, insisting its investigation would continue. Meanwhile, pressure spread outward. Other governments announced restrictions, and the European Commission expanded its own probes under the Digital Services Act. What began as a British enforcement action quickly morphed into coordinated global pressure, effectively pushing X toward worldwide policy changes.

This is the crucial point. British regulators were not merely seeking compliance for British users. They were pressing for changes to X’s global policies and technical architecture to govern speech and expression far beyond the UK’s borders. What might initially have been framed as a failure to impose sensible safeguards on a powerful new tool has become a test case for whether regulators in one jurisdiction can dictate technological limits everywhere else.

This pattern is not new. Ofcom has already attempted to extend its reach directly into the United States, brushing aside the constitutional protections afforded to Americans. Since the Online Safety Act came into force in 2025, Ofcom has adopted an aggressively expansive interpretation of its authority, asserting that any online service “with links to the UK,” meaning merely accessible to UK users and deemed to pose “risks” to them, must comply with detailed duties to assess, mitigate, and report on illegal harms. Services provided entirely from abroad are explicitly deemed “in scope” if they meet these criteria.

The flashpoints have been 4chan and Kiwi Farms, two US-based forums notorious for unmoderated speech and even harassment campaigns. In mid-2025, Ofcom initiated investigations into both for failing to respond to statutory information requests and for failing to complete the required risk assessments. It ultimately issued a confirmation decision against 4chan, imposing a £20,000 fine plus daily penalties for continued non-compliance, despite the site having no physical presence, staff, or infrastructure in the UK.

Rather than comply, the operators of both sites filed suit in US federal court, arguing that Ofcom’s actions violate the First Amendment and that the regulator lacks jurisdiction to enforce British law against American companies. The litigation frames the dispute starkly: whether a foreign regulator may, through regulatory pressure, compel changes to lawful American speech.

That question has now spilt into US politics. Senior American officials have criticised Ofcom’s posture as an extraterritorial threat to free speech, and at least one member of Congress has threatened retaliatory legislation. What Britain views as online safety increasingly appears, from across the Atlantic, to be regulatory imperialism.


Pro-Palestinian Columbia University Student Group Posts “Marg Bar Amrika” (Death to America)

An unofficial Columbia University student group, the pro-Palestinian ‘Columbia University Apartheid Divest,’ posted “marg bar amrika,” the Islamist Iranian regime chant meaning “death to America,” to X on Saturday in response to the killing of Iran’s Supreme Leader, Ayatollah Ali Khamenei, in joint U.S.-Israeli air strikes.

Via Wikipedia: “Death to America (Persian: مرگ بر آمریکا Marg bar Amrika) is an anti-American political slogan and chant which has been in use in Iran since the inception of the Islamic Revolution in 1978. Ayatollah Khomeini, the first leader of the Islamic Republic of Iran, popularized the term. In most official Iranian translations, the phrase is translated into English as the less offensive ‘Down with America.’ The chant ‘Death to America’ has come to be employed by Islamists, anti-imperialists, and decolonisation movements worldwide. Great Satan is another closely linked propaganda phrase.”

The group later posted that X made it take the post down, but that it stood by the thought: “X forced us to delete our ‘marg bar amrika’ tweet in order to gain back access to our account but the sentiment still stands.”


Xbox UK Age Verification Launch Locks Out Thousands of Players

Xbox’s mandatory age verification rollout in the UK was a disaster almost immediately, locking thousands of players out of games, voice chat, and apps like Discord with no clear path back in.

The failures started overnight. Players report being ejected mid-session to complete age verification checks that then took hours, stalled indefinitely, or simply refused to work regardless of what identification they submitted.

Government ID, mobile numbers, live video age estimation: the system rejected them all for many users. Others made it through verification only to find their accounts still restricted, with no explanation and no recourse beyond contacting Xbox support.

Microsoft’s support page now carries a notice confirming it is “aware of the issue and working to fix it.” That’s the extent of the official guidance.

The verification requirement exists to comply with the UK’s new censorship law, the Online Safety Act, legislation mandating that platforms facilitating online communication verify user ages. The actual system Xbox built to deliver that compliance forcibly disconnected players from games in progress, stripped away chat functionality with anyone outside their friends list, and blocked access to third-party services.

Users who have held Xbox accounts for over 18 years found themselves flagged for verification anyway. The system doesn’t consider account age, history, or any contextual signal that might indicate an adult user. Everyone gets treated as potentially underage until they hand over documentation.

“The amount of times I’ve tried to do any method of the verification tonight is stupid,” wrote one user. “Can’t change privacy settings on my Xbox to allow me to see mods on games too. Can’t chat on Discord. Utterly broken.”

“Been trying to verify my ID for the past few hours,” added another. “It finally worked but I can’t access anything still. No Discord access at all.”


Germany’s SPD Pushes Mandatory Government ID Verification for Social Media

Germany’s SPD wants to end anonymous access to social media.

Tim Klüssendorf, Secretary General of the Social Democratic Party, confirmed this week that his party is pushing mandatory age verification for all social media platforms, tied directly to the EU Digital Identity Wallet, the bloc’s official government ID scheme.

He’s already in talks with coalition partner CDU, Chancellor Friedrich Merz’s party, which called for an end to online anonymity just last week. Both parties now want the same thing.

Naturally, Klüssendorf framed the proposal as child protection. “We are currently not meeting the state’s obligation to protect. I believe children and young people are particularly at risk there. That has been proven,” he said after an SPD leadership meeting in Berlin.

The platforms, he added, are currently “operating a business model that is simply not compatible with our democratic principles.”

The SPD’s formal position, adopted in an internal policy paper, breaks access into three tiers by age. Under-14s would face a complete ban from social media platforms. Under-16s could access only state-approved “youth versions,” stripped of algorithmic recommendation, infinite scroll, autoplay, and engagement reward systems. For everyone 16 and older, including adults, algorithmic content recommendations would be switched off by default. Want the algorithm? You’d have to actively opt in.

The proposal sounds measured. It isn’t. Mandatory EUDI Wallet verification means linking your social media account to a government-issued digital identity before you can post, scroll, or log in.

Every platform interaction becomes traceable to a verified real-world identity. Klüssendorf acknowledged the data tension, insisting the SPD wants “a very data-minimising solution that is also in the hands of state regulation” rather than handing platforms more user data to monetize.

The EUDI Wallet architecture, at least in theory, allows age confirmation without transmitting full identity details. Whether that promise survives contact with implementation is a different question.
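The idea behind that promise can be sketched in a few lines. This is a hypothetical, simplified illustration of selective disclosure, not the EUDI Wallet’s actual protocol: the issuer signs only a boolean “over 16” predicate, so the platform verifying it never sees the holder’s birthdate. A real deployment would use the issuer’s asymmetric public-key signature; the HMAC shared secret here merely stands in for that.

```python
import hmac, hashlib, json
from datetime import date

# Stand-in for the issuer's real signing key (assumption for this sketch).
ISSUER_KEY = b"demo-issuer-secret"

def issue_age_attestation(birthdate: date, threshold: int, today: date) -> dict:
    """Issuer side: derive and sign the predicate; the birthdate never leaves here."""
    age = (today - birthdate).days // 365
    claim = {"predicate": f"age_over_{threshold}", "value": age >= threshold}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_attestation(att: dict) -> bool:
    """Platform side: check the signature; only the boolean is ever disclosed."""
    payload = json.dumps(att["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"]) and att["claim"]["value"]

att = issue_age_attestation(date(2007, 3, 1), 16, date(2026, 2, 1))
print(verify_attestation(att))  # the platform learns only "over 16": True
```

The privacy question is whether implementations actually keep the check this narrow, or end up transmitting (or logging) the full identity record alongside the predicate.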


Russia’s FSB Charges Telegram Founder Pavel Durov with Aiding Terrorism

Russia’s Federal Security Service is now pursuing a criminal terrorism case against Pavel Durov, the founder of Telegram. The charge, “assistance to terrorist activities” under Article 205.1 of the Russian Criminal Code, carries up to 15 years in prison. The accusation was published Tuesday in Rossiyskaya Gazeta, Russia’s official state newspaper, which said the article was “based on materials from Russia’s Federal Security Service” and called Telegram “a tool for hybrid threats.”

The timing is hardly subtle. For months, Moscow has been throttling Telegram’s speed, blocking its voice and video calls, and pushing tens of millions of Russians toward MAX, a state-built messaging app with no end-to-end encryption, legally required integration with the FSB’s surveillance infrastructure, and a privacy policy that allows sharing user data with government authorities on request.

MAX has been pre-installed on every smartphone sold in Russia since September 2025. Telegram, used by more than 90 million Russians every month, is the target. MAX is the replacement. The terrorism charge against Durov is the lever.

Durov responded on his Telegram channel: “Russia has opened a criminal case against me for ‘aiding terrorism.’ Each day, the authorities fabricate new pretexts to restrict Russians’ access to Telegram as they seek to suppress the right to privacy and free speech.”


LA County Sues Roblox Over False Child Safety Claims and Lack of Age Verification

Los Angeles County filed a lawsuit against Roblox, alleging the platform has built a system that leaves children exposed to grooming because it does not go far enough in checking user IDs to prove their age.

The suit names the company for public nuisance and violations of California’s false advertising law.

We obtained a copy of the complaint for you here.

The complaint is direct: “Roblox portrays its platform as a safe and appropriate place for children to play. In reality, and as Roblox well knows, the design of its platform makes children easy prey for pedophiles.”

If you weren’t aware of how big Roblox is and why this matters: Roblox serves roughly 144 million daily active users, more than Fortnite and the entire Steam userbase combined.

The platform also lets people create and play games, chat through customizable avatars, and spend real money on virtual currency.

LA County’s suit argues Roblox has consistently failed to moderate user-generated content, enforce its own age restrictions, or honestly disclose the risks predators pose to children using the service.

The platform’s moderation gaps have attracted scrutiny for years, and grooming of minors has been a documented problem. But the LA lawsuit is the latest in a pattern of governments and researchers flagging the same issue Roblox has repeatedly said it is addressing, and the latest attempt to mandate digital ID checks.

Roblox rejected the suit’s allegations. A company spokesman said the platform was built “with safety at its core” and pointed to existing protections: “We have advanced safeguards that monitor our platform for harmful content and communications, and users cannot send or receive images via chat, avoiding one of the most prevalent opportunities for misuse seen elsewhere online.”

The company added that it takes action against rule violators and cooperates with law enforcement, closing with: “There is no finish line when it comes to protecting kids and, while no system can be perfect, our commitment to safety never ends.”

The false advertising angle is the most important to note. LA isn’t suing Roblox over what data it collects or who can see it. The county is suing because the company told parents the platform was safe for kids while allegedly knowing otherwise.


FBI Wins Court Ruling to Keep Twitter Payments Secret

A federal judge has handed the FBI a win in its attempts to keep secrets. On February 4th, Chief Judge James Boasberg ruled that the bureau can keep secret the precise amounts it paid Twitter between 2016 and 2023 for complying with legal process requests.

Judicial Watch, which had sued under the Freedom of Information Act, walked away empty-handed.

We obtained a copy of the opinion for you here.

You may remember our earlier reporting on how the FBI was paying Twitter. The payments totaled at least $3.4 million between October 2019 and February 2021 alone. That figure emerged from the Twitter Files released in December 2022. The FBI has never confirmed it. Neither has Twitter. And now, thanks to Boasberg’s ruling, the quarterly breakdown that would show exactly when the money flowed, and how much, stays buried.


Zuckerberg’s “Fix” for Child Safety Could End Anonymous Internet Access for Everyone

Mark Zuckerberg spent more than five hours on the stand in Los Angeles Superior Court on Wednesday, testifying before a jury for the first time about claims that Meta deliberately designed Instagram to addict children.

The headline from most coverage was the spectacle: an annotated paper trail of internal emails, a 35-foot collage of the plaintiff’s Instagram posts unspooled across the courtroom, a CEO growing visibly agitated under cross-examination.

The more important story is what Wednesday’s proceedings are being used to build.

The trial is framed as a child safety case. What it is actually doing, especially through Zuckerberg’s own testimony, is laying the political and legal groundwork for mandatory identity verification across the internet.

And Zuckerberg, rather than pushing back on that outcome, offered the court his preferred implementation plan.
