TikTok Says Privacy Makes Users Less Safe

Over the past five years, the largest social platforms settled on a clear position about private messaging: lock it down. Meta turned on end-to-end encryption by default in Messenger and began extending it to Instagram. X added encrypted direct messages. Metadata is still exposed and the choice of protocol matters, but, generally speaking, the move was toward more privacy for the messages themselves.

TikTok looked at that trend and made a different choice. Then it scheduled a briefing in London with the BBC to explain the reasoning.

The explanation was safety.

TikTok's UK operation belongs to ByteDance, a Chinese technology company that operates under Beijing's jurisdiction. China maintains strict limits on end-to-end encryption inside its borders. TikTok, after its own review of the issue, reached the same policy outcome for its messaging system.

Alan Woodward, a cybersecurity professor at Surrey University, raised that point directly. The company’s “Chinese influence might be behind the decision,” he said, adding that end-to-end encryption is “largely banned in China.”

TikTok declined to engage with that suggestion, of course. The remark hung in the air. It is worth adding, though, that TikTok's US operation has given no indication that it is moving toward private messaging standards either.

End-to-end encryption is simple in theory. Only the people in a conversation hold the keys, so only they can read the messages. The platform running the service cannot access the content. Governments can demand it, but there is nothing readable to hand over. Engineers inside the company cannot view it.

TikTok’s system operates in a different way. Messages on the platform remain readable to the company. Employees can access them under defined circumstances. Law enforcement agencies can request them through legal channels.
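The two models can be illustrated with a deliberately simplified sketch. This is a toy XOR-keystream cipher, not any platform's real protocol, and every name in it is invented for illustration; the only point is who holds the key:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR cipher keyed by a SHA-256 keystream. Illustration only, NOT secure."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# End-to-end model: only the two endpoints hold the key; the server relays
# ciphertext it cannot decrypt, so it cannot scan, moderate, or disclose it.
e2e_key = secrets.token_bytes(32)
ciphertext = keystream_xor(e2e_key, b"meet at 6")
assert keystream_xor(e2e_key, ciphertext) == b"meet at 6"

# Server-readable model: the server itself holds the key, so it can scan
# messages for harmful activity -- and can also produce them on request.
server_key = secrets.token_bytes(32)
stored = keystream_xor(server_key, b"meet at 6")
print(keystream_xor(server_key, stored))
```

The asymmetry in the sketch is the whole debate: any capability the key-holder has for safety scanning is, mechanically, the same capability used to answer a legal demand.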

TikTok argues that readable messages allow the company to identify harmful activity.

The debate turns on a basic technical fact. “We can read your messages to catch predators,” and “we can read your messages” describe the same system.

Keep reading

Britain is Trying to Censor Americans – But America is Fighting Back

Ofcom has confirmed it is referring 4chan to a final enforcement decision under the Online Safety Act. The target is a Delaware company that runs an entirely anonymous imageboard from the United States, with no offices, staff, servers or assets in Britain. The demand: install age-verification systems and content filters so that British children cannot access the site or face daily fines levied from London on an American platform. This case is not an outlier. It is the clearest real-world demonstration of what the new generation of “online safety” laws requires: private companies must build automated filters that decide, in advance, which legal speech is too harmful for minors to see. The question the regulators never quite answer is simple: what exactly does the filter catch?

In the early 2020s, a political consensus formed on both sides of the Atlantic: social media is harming children and something must be done. The result in Washington was the Kids’ Online Safety Act (KOSA); in Westminster, the Online Safety Act (OSA), which received Royal Assent in October 2023 and began enforcement in 2025. The political appeal of both measures is genuine. Adolescent mental health deteriorated in the 2010s, parents are alarmed and platforms have appeared indifferent. But good intentions do not make good law, and the form these interventions took is constitutionally and morally indefensible. Both KOSA and the OSA rest on a duty-of-care model: platforms must take “reasonable measures” or implement “proportionate systems” to prevent minors from encountering content associated with depression, anxiety, eating disorders, self-harm and suicide. This is not a regulation of conduct. It is a mandate to suppress speech based on its topic and its predicted emotional effect on a reader: the very definition of content-based regulation.

The American Civil Liberties Union (ACLU) stated the constitutional problem plainly in its July 2023 letter opposing KOSA: the bill “is a content-based regulation of constitutionally protected speech” that “will silence important conversations, limit minors’ access to potentially vital resources and violate the First Amendment”. Under Reed v. Town of Gilbert, a law is content-based if it “applies to particular speech because of the topic discussed or the idea or message expressed”. Content-based regulations are “presumptively unconstitutional”.

Keep reading

UK Consults on Social Media Age Verification While Directing Parents to Report “Hate Speech” to Big Tech

The British government launched a consultation this week that could require age verification for anyone using social media, gaming sites, or AI chatbots.

The consultation, titled “Growing up in the online world,” opened on March 2 and closes on May 26, 2026. It asks the public whether the government should ban under-16s from social media entirely, impose mandatory overnight curfews on platform access, restrict AI chatbot features for minors, and require platforms to disable “addictive design features” like infinite scrolling and autoplay.

The government says it will respond in summer 2026, and Parliament has already handed ministers new legal powers to act on the findings without waiting for fresh primary legislation.

The Prime Minister announced those powers on February 16, weeks before the consultation even opened. The government can now move faster once it decides what it wants. What the public thinks determines the packaging, not the destination.

Technology Secretary Liz Kendall framed it this way: “The path to a good life is a great childhood, one full of love, learning, and play. That applies just as much to the online world as it does to the real one.”

The actual policy tools being considered are a different matter.

Age verification, as a mechanism, works by proving identity. To prove how old they are, every user must prove who they are.

A social media platform that must exclude under-16s must verify the age of its over-16s. That means collecting identity documents, linking browsing activity to real identities, or building infrastructure that a government can later compel to serve other purposes.

The surveillance architecture required to enforce a children’s safety law is the same architecture required to surveil adults. It gets built for one reason. It gets used for others.

Then there’s the “Help your child stay safe online” campaign site, which the government launched alongside the consultation. The site includes a page directing parents to report “bullying, threats, harassment, hate speech, and content promoting self-harm or suicide” directly to platforms, with links to the reporting tools of Instagram, Snapchat, Facebook, WhatsApp, TikTok, Discord, YouTube, and Twitch.

Keep reading

UK COUNTER-TERROR Police Ad Warns Teens Sharing ‘Funny’ Content Could Be TERRORISM

The UK’s Counter Terrorism Police have released a disturbing advertisement depicting a white teenager facing police seizure of his devices and a potential criminal record simply for sharing a link he found “funny”, content that, we are told, was later deemed terrorist material.

This move, part of the broader Prevent anti-radicalization strategy, underscores the UK regime’s push to police online activity among youth, framing it as a gateway to extremism while ignoring surging real-world dangers from mass migration.

In the ad, a teen laments: “I just got all my device taken away by the police… My mom couldn’t believe it. I might get a criminal record and not be able to go to college.” He then explains: “I only shared a link. I just thought it was funny, but it was terrorist content.”

Counter Terrorism Policing describes itself as “a collaboration of UK police forces working with the UK intelligence community to help protect the public and our national security by preventing, deterring, and investigating terrorist activity.”

A recent academic analysis in the Journal of Policing, Intelligence and Counter Terrorism highlights the escalating involvement of family courts and Prevent in childhood radicalization cases, noting “the number of children referred to Prevent and Channel due to concerns that they might be at risk of, or from, radicalisation has been steadily increasing since 2015.”

It adds that professionals like teachers are “legally obligated to refer that child to the police under the auspices of Prevent” if suspecting risk.

Government guidance on Prevent duty in schools urges communication with parents to spot signs, but also empowers referrals if family members show vulnerability. As one factsheet states, referrals can come from “a family member, friend, colleague, or a professional.”

Keep reading

Judge Blocks Virginia’s One-Hour Social Media Limit for Minors as Unconstitutional

A federal judge has blocked Virginia’s attempt to limit minors to one hour of social media per day, ruling the law violates the First Amendment. The decision is a significant check on a growing wave of state legislation that treats time spent reading, watching, and communicating online as something the government can ration.

Judge Patricia Tolliver Giles issued the preliminary injunction Friday, finding that Virginia “does not have the legal authority to block minors’ access to constitutionally protected speech until their parents give their consent by overriding a government-imposed default limit.”

We obtained a copy of the opinion for you here.

The ruling halts enforcement of Senate Bill 854, which carried fines of $7,500 per violation and required platforms to use “commercially reasonable methods” to verify user ages.

The law’s problem wasn’t just the one-hour cap. It was how the cap worked. The state set the default, and parents could ask to change it. That structure puts the government, not families, in control of baseline access to speech. Parental consent here overrides a government restriction that shouldn’t exist in the first place.

Giles found the law over-inclusive in a way that illustrates exactly how blunt these restrictions are. “A minor would be barred from watching an online church service if it exceeded an hour on YouTube,” she wrote, “yet, that same minor is allowed to watch provider-selected religious programming exceeding an hour in length on a streaming platform.”

The law doesn’t regulate harm. It regulates platforms, which means it catches protected speech indiscriminately.

NetChoice, the trade association whose members include Meta, YouTube, Snap, Reddit, and TikTok, sued to stop the law. In November, NetChoice argued that “Virginia has with one broad stroke restricted access to valuable sources for speaking and listening, learning about current events and otherwise exploring the vast realms of human thought and knowledge.” The judge agreed they had standing to pursue a permanent block and found they were likely to succeed on the merits.

Virginia’s attorney general is defending the law alongside 29 other states from both parties. A spokesperson said: “We look forward to continuing to enforce laws that empower parents to protect their children from the proven harms that can come through social media.” The new Democratic attorney general, Jay Jones, who took office in January, had announced he intended to fully enforce the law, which was signed by Republican Governor Glenn Youngkin.

Keep reading

X CRACKS DOWN on AI-Generated War Propaganda: NO MORE Cashing In on Fake Footage Without Labels

In an effort to protect truthful information during global conflicts, X has rolled out strict new rules targeting creators who peddle AI-generated videos of war without clear disclosures. This comes as pro-Iran propagandists flooded the platform with fabricated clips designed to sow chaos.

The policy shift, effective immediately, focuses on X’s Creator Revenue Sharing program. Creators posting AI-made content showing armed conflicts must include a clear label, or face penalties that hit where it hurts: their wallets.

According to details from X’s head of product, Nikita Bier, the platform is clamping down hard. “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” Bier stated.

Bier elaborated that X will use its Community Notes system—crowdsourced fact-checking that empowers users over elite moderators—along with post metadata to detect undeclared AI content. The rules don’t ban AI videos outright; they just demand transparency via X’s built-in “Made with AI” label option.

Violators get a 90-day suspension from earning ad revenue on their posts. A second offense leads to a permanent ban from the program. This targets those exploiting wars for profit, without stifling creative expression.

The update follows a surge in deceptive content amid the Iran-Israel clash. Pro-Iran accounts have pushed AI fakes, like one claiming Iranian missiles sank the USS Abraham Lincoln. “Iran’s IRGC claims to have struck USS Abraham Lincoln with ballistic missiles. LIE,” CENTCOM posted. “The Lincoln was not hit. The missiles launched didn’t even come close.”

Keep reading

Britain and Europe are struggling economically; their response? Regulate the world

It used to be said that the sun never set on the British Empire, so far-flung were its possessions. Britain has long since retreated from most of those territories, most recently, and controversially, in its attempt to relinquish control of the Chagos Islands. Yet even as it sheds physical dominion, Britain appears increasingly eager to export something else: its laws and regulations. 

In that project, it is joined enthusiastically by its former partners in the European Union. If the Old World has one major export left, it is bureaucracy.

The most obvious current target is X, Elon Musk’s platform, and its Grok AI tool. Some users of questionable taste quickly discovered that Grok could be used to generate deepfake images of celebrities in revealing attire. More seriously, it was alleged that the technology had been used to generate sexualised images of children. In response, last month the UK’s communications regulator, Ofcom, opened a formal investigation under the Online Safety Act, citing potential failures to prevent illegal content. The possible penalties are severe, ranging from multi-million-pound fines, based on the company’s global revenue, to a complete ban on the platform in the UK.

Senior British officials were quick to escalate the rhetoric. Prime Minister Keir Starmer and Technology Secretary Liz Kendall publicly condemned X and emphasised that all options, including nationwide blocking, were on the table. The message was unmistakable: compliance would be enforced, one way or another.

Two days later, X announced new restrictions to prevent Grok from editing images of real people into revealing scenarios and to introduce geo-blocking in jurisdictions where such content is illegal. Ofcom described these changes as “welcome” but insufficient, insisting its investigation would continue. Meanwhile, pressure spread outward. Other governments announced restrictions, and the European Commission expanded its own probes under the Digital Services Act. What began as a British enforcement action quickly morphed into coordinated global pressure, effectively pushing X toward worldwide policy changes.

This is the crucial point. British regulators were not merely seeking compliance for British users. They were pressing for changes to X’s global policies and technical architecture to govern speech and expression far beyond the UK’s borders. What might initially have been framed as a failure to impose sensible safeguards on a powerful new tool has become a test case for whether regulators in one jurisdiction can dictate technological limits everywhere else.

This pattern is not new. Ofcom has already attempted to extend its reach directly into the United States, brushing aside the constitutional protections afforded to Americans. Since the Online Safety Act came into force in 2025, Ofcom has adopted an aggressively expansive interpretation of its authority, asserting that any online service “with links to the UK,” meaning merely accessible to UK users and deemed to pose “risks” to them, must comply with detailed duties to assess, mitigate, and report on illegal harms. Services provided entirely from abroad are explicitly deemed “in scope” if they meet these criteria.

The flashpoints have been 4chan and Kiwi Farms, two US-based forums notorious for unmoderated speech and even harassment campaigns. In mid-2025, Ofcom initiated investigations into both for failing to respond to statutory information requests and for failing to complete the required risk assessments. It ultimately issued a confirmation decision against 4chan, imposing a £20,000 fine plus daily penalties for continued non-compliance, despite the site having no physical presence, staff, or infrastructure in the UK.

Rather than comply, the operators of both sites filed suit in US federal court, arguing that Ofcom’s actions violate the First Amendment and that the regulator lacks jurisdiction to enforce British law against American companies. The litigation frames the dispute starkly: whether a foreign regulator may, through regulatory pressure, compel changes to lawful American speech.

That question has now spilt into US politics. Senior American officials have criticised Ofcom’s posture as an extraterritorial threat to free speech, and at least one member of Congress has threatened retaliatory legislation. What Britain views as online safety increasingly appears, from across the Atlantic, to be regulatory imperialism.

Keep reading

Pro-Palestinian Columbia University Student Group Posts “Marg Bar Amrika” (Death to America)

An unofficial Columbia University student group, the pro-Palestinian ‘Columbia University Apartheid Divest’, posted “marg bar amrika,” the Islamist Iranian regime chant that means “death to America,” to X on Saturday in response to the killing of Iran’s Supreme Leader, Ayatollah Ali Khamenei, in joint U.S.-Israeli air strikes.

Via Wikipedia: “Death to America (Persian: مرگ بر آمریکا Marg bar Amrika) is an anti-American political slogan and chant which has been in use in Iran since the inception of the Islamic Revolution in 1978. Ayatollah Khomeini, the first leader of the Islamic Republic of Iran, popularized the term. In most official Iranian translations, the phrase is translated into English as the less offensive ‘Down with America.’ The chant ‘Death to America’ has come to be employed by Islamists, anti-imperialists, and decolonisation movements worldwide. Great Satan is another closely linked propaganda phrase.”

The group later posted that X made them take the post down, but that it stood by the thought: “X forced us to delete our ‘marg bar amrika’ tweet in order to gain back access to our account but the sentiment still stands.”

Keep reading

Xbox UK Age Verification Launch Locks Out Thousands of Players

Xbox’s mandatory age verification rollout in the UK was a disaster almost immediately, locking thousands of players out of games, voice chat, and apps like Discord with no clear path back in.

The failures started overnight. Players report being ejected mid-session to complete age verification checks that then took hours, stalled indefinitely, or simply refused to work regardless of what identification they submitted.

Government ID, mobile numbers, live video age estimation: the system rejected them all for many users. Others made it through verification only to find their accounts still restricted, with no explanation and no recourse beyond contacting Xbox support.

Microsoft’s support page now carries a notice confirming it is “aware of the issue and working to fix it.” That’s the extent of the official guidance.

The verification requirement exists to comply with the UK’s new censorship law, the Online Safety Act, legislation mandating that platforms facilitating online communication verify user ages. The actual system Xbox built to deliver that compliance forcibly disconnected players from games in progress, stripped away chat functionality with anyone outside their friends list, and blocked access to third-party services.

Users who have held Xbox accounts for over 18 years found themselves flagged for verification anyway. The system doesn’t consider account age, history, or any contextual signal that might indicate an adult user. Everyone gets treated as potentially underage until they hand over documentation.

“The amount of times I’ve tried to do any method of the verification tonight is stupid,” wrote one user. “Can’t change privacy settings on my Xbox to allow me to see mods on games too. Can’t chat on Discord. Utterly broken.”

“Been trying to verify my ID for the past few hours,” added another. “It finally worked but I can’t access anything still. No Discord access at all.”

Keep reading

Germany’s SPD Pushes Mandatory Government ID Verification for Social Media

Germany’s SPD wants to end anonymous access to social media in the country.

Tim Klüssendorf, Secretary General of the Social Democratic Party, confirmed this week that his party is pushing mandatory age verification for all social media platforms, tied directly to the EU Digital Identity Wallet, the bloc’s official government ID scheme.

He’s already in talks with coalition partner CDU, Chancellor Friedrich Merz’s party, which called for an end to online anonymity just last week. Both parties now want the same thing.

Naturally, Klüssendorf framed the proposal as child protection. “We are currently not meeting the state’s obligation to protect. I believe children and young people are particularly at risk there. That has been proven,” he said after an SPD leadership meeting in Berlin.

The platforms, he added, are currently “operating a business model that is simply not compatible with our democratic principles.”

The SPD’s formal position, adopted in an internal policy paper, breaks access into three tiers by age. Under-14s would face a complete ban from social media platforms. Under-16s could access only state-approved “youth versions,” stripped of algorithmic recommendation, infinite scroll, autoplay, and engagement reward systems. For everyone 16 and older, including adults, algorithmic content recommendations would be switched off by default. Want the algorithm? You’d have to actively opt in.

The proposal sounds measured. It isn’t. Mandatory EUDI Wallet verification means linking your social media account to a government-issued digital identity before you can post, scroll, or log in.

Every platform interaction becomes traceable to a verified real-world identity. Klüssendorf acknowledged the data tension, insisting the SPD wants “a very data-minimising solution that is also in the hands of state regulation” rather than handing platforms more user data to monetize.

The EUDI Wallet architecture, at least in theory, allows age confirmation without transmitting full identity details. Whether that promise survives contact with implementation is a different question.
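In outline, that design is a signed attestation: an issuer the platform trusts vouches for a single predicate ("over 16") without revealing name or birthdate. Here is a minimal sketch of the idea, with every function name invented for illustration; a real EUDI Wallet uses public-key credentials and selective-disclosure presentation formats, for which an HMAC shared between issuer and verifier stands in here:

```python
import hashlib
import hmac
import json
import secrets

# Key shared between issuer and verifier in this toy; real wallets use
# public-key signatures so the verifier holds no signing capability.
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token(over_16: bool) -> dict:
    """The issuer attests to ONE predicate -- no name, no birthdate."""
    claim = json.dumps({"over_16": over_16, "nonce": secrets.token_hex(8)})
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_verifies(token: dict) -> bool:
    """The platform checks the signature and the predicate, nothing else."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False
    return json.loads(token["claim"])["over_16"] is True

token = issue_age_token(over_16=True)
print(platform_verifies(token))
```

Even in this minimal form the tension is visible: the platform learns only a yes/no answer, but the issuer necessarily knows which accounts requested attestations, which is exactly the implementation question the proposal leaves open.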

Keep reading