Xbox UK Age Verification Launch Locks Out Thousands of Players

Xbox’s mandatory age verification rollout in the UK was a disaster almost immediately, locking thousands of players out of games, voice chat, and apps like Discord with no clear path back in.

The failures started overnight. Players report being ejected mid-session to complete age verification checks that then took hours, stalled indefinitely, or simply refused to work regardless of what identification they submitted.

Government ID, mobile numbers, live video age estimation: the system rejected them all for many users. Others made it through verification only to find their accounts still restricted, with no explanation and no recourse beyond contacting Xbox support.

Microsoft’s support page now carries a notice confirming it is “aware of the issue and working to fix it.” That’s the extent of the official guidance.

The verification requirement exists to comply with the UK’s new censorship law, the Online Safety Act, legislation mandating that platforms facilitating online communication verify user ages. The actual system Xbox built to deliver that compliance forcibly disconnected players from games in progress, stripped away chat with anyone outside their friends list, and blocked access to third-party services.

Users who have held Xbox accounts for over 18 years found themselves flagged for verification anyway. The system doesn’t consider account age, history, or any contextual signal that might indicate an adult user. Everyone gets treated as potentially underage until they hand over documentation.

“The amount of times I’ve tried to do any method of the verification tonight is stupid,” wrote one user. “Can’t change privacy settings on my Xbox to allow me to see mods on games too. Can’t chat on Discord. Utterly broken.”

“Been trying to verify my ID for the past few hours,” added another. “It finally worked but I can’t access anything still. No Discord access at all.”


Germany’s SPD Pushes Mandatory Government ID Verification for Social Media

Germany’s SPD wants to end anonymous access to social media.

Tim Klüssendorf, Secretary General of the Social Democratic Party, confirmed this week that his party is pushing mandatory age verification for all social media platforms, tied directly to the EU Digital Identity Wallet, the bloc’s official government ID scheme.

He’s already in talks with coalition partner CDU, Chancellor Friedrich Merz’s party, which called for an end to online anonymity just last week. Both parties now want the same thing.

Naturally, Klüssendorf framed the proposal as child protection. “We are currently not meeting the state’s obligation to protect. I believe children and young people are particularly at risk there. That has been proven,” he said after an SPD leadership meeting in Berlin.

The platforms, he added, are currently “operating a business model that is simply not compatible with our democratic principles.”

The SPD’s formal position, adopted in an internal policy paper, breaks access into three tiers by age. Under-14s would face a complete ban from social media platforms. Under-16s could access only state-approved “youth versions,” stripped of algorithmic recommendation, infinite scroll, autoplay, and engagement reward systems. For everyone 16 and older, including adults, algorithmic content recommendations would be switched off by default. Want the algorithm? You’d have to actively opt in.

The proposal sounds measured. It isn’t. Mandatory EUDI Wallet verification means linking your social media account to a government-issued digital identity before you can post, scroll, or log in.

Every platform interaction becomes traceable to a verified real-world identity. Klüssendorf acknowledged the data tension, insisting the SPD wants “a very data-minimising solution that is also in the hands of state regulation” rather than handing platforms more user data to monetize.

The EUDI Wallet architecture, at least in theory, allows age confirmation without transmitting full identity details. Whether that promise survives contact with implementation is a different question.
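That in-theory promise rests on selective disclosure: the wallet presents a signed attestation of a single claim ("over 18") rather than the full credential. A minimal sketch of the idea follows, assuming a trusted issuer and a shared verification key; real EUDI Wallet credentials use public-key formats such as SD-JWT rather than HMAC, and every name here is illustrative, not the actual EUDI API.

```python
# Sketch of selective disclosure for age checks. The issuer signs only
# the boolean claim; the platform verifies the signature without ever
# seeing the holder's name or birthdate. ISSUER_KEY is a stand-in for
# the issuer's real signing key.
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-key"  # hypothetical; not a real EUDI key

def issue_age_attestation(is_over_18: bool) -> dict:
    """Issuer signs the bare over-18 claim, nothing else."""
    claim = json.dumps({"over_18": is_over_18}, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": {"over_18": is_over_18}, "sig": sig}

def verify_age_attestation(att: dict) -> bool:
    """Platform checks integrity of the claim, then the claim itself."""
    claim = json.dumps(att["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"]) and att["claim"]["over_18"]
```

The privacy question is whether deployed systems actually transmit only that boolean, or whether the verification flow links the attestation back to a persistent identity.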


LA County Sues Roblox Over False Child Safety Claims and Lack of Age Verification

Los Angeles County filed a lawsuit against Roblox, alleging the platform has built a system that leaves children exposed to grooming because it does not go far enough in checking user IDs to prove their age.

The suit accuses the company of public nuisance and of violating California’s false advertising law.

We obtained a copy of the complaint for you here.

The complaint is direct: “Roblox portrays its platform as a safe and appropriate place for children to play. In reality, and as Roblox well knows, the design of its platform makes children easy prey for pedophiles.”

If you weren’t aware of how big Roblox is and why this matters: Roblox serves roughly 144 million daily active users, more than Fortnite and the entire Steam userbase combined.

The platform also lets people create and play games, chat through customizable avatars, and spend real money on virtual currency.

LA County’s suit argues Roblox has consistently failed to moderate user-generated content, enforce its own age restrictions, or honestly disclose the risks predators pose to children using the service.

The platform’s moderation gaps have drawn scrutiny for years, including documented grooming of minors. The LA suit is the latest in a pattern of governments and researchers documenting the same problem Roblox has repeatedly said it is addressing, and the latest attempt to mandate digital ID checks.

Roblox rejected the suit’s allegations. A company spokesman said the platform was built “with safety at its core” and pointed to existing protections: “We have advanced safeguards that monitor our platform for harmful content and communications, and users cannot send or receive images via chat, avoiding one of the most prevalent opportunities for misuse seen elsewhere online.”

The company added that it takes action against rule violators and cooperates with law enforcement, closing with: “There is no finish line when it comes to protecting kids and, while no system can be perfect, our commitment to safety never ends.”

The false advertising angle is the most important point. LA isn’t suing Roblox over what it collects or who can see it. The county is suing because the company told parents the platform was safe for kids while allegedly knowing otherwise.


40 Attorneys General Urge Congress to ‘Tie Online Access to ID’

Forty state attorneys general (AGs) last week urged federal lawmakers to pass a bill that could ultimately require people to digitally verify their identity to access the internet, according to privacy and free speech watchdog group Reclaim The Net.

In a Feb. 10 letter, the AGs backed the U.S. Senate version of the Kids Online Safety Act. They did not support the U.S. House of Representatives version, which differs in key ways.

If passed, the Senate bill would require government officials and agencies to figure out how computers, cellphones and operating systems could verify people’s age. The bill states:

“The Secretary of Commerce, in coordination with the Federal Communications Commission and the Federal Trade Commission, shall conduct a study evaluating the most technologically feasible methods and options for developing systems to verify age at the device or operating system level.”

The federal officials and agencies would be required to submit a report of their findings to Congress within a year.

Designing cellphones and computer operating systems to verify a user’s age would bring the U.S. another step closer to cementing a digital ID system, Reclaim The Net reported. In an article titled “40 State Attorneys General Want To Tie Online Access to ID,” it wrote:

“Device-level verification would likely depend on digital identity checks tied to government-issued identification, third-party age verification vendors, or persistent account authentication systems. …

“… Once age checks are embedded at the operating system level, the boundary between verifying age and verifying identity becomes difficult to maintain.”

Greg Glaser, a digital privacy expert and attorney, agreed. “By embedding identity checks into apps, hardware, or operating systems, the bill would create a de facto digital ID checkpoint for broad internet use,” he said.


From Blockchain To Ball-And-Chain: Are We Being Borg’d?

Tokenized Tyranny: How Elites Are Digitizing Our World for Total Control

I’ve followed investigative journalist Whitney Webb’s work for years, and her once-distant warnings now feel eerily prophetic as they unfold in real time. What she has consistently exposed, the systematic digitization and commodification of everything from natural ecosystems to human life itself, is no longer speculative theory. It’s happening before our eyes.

When I first encountered Whitney’s reporting, I found it hard to believe. Could this level of control and financialization truly be underway? It seemed too dystopian, too extreme. Yet after digging deeper, I found the evidence undeniable. What she described was not exaggeration. It was an accurate and meticulously documented reality.

The tokenization of nature and humanity represents a deliberate strategy by the world’s most powerful financial institutions. Figures like BlackRock’s Chairman and CEO Larry Fink have openly championed turning the planet’s resources, and increasingly aspects of human existence, into fractionalized, tradable digital assets on blockchain-based ledgers. This creates new avenues for elite profit and unprecedented surveillance and control.

With Fink now serving as Interim Co-Chair of the World Economic Forum’s Board of Trustees (alongside André Hoffmann), the technocratic elite have gained an ideal global platform to accelerate this agenda. What better forum than the WEF to mainstream and fast-track “total control” from cradle to grave?

The process begins with assigning unique digital identifiers to virtually everything: land, water, forests, carbon credits, even personal behaviors and biological data. These are then logged on universal ledgers, where ownership is sliced into tradable fractions, much like stocks. But this goes far beyond traditional finance. It encompasses the Earth’s finite resources and, ultimately, the very essence of human life, all reduced to programmable, monetizable units in a centralized system of power.

This is tokenized tyranny in action: a quiet revolution that could redefine ownership, freedom, and existence itself.


40 State Attorneys General Want To Tie Online Access to ID

A bloc of 40 state and territorial attorneys general is urging Congress to adopt the Senate’s version of the controversial Kids Online Safety Act, positioning it as the stronger regulatory instrument and rejecting the House companion as insufficient.

The Act would kill online anonymity and tie online activity and speech to a real-world identity.

Acting through the National Association of Attorneys General, the coalition sent a letter to congressional leadership endorsing S. 1748 and opposing H.R. 6484.

We obtained a copy of the letter for you here.

Their request centers on structural differences between the bills. The Senate proposal would create a federally enforceable “Duty of Care” requiring covered platforms to mitigate defined harms to minors.

Enforcement authority would rest with the Federal Trade Commission, which could investigate and sue companies that fail to prevent minors from encountering content deemed to cause “harm to minors.”

That framework would require regulators to evaluate internal content moderation systems, recommendation algorithms, and safety controls.

S. 1748 also directs the Secretary of Commerce, the FTC, and the Federal Communications Commission to study “the most technologically feasible methods and options for developing systems to verify age at the device or operating system level.”

This language moves beyond platform-level age gates and toward infrastructure embedded directly into hardware or operating systems.

Age verification at that layer would not function without some form of credentialing. Device-level verification would likely depend on digital identity checks tied to government-issued identification, third-party age verification vendors, or persistent account authentication systems.

That means users could be required to submit identifying information before accessing broad categories of lawful online speech. Anonymous browsing depends on the ability to access content without linking identity credentials to activity.


UK Fines US Platform Imgur For Lack of Age Verification

Imgur’s decision to suspend access for UK users in September 2025 was an early signal that regulatory pressure was building. The platform’s parent company has now learned the financial cost of that pressure.

The UK Information Commissioner’s Office has fined MediaLab, which operates the image-hosting platform Imgur, £247,590 ($337,000) for violations of the UK GDPR.

According to the regulator, the company processed children’s personal data without a lawful basis, failed to implement effective age assurance measures, and did not complete a required data protection impact assessment.

The ICO’s findings focus on how children under 13 were able to use the service without verified parental consent or “any other lawful basis.”

The regulator also determined that the company lacked meaningful age checks. That means the platform did not reliably verify whether users were children before collecting and processing their data. Additionally, MediaLab did not conduct a formal risk assessment to examine how its service might affect minors’ rights and freedoms.

“MediaLab failed in its legal duties to protect children, putting them at unnecessary risk,” said UK Information Commissioner John Edwards. “For years, it allowed children to use Imgur without any effective age checks, while collecting and processing their data, which in turn exposed them to harmful and inappropriate content. Age checks help organizations keep children’s personal information safe.”

He added, “Ignoring the fact that children use these services, while processing their data unlawfully, is not acceptable. Companies that choose to ignore this can expect to face similar enforcement action.”

The ICO says it has the authority to impose fines of up to £17.5 million or 4 percent of an organization’s annual global revenue, whichever is higher. In setting the penalty at £247,590, the office stated that it “took into consideration the number of children affected by this breach, the degree of potential harm caused, the duration of the contraventions, and the company’s global turnover.”
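The arithmetic behind that ceiling is worth spelling out: the cap is whichever is higher of the flat £17.5 million or 4 percent of global turnover, so the percentage only dominates for very large firms. A quick sketch (the turnover figures are hypothetical; the article does not disclose MediaLab’s actual revenue):

```python
# The ICO's stated maximum penalty under the UK GDPR: the higher of
# a flat GBP 17.5 million or 4% of annual global turnover.

def uk_gdpr_fine_cap(annual_global_turnover_gbp: float) -> float:
    """Return the statutory maximum fine for a given global turnover."""
    return max(17_500_000.0, 0.04 * annual_global_turnover_gbp)

# Below GBP 437.5M turnover, the flat GBP 17.5M ceiling dominates.
assert uk_gdpr_fine_cap(100_000_000) == 17_500_000.0
# Above that crossover, the 4% figure takes over.
assert uk_gdpr_fine_cap(1_000_000_000) == 40_000_000.0
```

Against either ceiling, the £247,590 actually levied is a small fraction of the maximum the ICO could have imposed.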

This enforcement action sits within a broader UK policy change toward mandatory online age verification.

Lawmakers and regulators have increasingly pressed platforms to deploy age assurance tools, which can include document checks, facial age estimation, or third-party verification services. All are privacy-invasive.

While positioned as child protection measures, these systems often require users to submit government-issued identification or biometric data simply to access online services.


“Kids Off Social Media Act” Opens the Door to Digital ID by Default

Congress is once again stepping into the role of digital caretaker, this time through the Kids Off Social Media Act, with a proposal from Rep. Anna Paulina Luna that seeks to impose federal rules on how young people interact with the world.

The House companion bill (to go along with the Senate bill) attempts to set national limits on who can hold social media accounts, how platforms may structure their systems, and what kinds of data they are allowed to use when dealing with children and teenagers.

Framed as a response to growing parental concern, the legislation reflects a broader push to regulate online spaces through age-based access and design mandates rather than direct content rules.

The proposal promises restraint while quietly expanding Washington’s reach into the architecture of online speech. Backers of the bill will insist it targets corporate behavior rather than expression itself. The bill’s mechanics tell a more complicated story.

The bill is the result of a brief but telling legislative evolution. Early versions circulated in 2024 were framed as extensions of existing child privacy rules rather than participation bans. Those drafts focused on limiting data collection, restricting targeted advertising to minors, and discouraging algorithmic amplification, while avoiding hard access restrictions or explicit age enforcement mandates.

That posture shifted as the bill gained bipartisan backing. By late 2024, lawmakers increasingly treated social media as an inherently unsafe environment for children rather than a service in need of reform. When the bill was reintroduced in January 2025, it reflected that change. The new version imposed a categorical ban on accounts for users under 13, restricted recommendation systems for users under 17, and strengthened enforcement through the Federal Trade Commission and state attorneys general, with Senate sponsorship led by Ted Cruz and Brian Schatz.


Discord to Demand Face Scan or ID to Access All Features

Discord is preparing to make age classification a constant background process across its platform. Beginning next month, every account will default to a teen-appropriate experience unless the user takes steps to prove adulthood.

Age determination will sit underneath routine activity, shaping what people can see, say, and join.

For accounts that are not verified as adult, access will narrow immediately. Age-restricted servers and channels will be blocked, voice participation in live “stage” channels will be disabled, and automated filters will apply to content Discord identifies as graphic or sensitive.

Friend requests from unfamiliar users will trigger warning prompts, and direct messages from unknown accounts will be routed into a separate inbox.

Core features such as direct messages with known contacts and servers without age restrictions will continue to function. Age-restricted servers will effectively disappear until verification is completed, including servers that a user joined years earlier.

The global rollout reflects a broader regulatory environment that is pushing platforms toward more aggressive age controls. Discord has already tested similar systems.

Last year, age checks were introduced in the UK and Australia.

For many adult users, the concern is less about access to content and more about surveillance and the ability to communicate anonymously. Verification systems introduce new forms of monitoring, whether through documents, facial analysis, or ongoing behavioral assessment.


Massive TikTok Fine Threat Advances Europe’s Digital ID Agenda

A familiar storyline is hardening into regulatory doctrine across Europe: frame social media use as addiction, then require platforms to reengineer themselves around age segregation and digital ID.

The European Commission’s preliminary case against TikTok, announced today, shows how that narrative is now being operationalized in policy, with consequences that reach well beyond one app.

European regulators have accused TikTok of breaching the Digital Services Act by relying on what they describe as “addictive design” features, including infinite scroll, autoplay, push notifications, and personalized recommendations.

Officials argue these systems drive compulsive behavior among children and vulnerable adults and must be structurally altered.

What sits beneath that argument is a quieter requirement: to deliver different “safe” experiences to minors and adults, platforms must first determine who is a minor and who is not.

Platforms cannot apply separate algorithms, screen-time limits, or nighttime restrictions without determining a user’s age with a level of confidence regulators will accept.

Commission spokesman Thomas Regnier described the mechanics bluntly, saying TikTok’s design choices “lead to the compulsive use of the app, especially for our kids, and this poses major risks to their mental health and wellbeing.” He added: “The measures that TikTok has in place are simply not enough.”

The enforcement tool behind those statements is the Digital Services Act, the EU’s platform rulebook that authorizes Brussels to demand redesigns and impose fines of up to 6% of global annual revenue.
