Australia’s “eSafety” Commissioner Threatens App Stores Over AI Age Verification Deadline

Australia’s eSafety Commissioner Julie Inman Grant is threatening to go after app stores and search engines unless they block AI services that haven’t verified their users’ ages by March 9, 2026.

The ultimatum landed after Reuters took it upon itself to survey 50 leading text-based AI platforms and found that 30 of them had taken no visible steps toward compliance with the country’s controversial censorship and surveillance ideas.

“eSafety will use the full range of our powers where there is non-compliance,” a spokesperson said, spelling out that this extends to “action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services.”

What’s actually being built here is bigger than age verification. Five industry codes taking effect March 9 under Australia’s Online Safety Act 2021 impose age-gating requirements across a wide range of services: AI platforms, app distribution services, social media, gaming, dating apps, and any website deemed high-risk for pornography, extreme violence, or self-harm content.

Every category gets its own code. Non-compliance with any of them carries fines of up to A$49.5 million (around US$35 million). The system isn’t aimed at one corner of the internet. It covers most of it.

The age verification requirement doesn’t stand alone. Under a separate amendment to the Online Safety Act passed last year, social media platforms must already ban users under 16 entirely.

The March 9 codes extend that logic further, requiring services to verify the identity of users and filter what they can see based on age. The infrastructure being assembled connects age to identity to content access across the internet as Australians currently use it.

Keep reading

Teens Didn’t Just Discover Weed. So Why Is The Wall St. Journal Acting Like They Did?

The Wall Street Journal has a new teen-cannabis panic on offer: vape clouds in school bathrooms, sneaky hits during class and administrators playing cat-and-mouse with students who keep finding ways to get high. The hardware is newer. The hiding spots may be newer, too. But the underlying behavior? Please. American teenagers did not just discover weed because a dispensary opened in town. What the Journal really found is an old adolescent ritual in updated packaging, then stretched it into a referendum on legal cannabis.

Let’s get the obvious part out of the way. Teen cannabis use is real. The risks are real. THC can be harmful to developing brains, and schools have every right to care about what students are doing on campus. But that is not the same as proving legalization created some brand-new youth cannabis crisis. That leap is where the piece gets slippery.

Because once you leave the anecdote and look at the trendline, the panic starts to wobble. The University of Michigan’s Monitoring the Future report shows past-year marijuana use among 12th graders at 26.0% in 2024, down from 35.7% in 2019. Among 8th graders, it was 7.0% in 2024, down from 11.8% in 2019. That is not an explosion. That is a decline.

Zoom out further and the same pattern holds. A 2026 Addictive Behaviors paper, “Trends in US adolescent cannabis use, 1991–2023”, found that youth cannabis use rose through the 1990s, peaked in 1999 and then broadly declined. Lifetime use fell from 47.3% in 1999 to 30.1% in 2023. Recent use dropped from 27.1% to 17.8%. Early initiation fell too. In other words, if you want to tell a dramatic story about teen cannabis, the most inconvenient fact is that the peak is a quarter-century behind us.

And if the argument is specifically that legalization caused kids to start using more, the best recent policy literature does not back that up either. A 2024 JAMA Psychiatry study, “Recreational Marijuana Laws and Teen Marijuana Use, 1993-2021”, found no evidence that recreational marijuana laws were associated with current or frequent teen use. A separate 2024 JAMA Pediatrics study, “Recreational Cannabis Legalization, Retail Sales, and Adolescent Substance Use Through 2021”, found no net increases in adolescent cannabis, alcohol, cigarette or e-cigarette use tied to recreational legalization or retail sales. That does not mean every concern is fake. It means the Journal is hinting at a causal story the evidence does not support.

That is the framing trick. The article keeps pointing to real things, then attaching them to the wrong villain. Teens getting THC vapes from older friends? Real. Peer-to-peer sales through Snapchat? Real. Bad packaging that looks too much like candy? Also real. But none of that means adult legality itself is the root problem. If a kid gets cannabis from an older sibling, a sloppy adult or some classmate running a side hustle through social media, that is a diversion problem. A safeguards problem. An adults-failing-kids problem. It is not proof that legal access for adults was the mistake. If an eighth grader grabs a parent’s car keys and takes off, the problem is not that cars are legal for adults. The problem is access, supervision and adults failing to secure something meant for grown people.

Keep reading

Congress Is Considering Abolishing Your Right to Be Anonymous Online

In August 2024, the Biden administration hosted hundreds of influencers at the White House for the first-ever Creator Economy Conference. Neera Tanden, a senior Biden adviser, took to the stage and bemoaned anonymity online. The influencers alongside her agreed, pushing the idea that anonymous speech on the internet is harmful, and regulation is needed to force the use of real names on social media. The audience whispered excitedly as those on stage spoke about how proposed laws like the Kids Online Safety Act, or KOSA, could unmask every troll. 

This narrative of online safety, particularly in relation to children, has become central to the bipartisan effort to censor and deanonymize the internet for everyone. Today, a package of a dozen “child online safety” bills is moving forward in the House of Representatives with bipartisan support. The laws, framed as a way to crack down on harmful content and make the internet safer, would force social media companies to enact invasive identity verification measures in order to keep children from accessing online spaces.

The problem is that there’s no way to reliably verify someone’s age without verifying who they are. A platform cannot magically discern that a user is 16 without collecting identifying information, whether through government documents such as a passport, payment information like a credit card, or other identity-disclosing data. Whether that data is stored by the platform itself or outsourced to a vendor, the result is always the same: A user’s offline identity is forever linked with their online behavior.

Stripping anonymity from the internet would constitute one of the most sweeping rollbacks of civil rights in recent history. It would allow for unprecedented levels of mass surveillance and censorship, endangering the most marginalized members of society. Whistleblowers exposing corporate wrongdoing could be tracked and fired, government employees speaking out about illegal behavior or bad policies could face prosecution, and activists organizing protests could be identified and surveilled before ever setting foot on the street.

Keep reading

Scientists warn against crappy age verification: ‘if implemented without careful consideration… the new regulation might cause more harm than good’

As age verification becomes more commonplace across the web, some are trying to oppose its rollout on security and privacy grounds. An open letter signed by over 400 researchers and scientists, laying out the many reasons why age verification (and most especially the current age assurance technology) isn’t all it’s cracked up to be, is now available to read in full.

Here’s a précis of the whole thing: governments across the world are adopting legislation mandating the use of age assurance methods, in the name of keeping kids off the bad parts of the web. That sounds like a good idea until you look into the details, which suggest these methods are often haphazardly applied, with little regard for privacy and data protection.

The open letter outlines a few key arguments:

How easily age verification can be bypassed. The evidence here: Discord’s age verification, provided by K-id, could be bypassed using Sam’s face in Death Stranding. As the open letter points out, it’s possible to lie about one’s age, trick a system, or buy age-verified credentials online. VPNs are also widely available and prove an easy way to bypass any and all age assurance methods, even if access to said VPNs is age-restricted.

How unreliable age estimation can be. For any semblance of effectiveness, it would require large-scale, invasive data collection or the widespread use of government IDs at every online interaction. As the letter notes, “We conclude that age assessment presents an inherent disproportionate risk of serious privacy violations and discrimination, without guarantees of effectiveness.”

How it necessitates a global trust infrastructure. Building one is among the main goals of the EU’s digital identity wallet, which, though only pan-EU, is meant to serve as a common foundation for age assurance across all member states. Though as the letter suggests, “even if such a trust infrastructure would exist, checks can be circumvented by acquiring valid certificates or using VPNs, as long as age assurance regulations are not universally enforced by all affected services.”

How it can push users to lesser-known, potentially dangerous websites. When age assurance is enforced and the larger, more responsible websites comply, users can be driven toward obscure, potentially dangerous or scam sites. Following the rollout of the UK’s Online Safety Act, one of the first investigations the regulator launched was into porn websites that did not immediately comply with the new age verification rules. Other websites chose to turn off services to the UK altogether.

Keep reading

California Law Forces Age-Tracking Into Every Operating System by 2027

California wants to build a surveillance layer into every device its residents touch. Assembly Bill 1043, signed by Governor Gavin Newsom and taking effect January 1, 2027, requires every operating system provider to collect age information from users at account setup and broadcast that data to app developers through a real-time API.

Windows, macOS, Android, iOS, Linux distributions, Valve’s SteamOS: if it runs an operating system, it’s covered by this overreaching law.
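The bill requires the operating system to ask for an age once, at account setup, and then expose an age bracket (rather than an exact birthdate) to apps on request. The statute does not define the interface itself, so the sketch below is purely hypothetical: the `AgeBracket` enum and `bracket_for_age` function are invented for illustration, though the bracket boundaries follow the categories described in the bill (under 13, 13–15, 16–17, 18 and over).

```python
from enum import Enum

class AgeBracket(Enum):
    # Hypothetical representation of the age-bracket signal an OS
    # would broadcast to app developers under AB 1043. Apps would
    # receive a bracket, not a birthdate.
    UNDER_13 = "under_13"
    AGE_13_15 = "13_15"
    AGE_16_17 = "16_17"
    ADULT = "18_plus"

def bracket_for_age(age: int) -> AgeBracket:
    """Map a self-reported age to the bracket an OS would report."""
    if age < 13:
        return AgeBracket.UNDER_13
    if age < 16:
        return AgeBracket.AGE_13_15
    if age < 18:
        return AgeBracket.AGE_16_17
    return AgeBracket.ADULT
```

Even in this minimal form, the privacy problem is visible: every app querying the signal learns something about the account holder, and the OS vendor becomes the canonical keeper of that record.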

The law is particularly dumb for open-source Linux operating systems. Linux exists specifically because some people want computing that doesn’t surveil them. That’s not incidental to why the platform exists; it’s foundational.

Distributions like Arch, Debian, and Gentoo have no centralized account infrastructure by design. Users download ISOs from mirrors, modify source code freely, and run systems that report to nobody.

Keep reading

Xbox UK Age Verification Launch Locks Out Thousands of Players

Xbox’s mandatory age verification rollout in the UK was a disaster almost immediately, locking thousands of players out of games, voice chat, and apps like Discord with no clear path back in.

The failures started overnight. Players report being ejected mid-session to complete age verification checks that then took hours, stalled indefinitely, or simply refused to work regardless of what identification they submitted.

Government ID, mobile numbers, and live video age estimation; the system rejected them all for many users. Others made it through verification only to find their accounts still restricted with no explanation and no recourse beyond contacting Xbox support.

Microsoft’s support page now carries a notice confirming it is “aware of the issue and working to fix it.” That’s the extent of the official guidance.

The verification requirement exists to comply with the UK’s new censorship law, the Online Safety Act, legislation mandating that platforms facilitating online communication verify user ages. The actual system Xbox built to deliver that compliance forcibly disconnected players from games in progress, stripped away chat functionality with anyone outside their friends list, and blocked access to third-party services.

Users who have held Xbox accounts for over 18 years found themselves flagged for verification anyway. The system doesn’t consider account age, history, or any contextual signal that might indicate an adult user. Everyone gets treated as potentially underage until they hand over documentation.

“The amount of times I’ve tried to do any method of the verification tonight is stupid,” wrote one user. “Can’t change privacy settings on my Xbox to allow me to see mods on games too. Can’t chat on Discord. Utterly broken.”

“Been trying to verify my ID for the past few hours,” added another. “It finally worked but I can’t access anything still. No Discord access at all.”

Keep reading

LA County Sues Roblox Over False Child Safety Claims and Lack of Age Verification

Los Angeles County filed a lawsuit against Roblox, alleging the platform has built a system that leaves children exposed to grooming because it does not go far enough in checking user IDs to prove their age.

The suit names the company for public nuisance and violations of California’s false advertising law.

We obtained a copy of the complaint for you here.

The complaint is direct: “Roblox portrays its platform as a safe and appropriate place for children to play. In reality, and as Roblox well knows, the design of its platform makes children easy prey for pedophiles.”

If you weren’t aware of how big Roblox is and why this matters: Roblox serves roughly 144 million daily active users, more than Fortnite and the entire userbase of the Steam platform combined.

The platform also lets people create and play games, chat through customizable avatars, and spend real money on virtual currency.

LA County’s suit argues Roblox has consistently failed to moderate user-generated content, enforce its own age restrictions, or honestly disclose the risks predators pose to children using the service.

There is no doubt the platform’s moderation gaps have attracted scrutiny for years, or that it has had issues with the grooming of minors. But the LA lawsuit is the latest in a pattern of governments and researchers documenting the same problem Roblox has repeatedly said it’s addressing, and the latest attempt to mandate digital ID checks.

Roblox rejected the suit’s allegations. A company spokesman said the platform was built “with safety at its core” and pointed to existing protections: “We have advanced safeguards that monitor our platform for harmful content and communications, and users cannot send or receive images via chat, avoiding one of the most prevalent opportunities for misuse seen elsewhere online.”

The company added that it takes action against rule violators and cooperates with law enforcement, closing with: “There is no finish line when it comes to protecting kids and, while no system can be perfect, our commitment to safety never ends.”

The false advertising angle is what is most important to note. LA isn’t suing Roblox over what it collects or who can see it. The county is suing because the company told parents the platform was safe for kids while allegedly knowing otherwise.

Keep reading

Zuckerberg’s “Fix” for Child Safety Could End Anonymous Internet Access for Everyone

Mark Zuckerberg spent more than five hours on the stand in Los Angeles Superior Court on Wednesday, testifying before a jury for the first time about claims that Meta deliberately designed Instagram to addict children.

The headline from most coverage was the spectacle: an annotated paper trail of internal emails, a 35-foot collage of the plaintiff’s Instagram posts unspooled across the courtroom, a CEO growing visibly agitated under cross-examination.

The more important story is what Wednesday’s proceedings are being used to build.

The trial is framed as a child safety case. What it is actually doing, especially through Zuckerberg’s own testimony, is laying the political and legal groundwork for mandatory identity verification across the internet.

And Zuckerberg, rather than pushing back on that outcome, offered the court his preferred implementation plan.

Keep reading

French lawmakers vote to ban social media use by under-15s

French lawmakers have passed a bill that would ban social media use by under-15s, a move championed by president Emmanuel Macron as a way to protect children from excessive screen time.

The lower national assembly adopted the text by a vote of 130 to 21 in a lengthy overnight session from Monday to Tuesday.

It will now go to the Senate, France’s upper house, ahead of becoming law.

Macron hailed the vote as a “major step” to protect French children and teenagers in a post on X.

The legislation, which also provides for a ban on mobile phones in high schools, would make France the second country to take such a step following Australia’s ban for under-16s in December.

As social media has grown, so has concern that too much screen time is harming child development and contributing to mental health problems.

Keep reading

Macron Calls Online Free Speech Argument “Pure Bullshit”

European governments framing social media bans for minors as child protection are quiet about what those bans actually require: identity checks for everyone. Every adult who wants to use Instagram, TikTok, or YouTube in France, Spain, or Germany would need to verify their real-world identity to access the platform. Anonymity, one of the oldest protections for dissenting speech, goes with it.

That’s the context Emmanuel Macron left out when he called free speech online “pure bullshit” in New Delhi on Wednesday.

The French president was addressing companies and their American backers as European governments push social media restrictions, as well as curbs on “hate speech,” a move the Trump administration has criticized as censorship.

Macron’s counterargument is based on algorithmic opacity. “Having no clue about how their algorithm is made, how it’s tested, trained, and where it will guide you, the democratic consequences of this bias could be huge,” he said.

Keep reading