Australia launches youth social media ban it says will be the world’s ‘first domino’

Can children and teenagers be forced off social media en masse? Australia is about to find out.

More than 1 million social media accounts held by users under 16 are set to be deactivated in Australia on Wednesday in a divisive world-first ban that has inflamed a culture war and is being closely watched in the United States and elsewhere.

Social media companies will have to take “reasonable steps” to ensure that under-16s in Australia cannot set up accounts on their platforms and that existing accounts are deactivated or removed.

Australian officials say the landmark ban, which lawmakers swiftly approved late last year, is meant to protect children from addictive social media platforms that experts say can be disastrous for their mental health.

“With one law, we can protect Generation Alpha from being sucked into purgatory by predatory algorithms described by the man who created the feature as ‘behavioral cocaine,’” Communications Minister Anika Wells told the National Press Club in Canberra last week.

While many parents and even their children have welcomed the ban, others say it will hinder young people’s ability to express themselves and connect with others, as well as access online support that is crucial for those from marginalized groups or living in isolated parts of rural Australia. Two 15-year-olds have brought a legal challenge against it to the nation’s highest court.

Supporters say the rest of the world will soon follow the example set by the Australian ban, which faced fierce resistance from social media companies.

“I’ve always referred to this as the first domino, which is why they pushed back,” Julie Inman Grant, who regulates online safety as Australia’s eSafety Commissioner, said at an event in Sydney last week.

Keep reading

This FTC Workshop Could Legitimize the Push for Online Digital ID Checks

In January 2026, the Federal Trade Commission plans to gather a small army of “experts” in Washington to discuss a topic that sounds technical but reads like a blueprint for a new kind of internet.

Officially, the event is about protecting children. Unofficially, it’s about identifying everyone.

The FTC says the January 28 workshop at the Constitution Center will bring together researchers, policy officials, tech companies, and “consumer representatives” to explore the role of age verification and its relationship to the Children’s Online Privacy Protection Act, or COPPA.

It’s all about collecting and verifying age information, developing technical systems for age estimation, and scaling those systems across digital environments.

In government language, that means building tools that could determine who you are before you click anything.

The FTC suggests this is about safeguarding minors. But once these systems exist, they rarely stop where they start. The design of a universal age-verification network could reach far beyond child safety, extending into how all users identify themselves across websites, platforms, and services.

The agency’s agenda suggests a framework for what could become a credential-based web. If a website has to verify your age, it must verify you. And once verified, your information doesn’t evaporate after you log out. It’s stored somewhere, connected to something, waiting for the next access request.

The federal effort comes after a wave of state-level enthusiasm for the same idea. Texas, Utah, Missouri, Virginia, and Ohio have each passed laws forcing websites to check the ages of users, often borrowing language directly from the European Union, Australia, and the United Kingdom. Those rules require identity documents, biometric scans, or certified third parties that act as digital hall monitors.

In these states, “click to enter” has turned into “show your papers.”

Many sites now require proof of age, while others test-drive digital ID programs linking personal credentials to online activity.

The result is a slow creep toward a system where logging into a website looks a lot like crossing a border.

Keep reading

Lawmakers To Consider 19 Bills for Childproofing the Internet

Can you judge the heat of a moral panic by the number of bills purporting to solve it? At the height of human trafficking hysteria in the 2010s, every week seemed to bring some new measure meant to help the government tackle the problem (or at least get good press for the bill’s sponsor). Now lawmakers have moved on from sex trafficking to social media—from Craigslist and Backpage to Instagram, TikTok, and Roblox. So here we are, with a House Energy and Commerce subcommittee hearing on 19 different kids-and-tech bills scheduled for this week.

The fun kicks off tomorrow, with legislators discussing yet another version of the Kids Online Safety Act (KOSA)—a dangerous piece of legislation that keeps failing but also refuses to die. (See some of Reason’s previous coverage of KOSA here, here, and here.)

The new KOSA no longer explicitly says that online platforms have a “duty of care” when it comes to minors—a benign-sounding term that could have chilled speech by requiring companies to somehow protect minors from a huge array of “harms,” from anxiety and depression to disordered eating to spending too much time online. But it still essentially requires this, saying that covered platforms must “establish, implement, maintain, and enforce reasonable policies, practices, and procedures” that address various harms to minors, including threats, sexual exploitation, financial harm, and the “distribution, sale, or use of narcotic drugs, tobacco products, cannabis products, gambling, or alcohol.” And it would give both the states and the Federal Trade Commission the ability to enforce this requirement, declaring any violation an “unfair or deceptive” act that violates the Federal Trade Commission Act.

Despite the change, KOSA’s core function is still “to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to” some harm, as Joe Mullin wrote earlier this year about a similar KOSA update in the Senate.

Language change or not, the bill would still compel platforms to censor a huge array of content out of fear that the government might decide it contributed to some vague category of harm and then sue.

KOSA is bad enough. But far be it from lawmakers to stop there.

Keep reading

As Expected, a Hearing on Kids Online Safety Becomes a Blueprint for Digital ID

The latest congressional hearing on “protecting children online” opened as you would expect: the same characters, the same script, a few new buzzwords, and a familiar moral panic to which the answer is mass surveillance and censorship.

The Subcommittee on Commerce, Manufacturing, and Trade had convened to discuss a set of draft bills packaged as the “Kids Online Safety Package.” The name alone sounded like a software update against civil liberties.

The hearing was called “Legislative Solutions to Protect Children and Teens Online.” Everyone on the dais seemed eager to prove they were on the side of the kids, which meant, as usual, promising to make the internet less free for everyone else.

Rep. Gus Bilirakis (R-FL), who chaired the hearing, kicked things off by assuring everyone that the proposed bills were “mindful of the Constitution’s protections for free speech.”

He then reminded the audience that “laws with good intentions have been struck down for violating the First Amendment” and added, with all the solemnity of a man about to make that same mistake again, that “a law that gets struck down in court does not protect a child.”

They know these bills are legally risky, but they’re going to do it anyway.

Bilirakis’s point was echoed later by House Energy & Commerce Committee Chairman Brett Guthrie (R-KY), who claimed the bills had been “curated to withstand constitutional challenges.” That word, curated, was doing a lot of work.

Guthrie went on to insist that “age verification is needed…even before logging in” to trigger privacy protections under COPPA 2.0.

The irony of requiring people to surrender their private information in order to be protected from privacy violations was lost in the shuffle.

Keep reading

YouTube says it will comply with Australia’s teen social media ban

Google’s YouTube shared a “disappointing update” with millions of Australian users and content creators on Wednesday, saying it will comply with a world-first teen social media ban by locking users under 16 out of their accounts within days.

The decision ends a stand-off between the internet giant and the Australian government, which had initially exempted YouTube from the age restriction, citing its use for educational purposes. Google (GOOGL.O) had said it was seeking legal advice about how to respond to being included.

“Viewers must now be 16 or older to sign into YouTube,” the company said in a statement.

“This is a disappointing update to share. This law will not fulfill its promise to make kids safer online and will, in fact, make Australian kids less safe on YouTube.”

The Australian ban is being closely watched by other jurisdictions considering similar age-based measures, setting up a potential global precedent for how the mostly U.S. tech giants behind the biggest platforms balance child safety with access to digital services.

The Australian government says the measure responds to mounting evidence that platforms are failing to do enough to protect children from harmful content.

Keep reading

Congress Goes Parental on Social Media and Your Privacy

Washington has finally found a monster big enough for bipartisan unity: the attention economy. In a moment of rare cross-aisle cooperation, lawmakers have introduced two censorship-heavy bills and a tax scheme under the banner of the UnAnxious Generation package.

The name, borrowed from Jonathan Haidt’s pop-psychology hit The Anxious Generation, reveals the obvious pitch: Congress will save America’s children from Silicon Valley through online regulation and speech controls.

Representative Jake Auchincloss of Massachusetts, who has built a career out of publicly scolding tech companies, says he’s going “directly at their jugular.”

The plan: tie legal immunity to content “moderation,” tax the ad money, and make sure kids can’t get near an app without producing an “Age Signal.” If that sounds like a euphemism for surveillance, that’s because it is.

The first bill, the Deepfake Liability Act, revises Section 230, the sacred shield that lets platforms host your political rants, memes, and conspiracy reels without getting sued for them.

Under the new proposal, that immunity becomes conditional on a vague “duty of care” to prevent deepfake porn, cyberstalking, and “digital forgeries.”

TIME’s report doesn’t define that last term, which could be a problem since it sounds like anything from fake celebrity videos to an unflattering AI meme of your senator. If “digital forgery” turns out to include parody or satire, every political cartoonist might suddenly need a lawyer on speed dial.

Auchincloss insists the goal is accountability, not censorship. “If a company knows it’ll be liable for deepfake porn, cyberstalking, or AI-created content, that becomes a board-level problem,” he says. In other words, a law designed to make executives sweat.

But with AI-generated content specifically excluded from Section 230 protections, the bill effectively redefines the internet’s liability regime.

Keep reading

Missouri Locks the Web Behind a “Harmful” Content ID Check

Starting November 30, 2025, people in Missouri will find the digital world reshaped: anyone wishing to visit websites containing “harmful” adult material will need to prove they are at least 18 years old by showing ID.

This new requirement marks Missouri’s entry into the growing group of US states adopting age verification laws for online content. Yet the move does more than restrict access; it raises serious questions about how much personal data people must surrender just to browse freely.

For many, that tradeoff is likely to make privacy tools like VPNs a near necessity rather than a choice.

The law defines its targets broadly. Any site or app where over one-third of the material is classified as “harmful to minors” must block entry until users confirm their age.

Those who do not comply risk penalties that can reach $10,000 a day, with violations categorized as “unfair, deceptive, fraudulent, or otherwise unlawful practices.”

To meet these standards, companies are permitted to check age through digital ID systems, government-issued documents such as driver’s licenses or passports, or existing transactional data that proves a person’s age.

Keep reading

Children removed from Australian-British couple living off-grid in Italian forest

The children of an Australian mother living off-grid in an Italian forest have been removed by local authorities after the family came under scrutiny when they were hospitalised for eating poisonous mushrooms.

A juvenile court in the Italian city of L’Aquila ruled last week to place the three children of Australian woman Catherine Birmingham and her British husband Nathan Trevallion into protective care.

The court cited poor sanitary conditions at the family’s home in the mountainous Abruzzo region and unauthorised homeschooling of their eight-year-old daughter and six-year-old twin boys, according to AFP.

Ms Birmingham, a life coach and former horse riding teacher from Melbourne, bought the farmhouse in 2021 with Mr Trevallion, a former chef from Bristol.

They were raising the children in the woodland home without mains electricity, water or gas, relying instead on solar power, well water and homegrown food.

“The members of the Trevallion family have no social interactions, no steady income,” the court said in its written ruling.

Keep reading

Texas: ID Will Be Linked to Every Google Search! New Law Requires Age Verification

Texas SB2420, known as the App Store Accountability Act, requires app stores to verify the age of users and obtain parental consent for those under 18. This law aims to enhance protections for minors using mobile applications and is set to take effect on January 1, 2026.

Texas has joined a multi-state crusade to enforce digital identification in America—marketed as a way to “protect children.”

Yet privacy experts say the real goal isn’t child protection—it’s control. 

Roblox insists its new “age estimation” system improves safety, but it relies on biometric and government data—creating the foundation for permanent digital tracking. With Texas now the fifth state to join the campaign, one question remains: how long before “protecting kids” becomes the excuse to monitor everyone?

From Reclaim the Net:

Texas Sues Roblox Over Child Safety Failures, Joining Multi-State Push for Digital ID

Texas has become the latest state to take legal action against Roblox, joining a growing number of attorneys general who accuse the gaming platform of failing to protect children.

The case also renews attention on the broader push for online age verification, a move that would lead to widespread digital ID requirements.

Attorney General Ken Paxton filed the lawsuit on November 6, alleging that Roblox allowed predators to exploit children while misleading families about safety protections.

We obtained a copy of the lawsuit for you here.

Keep reading

MAHA: Monitoring Americans’ Health Attributes — or CCP-style Digital Control Grid?

This summer, President Donald Trump unveiled a sweeping plan to “bring healthcare into the digital age.” He calls it the “Digital Health Tech Ecosystem.” Health and Human Services Secretary Robert F. Kennedy, Jr. also announced the launch of a digital health ID initiative in conjunction with Amazon, Apple, Google, OpenAI, and Anthropic. The latter is an AI startup that received most of its $580 million Series B funding from the now-bankrupt FTX under convicted fraudster Sam Bankman-Fried.

This “Ecosystem” is part of the artificial intelligence (AI) venture Stargate Project, which Trump excitedly announced on his first day in office. Stargate is the reason you may have noticed large AI facilities springing up across the country, driving up energy prices with their unprecedented demand for electricity and threatening aquifers with an equally unprecedented demand for water.

Trump declared Texas-based Stargate to be a $500 billion collaboration between leading tech companies that will make the United States the global leader in AI. Among investors are OpenAI CEO Sam Altman and Oracle chairman Larry Ellison. During the White House unveiling, Ellison bragged that Stargate’s AI would be able to produce cancer vaccines in 48 hours.

Microsoft and NVIDIA are two other U.S.-based investors, while Emirati state-owned MGX of Abu Dhabi and U.K.-based Arm Holdings, Inc. are also involved. Stargate’s chairman is Japanese billionaire Masayoshi Son, who also chairs Stargate investor SoftBank.

Data Not Secure

Naturally, the healthcare component of this technological boom is supposed to help the little guy: improving patient care through earlier disease detection and — you guessed it — vaccinations. But are we to believe that this international consortium of businesses has our best interests at heart?

For that matter, do our own politicians? During testimony before Congress earlier this year, Kennedy admitted: “My vision is that every American is wearing a wearable [health-related monitor] within four years.” But he dodged a follow-up question about plans to secure that personal health data. That’s disconcerting, considering the vulnerability of personal information in federal hands. Remember the early-2025 revelation that Elon Musk’s Department of Government Efficiency gained illicit access to 19 sensitive U.S. Health and Human Services databases, exposing everything from electronic health records to Social Security and bank details?

Wearables

The “wearable” health monitors would expand that data collection astronomically, creating a “digital twin” of yourself as government officials harvest vital signs, movement and sleep patterns, and other physical metrics in real time.

Moreover, Trump signed an executive order in March calling for personal information about Americans to be shared across federal agencies. His administration has since awarded more than $900 million in contracts to Peter Thiel’s data analytics company, Palantir, even as the company’s own current and former employees have petitioned it to pull out of the plans.

The HopeGirl Alternative News channel on Rumble depicts what healthcare in this modern Fourth Industrial Revolution will look like. Healthcare 4.0 works with a constant stream of data from wearable devices to analyze us — individually and population-wide — at every hour of the day in all settings. This system is already in operation. Starting in 2020, U.S. hospitals implemented “body area networks” (BAN) to deliver real-time vitals to the Pentagon’s Project Salus during the Covid “public health emergency.”

The REAL ID Connection

This helps explain why U.S. Homeland Security Secretary Kristi Noem finally enforced the REAL ID Act of 2005 this year. (Right now, it’s mandatory for domestic air travel and entering federal buildings, but the legislation allows for unlimited expansion of REAL ID requirements.) Until this year, various states stymied REAL ID, correctly labeling it a gross violation of Americans’ constitutionally protected rights. Now, the U.S. Transportation Security Administration boasts on its website about its biometric overhaul.

Indeed, the REAL ID Act allows states to collect biometric data (fingerprints, facial geometry, triangulated body measurements) on each of us. The Citizens’ Council for Health Freedom (CCHF) explains that the “purposes could include banking, employment or health care.”

CCHF warns: “REAL ID provides the digital and biometric infrastructure to implement a China-like control grid, where your access to services could depend on behavior, beliefs or health status.”

Keep reading