Australia Orders Tech Giants to Enforce Age Verification Digital ID by December 10

Australia is preparing to enforce one of the most invasive online measures in its history under the guise of child safety.

With the introduction of mandatory age verification across social media platforms, privacy advocates are warning that the policy, set to begin December 10, 2025, risks eroding fundamental digital rights for every user, not just those under 16.

eSafety Commissioner Julie Inman Grant has told tech giants like Google, Meta, TikTok, and Snap that they must be ready to detect and shut down accounts held by Australians under the age threshold.

She has made it clear that platforms are expected to implement broad “age assurance” systems across their services, and that “self-declaration of age will not, on its own, be enough to constitute reasonable steps.”

The new rules stem from the Online Safety Amendment (Social Media Minimum Age) Act 2024, which gives the government sweeping new authority to dictate how users verify their age before accessing digital services. Any platform that doesn’t comply could be fined up to AU$49.5 million (roughly US$31 million).

While the government claims the law isn’t a ban on social media for children under 16, in practice it forces platforms to block these users unless they can pass age checks, which effectively means presenting a digital ID.

There will be no penalties for children or their parents, but platforms face immense legal and financial pressure to enforce restrictions, pressure that almost inevitably leads to surveillance-based systems.

The Commissioner said companies must “detect and de-activate these accounts from 10 December, and provide account holders with appropriate information and support before then.”

These expectations extend to providing “clear, age-appropriate communications” and making sure users can download their data and find emotional or mental health resources when their accounts are terminated.

She further stated that “efficacy will require layered safety measures, sometimes known as a ‘waterfall approach’,” a term often associated with collecting increasing amounts of personal data at multiple steps of user interaction.

Such layered systems often rely on facial scanning, government ID uploads, biometric estimation, or AI-powered surveillance tools to estimate age.
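The “waterfall” approach described above can be sketched as a chain of checks ordered from least to most invasive, where the flow stops at the first layer that yields a confident signal. The following is a hypothetical sketch only; all layer names, signals, and thresholds are illustrative assumptions, not any platform’s actual implementation:

```python
# Hypothetical sketch of a layered ("waterfall") age-assurance flow.
# Every name and threshold here is an illustrative assumption.
from dataclasses import dataclass
from typing import Callable, List

AGE_THRESHOLD = 16  # Australia's minimum age under the Act

@dataclass
class CheckResult:
    confident: bool                # did this layer yield a usable signal?
    over_threshold: bool = False   # if confident, is the user old enough?

def self_declaration(user: dict) -> CheckResult:
    # Self-declared age "will not, on its own, be enough", so this layer
    # never produces a confident result by itself.
    return CheckResult(confident=False)

def facial_estimate(user: dict) -> CheckResult:
    # Biometric estimation: only confident when the estimate is well clear
    # of the threshold, since estimators carry error margins.
    est = user.get("estimated_age")
    if est is None:
        return CheckResult(confident=False)
    return CheckResult(confident=abs(est - AGE_THRESHOLD) >= 3,
                       over_threshold=est >= AGE_THRESHOLD)

def id_document(user: dict) -> CheckResult:
    # Most invasive layer: an uploaded government ID settles the question.
    doc_age = user.get("document_age")
    if doc_age is None:
        return CheckResult(confident=False)
    return CheckResult(confident=True, over_threshold=doc_age >= AGE_THRESHOLD)

def waterfall_age_check(layers: List[Callable[[dict], CheckResult]],
                        user: dict) -> bool:
    """Run layers from least to most invasive; stop at the first confident
    result. If no layer is confident, fail closed (treat as under age)."""
    for layer in layers:
        result = layer(user)
        if result.confident:
            return result.over_threshold
    return False

LAYERS = [self_declaration, facial_estimate, id_document]
```

The escalation is precisely what privacy advocates object to: each fallback layer in the chain demands more personal data than the one before it.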


Should the Government Restrict ‘Harmful’ Speech Online?

The First Amendment prohibits the federal government from suppressing speech, including speech it deems “harmful,” yet lawmakers keep trying to regulate online discourse.

Over the summer, the Senate passed the Kids Online Safety Act (KOSA), a bill that purportedly protects children from the adverse effects of social media. Senate Majority Leader Chuck Schumer took procedural steps to end the debate and quickly advance the bill to a floor vote. According to Schumer, the situation was urgent. In his remarks, he focused on the stories of children who were targets of bullying and predatory conduct on social media. To address these safety issues, the proposed legislation would place liability on online platforms, requiring them to take “reasonable” measures to prevent and mitigate harm.

It’s now up to the House to push the bill forward to the President’s desk. After initial concerns about censorship, the House Committee on Energy and Commerce advanced the bill in September, paving the way for a final floor vote.

KOSA highlights an ongoing tension between free speech and current efforts to make social media “safer.” In its persistent attempts to remedy social harm, the government shrinks what is permissible to say online and assumes a role that the First Amendment specifically guards against.

At its core, the First Amendment is designed to protect freedom of speech from government intrusion. Congress is not responsible for determining what speech is permissible or what information the public has the right to access. Courts have long held that all speech is protected unless it falls within certain narrowly defined categories, such as incitement or obscenity. Prohibitions against harmful speech—where “harmful” is determined solely by lawmakers—are not consistent with the First Amendment.

But bills like KOSA add layers of complexity. First, the government is not simply punishing ideological opponents or those with unfavorable viewpoints, which would clearly violate the First Amendment. When viewed in its best light, KOSA is equally about protecting children and their health. New York had similar public health and safety justifications for its controversial hate speech law, which was blocked by a district court and is pending appeal. Under this argument, which is often cited to rationalize speech limitations, the dangers to society are so great that the government should take action to protect vulnerable groups from harm. However, the courts have generally ruled that this is not sufficient justification to limit protected speech.

In American Booksellers Association v. Hudnut (1985), Judge Frank Easterbrook evaluated the constitutionality of a pornography prohibition enacted by the City of Indianapolis. The city reasoned that pornography has a detrimental impact on society because it influences attitudes and leads to discrimination and violence against women. As Judge Easterbrook wrote in his now-famous opinion, just because speech has a role in social conditioning or contributes loosely to social harm does not give the government license to control it. Such content is still protected, however harmful or insidious, and any answer to the contrary would allow the government to become the “great censor and director of which thoughts are good for us.”

In addition to the protecting children argument, a second layer of complexity is that KOSA enables censorship through roundabout means. The government accomplishes what it is barred from doing under the First Amendment by requiring online platforms to police a vast array of harms or risk legal consequences. This is a common feature of recent social media bills, which place the responsibility on platforms.

Practically, the result is inevitably less speech. Under KOSA, the platform has a “duty of care” to mitigate youth anxiety, depression, eating disorders, and addiction-like behaviors. While this provision focuses on the covered entity’s design and operation, it necessarily implicates speech since social media platforms are built around user-generated posts, from content curation to notifications. Because platforms are liable for falling short of the “duty of care,” this requirement is bound to sweep up millions of posts that are protected speech, even ordinary content that may trigger the enumerated harms. While the platform would technically be the entity implementing these policies, the government would be driving content removal.


Brazil Uses Child Safety as Cover for Online Digital ID Surge

Brazil’s Chamber of Deputies has advanced a bill marketed as a child protection measure, drawing sharp condemnation from lawmakers who say the process ignored legislative rules and opens the door to broad censorship of online content.

Bill PL 2628/2022, which outlines mandatory rules for digital platforms operating in Brazil, moved forward at an unusually fast pace after Chamber President Hugo Motta approved an urgency request on August 19.

That decision cut off critical steps in the legislative process, including committee review and broader debate, allowing the proposal to reach the full floor for a vote just one day later.

The urgency motion, Requerimento de Urgência REQ 1785/2025, passed without a roll-call vote. Instead, Motta used a symbolic vote, a method that records no individual positions and relies on the presiding officer’s perception of consensus. Requests for a formal, recorded vote were rejected outright.

Congressman Marcel van Hattem (NOVO-RS) accused the Chamber’s leadership of bypassing democratic norms. He said Motta approved the urgency request to expand the “censorship” of the Lula government.

Other deputies joined the protest, calling the process arbitrary and abusive.

Under the bill, digital platforms must verify users’ ages, take down material labeled offensive to minors, and comply with orders from a newly created federal oversight authority.

That body would hold sweeping powers to enforce regulations, issue sanctions, and even suspend platforms for up to 30 days in some circumstances, potentially without a full court decision.

Although the urgent request had been filed back in May, it gained renewed traction after social media influencer Felca released a series of videos exposing what he called the “adultization” of children online. His content prompted widespread media coverage and pushed the topic of online child safety to the forefront. In response, Motta committed to fast-tracking related legislation.


US Plan To Copy UK’s Disastrous Online Digital ID Verification Is Winning Friends in the Senate

The Kids Online Safety Act (KOSA) is moving forward in the US Senate with 16 new co-sponsors as of July 31, 2025, reviving a proposal that copies the same type of provision found in the UK’s controversial Online Safety Act, which has caused much backlash across the Atlantic.

In Britain, that measure forces online platforms to implement digital ID age checks before granting access to content deemed “harmful,” a policy that has caused intense resentment over privacy violations, the erosion of anonymity, and government overreach in the realm of free speech.

Now, US lawmakers are considering a similar framework, with more senators from both parties throwing their support behind the bill in recent weeks.

Marketed as a way to shield children from harmful online material, KOSA has gained prominent backing from Apple, which has publicly praised it as a step toward improving online safety. Yet beyond the reassuring branding, the legislation contains provisions that raise serious concerns for free expression and user privacy.

If enacted, the bill would give the Federal Trade Commission authority to investigate and sue platforms over content labeled as “harmful” to minors. This would push websites toward aggressive content moderation to avoid liability, creating an environment where speech is heavily filtered without the government ever issuing direct censorship orders.

The legislation also instructs the Secretary of Commerce, FTC, and FCC to explore “systems to verify age at the device or operating system level.” Such a mandate paves the way for nationwide digital identification, where every user’s online activity could be tied to a verifiable real-world identity.

Once anonymity is removed, the scope for surveillance and profiling expands dramatically, with personal data stored and potentially exploited by both corporations and government agencies.

Advocates of a free and open internet warn that laws like KOSA exploit the emotional appeal of child safety to introduce infrastructure that enables ongoing monitoring and identity tracking. Even with recent changes, such as removing state attorneys general from enforcement, these core concerns remain.

Senator Marsha Blackburn defended the bill, stating, “Big Tech platforms have shown time and time again they will always prioritize their bottom line over the safety of our children.” Yet KOSA’s structure could end up reinforcing the dominance of large tech firms, which are best positioned to implement costly verification systems and handle the resulting data.

The bill’s earlier version stalled in the House after leadership, including Speaker Mike Johnson, questioned its impact on free speech. Johnson remarked that he “love[s] the principle, but the details of that are very problematic,” a sentiment still shared by many who view KOSA as a gateway to lasting restrictions on online freedoms.

If this legislation moves forward, it will not simply affect what minors can view; it will alter the fundamental architecture of the internet, embedding identity verification and top-down content control into its design.


Age-Restricted Taxi Tracking? The Absurd Consequences Of Britain’s Online Safety Act

I was recently travelling in the UK and, after a lot of sightseeing on foot, decided to order a taxi to go back to my hotel.

I searched the internet for a local taxi firm and found one with relative ease. I called the number and went through an automated process, which worked well, and I managed to book a taxi quickly. The computer-generated voice told me that my taxi was on its way, and I was sent a link so that I could monitor its progress. The message also said that I would be given the taxi driver’s name, along with the type and registration number of the vehicle that was on its way….

I can’t understand why anyone would consider a link to show you the progress of a taxi that you have ordered to be age-inappropriate content.

I can only assume that it is to do with the recent Online Safety Act, although, coincidentally, I had recently changed mobile providers, so it might simply have been that my new provider had a different standard as to what was considered adult content.

I doubt this on the basis that the company I moved to, Talkmobile, is a wholly owned subsidiary of the company I had used previously, Vodafone, and, as you can see, the block was from Vodafone.

Whoever has decided that this link contains age-restricted content hasn’t necessarily thought this through.

Consider the scenario where a 17-year-old girl can’t get hold of her parents and it’s too far, or she does not want to walk home, so she orders a taxi through a reputable taxi service.

A link is sent to her so she can see the progress of the taxi that she has ordered.

Of course, she can’t open it because it’s considered age-inappropriate and, being only 17, she’s not in a position to prove that she’s over 18 and thus get the link to the taxi.

Thankfully it’s rare, but we do know that there are predators out there who will look for people who are vulnerable, and it’s not difficult to spot someone who’s waiting for somebody to pick them up or waiting for a taxi, because every time a car approaches the person will look up from whatever they’re doing to see if it’s the car that’s picking them up.

All it would take is for a predator to be around at that time, to wind down the window and ask, “Did you call for a taxi?” Because she has just ordered one, she believes this is her taxi, so she gets in, perhaps never to be seen again, all because some moron has decided that a link to follow the progress of a taxi is something you’re not allowed to see if you’re under the age of 18.


DIGITAL ID: The Shocking Plan to Kill Free Speech Forever

The U.S. is on the verge of launching a dystopian online surveillance machine—and disturbingly, Republicans are helping make it law.

The SCREEN Act and KOSA claim to protect kids, but they’re Trojan horses. If passed, every American adult would be forced to verify their ID to access the internet—just like in Australia, where “age checks” morphed into speech policing. In the UK, digital ID is already required for jobs, housing, and healthcare.

This is how they silence dissent: by tying your identity to everything you read, say, or buy online.

The trap is nearly shut. Once it locks in, online freedom vanishes forever.

Will Americans wake up before it’s too late? Watch Maria Zeee expose the full blueprint—and how little time we have left.


Spotify Threatens to Delete Accounts That Fail Digital ID Checks

Spotify has begun warning users that their accounts could be permanently removed unless they complete a new age verification process, part of a broader shift toward stricter content access and censorship controls on digital platforms.

The company has introduced a system that uses facial recognition technology to estimate a user’s age, with further ID verification required if the software detects someone who appears to be underage.

A notification recently began appearing within the app, instructing listeners to verify their age through Yoti, a third-party service that scans faces via smartphone cameras to assess whether a user meets the required age for access.

If the system concludes that a person might be too young, Spotify will ask for additional documentation, such as a government-issued ID. Anyone who does not complete the verification within 90 days will lose access to their account entirely.

According to Spotify’s updated policy page, “You cannot use Spotify if you don’t meet the minimum age requirements for the market you’re in,” adding that users who cannot confirm their age “will be deactivated and eventually deleted.”

The platform, which allows users as young as 13 to join, said it will begin prompting certain individuals to verify their age when they attempt to view content labeled as suitable only for adults.

“Some users will now have to confirm their age by going through an age assurance process,” Spotify stated. This may occur, for example, when someone tries to watch a music video rated 18+ by the rights-holder.

Spotify’s decision arrives amid a wave of newly mandated age-check measures driven by the UK’s new censorship law, the Online Safety Act, which came into force recently.

Under the law, platforms must restrict access to content not suitable for minors, including pornography and violent material, and enforce age thresholds set out in their own user policies. Companies that fail to comply face fines of up to 10 percent of global turnover.


Australia Bans YouTube for Children Under 16

The government of Australia has reversed its decision to grant YouTube an exemption from its sweeping ban on social media for children under 16. YouTube’s parent company, Google, is threatening legal action, but Australian officials vowed to push ahead with the ban.

“We can’t control the ocean, but we can police the sharks, and that is why we will not be intimidated by legal threats when this is a genuine fight for the wellbeing of Australian kids,” Communications Minister Anika Wells said when Google threatened to sue.

Australia announced its “world-leading” plan to bar children from using social media in November 2024. Despite resistance from Internet freedom advocates, and difficult questions about precisely how such a ban could be implemented, the relevant legislation was quickly passed, and the ban is set to take effect in December 2025.

Prime Minister Anthony Albanese gave a press conference on Wednesday in which he pledged to promote Australia’s social media ban to other countries at the United Nations General Assembly in September.

“I know from the discussions I have had with other leaders that they are looking at this and they are considering what impact social media is having on young people in their respective nations, it is a common experience,” Albanese said, appearing with the parents of children who were bullied to death on social media.

“We don’t do this easily. What we do, though, is respond to something that is needed here,” he said.

YouTube was granted an exemption from the ban when it was passed by Parliament in November, for several reasons. One was that YouTube was viewed as an important source of information for teens, so even though it carried potentially harmful content, the good was thought to outweigh the bad.

LGBTQ groups insisted YouTube was an important resource for gay and lesbian children, while public health groups said they used the platform to distribute important information to young people. Australian parents found YouTube less alarming than competing platforms like TikTok. YouTube also featured less direct interaction between users than most of the social media platforms that troubled Australian regulators.

A final objection to banning YouTube was that logging into the service is not required – visitors can access the vast majority of the platform’s content as “guests.” This meant there was no practical way to hold YouTube accountable for policing the age of its users.

Naturally, many of the platforms that were targeted by Australia’s social media ban resented the exemption granted to YouTube. These complaints might have had some bearing on the government’s decision to cancel YouTube’s exemption.

According to Australia’s ABC News, YouTube was added to the social media ban at the request of eSafety Commissioner Julie Inman Grant, who wrote a letter to Wells asking for YouTube’s exemption to be rescinded. Inman Grant said her recommendation was based on a survey of 2,600 children that found nearly 40 percent of them had been exposed to “harmful content” while using YouTube.


Tea App Leak Shows Why UK’s Digital ID Age Verification Laws are Dangerous

The UK’s Online “Safety” Act, legislation marketed as a safety net for children, was rolled out with all the foresight of a toddler launching a space program. Now, any site hosting “potentially harmful” content could be required to collect real-world ID, face scans, or official documents from users.

What could go wrong? Ask Tea, the women-centric dating gossip app that went viral by promising empowerment, then faceplanted into one of the most dangerous data breaches of the year. Their Firebase server, housing tens of thousands of selfies and government-issued IDs, was left wide open to anyone with a link.
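The failure mode described here is a well-known one: a Firebase backend created in “test mode” ships with security rules equivalent to `".read": true, ".write": true`, meaning anyone who discovers the database URL can fetch its entire contents without logging in. As an illustration only (this is not Tea’s actual configuration), a Firebase Realtime Database ruleset that instead scopes each user’s records to their own authenticated account looks roughly like this:

```json
{
  "rules": {
    ".read": false,
    ".write": false,
    "users": {
      "$uid": {
        ".read": "auth != null && $uid === auth.uid",
        ".write": "auth != null && $uid === auth.uid"
      }
    }
  }
}
```

The top-level deny is the default; the per-user grant is the part that takes deliberate effort, which is exactly the step that open-server breaches skip.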

This is the real-world consequence of lawmakers selling digital ID mandates as a solution to online harm: private companies getting access to sensitive personal data with all the discretion of a parade float, and then dropping it into the laps of the entire internet.

Let’s pause for a moment and appreciate the cosmic genius it takes to build an app allegedly designed to protect women, and then expose all of their private data to the world with the finesse of a first-time hacker copying a URL.

Tea is the dating app that rocketed to the top of the App Store by selling anonymity, safety, and empowerment, before face-planting into the Firebase server floor and spraying driver’s licenses and selfies like a busted confetti cannon.


Marijuana Legalization Doesn’t Increase Youth Use, Top Researcher Says At Federal Meeting

At a webinar hosted by the federal Substance Abuse and Mental Health Services Administration (SAMHSA) last week, a leading cannabis researcher threw cold water on the notion that legalizing marijuana leads to increases in youth use of the drug. He also touched on problems with roadside assessments of cannabis impairment, the risk of testing positive for THC after using CBD products and the need for more nuanced regulation around cannabinoids themselves.

The public talk, from Ryan Vandrey, an experimental psychologist and professor at Johns Hopkins University’s Behavioral Pharmacology Research Unit, was aimed at providing continuing education on marijuana for healthcare professionals. Titled “Behavioral Pharmacology of Cannabis – Trends in Use, Novel Products, and Impact,” it focused primarily on how variables like dosage, product formulation, mode of administration and chemical components such as terpenes can influence the drug’s effects.

Vandrey began by noting that marijuana is the most commonly used illicit drug in the United States. While self-reported consumption by adults has risen as more states have legalized in recent years, he noted, use by youth has generally remained flat or fallen.

“Use among youth is one of the biggest areas of concern related to the legalization and increased accessibility of cannabis,” he said, “but surprisingly, that cohort has actually maintained relatively stable [for] both past-year and daily use.”

Pointing to data from California going back to 1996, when the state ended prohibition for medical patients, Vandrey said there has “really been no change in the rates of cannabis use among eighth, 10th or 12th graders. And in fact, in very recent years, we’ve seen a decrease in rates of consumption.”
