YouTube says it will comply with Australia’s teen social media ban

Google’s YouTube shared a “disappointing update” to millions of Australian users and content creators on Wednesday, saying it will comply with a world-first teen social media ban by locking out users aged under 16 from their accounts within days.

The decision ends a stand-off between the internet giant and the Australian government, which had initially exempted YouTube from the age restriction, citing its use for educational purposes. Google (GOOGL.O) had said it was seeking legal advice on how to respond to being included.

“Viewers must now be 16 or older to sign into YouTube,” the company said in a statement.

“This is a disappointing update to share. This law will not fulfill its promise to make kids safer online and will, in fact, make Australian kids less safe on YouTube.”

The Australian ban is being closely watched by other jurisdictions considering similar age-based measures, setting up a potential global precedent for how the mostly U.S. tech giants behind the biggest platforms balance child safety with access to digital services.

The Australian government says the measure responds to mounting evidence that platforms are failing to do enough to protect children from harmful content.

Keep reading

Congress Goes Parental on Social Media and Your Privacy

Washington has finally found a monster big enough for bipartisan unity: the attention economy. In a moment of rare cross-aisle cooperation, lawmakers have introduced two censorship-heavy bills and a tax scheme under the banner of the UnAnxious Generation package.

The name, borrowed from Jonathan Haidt’s pop-psychology hit The Anxious Generation, reveals the obvious pitch: Congress will save America’s children from Silicon Valley through online regulation and speech controls.

Representative Jake Auchincloss of Massachusetts, who has built a career out of publicly scolding tech companies, says he’s going “directly at their jugular.”

The plan: tie legal immunity to content “moderation,” tax the ad money, and make sure kids can’t get near an app without producing an “Age Signal.” If that sounds like a euphemism for surveillance, that’s because it is.

The first bill, the Deepfake Liability Act, revises Section 230, the sacred shield that lets platforms host your political rants, memes, and conspiracy reels without getting sued for them.

Under the new proposal, that immunity becomes conditional on a vague “duty of care” to prevent deepfake porn, cyberstalking, and “digital forgeries.”

TIME’s report doesn’t define that last term, which could be a problem since it sounds like anything from fake celebrity videos to an unflattering AI meme of your senator. If “digital forgery” turns out to include parody or satire, every political cartoonist might suddenly need a lawyer on speed dial.

Auchincloss insists the goal is accountability, not censorship. “If a company knows it’ll be liable for deepfake porn, cyberstalking, or AI-created content, that becomes a board-level problem,” he says. In other words, a law designed to make executives sweat.

But with AI-generated content specifically excluded from Section 230 protections, the bill effectively redefines the internet’s liability protections.

Keep reading

Congress Pushes for Nationwide Internet Age Verification Plan

Republican lawmakers are proposing a new way to hold tech companies accountable for complying with age verification laws, despite resistance from websites like Pornhub. The App Store Accountability Act (ASA), introduced by Senator Mike Lee (R-UT) and Representative John James (R-MI), proposes a different model: requiring app stores themselves to verify users’ ages and pass that information to apps when they are downloaded.
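The data flow the ASA describes can be sketched in miniature: the app store verifies a user's age once, then hands each app a signed "age signal" at install time that the app can check without re-verifying. The names, payload fields, and signing scheme below are all illustrative assumptions, not anything specified in the bill.

```python
# Hypothetical sketch of an app-store age signal, assuming an HMAC-signed
# JSON payload. Nothing here is from the ASA text itself.
from dataclasses import dataclass
from typing import Optional
import hashlib
import hmac
import json

STORE_SECRET = b"store-signing-key"  # placeholder shared secret

@dataclass
class AgeSignal:
    user_id: str
    age_bracket: str        # e.g. "under_13", "13_to_15", "16_to_17", "adult"
    parental_consent: bool  # whether a parent approved the download

def issue_signal(signal: AgeSignal) -> dict:
    """App-store side: sign the age bracket so apps can trust it."""
    payload = json.dumps(signal.__dict__, sort_keys=True)
    sig = hmac.new(STORE_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_signal(token: dict) -> Optional[AgeSignal]:
    """App side: accept the bracket only if the signature checks out."""
    expected = hmac.new(STORE_SECRET, token["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return None
    return AgeSignal(**json.loads(token["payload"]))
```

The design choice this illustrates is the one Sen. Lee emphasizes: verification happens once, at the store, rather than separately inside every app.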

The bill is part of a broader push in Congress to tighten safeguards for minors online and has earned support from major tech companies, including Facebook parent company Meta, Pinterest, and Snap. Pinterest CEO Bill Ready argues that one standard would simplify the process and reduce the confusion created by a patchwork of state requirements. “The need for a federal standard is urgent,” he said.

“I think most people at most of these companies probably do want to protect kids,” Sen. Lee said, adding that support from tech companies like Pinterest “makes a big difference.”

However, the proposal faces resistance from civil liberties groups and digital rights advocates. Critics warn that compulsory age verification could limit access to lawful online content, raising First Amendment concerns. They also cite significant privacy risks, arguing that systems requiring users to submit sensitive personal information could expose them to data breaches or misuse.

Some major websites have rejected attempts to enforce online age verification. Pornhub has withdrawn its services from states that require government-issued ID or similar credentials for access to adult material. The company argued that these laws push users toward unregulated platforms while forcing supposedly legitimate sites to collect data they would prefer not to hold.

In 2025, the Supreme Court upheld a state age-verification law for explicit content in Texas, with the majority concluding that states may require age checks to prevent minors from viewing harmful material.

Supporters of federal action contend that the ASA would avoid the growing compliance difficulties posed by differing state regulations. Sen. Lee has stated, “I don’t believe that there’s anything unlawful, unconstitutional, or otherwise problematic about this legislation,” arguing that an app-store-centered approach would reduce repeated verification across multiple platforms.

Keep reading

Missouri Locks the Web Behind a “Harmful” Content ID Check

Starting November 30, 2025, people in Missouri will find the digital world reshaped: anyone wishing to visit websites containing “harmful” adult material will need to prove they are at least 18 years old by showing ID.

This new requirement marks Missouri’s entry into the growing group of US states adopting age verification laws for online content. Yet the move does more than restrict access; it raises serious questions about how much personal data people must surrender just to browse freely.

For many, that tradeoff is likely to make privacy tools like VPNs a near necessity rather than a choice.

The law defines its targets broadly. Any site or app where over one-third of the material is classified as “harmful to minors” must block entry until users confirm their age.

Those who do not comply risk penalties that can reach $10,000 a day, with violations categorized as “unfair, deceptive, fraudulent, or otherwise unlawful practices.”

To meet these standards, companies are permitted to check age through digital ID systems, government-issued documents such as driver’s licenses or passports, or existing transactional data that proves a person’s age.
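The Missouri rule as described above reduces to two checks: a site is covered if more than one third of its material is classified "harmful to minors," and covered sites must gate entry behind one of the permitted verification methods. This is a minimal sketch of that logic; the method names and thresholds are taken from the article's description, everything else is assumption.

```python
# Illustrative model of the Missouri age-gate rule. Method names follow the
# article's list of permitted checks; the function names are hypothetical.
from typing import Optional

ACCEPTED_METHODS = {"digital_id", "government_document", "transactional_data"}

def is_covered(total_items: int, harmful_items: int) -> bool:
    """Over one-third 'harmful to minors' content triggers the age gate."""
    return total_items > 0 and harmful_items * 3 > total_items

def may_enter(covered: bool, method: Optional[str]) -> bool:
    """Uncovered sites need no check; covered sites need a valid method."""
    return (not covered) or (method in ACCEPTED_METHODS)
```

Note that "over one-third" means a site with exactly one third harmful material would not be covered under this reading.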

Keep reading

GOP-Controlled Senate Committee Warns DC That Marijuana Is Federally Illegal, With ‘Enhanced Penalties’ For Sales Near Schools

GOP members of a powerful Senate committee are issuing a reminder that marijuana remains illegal under federal law and that the sale of cannabis near public schools and playgrounds can carry “enhanced penalties”—an issue they are specifically highlighting in relation to the location of dispensaries in Washington, D.C.

The Republican majority in the Senate Appropriations Committee released the text of a Financial Services and General Government (FSGG) spending bill and an attached report on Tuesday. As expected, the legislation itself retains a rider long championed by Rep. Andy Harris (R-MD) barring D.C. from using its tax dollars to legalize and regulate recreational marijuana sales, despite voters approving a ballot initiative to allow possession and home cultivation more than a decade ago.

In the report, a section on funding for “emergency planning and security costs” associated with the federal government’s presence in the District includes additional language related to cannabis enforcement and zoning issues.

Here’s the text of that section:

Marijuana Dispensary Proximity to Schools—The Committee reminds the District that the distribution, manufacturing, and sale of marijuana remains illegal under Federal law, which includes enhanced penalties for such distribution within one thousand feet of a public or private elementary, vocational, or secondary school or public or private college, junior college, or university, or a playground, among other real property where children frequent.

The report language is being released months after anti-marijuana organizations formally narced on several locally licensed cannabis businesses in D.C.—sending a letter to President Donald Trump, the U.S. attorney general and a federal prosecutor that identifies dispensaries they allege are too close to schools despite approval from District officials.

The groups said that while they were “pleased” to see former interim U.S. Attorney Ed Martin “take initial steps against one of the worst offenders” by threatening a locally licensed medical marijuana dispensary with criminal prosecution back in March, “we have not seen any public progress since then.”

Martin, for his part, has since been tapped by Trump to serve as U.S. pardon attorney.

Meanwhile, the underlying FSGG spending bill put forward by the committee’s GOP majority would continue to prohibit D.C. from creating a regulated, commercial cannabis market.

Keep reading

Texas: ID Will Be Linked to Every Google Search! New Law Requires Age Verification

Texas SB2420, known as the App Store Accountability Act, requires app stores to verify the age of users and obtain parental consent for those under 18. This law aims to enhance protections for minors using mobile applications and is set to take effect on January 1, 2026.

Texas has joined a multi-state crusade to enforce digital identification in America—marketed as a way to “protect children.”

Yet privacy experts say the real goal isn’t child protection—it’s control. 

Roblox insists its new “age estimation” system improves safety, but it relies on biometric and government data—creating the foundation for permanent digital tracking. With Texas now the fifth state to join the campaign, one question remains: how long before “protecting kids” becomes the excuse to monitor everyone?

From Reclaim the Net:

Texas Sues Roblox Over Child Safety Failures, Joining Multi-State Push for Digital ID

Texas has become the latest state to take legal action against Roblox, joining a growing number of attorneys general who accuse the gaming platform of failing to protect children.

The case also renews attention on the broader push for online age verification, a move that would lead to widespread digital ID requirements.

Attorney General Ken Paxton filed the lawsuit on November 6, alleging that Roblox allowed predators to exploit children while misleading families about safety protections.

We obtained a copy of the lawsuit for you here.

Keep reading

Wisconsin Lawmakers Propose VPN Ban and ID Checks on Adult Sites

Wisconsin legislators have found a new villain in their quest to save people from themselves: the Virtual Private Network.

The state’s latest moral technology initiative, split into Assembly Bill 105 and Senate Bill 130, would force adult websites to verify user ages and ban anyone connecting through a VPN.

It passed the Assembly in March and now waits in the Senate, where someone will have to pretend this is enforceable.

Supporters are selling the plan as a way to “protect minors from explicit material.”

The bill’s machinery reads like a privacy demolition project written by people who still call tech support to reset passwords.

The law would apply to any site that “knowingly and intentionally publishes or distributes material harmful to minors.” It then defines that material as anything lacking “serious literary, artistic, political, or scientific value for minors.”

The wording is broad enough to rope in half the internet, yet somehow manages to exclude “bona fide news” (as to be determined by the state) and cloud platforms that don’t create the content themselves.

Whether that covers social media depends on who you ask: lawyers, lobbyists, or whichever intern wrote the definitions section.

The bill instructs websites to delete verification data after access is granted or denied.

That sounds good until you recall how the tech industry handles deletion promises.

Au10tix left user records exposed for a year after pledging to delete them within 30 days. Tea suffered multiple breaches despite assurances of immediate deletion. In the real world, “deleted” often means “archived on an unsecured server until a hacker finds it.”

The headline feature is a rule penalizing anyone who uses a VPN to access restricted material. VPNs encrypt internet traffic and disguise user locations, which lawmakers apparently see as a threat to order.

The logic is that if people can hide their IP addresses, the state can’t check their ID to ensure they’re old enough to view certain content. That’s technically true and philosophically disturbing.

Officials in other places are already cheering this idea. Michigan introduced a proposal requiring internet providers to detect and block VPN traffic.
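Blocking VPN traffic, as the Michigan proposal would require, in practice means comparing a connection's source address against published lists of known VPN exit ranges, a crude test that inevitably misfires. A minimal sketch, assuming a fabricated blocklist (the range below is a reserved documentation range, not a real VPN provider's):

```python
# Sketch of IP-blocklist VPN detection, the kind of check a provider-level
# VPN ban would imply. The blocklist entry is illustrative only.
import ipaddress

KNOWN_VPN_RANGES = [ipaddress.ip_network("203.0.113.0/24")]  # fabricated

def looks_like_vpn(addr: str) -> bool:
    """True if the source address falls inside a listed VPN exit range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in KNOWN_VPN_RANGES)
```

The weakness is visible in the code itself: the check is only as good as the list, so new VPN endpoints slip through while any innocent user who happens to share a listed range gets blocked.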

If Wisconsin adopts the rule, VPN users would become collateral damage. Journalists, activists, and everyday users who rely on encryption for safety would be swept up in the ban.

Keep reading

Lawmakers Want Proof of ID Before You Talk to AI

It was only a matter of time before someone in Congress decided that the cure for the internet’s ills was to make everyone show their papers.

The “Guidelines for User Age-verification and Responsible Dialogue Act of 2025,” or GUARD Act, has arrived to do just that.

We obtained a copy of the bill for you here.

Introduced by Senators Josh Hawley and Richard Blumenthal, the bill promises to “protect kids” from AI chatbots that allegedly whisper bad ideas into young ears.

The idea: force every chatbot developer in the country to check users’ ages with verified identification.

The senators call it “reasonable age verification.”

That means scanning your driver’s license or passport before you can talk to a digital assistant.

Keeping in mind that AI is being added to pretty much everything these days, the implications of this could be far-reaching.

Keep reading

Florida Attorney Sues Roku Over Failure to Implement Age Verification, Privacy Concerns

Florida’s attorney general has filed a lawsuit against Roku, drawing attention to the growing privacy risks tied to smart devices that quietly track user behavior.

The case, brought by Attorney General James Uthmeier under the Florida Digital Bill of Rights, accuses the streaming company of collecting and selling the personal data of children without consent while refusing to take reasonable steps to determine which users are minors.

We obtained a copy of the lawsuit for you here.

The lawsuit portrays Roku as a company that profits from extensive data collection inside homes, including data from children. According to the complaint, Roku “collected, sold and enabled reidentification of sensitive personal data, including viewing habits, voice recordings and other information from children, without authorization or meaningful notice to Florida families.”

It continues, “Roku knows that some of its users are children but has consciously decided not to implement industry-standard user profiles to identify which of its users are children.”

Another passage states, “Roku buries its head in the sand so that it can continue processing and selling children’s valuable personal and sensitive data.”

The growing push for digital ID–based age verification is being framed as a way to protect children online, but privacy advocates warn it would do the opposite.

Keep reading

Instagram says it’s safeguarding teens by limiting them to PG-13 content

Meta says teenagers on Instagram will be restricted to seeing PG-13 content by default and won’t be able to change their settings without a parent’s permission

By BARBARA ORTUTAY, AP Technology Writer, The Associated Press

Teenagers on Instagram will be restricted to seeing PG-13 content by default and won’t be able to change their settings without a parent’s permission, Meta announced on Tuesday.

This means kids using teen-specific accounts will see photos and videos on Instagram similar to what they would see in a PG-13 movie — no sex, drugs or dangerous stunts, among other things.

“This includes hiding or not recommending posts with strong language, certain risky stunts, and additional content that could encourage potentially harmful behaviors, such as posts showing marijuana paraphernalia,” Meta said in a blog post Tuesday, calling the update the most significant since it introduced teen accounts last year.

Anyone under 18 who signs up for Instagram is automatically placed into restrictive teen accounts unless a parent or guardian gives them permission to opt out. The teen accounts are private by default, have usage restrictions and already filter out more “sensitive” content, such as posts promoting cosmetic procedures. But kids often lie about their ages when they sign up for social media, and while Meta has begun using artificial intelligence to find such accounts, the company declined to say how many adult accounts it has determined to be minors since rolling out the feature earlier this year.

The company is also adding an even stricter setting that parents can set up for their children.

The changes come as the social media giant faces relentless criticism over harms to children. As it seeks to add safeguards for younger users, Meta has already promised it wouldn’t show inappropriate content to teens, such as posts about self-harm, eating disorders or suicide.

But this does not always work. A recent report, for instance, found that teen accounts researchers created were recommended age-inappropriate sexual content, including “graphic sexual descriptions, the use of cartoons to describe demeaning sexual acts, and brief displays of nudity.”

Keep reading