GrapheneOS Defies Age Verification Surveillance Laws, Vowing to Protect User Privacy Worldwide

GrapheneOS has a simple answer to the wave of age verification laws moving through US state legislatures and already live in Brazil: no.

The privacy-focused Android fork announced last Friday that it won’t implement the age data collection these laws demand. “GrapheneOS will remain usable by anyone around the world without requiring personal information, identification, or an account,” the project stated.

“If GrapheneOS devices can’t be sold in a region due to their regulations, so be it.” That’s a blunter response than most OS developers are willing to give, and it’s worth understanding what it’s actually refusing.

Brazil’s Digital ECA (Law 15.211) came into force on March 17, hitting OS providers with fines of up to R$50 million (roughly $9.5 million) per violation for failing to build age verification into device setup.

California’s Digital Age Assurance Act, AB-1043, signed by Governor Newsom in October 2025 and effective January 1, 2027, goes further: it requires every OS provider to collect a user’s age or date of birth during account setup, then push that data to app stores and developers through a real-time API.
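To make the mechanism concrete, here is a minimal sketch of the kind of age-bracket signal an OS provider might derive from a collected date of birth and expose to apps. The bracket boundaries, labels, and function name here are illustrative assumptions, not the statutory text of AB-1043:

```python
from datetime import date

def age_bracket(date_of_birth: date, today: date) -> str:
    """Hypothetical sketch: map a date of birth to a coarse age
    bracket of the sort an OS-level age-signal API might push to
    app stores and developers. Brackets are assumptions for
    illustration only."""
    # Compute completed years, accounting for whether the
    # birthday has occurred yet this year.
    years = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    if years < 13:
        return "under_13"
    if years < 16:
        return "13_to_15"
    if years < 18:
        return "16_to_17"
    return "18_plus"
```

Even in this toy form, the privacy problem is visible: the signal cannot exist unless the OS first collects and retains a birth date for every account.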

Colorado’s SB26-051 cleared the state senate on March 3 with similar demands. The architecture these laws collectively envision is an age-linked identity layer baked into the operating system itself, present before you’ve opened a single app.

GrapheneOS is developed by the GrapheneOS Foundation, a registered Canadian nonprofit.

California’s AB-1043 carries civil penalties of up to $2,500 per affected child for negligent violations and $7,500 for intentional ones, enforced by the state attorney general. The Canadian nonprofit status provides some distance but not a guarantee.

The stakes grew more concrete after GrapheneOS and Motorola announced a partnership at MWC on March 2, bringing the hardened OS to future Motorola hardware and ending GrapheneOS’s long exclusivity to Google Pixel devices. A GrapheneOS-powered Motorola phone is expected in 2027.

Once a major hardware manufacturer ships devices with GrapheneOS pre-installed, those products need to comply with local regulations in every market where they’re sold, or Motorola will have to restrict sales geographically.

The defiant stance that’s easy for a nonprofit software project becomes a commercial problem for a global device manufacturer.

Keep reading

Blackburn’s TRUMP AMERICA AI Act Repeals Section 230, Expands AI Liability, and Mandates Age Verification

Senator Marsha Blackburn has introduced a 291-page legislative discussion draft that would reshape how information is allowed to exist online.

The TRUMP AMERICA AI Act, officially titled the “Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry” Act, bundles together Section 230 repeal, expanded AI liability, age verification mandates, and a stack of additional bills that have been circulating separately for years.

All of it is wrapped in a national AI framework that the draft ties to President Trump’s December Executive Order. The bill is framed as pro-innovation and pro-safety, designed to “protect children, creators, conservatives, and communities” while positioning the US to win the global AI race.

What the actual 291 pages describe is a system that centralizes regulatory authority, removes the legal protections platforms currently rely on, and hands new enforcement tools to federal agencies, state attorneys general, and private litigants simultaneously.

We obtained a copy of the bill for you here.

The legal foundation of the modern internet is Section 230 of the Communications Decency Act, which shields platforms from being sued over the content their users post. Without it, anything controversial, contested, or legally ambiguous becomes a liability platforms will quietly remove rather than defend.

Blackburn’s bill repeals it entirely, after a two-year transition period.

Platforms and AI developers could face lawsuits for “defective design,” “failure to warn,” or deploying systems deemed “unreasonably dangerous.”

AI platforms would be incentivized to heavily monitor users.

Keep reading

Google Gets Caught Red-Handed Pushing Left-Wing Agenda, Silencing Conservative News

Commentator Britt Hughes is raising concerns about what she describes as a lack of ideological balance on Google News, pointing to a February analysis that examined the platform’s top morning stories.

Hughes compared the findings to earlier criticism of another major platform before turning to Google’s results.

“Do you remember when I told you that Apple News had gone three straight months without featuring a single article from a conservative news outlet in its top stories?” Hughes said.

She then shifted to Google’s performance during the same type of review period.

“Well now they can hold Google’s beer,” Hughes said.

Keep reading

White House AI Framework Pushes Age Verification ID Mandate

The White House has published a National AI Legislative Framework, a set of recommendations to Congress intended to govern artificial intelligence with a single uniform standard rather than, as the document puts it, “a patchwork of conflicting state laws.”

The administration wants federal law to preempt the states. That part is straightforward. What the framework actually proposes is less straightforward.

Alongside a genuine free speech provision, the document contains age verification mandates, chat surveillance requirements, national security carve-outs that would tighten the relationship between AI companies and federal intelligence agencies, and an expansion of the TAKE IT DOWN Act, a law that we have already flagged for lacking adequate safeguards against censorship.

The White House is presenting all of this as part of the same coherent package.

Start with the child protection section: Congress should establish “commercially reasonable, privacy protective, age-assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors.” Age verification on AI platforms. The framework calls these requirements “privacy protective.” They are not.

There is no version of meaningful age verification that doesn’t require collecting sensitive personal data, and there is no version of collecting sensitive personal data at scale that isn’t a breach waiting to happen.

The only tools platforms have are identity-based checks (government IDs, biometric scans, credit card data, and third-party verification services) or biometric estimation.

The only way to prove that someone is old enough to use a site is to collect personal data about who they are.

In October 2025, Discord identified 70,000 users globally who potentially had their photo IDs exposed to hackers.

Keep reading

Blocking the Internet Archive Won’t Stop AI, But It Will Erase the Web’s Historical Record

Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper. 

That’s effectively what’s begun happening online in the last few months. The Internet Archive—the world’s largest digital library—has preserved newspapers since it went online in the mid-1990s. The Archive’s mission is to preserve the web and make it accessible to the public. To that end, the organization operates the Wayback Machine, which now contains more than one trillion archived web pages and is used daily by journalists, researchers, and courts.

But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web’s traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit. 
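For context, the web’s traditional opt-out mechanism is a plain-text robots.txt file at the site’s root, which crawlers voluntarily honor. A publisher wanting to exclude only archival crawling could, in principle, add rules like the following sketch; `ia_archiver` is the user-agent string historically associated with Wayback Machine crawling, used here as an illustrative assumption:

```
# Illustrative robots.txt rules (not any publisher's actual file).
# Ask the Archive's crawler to skip the whole site:
User-agent: ia_archiver
Disallow: /

# Leave all other crawlers unaffected:
User-agent: *
Allow: /
```

Measures that “go beyond” this, such as server-side blocks on the crawler’s requests, remove even that voluntary, transparent layer.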

For nearly three decades, historians, journalists, and the public have relied on the Internet Archive to preserve news sites as they appeared online. Those archived pages are often the only reliable record of how stories were originally published. In many cases, articles get edited, changed, or removed—sometimes openly, sometimes not. The Internet Archive often becomes the only source for seeing those changes. When major publishers block the Archive’s crawlers, that historical record starts to disappear.

The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several—including the Times—are now suing AI companies over whether training models on copyrighted material violates the law. There’s a strong case that such training is fair use.

Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response. Organizations like the Internet Archive are not building commercial AI systems. They are preserving a record of our history. Turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn’t start, and didn’t ask for. 

If publishers shut the Archive out, they aren’t just limiting bots. They’re erasing the historical record. 

Keep reading

UK Regulator Ofcom Has Fined 4chan £520,000 Under a Law That Doesn’t Apply in the US

Ofcom has now fined 4chan £520,000 ($691,572) under the Online Safety Act. The platform hasn’t paid a penny and isn’t intending to. Its lawyer replied to the latest demand with a picture of a hamster.

That’s the state of UK online speech regulation in 2026: a regulator issuing fines to American websites, receiving rodent-themed correspondence in return, and collecting almost nothing.

The breakdown: £450,000 for failing to put age verification in place, £50,000 for failing to assess the risk of illegal material being published, and £20,000 for failing to set out in its terms of service how it protects users from criminal content. Ofcom says 4chan must comply by April 2 or face daily penalties on top.

But this confrontation and push for 4chan to start checking IDs didn’t start with a £520,000 fine. It started with an email sent across the Atlantic to a company that owes the UK government nothing.

4chan is an American platform. It’s registered in Delaware. Its servers are in the United States. It has no employees in Britain, no offices in Britain, no legal registration in Britain, and no business presence of any kind in Britain. It is, in every meaningful sense, none of Ofcom’s business.

And what good would the First Amendment be if it could be overridden by foreign demands?

When the Online Safety Act came into full force, Ofcom declared that any site with “links to the UK” had duties to protect UK users, regardless of where in the world it was based.

That phrase, “links to the UK,” is intentionally vague, allowing British authorities to demand compliance from virtually any website. Under that logic, any American platform that a British person can visit is subject to UK speech law. No presence required. No UK operations required. Ofcom thinks it has jurisdiction over planet Earth.

Beginning in April 2025, Ofcom sent a “legally binding information notice” to 4chan’s corporate services company, by email, demanding compliance with the Online Safety Act and threatening that failure could “constitute a criminal offence” resulting in a fine of £18 million or 10% of 4chan’s worldwide turnover, arrest, and imprisonment for up to two years.

The notice was sent to a company not authorized to accept service on 4chan’s behalf. No UK court had issued it. No treaty process had been followed. It was, legally speaking, a strongly worded email.

Preston Byrne, the attorney representing 4chan, described the regulator’s actions as “an illegal campaign of harassment” directed at American tech firms, and made clear his client would not comply: “4chan has broken no laws in the United States, my client will not pay any penalty.”

By June 2025, Ofcom had opened a formal investigation.

Byrne’s reply was characteristically direct: “Increasing the size of a censorship fine does not cure its legal invalidity in the United States.” He continued: “After an entire year of your agency’s spectacular failure to get the memo, my only suggestion is that you take a first-year course on U.S. constitutional law.”

In August 2025, 4chan and Kiwi Farms took the fight to the US federal courts. The lawsuit, filed in the US District Court for the District of Columbia, argues that the Online Safety Act is not only an unlawful extraterritorial power grab but a direct attack on foundational American liberties. The complaint states: “Where Americans are concerned, the Online Safety Act purports to legislate the Constitution out of existence.”

The platforms argue that Ofcom’s demands, including written “risk assessments,” content moderation systems, removal of speech deemed “illegal” by UK standards, and user identity verification, would require violating the First Amendment and Section 230 of the Communications Decency Act. Byrne told reporters: “American citizens do not surrender our constitutional rights just because Ofcom sends us an email.”

Keep reading

Social Media Panic Lands Joseph Gordon-Levitt a U.N. Gig

Joseph Gordon-Levitt has a new gig, but it’s not in Hollywood. On Tuesday, the actor was appointed as the United Nations’ (U.N.) first global advocate for human-centric digital governance. 

In this role, Gordon-Levitt will “strengthen public understanding of how digital technologies shape everyday life, rights and opportunities,” according to a U.N. press release. In other words, he will be one of the U.N.’s chief advocates for regulating social media platforms.

In a video explaining his jargon-filled title, Gordon-Levitt warned that social media is causing an “epidemic of mental health issues and loneliness,” and a “rise in polarization and extremism and authoritarianism.” He said “governments need to get in the game” and curb these “damaging side effects” from social media. 

This is not the first time Gordon-Levitt has advocated for crackdowns on online platforms. In February, Gordon-Levitt traveled to Capitol Hill, where he urged senators to pass the Sunset Section 230 Act. The bill, introduced by Sens. Lindsey Graham (R–S.C.) and Dick Durbin (D–Ill.), would repeal Section 230—the federal law that limits platforms’ liability for third party speech—two years after the date of enactment. 

The “first step” in combatting the negative influence of Big Tech is to “sunset Section 230,” he said. “I want to see this thing pass 100 to zero. There should be nobody voting to give any more impunity to these tech companies, nobody.”

After receiving backlash for these comments, including from journalist Taylor Lorenz, Gordon-Levitt clarified that he didn’t want to completely scrap Section 230; he only wanted to reform it. 

During his speech on Capitol Hill, Gordon-Levitt invoked his authority as a concerned father of three to push for more online safety regulations. But emotional pleas do not always make for good policy. In fact, protecting children online has motivated more than a dozen bills in the House alone, many of which would infringe on free speech and privacy. 

One of these bills, the Reducing Exploitative Social Media Exposure for Teens (RESET) Act, would ban anyone under the age of 16 from creating or maintaining social media accounts. Another, the App Store Accountability Act, would require age verification for access to app stores and parental consent for users under 18. Most notably, the controversial Kids Online Safety Act (KOSA) would require online platforms to enforce policies and procedures to “address” various “harms to minors.” Reason’s Elizabeth Nolan Brown notes that KOSA would compel platforms to “censor a huge array of content out of fear that the government might decide it contributed to some vague category of harm and then sue.”

What proponents of these bills often fail to recognize is the many benefits that social media can offer kids. According to a 2022 Pew Research Center poll among teenagers, just 9 percent said that social media had a mostly negative effect on their lives. Citing the upsides of friendships and connections, 32 percent said social media had a mostly positive effect on them. Another study found that disconnection was a greater threat to adolescents’ self-esteem than heavy social media use, challenging the narrative that social media causes isolation. 

Keep reading

US Government Registers Domain Name ‘Aliens.gov’, Sparking Expectations That Trump May Be About To Release Classified Documents on UFOs

Space is the final frontier, and aliens are the biggest mystery.

We have been reporting here on TGP about Donald J. Trump’s directive to his administration to release the classified UFO files to the public.

But now, in a development that has gone viral, the domain name ‘Aliens.gov’ has been included in the government’s official website registry.

The discreet development ignited speculation that Trump could be about to disclose US intelligence information on whether or not we’re alone in the universe.

The New York Post reported:

“The domain name discovery comes almost exactly a month after President Trump announced he was ordering administration officials to release government files related to UFOs and extraterrestrials.

Though the website is not yet live, the government has reserved the domain name for an as-yet-unknown purpose, registry records show.”

Keep reading

Russia’s Rumored Telegram Block Appears Underway As Outage Reports Surge

Reports are flooding in from across Russia that Telegram is suddenly going dark, fueling speculation that the Kremlin may already be testing a nationwide block ahead of a rumored planned crackdown next month.

“Over the last 24 hours, Telegram has effectively stopped working through some providers if you are using Russian IP addresses,” tech sector observer Vladislav Voytenko told Kommersant FM on Monday. “As for using Telegram via mobile internet, you can basically forget about it,” he added.

Russia’s Main Radio Frequency Center, an arm of media watchdog Roskomnadzor, said a surge of complaints began appearing over the weekend, with at least one-third coming from Moscow, followed by St. Petersburg and other cities spread across the country’s vast 11 time zones.

Regional media has tracked user reports on outage monitors such as Downdetector and Sboi.rf, which show complaints spiking sharply over the weekend as the app began failing across multiple regions.

Some Russian users have described the platform as barely functioning “in any form.” They complain the app won’t open, messages won’t send, and photos and videos won’t load.

Keep reading

Canada’s Bill C-22 Mandates Mass Metadata Surveillance of Canadians

Canada’s Liberal government has introduced Bill C-22, the Lawful Access Act, 2026, a surveillance bill that compels electronic service providers to store Canadians’ metadata for a year and hands police and intelligence agencies new tools to access it.

We obtained a copy of the bill for you here.

The bill follows a failed first attempt, Bill C-2, which collapsed under the weight of near-universal criticism from opposition parties, rights groups, and the tech industry.

This is a mandatory data retention regime that forces companies to hold location data, device information, and other sensitive metadata on every Canadian, not just those suspected of crimes, ready for law enforcement retrieval via warrant. The logic is familiar: build the haystack first, search it later.

Keep reading