Reddit Now Requires Age Verification In UK To Comply With Nation’s Online Safety Act

The news and social media aggregation platform Reddit now requires its United Kingdom-based users to provide age verification to access “mature content” hosted on its website.

Users must prove they are eighteen or older to view or post such content.

UK regulator Ofcom stated, “We expect other companies to follow suit, or face enforcement if they fail to act.” Internet content providers that fail to adopt such measures can face fines of up to eighteen million pounds or ten percent of their worldwide revenue, whichever is greater.

For continued violations or serious cases, UK regulators may petition the courts to order “business disruption measures,” such as forcing advertisers to end their contracts or barring payment providers from processing revenue for the platforms. Internet service providers can also be required to block their users’ access to offending sites.

Reddit announced a partnership with Persona to provide an age verification service. Users can upload a “selfie” image or a photograph of government-issued identification, such as a passport, as proof of age. The company stated that age verification is a one-time process and that it will retain only users’ date of birth and verification status. Persona says it will retain the photos for only seven days.
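To make the data-minimization claim concrete, here is a minimal sketch, in Python, of the kind of record a platform might persist if it truly kept nothing beyond a date of birth and a verification flag. Everything here (the names VerificationRecord and verify_age, the structure of the flow) is a hypothetical illustration, not Reddit’s or Persona’s actual system.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch only: the two fields a platform would persist
# if it kept nothing beyond a date of birth and a verification flag.
@dataclass
class VerificationRecord:
    date_of_birth: date  # the only personal datum Reddit says it retains
    verified: bool       # one-time result; no photo or ID is stored

def verify_age(date_of_birth: date, today: date | None = None) -> VerificationRecord:
    """Mark the user verified if they are eighteen or older.

    The selfie or ID photo that established the date of birth is assumed
    to stay with the third-party verifier (and, per Persona's stated
    policy, be deleted within seven days), never with the platform.
    """
    today = today or date.today()
    # Whole years elapsed, minus one if the birthday hasn't occurred yet.
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return VerificationRecord(date_of_birth=date_of_birth, verified=age >= 18)

# Example: a user born in mid-2000 verifying in 2025 passes the check.
print(verify_age(date(2000, 6, 15), today=date(2025, 8, 1)))
```

The point of the sketch is what is absent: no image and no document number, only the two fields Reddit says it keeps.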

David Greene, civil liberties director at the Electronic Frontier Foundation, called the UK’s Online Safety Act a real tragedy: “UK users can no longer use the internet without having to provide their papers, as it were.”

The rules come as no surprise given the regulatory overreach of many European governments.

The canards of “protecting the children” and “online safety” provide indirect tools to deny access and curtail speech, tools too tempting and too useful for pro-censorship politicians and officials.

Keep reading

Court rules Mississippi’s social media age verification law can go into effect

A Mississippi law that requires social media users to verify their ages can go into effect, a federal court has ruled. A tech industry group has pledged to continue challenging the law, arguing it infringes on users’ rights to privacy and free expression.

A three-judge panel of the 5th Circuit U.S. Court of Appeals overturned a federal district judge’s decision blocking the 2024 law from going into effect. It’s the latest legal development as court challenges to similar laws play out in states across the country.

Parents – and even some teens themselves – are growing increasingly concerned about the effects of social media use on young people. Supporters of the new laws have said they are needed to help curb the explosive growth of social media use among young people, along with what researchers say is an associated rise in depression and anxiety.

Mississippi Attorney General Lynn Fitch argued in a court filing defending the law that steps such as age verification for digital sites could mitigate harm caused by “sex trafficking, sexual abuse, child pornography, targeted harassment, sextortion, incitement to suicide and self-harm, and other harmful and often illegal conduct against children.”

Attorneys for NetChoice, which brought the lawsuit, have pledged to continue their court challenge, arguing the law threatens privacy rights and unconstitutionally restricts the free expression of users of all ages.

Keep reading

Backroom Politics and Big Tech Fuel Europe’s New Spy Push

A hastily arranged gathering within the European Union is reigniting fears over a renewed push for sweeping surveillance measures disguised as child protection.

Behind closed doors, a controversial “Chat Control” meeting, scheduled for Wednesday, has raised alarms among digital rights advocates who see it as a thinly veiled attempt to subvert the European Parliament’s current stance, which expressly prohibits the monitoring of encrypted communications.

Although no formal negotiations are underway between the Parliament, Commission, and Council, Javier Zarzalejos, the rapporteur for the regulation and chair of the Parliament’s Civil Liberties Committee (LIBE), has chosen to hold what is being described as a “shadow meeting.”

Notably, this comes over a year after the Parliament reached a compromise aimed at defending fundamental rights by shielding private, encrypted exchanges from warrantless surveillance.

The meeting’s guest list, obtained by netzpolitik.org, paints a lopsided picture.

Government and law enforcement figures from Denmark, including its Justice Ministry, which has put forward an even stricter proposal, are slated to attend, alongside Europol, representatives from Meta and Microsoft, and several pro-surveillance NGOs like ECPAT.

Also expected is Hany Farid, a US academic affiliated with the Counter Extremism Project, an organization known for its close relationships with intelligence agencies.

What was missing from the invitation list until late Monday was any representation from civil liberties groups or organizations that have consistently pushed back against warrantless monitoring.

Keep reading

Court Ruling on TikTok Opens Door to Platform “Safety” Regulation

A New Hampshire court’s decision to allow most of the state’s lawsuit against TikTok to proceed is now raising fresh concerns for those who see growing legal pressure on platforms as a gateway to government-driven interference.

The case, brought under the pretext of safeguarding children’s mental health, could pave the way for aggressive regulation of platform design and algorithmic structures in the name of safety, with implications for free expression online.

Judge John Kissinger of the Merrimack County Superior Court rejected TikTok’s attempt to dismiss the majority of the claims.

We obtained a copy of the opinion for you here.

While one count involving geographic misrepresentation was removed, the ruling upheld core arguments that focus on the platform’s design and its alleged impact on youth mental health.

The court ruled that TikTok is not entitled to protections under the First Amendment or Section 230 of the Communications Decency Act for those claims.

“The State’s claims are based on the App’s alleged defective and dangerous features, not the information contained therein,” Kissinger wrote. “Accordingly, the State’s product liability claim is based on the harm caused by the product: TikTok itself.”

This ruling rests on the idea that TikTok’s recommendation engines, user interface, and behavioral prompts function not as speech but as product features.

As a result, the lawsuit can proceed under a theory of product liability, potentially allowing the government to compel platforms to alter their design choices based on perceived risks.

Keep reading

FDACS removes over 85K illegal hemp products in child safety crackdown

Florida Agriculture Commissioner Wilton Simpson announced the results of “Operation Safe Summer,” a statewide enforcement effort resulting in the removal of more than 85,000 hemp packages found in violation of state child-protection standards.

In the first three weeks of the operation, hemp-derived products were seized across 40 counties for “violations of Florida’s child-protection standards for packaging, labeling, and marketing,” according to a press release from the Department of Agriculture and Consumer Services.

Simpson said they will continue to “aggressively enforce the law, hold bad actors accountable, and put the safety of Florida’s families over profits.”

The state previously issued announcements advising hemp food establishments on the planned enforcement of amendments to Rule 5K-4.034, Florida Administrative Code, a press release said.

Keep reading

Government REFUSES to release ‘eSafety’ data behind YouTube kids ban

Labor Communications Minister Anika Wells has refused to release the research that underpins the eSafety Commissioner’s push to ban 15-year-olds from using YouTube.

The contentious recommendation, made by eSafety Commissioner Julie Inman Grant, has sparked widespread concern among stakeholders and the public. Yet Wells has declined to release the data informing the advice, citing the regulator’s preference to delay publication.

Sky News reports that the eSafety regulator has repeatedly blocked its attempts to access the full research, instead opting to “drip feed” select findings to the public over several months. This is despite the Albanese government being expected to make a final decision within weeks.

A spokesperson for Wells said: “The minister is taking time to consider the eSafety Commissioner’s advice. The minister has been fully briefed by the eSafety Commissioner including the research methodology behind her advice.”

However, the Commissioner’s own “Keeping Kids Safe Online: Methodology” report reveals several weaknesses in the data. The survey relied entirely on self-reported responses taken at one point in time and used “non-probability-based sampling” from online panels, described in the report as “convenience samples”.
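The “convenience sample” caveat matters because online panels over-represent heavy internet users. A hypothetical simulation, with rates invented purely for illustration, shows how such a sample can inflate an estimate relative to a probability sample:

```python
import random

random.seed(42)

# Hypothetical simulation, with invented rates, of why "convenience
# samples" drawn from online panels can mislead. Suppose 30% of all
# teens report a given harm, but heavy internet users, who are
# over-represented in online panels, report it at 50%.
population = (["harm"] * 30 + ["no_harm"] * 70) * 100    # true rate: 30%
online_panel = (["harm"] * 50 + ["no_harm"] * 50) * 100  # panel rate: 50%

def reported_rate(sample):
    return sample.count("harm") / len(sample)

probability_sample = random.sample(population, 500)    # random draw from everyone
convenience_sample = random.sample(online_panel, 500)  # draw from the panel only

print(f"true rate:          {reported_rate(population):.0%}")
print(f"probability sample: {reported_rate(probability_sample):.0%}")
print(f"convenience sample: {reported_rate(convenience_sample):.0%}")
```

The probability sample lands near the true 30 percent, while the convenience sample lands near 50, and no sample size fixes a sampling frame that was skewed to begin with.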

Keep reading

Australia Orders Search Engines to Enforce Digital ID Age Checks

Australia has moved to tighten control over the digital environment with the introduction of three new online safety codes, measures that raise pressing privacy and censorship concerns.

These codes, formalized on June 27 under the Online Safety Act, go beyond introducing digital ID checks for adult websites; they also place substantial obligations on tech companies, from search engines and internet service providers (ISPs) to hosting platforms.

Businesses that fail to comply face the threat of significant financial penalties, with fines reaching as high as 49.5 million Australian dollars, or about $32.5 million US.

The codes seek to restrict Australian users’ exposure to material classified under two categories: Class 1C and Class 2.

Class 1C encompasses “online pornography – material that describes or depicts specific fetish practices or fantasies.”

Class 2 covers a broader range of content, from “online pornography – other sexually explicit material that depicts actual (not simulated) sex between consenting adults” (Class 2A), to “online pornography – material which includes realistically simulated sexual activity between adults. Material which includes high-impact nudity” or “other high-impact material which includes high-impact sex, nudity, violence, drug use, language and themes. ‘Themes’ includes social issues such as crime, suicide, drug and alcohol dependency, death, serious illness, family breakdown, and racism” (Class 2B).

Under Schedule 1, the Hosting Services Online Safety Code, companies that provide hosting services within Australia, including social media platforms and web hosts, are compelled to implement six compliance measures.

A core requirement obliges these services to manage the risks posed by significant changes to their platforms that could make Class 1C or Class 2 material more accessible to Australian children.

Schedule 2 – Internet Carriage Services Online Safety Code targets ISPs. It mandates the provision of filtering tools and safety guidance to users and empowers the eSafety Commissioner to order the blocking of material deemed to promote or depict abhorrent violent conduct.

The Commissioner has previously exercised similar powers, as in the directive to block footage of a stabbing circulated on X.

Schedule 3 – Internet Search Engine Services Online Safety Code directs search engine providers to roll out age verification for account creation within six months.

These platforms are also instructed to develop systems capable of detecting and filtering out online pornography and violent material by default, where technically feasible and practicable.

Keep reading

South Dakota Follows Texas with Broader Online Digital ID Law

The Supreme Court’s endorsement of Texas’ age verification law for adult websites has paved the way for a surge of similar online digital ID measures across the country.

South Dakota is the first to follow, as its new statute requiring age verification or estimation for sites distributing adult content takes effect today.

However, the South Dakota law is much broader and applies to a wider range of websites, not just those with a large percentage of adult content.

We obtained a copy of the bill for you here.

The law applies broadly to any platform that regularly deals in explicit material, without setting a specific threshold for how much of the site’s content qualifies.

This contrasts with Texas’ approach, where the rule kicks in if at least one-third of a site’s material is deemed pornographic.

Keep reading

Supreme Court Greenlights Online Digital ID Checks

With a landmark ruling that could shape online content regulation for years to come, the US Supreme Court has upheld Texas’s digital ID age-verification law for adult websites and platforms, asserting that the measure lawfully balances the state’s interest in protecting minors with the free speech rights of adults.

The 6-3 decision, issued on June 27, 2025, affirms the constitutionality of House Bill 1181, a statute that requires adult websites to verify the age of users before granting access to sexually explicit material.

Laws like House Bill 1181, framed as necessary safeguards for children, are quietly eroding the rights of adults to access lawful content or speak freely online without fear of surveillance or exposure.

Under such laws, anyone seeking to view legal adult material online (and eventually even those who want to access social media platforms, since these may contain content “harmful” to minors) is forced to provide official identification, often a government-issued digital ID or even biometric data, to prove their age.

Supporters claim this is a small price to pay to shield minors from harmful content. Yet these measures create permanent records linking individuals to their browsing choices, exposing them to unprecedented risks.

We obtained a copy of the opinion for you here.

Keep reading

COPPA 2.0: The Age Check Trap That Means Surveillance for Everyone

A new Senate bill designed to strengthen online privacy protections for minors could bring about major changes in how age is verified across the internet, prompting platforms to implement broader surveillance measures in an attempt to comply with ambiguous legal standards.

The Children and Teens’ Online Privacy Protection Act (S.836) (COPPA 2.0), now under review by the Senate Commerce Committee, proposes raising the protected age group from under 13 to under 17. It also introduces a new provision allowing teens aged 13 to 16 to consent to data collection on their own.

The bill has drawn praise from lawmakers across party lines and received backing from several major tech companies.

We obtained a copy of the bill for you here.

Supporters frame the bill as a long-overdue update to existing digital privacy laws. But others argue that a subtle change in how platforms are expected to identify underage users may produce outcomes that are more intrusive and far-reaching than anticipated.

Under the current law, platforms must act when they have “actual knowledge” that a user is a child.

The proposed bill replaces that threshold with a broader and less defined expectation: “knowledge fairly implied on the basis of objective circumstances.” This language introduces uncertainty about what constitutes sufficient awareness, making companies more vulnerable to legal challenges if they fail to identify underage users.

Instead of having to respond only when given explicit information about a user’s age, platforms would be required to interpret behavioral cues, usage patterns, or contextual data. This effectively introduces a negligence standard, compelling platforms to act preemptively to avoid accusations of noncompliance.

As a result, many websites may respond by implementing age verification systems for all users, regardless of whether they cater to minors. These systems would likely require more detailed personal information, including government-issued identification or biometric scans, to confirm users’ ages.
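To illustrate what “interpreting behavioral cues” could amount to in practice, here is a deliberately crude sketch; the signals, weights, and threshold are all invented for illustration and reflect no real platform’s compliance logic.

```python
# Hypothetical sketch of signal-based age inference. The signals,
# weights, and threshold are invented for illustration and reflect
# no real platform's compliance logic.
UNDERAGE_SIGNALS = {
    "follows_school_related_accounts": 0.4,
    "active_mainly_after_school_hours": 0.2,
    "grade_level_mentioned_in_bio": 0.6,
    "engages_with_teen_oriented_content": 0.3,
}

def likely_underage(user_signals: set[str], threshold: float = 0.7) -> bool:
    """Sum the weighted signals and flag the account if the total
    crosses the threshold. Under a 'knowledge fairly implied' standard,
    a cautious platform might run checks like this on every user, then
    demand ID from anyone flagged."""
    score = sum(UNDERAGE_SIGNALS.get(s, 0.0) for s in user_signals)
    return score >= threshold

# Example: two signals together cross the 0.7 threshold.
print(likely_underage({"grade_level_mentioned_in_bio",
                       "engages_with_teen_oriented_content"}))  # True
```

The vagueness of “fairly implied” is the core problem: a platform cannot know in advance whether a heuristic like this satisfies the standard, which is precisely the pressure that pushes toward blanket ID checks for everyone.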

Keep reading