Spotify Threatens to Delete Accounts That Fail Digital ID Checks

Spotify has begun warning users that their accounts could be permanently removed unless they complete a new age verification process, part of a broader shift toward stricter content access and censorship controls on digital platforms.

The company has introduced a system that uses facial recognition technology to estimate a user’s age, with further ID verification required if the software detects someone who appears to be underage.

A notification recently began appearing within the app, instructing listeners to verify their age through Yoti, a third-party application that scans faces via smartphone cameras to assess whether a user meets the required age for access.

If the system concludes that a person might be too young, Spotify will ask for additional documentation, such as a government-issued ID. Anyone who does not complete the verification within 90 days will lose access to their account entirely.

According to Spotify’s updated policy page, “You cannot use Spotify if you don’t meet the minimum age requirements for the market you’re in,” adding that users who cannot confirm their age “will be deactivated and eventually deleted.”

The platform, which allows users as young as 13 to join, said it will begin prompting certain individuals to verify their age when they attempt to view content labeled as suitable only for adults.

“Some users will now have to confirm their age by going through an age assurance process,” Spotify stated. This may occur, for example, when someone tries to watch a music video rated 18+ by the rights-holder.

Spotify’s decision arrives amid a wave of newly mandated age-check measures driven by the UK’s new censorship law, the Online Safety Act, which came into force recently.

Under the law, platforms must restrict access to content not suitable for minors, including pornography and violent material, and enforce age thresholds set out in their own user policies. Companies that fail to comply face fines of up to 10 percent of global turnover.


Australia Bans YouTube for Children Under 16

The government of Australia has reversed its decision to grant YouTube an exemption from its sweeping ban on social media for children under 16. YouTube’s parent company, Google, is threatening legal action, but Australian officials vowed to push ahead with the ban.

“We can’t control the ocean, but we can police the sharks, and that is why we will not be intimidated by legal threats when this is a genuine fight for the wellbeing of Australian kids,” Communications Minister Anika Wells said when Google threatened to sue.

Australia announced its “world-leading” plan to bar children from using social media in November 2024. Despite resistance from Internet freedom advocates, and difficult questions about precisely how such a ban could be implemented, the relevant legislation was quickly passed, and the ban is set to take effect in December 2025.

Prime Minister Anthony Albanese gave a press conference on Wednesday in which he pledged to promote Australia’s social media ban to other countries at the United Nations General Assembly in September.

“I know from the discussions I have had with other leaders that they are looking at this and they are considering what impact social media is having on young people in their respective nations, it is a common experience,” Albanese said, appearing with the parents of children who were bullied to death on social media.

“We don’t do this easily. What we do, though, is respond to something that is needed here,” he said.

YouTube was granted an exemption from the ban when it was passed by Parliament in November, for several reasons. One was that YouTube was viewed as an important source of information for teens, so even though it carried potentially harmful content, the good was thought to outweigh the bad.

LGBTQ groups insisted YouTube was an important resource for gay and lesbian children, while public health groups said they used the platform to distribute important information to young people. Australian parents found YouTube less alarming than competing platforms like TikTok. YouTube also featured less direct interaction between users than most of the social media platforms that troubled Australian regulators.

A final objection to banning YouTube was that logging into the service is not required – visitors can access the vast majority of the platform’s content as “guests.” This meant there was no practical way to hold YouTube accountable for policing the age of its users.

Naturally, many of the platforms that were targeted by Australia’s social media ban resented the exemption granted to YouTube. These complaints might have had some bearing on the government’s decision to cancel YouTube’s exemption.

According to Australia’s ABC News, YouTube was added to the social media ban at the request of eSafety Commissioner Julie Inman Grant, who wrote a letter to Wells asking for YouTube’s exemption to be rescinded. Inman Grant said her recommendation was based on a survey of 2,600 children that found nearly 40 percent of them had been exposed to “harmful content” while using YouTube.


Tea App Leak Shows Why UK’s Digital ID Age Verification Laws are Dangerous

The UK’s Online “Safety” Act, legislation marketed as a safety net for children, was rolled out with all the foresight of a toddler launching a space program. Now, any site hosting “potentially harmful” content could be required to collect real-world ID, face scans, or official documents from users.

What could go wrong? Ask Tea, the women-centric dating gossip app that went viral by promising empowerment, then faceplanted into one of the most dangerous data breaches of the year. Its Firebase server, housing tens of thousands of selfies and government-issued IDs, was left wide open to anyone with a link.

This is the real-world consequence of lawmakers selling digital ID mandates as a solution to online harm: private companies getting access to sensitive personal data with all the discretion of a parade float, and then dropping it into the laps of the entire internet.

Let’s pause for a moment and appreciate the cosmic genius it takes to build an app allegedly designed to protect women, and then expose all of their private data to the world with the finesse of a first-time hacker copying a URL.

Tea, the dating app that rocketed to the top of the App Store by selling anonymity, safety, and empowerment, face-planted into the Firebase server floor, spraying driver’s licenses and selfies like a busted confetti cannon.


Marijuana Legalization Doesn’t Increase Youth Use, Top Researcher Says At Federal Meeting

At a webinar hosted by the federal Substance Abuse and Mental Health Services Administration (SAMHSA) last week, a leading cannabis researcher threw cold water on the notion that legalizing marijuana leads to increases in youth use of the drug. He also touched on problems with roadside assessments of cannabis impairment, the risk of testing positive for THC after using CBD products and the need for more nuanced regulation around cannabinoids themselves.

The public talk, from Ryan Vandrey, an experimental psychologist and professor at Johns Hopkins University’s Behavioral Pharmacology Research Unit, was aimed at providing continuing education on marijuana for healthcare professionals. Titled “Behavioral Pharmacology of Cannabis – Trends in Use, Novel Products, and Impact,” it focused primarily on how variables like dosage, product formulation, mode of administration and chemical components such as terpenes can influence the drug’s effects.

Vandrey began by noting that marijuana is the most commonly used illicit drug in the United States. While self-reported consumption by adults has risen as more states have legalized in recent years, he noted, use by youth has generally remained flat or fallen.

“Use among youth is one of the biggest areas of concern related to the legalization and increased accessibility of cannabis,” he said, “but surprisingly, that cohort has actually maintained relatively stable [for] both past-year and daily use.”

Pointing to data from California going back to 1996, when the state ended prohibition for medical patients, Vandrey said there has “really been no change in the rates of cannabis use among eighth, 10th or 12th graders. And in fact, in very recent years, we’ve seen a decrease in rates of consumption.”


Reddit Now Requires Age Verification In UK To Comply With Nation’s Online Safety Act

The news and social media aggregation platform Reddit now requires its United Kingdom-based users to provide age verification to access “mature content” hosted on its website.

Users must prove they are eighteen years or older to read or contribute such content.

UK regulator Ofcom stated “We expect other companies to follow suit, or face enforcement if they fail to act.” Internet content providers who fail to adopt such measures can face fines of up to £18 million or 10 percent of their worldwide revenue, whichever is greater.

For continued violations or serious cases, UK regulators may petition the courts to order “business disruption measures,” such as forcing advertisers to end contracts or barring payment providers from processing revenue for the platforms. Internet service providers can also be required to block users’ access to offending sites.

Reddit announced a partnership with Persona to provide an age verification service. Users can upload a “selfie” image or a photograph of their government-issued identification or passport as proof of majority. The company stated that age verification is a one-time process and that it will retain only users’ date of birth and verification status. Persona said it would retain the photos for just seven days.

David Greene, civil liberties director at the Electronic Frontier Foundation, called the UK’s Online Safety Act a real tragedy: “UK users can no longer use the internet without having to provide their papers, as it were.”

The rules come as no surprise given the regulatory over-reach of many European governments.

The canards of “protecting the children” and “online safety” provide indirect tools to deny access or curtail speech, tools too tempting and useful for pro-censorship politicians and officials to pass up.


Court rules Mississippi’s social media age verification law can go into effect

A Mississippi law that requires social media users to verify their ages can go into effect, a federal court has ruled. A tech industry group has pledged to continue challenging the law, arguing it infringes on users’ rights to privacy and free expression.

A three-judge panel of the 5th Circuit U.S. Court of Appeals overruled a decision by a federal district judge to block the 2024 law from going into effect. It’s the latest legal development as court challenges play out against similar laws in states across the country.

Parents – and even some teens themselves – are growing increasingly concerned about the effects of social media use on young people. Supporters of the new laws have said they are needed to help curb the explosive use of social media among young people, and what researchers say is an associated increase in depression and anxiety.

Mississippi Attorney General Lynn Fitch argued in a court filing defending the law that steps such as age verification for digital sites could mitigate harm caused by “sex trafficking, sexual abuse, child pornography, targeted harassment, sextortion, incitement to suicide and self-harm, and other harmful and often illegal conduct against children.”

Attorneys for NetChoice, which brought the lawsuit, have pledged to continue their court challenge, arguing the law threatens privacy rights and unconstitutionally restricts the free expression of users of all ages.


Backroom Politics and Big Tech Fuel Europe’s New Spy Push

A hastily arranged gathering within the European Union is reigniting fears over a renewed push for sweeping surveillance measures disguised as child protection.

Behind closed doors, a controversial “Chat Control” meeting, scheduled for Wednesday, has raised alarms among digital rights advocates who see it as a thinly veiled attempt to subvert the European Parliament’s current stance, which expressly prohibits the monitoring of encrypted communications.

Although no formal negotiations are underway among the Parliament, Commission, and Council, Javier Zarzalejos, the rapporteur for the regulation and chair of the Parliament’s Civil Liberties Committee (LIBE), has chosen to hold what is being described as a “shadow meeting.”

Notably, this comes over a year after the Parliament reached a compromise aimed at defending fundamental rights by shielding private, encrypted exchanges from warrantless surveillance.

The meeting’s guest list, obtained by netzpolitik.org, painted a lopsided picture.

Government and law enforcement figures from Denmark, including its Justice Ministry, which has put forward an even stricter proposal, are slated to attend, alongside Europol, representatives from Meta and Microsoft, and several pro-surveillance NGOs like ECPAT.

Also expected is Hany Farid, a US academic affiliated with the Counter Extremism Project, an organization known for its close relationships with intelligence agencies.

What was missing from the invitation list until late Monday was any representation from civil liberties groups or organizations that have consistently pushed back against warrantless monitoring.


Court Ruling on TikTok Opens Door to Platform “Safety” Regulation

A New Hampshire court’s decision to allow most of the state’s lawsuit against TikTok to proceed is now raising fresh concerns for those who see growing legal pressure on platforms as a gateway to government-driven interference.

The case, brought under the pretext of safeguarding children’s mental health, could pave the way for aggressive regulation of platform design and algorithmic structures in the name of safety, with implications for free expression online.

Judge John Kissinger of the Merrimack County Superior Court rejected TikTok’s attempt to dismiss the majority of the claims.

We obtained a copy of the opinion for you here.

While one count involving geographic misrepresentation was removed, the ruling upheld core arguments that focus on the platform’s design and its alleged impact on youth mental health.

The court ruled that TikTok is not entitled to protections under the First Amendment or Section 230 of the Communications Decency Act for those claims.

“The State’s claims are based on the App’s alleged defective and dangerous features, not the information contained therein,” Kissinger wrote. “Accordingly, the State’s product liability claim is based on the harm caused by the product: TikTok itself.”

This ruling rests on the idea that TikTok’s recommendation engines, user interface, and behavioral prompts function not as speech but as product features.

As a result, the lawsuit can proceed under a theory of product liability, potentially allowing the government to compel platforms to alter their design choices based on perceived risks.


FDACS removes over 85K illegal hemp products in child safety crackdown

Florida Agriculture Commissioner Wilton Simpson announced results of “Operation Safe Summer,” a statewide enforcement effort resulting in the removal of more than 85,000 hemp packages that were found in violation of state child-protection standards.

In the first three weeks of the operation, hemp-derived products were seized across 40 counties for “violations of Florida’s child-protection standards for packaging, labeling, and marketing,” according to a press release from the Department of Agriculture and Consumer Services.

Simpson said they will continue to “aggressively enforce the law, hold bad actors accountable, and put the safety of Florida’s families over profits.”

The state previously issued announcements advising hemp food establishments on the planned enforcement of amendments to Rule 5K-4.034, Florida Administrative Code, a press release said.


Government REFUSES to release ‘eSafety’ data behind YouTube kids ban

Labor Communications Minister Anika Wells has refused to release the research that underpins the eSafety Commissioner’s push to ban 15-year-olds from using YouTube.

The contentious recommendation, made by eSafety Commissioner Julie Inman Grant, has sparked widespread concern among stakeholders and the public. Yet Wells has declined to release the data informing the advice, citing the regulator’s preference to delay publication.

Sky News reports that the eSafety regulator has repeatedly blocked its attempts to access the full research, instead opting to “drip feed” select findings to the public over several months, even as the Albanese government is expected to make a final decision in just weeks.

A spokesperson for Wells said: “The minister is taking time to consider the eSafety Commissioner’s advice. The minister has been fully briefed by the eSafety Commissioner including the research methodology behind her advice.”

However, the Commissioner’s own “Keeping Kids Safe Online: Methodology” report reveals several weaknesses in the data. The survey relied entirely on self-reported responses taken at one point in time and used “non-probability-based sampling” from online panels, described in the report as “convenience samples”.
