Marijuana Legalization Doesn’t Increase Youth Use, Top Researcher Says At Federal Meeting

At a webinar hosted by the federal Substance Abuse and Mental Health Services Administration (SAMHSA) last week, a leading cannabis researcher threw cold water on the notion that legalizing marijuana leads to increases in youth use of the drug. He also touched on problems with roadside assessments of cannabis impairment, the risk of testing positive for THC after using CBD products and the need for more nuanced regulation around cannabinoids themselves.

The public talk, from Ryan Vandrey, an experimental psychologist and professor at Johns Hopkins University’s Behavioral Pharmacology Research Unit, was aimed at providing continuing education on marijuana for healthcare professionals. Titled “Behavioral Pharmacology of Cannabis – Trends in Use, Novel Products, and Impact,” it focused primarily on how variables like dosage, product formulation, mode of administration and chemical components such as terpenes can influence the drug’s effects.

Vandrey began by noting that marijuana is the most commonly used illicit drug in the United States. While self-reported consumption by adults has risen as more states have legalized in recent years, he noted, use by youth has generally remained flat or fallen.

“Use among youth is one of the biggest areas of concern related to the legalization and increased accessibility of cannabis,” he said, “but surprisingly, that cohort has actually maintained relatively stable [for] both past-year and daily use.”

Pointing to data from California going back to 1996, when the state ended prohibition for medical patients, Vandrey said there has “really been no change in the rates of cannabis use among eighth, 10th or 12th graders. And in fact, in very recent years, we’ve seen a decrease in rates of consumption.”

Keep reading

Reddit Now Requires Age Verification In UK To Comply With Nation’s Online Safety Act

The news and social media aggregation platform Reddit now requires its United Kingdom-based users to complete age verification to access “mature content” hosted on its website.

Users must prove they are eighteen years or older to read or contribute such content.

UK regulator Ofcom stated “We expect other companies to follow suit, or face enforcement if they fail to act.” Internet content providers who fail to adopt such measures can face fines of up to eighteen million pounds or ten percent of their worldwide revenue, whichever is greater.

For continued violations or serious cases, UK regulators may petition the courts to order “business disruption measures,” such as forcing advertisers to end contracts or barring payment providers from processing revenue for the platforms. Internet service providers can also be required to block their users’ access to the offending sites.

Reddit announced a partnership with Persona to provide an age verification service. Users will be able to upload a “selfie” image or a photograph of their government-issued identification or passport as proof of majority. The company stated that age verification is a one-time process and that it will retain only users’ date of birth and verification status. Persona stated it would retain the photos for only seven days.

David Greene, civil liberties director at the Electronic Frontier Foundation, called the UK’s Online Safety Act a real tragedy: “UK users can no longer use the internet without having to provide their papers, as it were.”

The rules come as no surprise given the regulatory over-reach of many European governments.

The canards of “Protecting the Children” and “Online Safety” hand pro-censorship politicians and officials indirect tools to deny access or curtail speech, tools too tempting and too useful to leave unused.

Keep reading

Court rules Mississippi’s social media age verification law can go into effect

A Mississippi law that requires social media users to verify their ages can go into effect, a federal court has ruled. A tech industry group has pledged to continue challenging the law, arguing it infringes on users’ rights to privacy and free expression.

A three-judge panel of the 5th Circuit U.S. Court of Appeals overruled a decision by a federal district judge to block the 2024 law from going into effect. It’s the latest legal development as court challenges play out against similar laws in states across the country.

Parents – and even some teens themselves – are growing increasingly concerned about the effects of social media use on young people. Supporters of the new laws have said they are needed to help curb the explosive use of social media among young people, and what researchers say is an associated increase in depression and anxiety.

Mississippi Attorney General Lynn Fitch argued in a court filing defending the law that steps such as age verification for digital sites could mitigate harm caused by “sex trafficking, sexual abuse, child pornography, targeted harassment, sextortion, incitement to suicide and self-harm, and other harmful and often illegal conduct against children.”

Attorneys for NetChoice, which brought the lawsuit, have pledged to continue their court challenge, arguing the law threatens privacy rights and unconstitutionally restricts the free expression of users of all ages.

Keep reading

Backroom Politics and Big Tech Fuel Europe’s New Spy Push

A hastily arranged gathering within the European Union is reigniting fears over a renewed push for sweeping surveillance measures disguised as child protection.

Behind closed doors, a controversial “Chat Control” meeting, scheduled for Wednesday, has raised alarms among digital rights advocates who see it as a thinly veiled attempt to subvert the European Parliament’s current stance, which expressly prohibits the monitoring of encrypted communications.

Despite no formal negotiations underway between the Parliament, Commission, and Council, Javier Zarzalejos, the rapporteur for the regulation and chair of the Parliament’s Civil Liberties Committee (LIBE), has chosen to hold what is being described as a “shadow meeting.”

Notably, this comes over a year after the Parliament reached a compromise aimed at defending fundamental rights by shielding private, encrypted exchanges from warrantless surveillance.

The meeting’s guest list, obtained by netzpolitik.org, painted a lopsided picture.

Government and law enforcement figures from Denmark, including its Justice Ministry, which has put forward an even stricter proposal, are slated to attend, alongside Europol, representatives from Meta and Microsoft, and several pro-surveillance NGOs like ECPAT.

Also expected is Hany Farid, a US academic affiliated with the Counter Extremism Project, an organization known for its close relationships with intelligence agencies.

What was missing from the invitation list until late Monday was any representation from civil liberties groups or organizations that have consistently pushed back against warrantless monitoring.

Keep reading

Court Ruling on TikTok Opens Door to Platform “Safety” Regulation

A New Hampshire court’s decision to allow most of the state’s lawsuit against TikTok to proceed is now raising fresh concerns for those who see growing legal pressure on platforms as a gateway to government-driven interference.

The case, brought under the pretext of safeguarding children’s mental health, could pave the way for aggressive regulation of platform design and algorithmic structures in the name of safety, with implications for free expression online.

Judge John Kissinger of the Merrimack County Superior Court rejected TikTok’s attempt to dismiss the majority of the claims.

We obtained a copy of the opinion for you here.

While one count involving geographic misrepresentation was removed, the ruling upheld core arguments that focus on the platform’s design and its alleged impact on youth mental health.

The court ruled that TikTok is not entitled to protections under the First Amendment or Section 230 of the Communications Decency Act for those claims.

“The State’s claims are based on the App’s alleged defective and dangerous features, not the information contained therein,” Kissinger wrote. “Accordingly, the State’s product liability claim is based on the harm caused by the product: TikTok itself.”

This ruling rests on the idea that TikTok’s recommendation engines, user interface, and behavioral prompts function not as speech but as product features.

As a result, the lawsuit can proceed under a theory of product liability, potentially allowing the government to compel platforms to alter their design choices based on perceived risks.

Keep reading

FDACS removes over 85K illegal hemp products in child safety crackdown

Florida Agriculture Commissioner Wilton Simpson announced results of “Operation Safe Summer,” a statewide enforcement effort resulting in the removal of more than 85,000 hemp packages that were found in violation of state child-protection standards.

In the first three weeks of the operation, hemp-derived products were seized across 40 counties for “violations of Florida’s child-protection standards for packaging, labeling, and marketing,” according to a press release from the Department of Agriculture and Consumer Services.

Simpson said they will continue to “aggressively enforce the law, hold bad actors accountable, and put the safety of Florida’s families over profits.”

The state previously issued announcements advising hemp food establishments on the planned enforcement of amendments to Rule 5K-4.034, Florida Administrative Code, a press release said.

Keep reading

Top police chiefs say smell of cannabis is a ‘sign of crime’ that can make even them feel ‘unsafe’… and frontline officers should ‘do something about it’

Britain’s top police chiefs today urge their officers to crack down on cannabis.

The country’s longest-serving chief constable admits the smell of the drug is a ‘sign of crime and disorder’ which makes even him ‘feel unsafe’.

Sir Andy Marsh, who leads the College of Policing, said frontline officers should ‘do something about it’.

He is backed by Greater Manchester Police Chief Sir Stephen Watson and Merseyside Chief Constable Serena Kennedy.

In a joint intervention following recent calls for decriminalisation, they tell future police leaders they must listen to their communities and be prepared to take a tougher line.

Launching a new leadership programme for policing, they acknowledged forces were in a ‘foot race for public confidence’ and officers can no longer ignore what has traditionally been perceived as the ‘little stuff’. 

Sir Andy, who is the officer in charge of police standards, said: ‘In my community, my kids are too frightened to use the bus stop because it always stinks of cannabis.’

He told the Mail ‘policing is about creating an environment that people feel safe in’ and said: ‘I’m speaking from personal experience and people I talk to, if I walk through a town, city, or even village centre and I smell cannabis, it does actually have an impact on how safe I feel.

‘One definition of what police should be doing is – [if] something [is] happening which does not feel right, someone ought to do something about it.’

He added: ‘For me, the smell of cannabis around communities, it feels like a sign of crime and disorder.’

The call for action comes after figures on Sunday revealed that three in four people caught with the drug last year were let off with an informal warning or community resolution.

In the year to September 2024, 68,513 people were found in possession of cannabis, but only 17,000 were charged, according to data released under Freedom of Information laws.

Mayor of London Sir Sadiq Khan has called for the decriminalisation of possession when it involves small amounts of the drug. 

But recently judges have warned that cannabis is ‘not a benign drug’ after a series of horrific cases, including a samurai sword rampage in Hainault, east London, where a schoolboy was killed and four others seriously injured by a drug-crazed Brazilian who had a £100-a-day habit.

The head of Merseyside Police said of cannabis: ‘The public should absolutely expect us to take positive action around those things and hold us to account over it. 

‘We have to work with our communities, it’s no longer good enough to inflict priorities on them, we have to hear their voices and make them part of the problem-solving.’

Keep reading

COPPA 2.0: The Age Check Trap That Means Surveillance for Everyone

A new Senate bill designed to strengthen online privacy protections for minors could bring about major changes in how age is verified across the internet, prompting platforms to implement broader surveillance measures in an attempt to comply with ambiguous legal standards.

The Children and Teens’ Online Privacy Protection Act (S.836) (COPPA 2.0), now under review by the Senate Commerce Committee, proposes raising the protected age group from under 13 to under 17. It also introduces a new provision allowing teens aged 13 to 16 to consent to data collection on their own.

The bill has drawn praise from lawmakers across party lines and received backing from several major tech companies.

We obtained a copy of the bill for you here.

Supporters frame the bill as a long-overdue update to existing digital privacy laws. But others argue that a subtle change in how platforms are expected to identify underage users may produce outcomes that are more intrusive and far-reaching than anticipated.

Under the current law, platforms must act when they have “actual knowledge” that a user is a child.

The proposed bill replaces that threshold with a broader and less defined expectation: “knowledge fairly implied on the basis of objective circumstances.” This language introduces uncertainty about what constitutes sufficient awareness, making companies more vulnerable to legal challenges if they fail to identify underage users.

Instead of having to respond only when given explicit information about a user’s age, platforms would be required to interpret behavioral cues, usage patterns, or contextual data. This effectively introduces a negligence standard, compelling platforms to act preemptively to avoid accusations of noncompliance.

As a result, many websites may respond by implementing age verification systems for all users, regardless of whether they cater to minors. These systems would likely require more detailed personal information, including government-issued identification or biometric scans, to confirm users’ ages.

Keep reading

Federal Prosecutors Are Starting To Sound Like Campus Activists About Sex and Consent

The Department of Justice (DOJ) is now embracing ideas about coercion and consent that rose to prominence on college campuses during the Barack Obama administration.

That’s the implication of the OneTaste case, in which a jury has returned a guilty verdict against Rachel Cherwitz and Nicole Daedone, who stood accused of a conspiracy to commit forced labor during their time with the sexual and spiritual self-help organization.

I have written many words about this case already, and I’m going to try to refrain from rehashing all of the details in today’s newsletter. (If you’re new to the case and want to dive deep, here you go. If you want a couple of overviews of how the trial played out, see here and here.)

What I want to focus on right now is the larger implications of this case. They’re not pretty.

From College Campuses to #MeToo to the DOJ

If these ideas about coercion and consent didn’t start on the college campuses of the 2010s, that’s at least when they became fully institutionalized, adopted not just by activist students and women’s studies professors but by college administrators and the Title IX offices they answered to. There was affirmative consent, sure, but also a broader suspicion of consent as a worthwhile standard, or at least a willingness to dismiss it in favor of more arcane ideas about sexual permissibility.

Suddenly it wasn’t enough to say no and it wasn’t even enough to say yes—one had to consider a complex set of power dynamics, alcohol consumption levels, subtle nonverbal cues, and so on, to determine if consent counted. It stopped just short of taking astrological signs into account.

We went from a reasonable corrective (acknowledging that sexual assault needn’t necessarily involve force or violence) to women getting support for claims of sexual coercion and violation even when they seemed to willingly go along with sexual activity at the time, but later said they weren’t enthusiastic enough about it and a partner should have known that and stopped. Basically, sex was only consensual if a woman felt deep down in her heart, during and after, that everything had been OK.

We saw this idea migrate from campus newspapers and Title IX offices to the broader world during the #MeToo movement. It’s perhaps best exemplified by a story about the actor Aziz Ansari. A young woman went to dinner with him, then back to his house, and later excoriated him in Babe magazine for not reading her cues about not wanting to fool around and allegedly pressuring her to do so. The piece called it sexual misconduct and a violation. But when the woman explicitly told Ansari no, he stopped, per her account of things. And when she wanted to go, she left.

The Babe article provoked a huge debate about whether this sort of thing—which in another era we might have just called a bad date or caddish behavior—was a form of sexual assault and where responsibility lies here. Are sexual partners supposed to be mind readers? Do women have any responsibility for explicitly making their wishes known?

Keep reading

France considers requiring Musk’s X to verify users’ age

The French government is considering designating X as a porn platform — a move that would likely force the platform to implement strict age verification requirements.

Such a designation could effectively ban children from accessing the social media app unless it curtailed adult content. Paris has recently upped its efforts to protect kids online by requiring age verification by porn platforms.

“X has indicated since 2024 that it accepts the distribution of pornographic content. It must therefore be treated as such,” Digital Minister Clara Chappaz’s office told POLITICO.

Her team has been tasked with “examining the designation of X in the decree concerning pornographic sites that must verify the age of their users.”

The confirmation follows an appearance by Chappaz on French TV show “Quotidien” on Thursday evening, where she said X will soon receive “the same pretty papers as YouPorn” instructing X to ban adult content or implement age screening.

Porn platforms serving content in France are required to implement age verification measures with a final deadline of June 7, although some are protesting.

Failure to comply could see sites fined, delisted from search engines or blocked completely.

Keep reading