A New Hampshire court’s decision to allow most of the state’s lawsuit against TikTok to proceed is raising fresh concerns among those who see growing legal pressure on platforms as a gateway to government-driven interference in online speech.
The case, brought under the pretext of safeguarding children’s mental health, could pave the way for aggressive regulation of platform design and algorithmic structures in the name of safety, with implications for free expression online.
Judge John Kissinger of the Merrimack County Superior Court rejected TikTok’s attempt to dismiss the majority of the claims.
We obtained a copy of the opinion for you here.
While one count involving geographic misrepresentation was dismissed, the ruling upheld the core claims, which focus on the platform’s design and its alleged impact on youth mental health.
The court ruled that TikTok is not entitled to protections under the First Amendment or Section 230 of the Communications Decency Act for those claims.
“The State’s claims are based on the App’s alleged defective and dangerous features, not the information contained therein,” Kissinger wrote. “Accordingly, the State’s product liability claim is based on the harm caused by the product: TikTok itself.”
The ruling rests on the idea that TikTok’s recommendation engine, user interface, and behavioral prompts function not as speech but as product features. As a result, the lawsuit can proceed under a product liability theory, potentially allowing the government to compel platforms to alter their design choices based on perceived risks.