France Just Redefined Global Speech on TikTok

TikTok’s decision to block the “SkinnyTok” hashtag across its entire platform followed direct intervention from the French government, revealing how national pressure is increasingly shaping global online speech, even when the content in question is not illegal.

French Digital Minister Clara Chappaz claimed victory, celebrating the platform’s removal of the term, which is often associated with extreme dieting and weight loss trends. “This is a first collective victory,” she wrote on X after TikTok confirmed the ban was now global.

A spokesperson for the platform stated the hashtag was removed as part of ongoing safety reviews and due to its link to “unhealthy weight loss content.”

While the move has been portrayed as a step forward for user safety, particularly for young audiences, it also raises deeper concerns about the role of governments in controlling speech on private platforms.

The “SkinnyTok” content, though considered by some to be harmful, does not violate any laws. Still, the French government managed to pressure TikTok into removing it worldwide. This maneuver highlights a growing trend in which authorities seek to influence online content standards beyond their own borders, often using platforms as enforcers.

Rather than work through the European Commission or wait for outcomes from the ongoing investigation under the Digital Services Act (DSA), France chose to confront TikTok directly.

Keep reading

EU Tech Laws Erect Digital Iron Curtain

Over the past few decades, Europe has created little of real relevance in technology platforms, social networks, operating systems, or search engines.

In contrast, it has built an extensive regulatory apparatus designed to limit and punish those who have actually innovated.

Rather than producing its own alternatives to American tech giants, the EU has chosen to suffocate existing ones through regulations such as the Digital Services Act (DSA) and the Digital Markets Act (DMA).

The DSA aims to control the content and internal functioning of digital platforms, requiring the rapid removal of content deemed “inappropriate” in what amounts to a modern form of censorship, as well as the disclosure of how algorithms work and restrictions on targeted advertising. The DMA, in turn, seeks to curtail the power of so-called gatekeepers by forcing companies like Apple, Google, or Meta to open their systems to competitors, avoid self-preferencing, and separate data flows between products.

These two regulations could potentially have a greater impact on U.S. tech companies than any domestic legislation, as they are rules made in Brussels but applied to American companies in an extraterritorial manner. And they go far beyond fines: they force structural changes to the design of systems and functionalities, something that no sovereign state should be imposing on foreign private enterprise.

In April 2025, Meta was fined €200 million under the Digital Markets Act for allegedly imposing a “consent or pay” model on European users of Facebook and Instagram, without offering a real alternative. Beyond the fine, it was forced to separate data flows between platforms, thereby compromising the personalized advertising system that sustains its profitability. This was a blatant interference in its business model.

That same month, Apple was fined €500 million for preventing platforms like Spotify from informing users about alternative payment methods outside the App Store. The company was required to remove these restrictions, opening iOS to external app stores and competing payment systems. Once again, this was an unwelcome intrusion and a direct attack on the exclusivity-based model of the Apple ecosystem.

Other companies such as Amazon, Google, Microsoft, and even X are also under scrutiny, with X particularly affected by DSA rules, having been the target of a formal investigation in 2023 for alleged noncompliance in content moderation.

Keep reading

Fifth Circuit Affirms Reasonable Expectation of Privacy in Cloud Storage in Dropbox Case

A federal appeals court has ruled that state officials violated the Fourth Amendment when they orchestrated the covert retrieval of documents from a nonprofit’s Dropbox folder, an outcome that significantly strengthens legal protections for digital privacy in cloud-based environments.

In a 25-page decision issued May 28, 2025, the US Court of Appeals for the Fifth Circuit held that The Heidi Group, a Texas-based pro-life healthcare organization, had a reasonable expectation of privacy in its digital files and that a state investigator’s role in acquiring them without judicial authorization amounted to an unconstitutional search.

We obtained a copy of the decision for you here.

Writing for the court, Judge Andrew S. Oldham emphasized that the constitutional right to be free from unreasonable searches extends to “the content of stored electronic communications,” including files housed in commercial cloud platforms.

“Heidi has a reasonable expectation of privacy in its documents and files uploaded to Dropbox,” the opinion stated. “Heidi’s records are analogous to letters, phone calls, emails, and social media messages: Each contains information content transmitted through or stored with an intermediary that is not intended to ‘be broadcast to the world.’”

The controversy arose after Phyllis Morgan, a former employee of The Heidi Group, exploited her lingering access to the organization’s Dropbox folder for nearly a year after being terminated.

Rather than reporting the breach or seeking lawful channels to obtain the data, a senior investigator from the Texas Health and Human Services Commission’s Office of Inspector General (OIG), Gaylon Dacus, allegedly encouraged the ex-employee to continue accessing the nonprofit’s confidential materials and forward them to the state.

Keep reading

EU Commissioner Defends EU’s Censorship Law While Downplaying Brussels’ Indirect Influence Over Online Speech

As the European Union moves aggressively to shape online discourse through the Digital Services Act (DSA), EU Commissioner for Technology Henna Virkkunen has been deflecting scrutiny abroad, pointing fingers at the United States for what she describes as a more extensive censorship regime.

Relying on transparency data, she argues that platforms like Meta and X primarily remove content based on their own terms and conditions rather than due to DSA directives. But this framing misrepresents how enforcement works in practice, and downplays the EU’s systemic role in pushing platforms toward silence through legal design, not open decrees.

Virkkunen highlighted that between September 2023 and April 2024, 99 percent of content takedowns occurred under platform terms of service, with only 1 percent resulting from “trusted flaggers” authorized under the DSA. A mere 0.001 percent were direct orders from state authorities.

On paper, this paints a picture of platform autonomy. But in reality, the architecture of the DSA ensures that removals appear “voluntary” precisely because they are incentivized by looming regulatory consequences.

Under the DSA, platforms are held legally accountable for failing to remove certain types of content.

This liability drives a strong incentive to err on the side of over-removal, creating a culture where companies preemptively censor to minimize risk. Virkkunen frames these decisions as internal, but in truth, many of them reflect anticipatory compliance with European legal expectations.

The fact that content is flagged and removed “under T&Cs” does not indicate independence; it reflects a strategy of risk avoidance in response to EU enforcement pressure.

This dynamic is by design. The DSA doesn’t rely on high numbers of direct takedown orders from governments. Instead, it outsources content control to the platforms themselves, embedding speech restrictions in the guise of corporate policy.

The regulatory burden falls on private actors, but the agenda is shaped by Brussels. Delegating enforcement doesn’t dilute state influence; it conceals it. The veneer of decentralization does not remove the fact that the state has created the framework and exerts ongoing leverage over what platforms consider acceptable.

Keep reading

Texas Ban On Social Media For Under 18s Fails To Pass Senate

Legislation that would have banned anyone under the age of 18 from using or creating social media accounts in Texas stalled in the Senate this week after lawmakers failed to vote on it.

House Bill 186, filed by state Rep. Jared Patterson (R-Frisco), would have prohibited minors from creating accounts on social media sites such as Instagram, TikTok, Facebook, Snapchat, and others by requiring the platforms to verify users’ age.

The measure previously passed the GOP-controlled state House with broad bipartisan support in April, but momentum behind the bill slowed at the eleventh hour in the state Senate this week as lawmakers faced a weekend deadline to send bills to Gov. Greg Abbott’s desk.

The legislative session ends on Monday.

In a statement on the social media platform X late Thursday, Patterson said the bill’s failure to pass in the Senate was “the biggest disappointment of my career,” adding that no other bill filed this session “would have protected more kids in more ways than this one.”

The Republican lawmaker said he believed its failure to pass meant “I’ve failed these kids and their families.”

“I felt the weight of an entire generation of kids who’ve had their mental health severely handicapped as a result of the harms of social media,” the lawmaker said. “And then there’s the others – the parents of Texas kids who’ve died as a result of a stupid social media ‘challenge’ or by suicide after being pulled down the dangerous rabbit holes social media uses to hook their users, addict them on their products, and drive them to depression, anxiety, and suicidal ideation.”

“Finally, there’s the perfectly happy and healthy teens in Texas today, who will find themselves slowly falling off the edge before the legislature meets again in 2027,” he stated.

Patterson suggested he would try to pass the measure again when the Texas Legislature meets in 2027.

House Bill 186 would have prohibited a child from entering into a contract with a social media platform to become an account holder and required platforms to verify that a person seeking to become an account holder is 18 years of age or older before allowing them to create an account.

The legislation would have also required social media platforms to delete accounts belonging to individuals under the age of 18 at a parent or guardian’s request.

Keep reading

Centralized AI Threatens A Democratic Digital Future

Major corporations dominate the AI industry, leaving decentralized AI (DeAI) companies in the dust. To build a more decentralized world, the sector must execute a focused DeAI strategy, with shared standards between projects, without compromise.

In April, a UN report warned that AI’s $4.8-trillion market is dominated by a mere 100 companies, most of which are based in the US and China. Centralized AI incumbents have the money and the connections to control this massive new industry, which carries significant implications for society.

These companies, all employing centralized AI technology, have run into their fair share of headaches. For example, Microsoft’s Copilot garnered attention for creating explicit, inappropriate images, such as children in compromising scenarios. This sparked a public and regulatory backlash.

Although Microsoft created stricter moderation, it had already demonstrated that centralized AI can harbor problems in part due to its closed-source code. 

In the financial sector, Citadel was caught up in an AI trading scandal after its algorithms allegedly manipulated stock prices by creating artificial volume.

Google’s involvement in Project Maven, a Pentagon pilot program that applied AI to the analysis of military drone footage, has raised ethical questions.

“We believe that Google should not be in the business of war,” reads a letter penned by Google employees and addressed to Sundar Pichai, the company’s CEO. The employees requested that Google leave Project Maven.

“We ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology,” the letter states.

So much for “Don’t be evil” — the company’s old slogan.

These situations give us clear examples of the potential failures of centralized AI, including ethical lapses, opaque decision-making, and monopolistic control. DeAI’s open-source ethos, community governance, audit trails, and compute facilities can give it an edge over a handful of massive corporations in the future of AI.

Keep reading

European Commission Launches €5.69M European Fact-Checking Funding Network to Advance “Democracy Shield” and Expand Censorship Infrastructure

The European Commission has launched a €5 million initiative presented as a fact-checking support program, but beneath the surface it reads as yet another calculated step toward institutionalizing censorship across the European Union.

This call for proposals is marketed as a tool to “protect democracy” and combat “disinformation,” but the structure, goals, and affiliations of the program point clearly to the opposite: a top-down, publicly funded apparatus for narrative enforcement.

Slated to run until September 2, 2025, the project is open not only to EU Member States but also to candidate countries like Ukraine and Moldova, jurisdictions framed as highly vulnerable to “foreign interference,” especially pro-Kremlin disinformation.

This strategic framing serves a dual purpose: justifying increased surveillance of content and securing narrative dominance in geopolitically sensitive areas.

The program’s core deliverables (protecting fact-checkers from so-called “harassment,” creating a centralized repository of “fact-checks,” and building emergency “response capacity”) sound benign to some. But stripped of the euphemism, this is a blueprint for constructing a continent-wide content control grid.

The “protection scheme” offers legal and cyber assistance to fact-checkers, but more crucially it reinforces the narrative that opposition to these groups constitutes abuse rather than legitimate disagreement.

The “fact-check repository” enables centralized curation of what counts as “truth,” and the “emergency response” function gives the Commission a pretext to fast-track suppression efforts in politically sensitive moments.

Most telling is the program’s requirement that participating organizations be certified by either the European Fact-Checking Standards Network (EFCSN) or the International Fact-Checking Network (IFCN).

Many of their members, such as AFP and Full Fact, already work directly with major social media platforms like Meta under third-party moderation schemes. This effectively means the EC is reinforcing an exclusive gatekeeper class, already aligned with corporate censorship programs, now endowed with taxpayer funds and the backing of the European bureaucracy.

At least 60% of the funding will go to third parties, who must co-finance their participation.

Keep reading

Texas Mandates Digital ID To Access App Store Apps

Texas has moved to the forefront of a national campaign to regulate children’s access to digital platforms by mandating that Apple and Google verify the ages of all users on their app stores.

Under a new law signed by Governor Greg Abbott, set to take effect January 1, 2026, those under 18 will be required to obtain parental consent before downloading apps or making in-app purchases. The measure has been pitched as a way to protect minors, but privacy advocates warn it could come at the expense of everyone’s digital freedom.

We obtained a copy of the bill for you here.

The requirement places app store operators in the role of gatekeepers, forcing them to gather and store sensitive personal information to determine user ages.

Opponents argue that such age checks do not just affect young users (explicit adult content apps are already banned from app stores); they also undermine anonymity online by tying a person’s digital presence to a verified real-world identity. That level of surveillance risks chilling free expression and stifling dissent by making it harder for people to speak or access information without fear of being identified.

Efforts to regulate youth access to apps and online services are gaining traction elsewhere as well. Utah enacted a similar policy earlier this year, and Congress is weighing federal legislation. Texas lawmakers are also advancing a separate bill that would prohibit users under 18 from accessing social media altogether.

While the law does state that app developers should delete the personal data provided by the app store provider, the wider problem is that users will have to trust that developers will actually do so. App store providers such as Apple and Google will have to retain sensitive data on their users.

Supporters of the Texas law argue that app stores are uniquely positioned to serve as the central checkpoint for age validation. Meta, Snap, and X have praised the move.

Keep reading

New UN Treaty Allows for Virtual Child P*rn

A new UN treaty to combat cybercrimes would allow predators and tech giants to profit from the sexual exploitation of children, including through images created using AI.

These threats are emerging just as the United Nations launches a new treaty to address cybercrime, yet the treaty addresses only some of the dangers of sexual exploitation.

While the new treaty calls for criminalizing the non-consensual sharing of intimate pictures, it still allows a broad swath of sexualized content involving children. For instance, while the treaty criminalizes what is newly called “child sexual abuse material,” the term refers narrowly to images of real children, leaving room for child porn created with artificial intelligence. As is now widely known, AI-generated images are shockingly realistic.

Such images, soon to be permitted under the UN treaty, would still violate U.S. federal law. Specifically, in several sections, the treaty allows countries to decriminalize virtual child pornography in all circumstances, as well as private sexting by minors, even to adults.

The General Assembly adopted the treaty on December 24, 2024. Countries must now sign and ratify it: a signing ceremony will take place at a summit in Hanoi in July, and the treaty will enter into force after forty countries ratify it.

Supporters of the treaty argue that legalizing sexting is compassionate because adolescents have a right to sexual expression. Some argue that letting pedophiles satisfy their sexual preferences with virtual material would make it less likely that they would prey on real children. And they say that dropping the term “child pornography” is necessary to avoid re-victimizing those who have been exploited. They call all this part of a “trauma-informed” and “harm-reduction” approach, based on new theories in behavioral therapy.

Regardless of the merits of such arguments, they appear to conflict with law enforcement’s priority of preventing abusers from harming future victims. There is no evidence that these new approaches make law enforcement more effective, and until recently, U.S. Justice Department experts argued against them.

Moreover, there is evidence that allowing sexual predators to engage with virtual pornography leads to more child sexual abuse, not less. And anti-trafficking advocates are all too familiar with how underage girls are lured into pornography and eventually the sex industry through sexting.

A recent Wall Street Journal investigation uncovered how Meta chatbots pose a danger to children and how executives at the company deliberately allowed the chatbots to engage children sexually and to pose as children willing to engage in sexual acts. The investigation found that sexual predators and tech giants have a common interest in ensuring that children can be sexualized online.

Meta programmers were pushed by the company’s executive suite not to impose excessive limits on sexual content, including content involving children as users and as objects, because of the high engagement it generates and the profits that follow. As a result, existing firewalls to protect children were ineffective by design. Chatbots lured children into sexual conversations that led to progressively more explicit and degrading sexual content, and they also posed as children willing to entertain lewd and even violent sexual behavior from their adult and child users.

Keep reading

Is your favorite influencer’s opinion bought and sold?

Your addictive doomscrolling on X, TikTok or Instagram may also be the latest nexus for millions of dollars in secret political corruption.

Over the last month, the problem has come into sharp relief. Newly surfaced documents show that more than 500 social media creators were part of a covert electioneering effort by Democratic donors to shape the presidential election in favor of Kamala Harris. Payments went to party members with online followings but also to non-political influencers — people known for comedy posts, travel vlogs or cooking YouTubes — in exchange for “positive, specific pro-Kamala content” meant to create the appearance of a groundswell of support for the former vice president.

Meanwhile, a similar pay-to-post effort among conservative influencers publicly unraveled. The goal was to publish messages in opposition to Health and Human Services Secretary Robert F. Kennedy Jr.’s push to remove sugary soda beverages from eligible SNAP food stamp benefits. Influencers were allegedly offered money to denounce soda restrictions as “an overreach that unfairly targets consumer choice” and encouraged to post pictures of President Trump enjoying Coca-Cola products. After right-leaning reporter Nick Sortor pointed out the near-identical messages on several prominent accounts, posts came down and at least one of the influencers apologized: “That was dumb of me. Massive egg on my face. In all seriousness, it won’t happen again.”

In both schemes, on the left and the right, those creating the content made little to no effort to disclose that payments could be involved. For ordinary users stumbling on the posts and videos, what they saw would have seemed entirely organic.

In the influencers’ defense, they didn’t break any rules — because none exist.

Keep reading