French lawmakers vote to ban social media use by under-15s

French lawmakers have passed a bill that would ban social media use by under-15s, a move championed by president Emmanuel Macron as a way to protect children from excessive screen time.

The National Assembly, France’s lower house, adopted the text by a vote of 130 to 21 in a lengthy overnight session from Monday to Tuesday.

It will now go to the Senate, France’s upper house, before it can become law.

Macron hailed the vote as a “major step” to protect French children and teenagers in a post on X.

The legislation, which also provides for a ban on mobile phones in high schools, would make France the second country to take such a step following Australia’s ban for under-16s in December.

As social media has grown, so has concern that too much screen time is harming child development and contributing to mental health problems.

Keep reading

EU Defends Censorship Law While Commission Staff Shift to Auto-Deleting Signal Messages

A senior European Union official responsible for enforcing online speech rules is objecting to what he describes as intimidation by Washington, even as his own agency advances policies that expand state involvement in digital expression and private communications.

Speaking Monday at the University of Amsterdam, Prabhat Agarwal, who leads enforcement of the Digital Services Act at the European Commission, urged regulators and civil society groups not to retreat under pressure from the United States. His remarks followed the February 3 release of a report by the US House Judiciary Committee that included the names and email addresses of staff involved in enforcing and promoting Europe’s censorship laws.

“Don’t let yourself be scared. We at the Commission stand by the European civil society organizations that have been threatened, and we stand by our teams as well,” Agarwal said, as reported by Politico.

The report’s publication came shortly after Washington barred a former senior EU official and two civil society representatives from entering the United States. European officials interpreted those moves as an effort to deter implementation of the DSA, the bloc’s flagship content regulation framework governing large online platforms.

Keep reading

Berlin Court Orders X to Hand Over Hungarian Election Data to Researchers

A Berlin court has ordered X to hand over data on Hungarian election activity to researchers, ruling in favor of Democracy Reporting International after the platform refused the group’s access requests in November.

The ruling turns on the EU’s censorship law, the Digital Services Act, which requires large platforms to give external researchers access to data for scrutiny of election interference risks. X ignored that obligation. The European Commission fined it €40 million ($47 million) for that refusal, as part of a broader €120 million ($141 million) levy, in December.

X’s position throughout has been straightforward: don’t share the data. No response to press inquiries, no compliance, no engagement.

Hungary votes in April in what amounts to a test of Viktor Orbán’s power as he faces his rival Péter Magyar.

Keep reading

Macron Calls Online Free Speech Argument “Pure Bullshit”

European governments framing social media bans for minors as child protection are quiet about what those bans actually require: identity checks for everyone. Every adult who wants to use Instagram, TikTok, or YouTube in France, Spain, or Germany would need to verify their real-world identity to access the platform. Anonymity, one of the oldest protections for dissenting speech, goes with it.

That’s the context Emmanuel Macron left out when he called free speech online “pure bullshit” in New Delhi on Wednesday.

The French president was addressing social media companies and their American backers as European governments push restrictions on the platforms, as well as curbs on “hate speech,” moves the Trump administration has criticized as censorship.

Macron’s counterargument is based on algorithmic opacity. “Having no clue about how their algorithm is made, how it’s tested, trained, and where it will guide you, the democratic consequences of this bias could be huge,” he said.

Keep reading

Meta’s Zuckerberg denies at LA trial that Instagram targets kids

Meta Platforms chief executive Mark Zuckerberg on Feb 18 repeatedly said during a landmark trial over youth social media addiction that the Facebook and Instagram operator does not allow kids under 13 on its platforms, despite being confronted with evidence suggesting they were a key demographic.

Mr Mark Lanier, a lawyer for a woman suing Instagram and Google’s YouTube for harming her mental health when she was a child, pressed Mr Zuckerberg over his statement to Congress in 2024 that users under 13 are not allowed on the platform.

Mr Lanier confronted Mr Zuckerberg with internal Meta documents.

The case involves a California woman who started using Instagram and YouTube as a child. She alleges the companies sought to profit by hooking kids on their services despite knowing social media could harm their mental health.

She alleges the apps fuelled her depression and suicidal thoughts and is seeking to hold the companies liable.

Meta and Google have denied the allegations, and pointed to their work to add features that keep users safe.

“If we want to win big with teens, we must bring them in as tweens,” read one internal Instagram presentation from 2018.

Keep reading

Meta Granted Patent That Could Let Facebook Users Post After They Die

A patent granted to Meta in December gives an inside look at how Facebook users may still be able to post beyond the grave.

According to the patent, the large language model will be “simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased.”

The AI model would essentially replicate the deceased person’s digital presence by analyzing data such as their writing style, tone, and expressed beliefs, allowing it to continue posting content and even commenting on friends’ posts in their voice.

It gets stranger.

The patent also referenced technology that would allow users to have phone calls with the deceased or even video calls.

Per Business Insider:

In the patent, Meta lays out why it thinks people might need this.

If you’re no longer posting online — whether that’s because you need a break from social media or … you … die — your followers’ user experience will be affected. In short, they’ll miss you.

“The impact on the users is much more severe and permanent if that user is deceased and can never return to the social networking platform,” the document says.

To fill that void, Meta would essentially create a digital clone of your social media presence, training a model on “user-specific” data — including historical platform activity, such as comments, likes, or content — to understand how you would (or rather, did) behave.

The patent was filed by Meta’s CTO, Andrew Bosworth.

Despite being granted the patent, Meta says it has no plans to implement the technology.

In recent years, the idea of using AI to bring someone back from the dead digitally has gained some momentum.

In 2025, the family of Christopher Pelkey, who was killed in a road rage incident in Arizona, used an AI-generated video of Pelkey as an impact statement in the courtroom.

Keep reading

AI Safety Researcher Resigns With ‘World Is in Peril’ Warning

An artificial intelligence (AI) safety researcher has resigned with a cryptic warning that the “world is in peril.”

Mrinank Sharma, who joined large language model developer Anthropic in 2023, announced his departure on X in an open letter to colleagues on Feb. 9. He was the leader of a team that researches AI safeguards.

In his letter, Sharma said he had “achieved what I wanted to here,” citing contributions such as investigating why generative AI models prioritize flattering users over providing accurate information, developing defenses to prevent terrorists from using AI to design biological weapons, and trying to understand “how AI assistants could make us less human.”

Although he said he took pride in his work at Anthropic, the 30-year-old AI engineer wrote that “the time has come to move on,” adding that he had become aware of a multitude of crises that extend beyond AI.

“I continuously find myself reckoning with our situation,” Sharma wrote. “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.

“[Throughout] my time here, I’ve repeatedly seen how hard it is [to] truly let our values govern actions,” he added. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”

Keep reading

UK Regulator Ofcom Proposes Second Fine Against US Platform 4chan

Britain’s speech regulator, Ofcom, has proposed another financial penalty against 4chan under the Online Safety Act, deepening a censorship dispute that stretches from London to Washington.

4chan is an American platform, hosted in the United States, with no presence in Britain. Yet Ofcom maintains that the Online Safety Act gives it authority over the site.

Tensions increased after Ofcom declined to provide 4chan with a copy of its provisional decision before announcing the outcome publicly. According to the platform’s legal team, this decision limited its ability to respond in real time.

Preston Byrne, counsel for 4chan, stated that the regulator’s refusal was intended “to deny us the opportunity for a public rebuttal.”

He further accused the regulator of engaging in “domestic narrative control” by withholding advance access to the decision while preparing to publish its conclusions.

Ofcom announced that it has escalated its enforcement action against 4chan, stating: “In accordance with section 130 of the Online Safety Act 2023, we have today issued 4chan Community Support LLC with a provisional notice of contravention.”

Keep reading

UK Fines US Platform Imgur For Lack of Age Verification

Imgur’s decision to suspend access for UK users in September 2025 was an early signal that regulatory pressure was building. The platform’s parent company has now learned the financial cost of that pressure.

The UK Information Commissioner’s Office has fined MediaLab, which operates the image-hosting service Imgur, £247,590 ($337,000) for violations of the UK GDPR.

According to the regulator, the company processed children’s personal data without a lawful basis, failed to implement effective age assurance measures, and did not complete a required data protection impact assessment.

The ICO’s findings focus on how children under 13 were able to use the service without verified parental consent or “any other lawful basis.”

The regulator also determined that the company lacked meaningful age checks. That means the platform did not reliably verify whether users were children before collecting and processing their data. Additionally, MediaLab did not conduct a formal risk assessment to examine how its service might affect minors’ rights and freedoms.

“MediaLab failed in its legal duties to protect children, putting them at unnecessary risk,” said UK Information Commissioner John Edwards. “For years, it allowed children to use Imgur without any effective age checks, while collecting and processing their data, which in turn exposed them to harmful and inappropriate content. Age checks help organizations keep children’s personal information safe.”

He added, “Ignoring the fact that children use these services, while processing their data unlawfully, is not acceptable. Companies that choose to ignore this can expect to face similar enforcement action.”

The ICO says it has the authority to impose fines of up to £17.5 million or 4 percent of an organization’s annual global revenue, whichever is higher. In setting the penalty at £247,590, the office stated that it “took into consideration the number of children affected by this breach, the degree of potential harm caused, the duration of the contraventions, and the company’s global turnover.”

This enforcement action sits within a broader UK policy change toward mandatory online age verification.

Lawmakers and regulators have increasingly pressed platforms to deploy age assurance tools that can include document checks, facial age estimation, or third-party verification services, all of which are privacy-invasive.

While positioned as child protection measures, these systems often require users to submit government-issued identification or biometric data simply to access online services.

Keep reading

Russia Limits Access to Social Media Platform Telegram as It Pushes State-Run ‘Super-App’ Called Max – UPDATE: WhatsApp and YouTube Fully Blocked by Moscow

Russia turns on Telegram.

All around the world, social media companies are under pressure from state actors, and our hard-won freedom of speech is under threat in the process.

Case in point: Russia.

Having banned US platforms like Facebook, Instagram, and X, and limited access to YouTube, the Russian government now turns on Telegram – a very popular app used by Russian soldiers and war correspondents.

Yesterday, the 11th, Russia’s communications watchdog, Roskomnadzor, started limiting access to Telegram.

Bloomberg reported:

“Measures to slow down access to the messenger service have already begun, the news service reported, citing another person familiar that it didn’t identify. RBC said it sent a request for comment to Roskomnadzor.

The government has been promoting the use of a state-run ‘super-app’ called Max, modeled after China’s WeChat, at the same time as it has choked off access to foreign messenger services. As well as messaging, Max hosts government services and enables document storage, banking and other public and commercial services.”

Keep reading