AI Safety Researcher Resigns With ‘World Is in Peril’ Warning

An artificial intelligence (AI) safety researcher has resigned with a cryptic warning that the “world is in peril.”

Mrinank Sharma, who joined large language model developer Anthropic in 2023, announced his departure on X in an open letter to colleagues on Feb. 9. He was the leader of a team that researches AI safeguards.

In his letter, Sharma said he had “achieved what I wanted to here,” citing contributions such as investigating why generative AI models prioritize flattering users over providing accurate information, developing defenses to prevent terrorists from using AI to design biological weapons, and trying to understand “how AI assistants could make us less human.”

Although he said he took pride in his work at Anthropic, the 30-year-old AI engineer wrote that “the time has come to move on,” adding that he had become aware of a multitude of crises that extend beyond AI.

“I continuously find myself reckoning with our situation,” Sharma wrote. “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.

“[Throughout] my time here, I’ve repeatedly seen how hard it is [to] truly let our values govern actions,” he added. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”


UK Regulator Ofcom Proposes Second Fine Against US Platform 4chan

Britain’s speech regulator, Ofcom, has proposed another financial penalty against 4chan under the Online Safety Act, deepening a censorship dispute that stretches from London to Washington.

4chan is an American platform, hosted in the United States, with no presence in Britain. Yet Ofcom maintains that, under the Online Safety Act, the platform falls within its authority.

Tensions increased after Ofcom declined to provide 4chan with a copy of its provisional decision before announcing the outcome publicly. According to the platform’s legal team, this decision limited its ability to respond in real time.

Preston Byrne, counsel for 4chan, stated that the regulator’s refusal was intended “to deny us the opportunity for a public rebuttal.”

He further accused the regulator of engaging in “domestic narrative control” by withholding advance access to the decision while preparing to publish its conclusions.

Ofcom announced that it has escalated its enforcement action against 4chan, stating: “In accordance with section 130 of the Online Safety Act 2023, we have today issued 4chan Community Support LLC with a provisional notice of contravention.”


UK Fines US Platform Imgur For Lack of Age Verification

Imgur’s decision to suspend access for UK users in September 2025 was an early signal that regulatory pressure was building. The platform’s parent company has now learned the financial cost of that pressure.

The UK Information Commissioner’s Office has fined MediaLab, which operates image hosting company Imgur, £247,590 ($337,000) for violations of the UK GDPR.

According to the regulator, the company processed children’s personal data without a lawful basis, failed to implement effective age assurance measures, and did not complete a required data protection impact assessment.

The ICO’s findings focus on how children under 13 were able to use the service without verified parental consent or “any other lawful basis.”

The regulator also determined that the company lacked meaningful age checks. That means the platform did not reliably verify whether users were children before collecting and processing their data. Additionally, MediaLab did not conduct a formal risk assessment to examine how its service might affect minors’ rights and freedoms.

“MediaLab failed in its legal duties to protect children, putting them at unnecessary risk,” said UK Information Commissioner John Edwards. “For years, it allowed children to use Imgur without any effective age checks, while collecting and processing their data, which in turn exposed them to harmful and inappropriate content. Age checks help organizations keep children’s personal information safe.”

He added, “Ignoring the fact that children use these services, while processing their data unlawfully, is not acceptable. Companies that choose to ignore this can expect to face similar enforcement action.”

The ICO says it has the authority to impose fines of up to £17.5 million or 4 percent of an organization’s annual global revenue, whichever is higher. In setting the penalty at £247,590, the office stated that it “took into consideration the number of children affected by this breach, the degree of potential harm caused, the duration of the contraventions, and the company’s global turnover.”

This enforcement action sits within a broader UK policy change toward mandatory online age verification.

Lawmakers and regulators have increasingly pressed platforms to deploy age assurance tools that can include document checks, facial age estimation, or third-party verification services, all of which are privacy-invasive.

While positioned as child protection measures, these systems often require users to submit government-issued identification or biometric data simply to access online services.


Russia Limits Access to Social Media Platform Telegram as It Pushes State-Run ‘Super-App’ Called Max – UPDATE: WhatsApp and YouTube Fully Blocked by Moscow

Russia turns on Telegram.

All around the world, social media companies are under pressure from state actors, and our hard-won freedom of speech is under threat in the process.

Case in point: Russia.

Having banned US platforms like Facebook, Instagram, and X, and limited access to YouTube, the Russian government now turns on Telegram – a very popular app used by Russian soldiers and war correspondents.

Yesterday (the 11th), Russia’s communications watchdog, Roskomnadzor, started limiting access to Telegram.

Bloomberg reported:

“Measures to slow down access to the messenger service have already begun, the news service reported, citing another person familiar that it didn’t identify. RBC said it sent a request for comment to Roskomnadzor.

The government has been promoting the use of a state-run ‘super-app’ called Max, modeled after China’s WeChat, at the same time as it has choked off access to foreign messenger services. As well as messaging, Max hosts government services and enables document storage, banking and other public and commercial services.”


Brazil Charges Woman for 2020 Social Media Posts Under Court-Defined “Transphobia”

Brazil is preparing to put a woman on trial for words she typed online nearly five years ago, a case that illustrates how speech regulation now functions through judicial interpretation rather than legislation.

Isadora Borges, a resident of Paraíba, is accused of committing the crime of “transphobia” after posting comments on social media in November 2020 about sex, biology, and gender identity.

Her full name is Isadora Borges de Aquino Silva. She is 34 years old, a veterinary student, and a self-described feminist.

Federal prosecutors argue that those posts warrant criminal prosecution. If convicted on all counts, Borges could receive a prison sentence ranging from four to ten years.

The posts appeared on X, then operating as Twitter, during a period of intense online debate over gender theory. One message stated that “transgender” women “were obviously born male.”

Another said: “A person who identifies as transgender retains their birth DNA. No surgery, synthetic hormone, or clothing change will change this fact…” The remarks were widely shared and circulated beyond Borges’s own account.

After the posts gained traction, a complaint was filed with federal police by Erika Hilton, a politician and transgender woman, who has been central to other similar free speech cases. That complaint initiated a criminal process that remained dormant for years.

Borges learned in September 2025 that prosecutors had formally charged her with two counts of “transphobia,” each carrying a possible sentence of two to five years. Her first court hearing is scheduled for tomorrow, February 10.

She is being represented with the support of ADF International. Julio Pohl, legal counsel for the organization, said the case reflects a deep problem in how Brazil now treats political and social expression. “No one should face a decade behind bars for expressing an opinion on a matter of public concern,” he said, in a press statement sent to Reclaim The Net. “Weaponising Brazil’s expansive ‘transphobia’ laws to punish peaceful expression is a profound violation of freedom of speech.”

Borges has spoken publicly about why she addressed the subject in the first place: “I commented on the issue because I care about the truth and protecting women. No one should ever fear going to prison for recognizing biological reality. I hope that my case can serve as a turning point in fighting censorship in Brazil. Brazilians deserve the freedom to speak openly without punishment.”

Federal prosecutors argue that publishing and amplifying those views constitutes criminal conduct. A conviction would bring fines and incarceration. Even without a guilty verdict, the legal process itself imposes high financial and personal costs.

The charges rely on a legal structure created by Brazil’s pro-censorship Supreme Court rather than by Parliament.


California Teacher Placed on Administrative Leave After Angry Mob Went After Her for Celebrating ICE on Facebook

A Northern California special education teacher has been placed on administrative leave after posting a comment endorsing U.S. Immigration and Customs Enforcement (ICE) on Facebook.

Sarai Jimenez, a teaching intern at MacQuiddy Elementary School in Watsonville, made the comment last month with her now-deleted Facebook account.

Responding to news of ICE presence in the area, she wrote simply, “Yay!!! We need ICE in Watsonville!! It’s been getting out of hand.”

The post quickly caught the attention of local activists, and the backlash intensified after recent ICE arrests in the town.

According to a report from the New York Post, Jimenez was threatened by multiple people, with one vowing to “pop her tires.”

Another outraged anti-ICE liberal labeled her a “shameful disgraceful disgusting woman” and suggested she might harm immigrant children in her care.

“Who knows what you’ll do behind closed doors to the kids of parents that are immigrants,” the person wrote.

Of course, liberal activists began rallying to make calls to the school and demand her termination.


“Kids Off Social Media Act” Opens the Door to Digital ID by Default

Congress is once again stepping into the role of digital caretaker, this time through the Kids Off Social Media Act, with a proposal from Rep. Anna Paulina Luna that seeks to impose federal rules on how young people interact with the world.

The House companion bill (to accompany the Senate bill) attempts to set national limits on who can hold social media accounts, how platforms may structure their systems, and what kinds of data they are allowed to use when dealing with children and teenagers.

Framed as a response to growing parental concern, the legislation reflects a broader push to regulate online spaces through age-based access and design mandates rather than direct content rules.

The proposal promises restraint while quietly expanding Washington’s reach into the architecture of online speech. Backers of the bill will insist it targets corporate behavior rather than expression itself. The bill’s mechanics tell a more complicated story.

The bill is the result of a brief but telling legislative evolution. Early versions circulated in 2024 were framed as extensions of existing child privacy rules rather than participation bans. Those drafts focused on limiting data collection, restricting targeted advertising to minors, and discouraging algorithmic amplification, while avoiding hard access restrictions or explicit age enforcement mandates.

That posture shifted as the bill gained bipartisan backing. By late 2024, lawmakers increasingly treated social media as an inherently unsafe environment for children rather than a service in need of reform. When the bill was reintroduced in January 2025, it reflected that change. The new version imposed a categorical ban on accounts for users under 13, restricted recommendation systems for users under 17, and strengthened enforcement through the Federal Trade Commission and state attorneys general, with Senate sponsorship led by Ted Cruz and Brian Schatz.


Discord to Demand Face Scan or ID to Access All Features

Discord is preparing to make age classification a constant background process across its platform. Beginning next month, every account will default to a teen-appropriate experience unless the user takes steps to prove adulthood.

Age determination will sit underneath routine activity, shaping what people can see, say, and join.

For accounts that are not verified as adult, access will narrow immediately. Age-restricted servers and channels will be blocked, voice participation in live “stage” channels will be disabled, and automated filters will apply to content Discord identifies as graphic or sensitive.

Friend requests from unfamiliar users will trigger warning prompts, and direct messages from unknown accounts will be routed into a separate inbox.

Core features such as direct messages with known contacts and servers without age restrictions will continue to function. Age-restricted servers will effectively disappear until verification is completed, including servers that a user joined years earlier.

The global rollout reflects a broader regulatory environment that is pushing platforms toward more aggressive age controls. Discord has already tested similar systems.

Last year, age checks were introduced in the UK and Australia.

For many adult users, the concern is less about access to content and more about surveillance and the ability to communicate anonymously. Verification systems introduce new forms of monitoring, whether through documents, facial analysis, or ongoing behavioral assessment.


Massive TikTok Fine Threat Advances Europe’s Digital ID Agenda

A familiar storyline is hardening into regulatory doctrine across Europe: frame social media use as addiction, then require platforms to reengineer themselves around age segregation and digital ID.

The European Commission’s preliminary case against TikTok, announced today, shows how that narrative is now being operationalized in policy, with consequences that reach well beyond one app.

European regulators have accused TikTok of breaching the Digital Services Act by relying on what they describe as “addictive design” features, including infinite scroll, autoplay, push notifications, and personalized recommendations.

Officials argue these systems drive compulsive behavior among children and vulnerable adults and must be structurally altered.

What sits beneath that argument is a quieter requirement. Any mandate to deliver different “safe” experiences to minors and adults depends on a reliable method of telling those groups apart.

Platforms cannot apply separate algorithms, screen-time limits, or nighttime restrictions without determining a user’s age with a level of confidence regulators will accept.

Commission spokesman Thomas Regnier described the mechanics bluntly, saying TikTok’s design choices “lead to the compulsive use of the app, especially for our kids, and this poses major risks to their mental health and wellbeing.” He added: “The measures that TikTok has in place are simply not enough.”

The enforcement tool behind those statements is the Digital Services Act, the EU’s platform rulebook that authorizes Brussels to demand redesigns and impose fines of up to 6% of global annual revenue.


France’s Raid on X Opens New Front in Europe’s War Over Online Speech

French prosecutors staged a morning raid at the Paris offices of social media platform X, part of a criminal investigation coordinated with Europol.

The operation, launched in 2025, targets allegations ranging from the distribution of sexual deepfakes to algorithmic manipulation.

The cybercrime division in Paris is exploring whether X’s automated systems may have been used in an “organized structure” to distort data or suppress information.

The alleged offenses are as follows:

  • Denial of crimes against humanity (Holocaust denial)
  • Fraudulent extraction of data from an automated data processing system by an organized group
  • Falsification of the operation of an automated data processing system by an organized group
  • Defamation of a person’s image (deepfakes of a sexual nature, including minors)
  • Operation of an illegal online platform by an organized group

Prosecutors have now summoned Elon Musk and former CEO Linda Yaccarino for questioning in April. “Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,” the office said.

Yaccarino, who left in mid-2025, might find herself reliving the company’s most volatile months, when X faced regulatory crossfire across the continent for refusing to comply with what it called political censorship demands.

The case began with two complaints in January 2025, including one from French lawmaker Eric Bothorel, who accused X of narrowing “diversity of voices and options” after Musk’s takeover.

Bothorel cited “personal interventions” in moderation decisions, a line that seemed more about ideology than algorithms.

As the investigation grew, prosecutors took interest in Grok, X’s AI system, which allegedly produced “Holocaust denial content” and “sexual deepfakes.” The Paris prosecutor’s office soon announced it was examining “biased algorithms.”

Musk called the whole affair a “politically-motivated criminal investigation,” and considering Europe’s recent appetite for speech regulation, it’s not a stretch to see why he’d think that.
