Brazil Charges Woman for 2020 Social Media Posts Under Court-Defined “Transphobia”

Brazil is preparing to put a woman on trial for words she typed online nearly five years ago, a case that illustrates how speech regulation now functions through judicial interpretation rather than legislation.

Isadora Borges, a resident of Paraíba, is accused of committing the crime of “transphobia” after posting comments on social media in November 2020 about sex, biology, and gender identity.

Her full name is Isadora Borges de Aquino Silva. She is 34 years old, a veterinary student, and a self-described feminist.

Federal prosecutors argue that those posts warrant criminal prosecution. If convicted on all counts, Borges could receive a prison sentence ranging from four to ten years.

The posts appeared on X, then operating as Twitter, during a period of intense online debate over gender theory. One message stated that “transgender” women “were obviously born male.”

Another said: “A person who identifies as transgender retains their birth DNA. No surgery, synthetic hormone, or clothing change will change this fact…” The remarks were widely shared and circulated beyond Borges’s own account.

After the posts gained traction, a complaint was filed with federal police by Erika Hilton, a politician and transgender woman who has been central to similar free speech cases. That complaint initiated a criminal process that remained dormant for years.

Borges learned in September 2025 that prosecutors had formally charged her with two counts of “transphobia,” each carrying a possible sentence of two to five years. Her first court hearing is scheduled for tomorrow, February 10.

She is being represented with the support of ADF International. Julio Pohl, legal counsel for the organization, said the case reflects a deep problem in how Brazil now treats political and social expression. “No one should face a decade behind bars for expressing an opinion on a matter of public concern,” he said, in a press statement sent to Reclaim The Net. “Weaponising Brazil’s expansive ‘transphobia’ laws to punish peaceful expression is a profound violation of freedom of speech.”

Borges has spoken publicly about why she addressed the subject in the first place: “I commented on the issue because I care about the truth and protecting women. No one should ever fear going to prison for recognizing biological reality. I hope that my case can serve as a turning point in fighting censorship in Brazil. Brazilians deserve the freedom to speak openly without punishment.”

Federal prosecutors argue that publishing and amplifying those views constitutes criminal conduct. A conviction would bring fines and incarceration. Even without a guilty verdict, the legal process itself imposes high financial and personal costs.

The charges rely on a legal structure created by Brazil’s pro-censorship Supreme Court rather than by Parliament.


California Teacher Placed on Administrative Leave After Angry Mob Went After Her for Celebrating ICE on Facebook

A Northern California special education teacher has been placed on administrative leave after posting a comment endorsing U.S. Immigration and Customs Enforcement (ICE) on Facebook.

Sarai Jimenez, a teaching intern at MacQuiddy Elementary School in Watsonville, made the comment last month on her now-deleted Facebook account.

Responding to news of ICE presence in the area, she wrote simply, “Yay!!! We need ICE in Watsonville!! It’s been getting out of hand.”

The post quickly caught the attention of local activists, and the backlash intensified after recent ICE arrests in the town.

According to a report from the New York Post, Jimenez was threatened by multiple people, with one vowing to “pop her tires.”

Another outraged anti-ICE liberal labeled her a “shameful disgraceful disgusting woman” and suggested she might harm immigrant children in her care.

“Who knows what you’ll do behind closed doors to the kids of parents that are immigrants,” the person wrote.

Of course, liberal activists rallied, calling the school to demand her termination.


“Kids Off Social Media Act” Opens the Door to Digital ID by Default

Congress is once again stepping into the role of digital caretaker, this time through the Kids Off Social Media Act, a proposal from Rep. Anna Paulina Luna that seeks to impose federal rules on how young people interact with the world.

The House companion bill to the Senate measure attempts to set national limits on who can hold social media accounts, how platforms may structure their systems, and what kinds of data they are allowed to use when dealing with children and teenagers.

Framed as a response to growing parental concern, the legislation reflects a broader push to regulate online spaces through age-based access and design mandates rather than direct content rules.

The proposal promises restraint while quietly expanding Washington’s reach into the architecture of online speech. Backers of the bill will insist it targets corporate behavior rather than expression itself. The bill’s mechanics tell a more complicated story.

The bill is the result of a brief but telling legislative evolution. Early versions circulated in 2024 were framed as extensions of existing child privacy rules rather than participation bans. Those drafts focused on limiting data collection, restricting targeted advertising to minors, and discouraging algorithmic amplification, while avoiding hard access restrictions or explicit age enforcement mandates.

That posture shifted as the bill gained bipartisan backing. By late 2024, lawmakers increasingly treated social media as an inherently unsafe environment for children rather than a service in need of reform. When the bill was reintroduced in January 2025, it reflected that change. The new version imposed a categorical ban on accounts for users under 13, restricted recommendation systems for users under 17, and strengthened enforcement through the Federal Trade Commission and state attorneys general, with Senate sponsorship led by Ted Cruz and Brian Schatz.


Discord to Demand Face Scan or ID to Access All Features

Discord is preparing to make age classification a constant background process across its platform. Beginning next month, every account will default to a teen-appropriate experience unless the user takes steps to prove adulthood.

Age determination will sit underneath routine activity, shaping what people can see, say, and join.

For accounts that are not verified as adult, access will narrow immediately. Age-restricted servers and channels will be blocked, voice participation in live “stage” channels will be disabled, and automated filters will apply to content Discord identifies as graphic or sensitive.

Friend requests from unfamiliar users will trigger warning prompts, and direct messages from unknown accounts will be routed into a separate inbox.

Core features such as direct messages with known contacts and servers without age restrictions will continue to function. Age-restricted servers will effectively disappear until verification is completed, including servers that a user joined years earlier.

The global rollout reflects a broader regulatory environment that is pushing platforms toward more aggressive age controls. Discord has already tested similar systems.

Last year, age checks were introduced in the UK and Australia.

For many adult users, the concern is less about access to content and more about surveillance and the ability to communicate anonymously. Verification systems introduce new forms of monitoring, whether through documents, facial analysis, or ongoing behavioral assessment.


Massive TikTok Fine Threat Advances Europe’s Digital ID Agenda

A familiar storyline is hardening into regulatory doctrine across Europe: frame social media use as addiction, then require platforms to reengineer themselves around age segregation and digital ID.

The European Commission’s preliminary case against TikTok, announced today, shows how that narrative is now being operationalized in policy, with consequences that reach well beyond one app.

European regulators have accused TikTok of breaching the Digital Services Act by relying on what they describe as “addictive design” features, including infinite scroll, autoplay, push notifications, and personalized recommendations.

Officials argue these systems drive compulsive behavior among children and vulnerable adults and must be structurally altered.

What sits beneath that argument is a quieter requirement: any mandate to deliver different “safe” experiences to minors and adults depends on a reliable method of telling those groups apart. Platforms must first determine who is a minor and who is not.

Platforms cannot apply separate algorithms, screen-time limits, or nighttime restrictions without determining a user’s age with a level of confidence regulators will accept.

Commission spokesman Thomas Regnier described the mechanics bluntly, saying TikTok’s design choices “lead to the compulsive use of the app, especially for our kids, and this poses major risks to their mental health and wellbeing.” He added: “The measures that TikTok has in place are simply not enough.”

The enforcement tool behind those statements is the Digital Services Act, the EU’s platform rulebook that authorizes Brussels to demand redesigns and impose fines of up to 6% of global annual revenue.


France’s Raid on X Opens New Front in Europe’s War Over Online Speech

French prosecutors staged a morning raid at the Paris offices of social media platform X, part of a criminal investigation coordinated with Europol.

The operation, launched in 2025, targets allegations ranging from the distribution of sexual deepfakes to algorithmic manipulation.

The cybercrime division in Paris is exploring whether X’s automated systems may have been used in an “organized structure” to distort data or suppress information.

The alleged offenses are as follows:

  • Denial of crimes against humanity (Holocaust denial)
  • Fraudulent extraction of data from an automated data processing system by an organized group
  • Falsification of the operation of an automated data processing system by an organized group
  • Defamation of a person’s image (deepfakes of a sexual nature, including minors)
  • Operating an illegal online platform by an organized group

Prosecutors have now summoned Elon Musk and former CEO Linda Yaccarino for questioning in April. “Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,” the office said.

Yaccarino, who left in mid-2025, might find herself reliving the company’s most volatile months, when X faced regulatory crossfire across the continent for refusing to comply with what it called political censorship demands.

The case actually began with two complaints in January 2025, including one from French lawmaker Eric Bothorel, who accused X of narrowing “diversity of voices and options” after Musk’s takeover.

Bothorel cited “personal interventions” in moderation decisions, a line that seemed more about ideology than algorithms.

As the investigation grew, prosecutors took interest in Grok, X’s AI system, which allegedly produced “Holocaust denial content” and “sexual deepfakes.” The Paris prosecutor’s office soon announced it was examining “biased algorithms.”

Musk called the whole affair a “politically-motivated criminal investigation,” and considering Europe’s recent appetite for speech regulation, it’s not a stretch to see why he’d think that.


EU Records Reveal Absurd Justifications for $150 Million Fine Against X

Newly disclosed internal records, obtained by the US House Judiciary Committee, reveal that Brussels privately warned X that it could be blocked from operating in the European Union unless it obeyed a set of Digital Services Act demands.

We obtained a copy of the records for you here.

The decision, stretching across 184 pages, became the foundation for a fine of nearly $150 million. Buried in the text is a clear threat: if X failed to comply, the Commission could “disable access to the infringing service.” That phrase, lifted straight from Article 75(3) of the DSA, turns regulatory oversight into a power switch.

The fines themselves read like parodies of seriousness. €45 million for “misappropriating” the blue checkmark. Somehow, allowing people to pay to show they’re a real person and get a checkmark supposedly distorted “cross-industry visual standards.”

€35 million for an ad repository deemed too limited. €40 million for withholding data from “qualified researchers,” some based outside the EU. We all know what type of “researcher” that is.

Even the supporting evidence borders on comic. One example cited a parody of a Donald Duck account. Regulators claimed the cartoon’s blue checkmark could “mislead users” into believing the fictional duck was real. In Brussels, satire is treated as a compliance issue.


TikTok CEO Reveals Coordination With 2 Dozen Jewish Groups to Police Speech

A chilling blueprint for the censorship of pro-Palestine voices on social media has been exposed, directly from the mouth of a top tech executive. Adam Presser, the newly installed CEO of TikTok’s U.S. operations following its forced sale to a consortium led by billionaire Larry Ellison, detailed in a recently resurfaced video how the platform systematically silenced critics by labeling their speech as hateful. This admission confirms the worst fears of free speech advocates and reveals a coordinated effort to shield Israeli government actions from public scrutiny by conflating political criticism with bigotry.

The video, originally presented to the World Jewish Congress, features Presser, who was then TikTok’s Head of Operations and Trust & Safety, outlining specific policy changes. “We made a change to designate the use of the term Zionist as a proxy for a protected attribute as hate speech,” Presser stated. In practice, this means using “Zionist” in a negative context could get a user banned, while phrases like “proud Zionist” remain permitted. This creates a politically motivated double standard where one side of a heated geopolitical debate is granted linguistic immunity.

A tripling of bans and outside influence

Presser boasted of aggressive enforcement, revealing that TikTok “tripled the amount of accounts that we were banning for hateful activity” over the course of 2024. This timeline coincides directly with the global outcry following Israel’s military offensive in Gaza. He further explained that “over two dozen Jewish organizations” are “constantly feeding us intelligence and information when they spot violative trends,” and that these groups help inform TikTok on “what is hate speech.” This outsourcing of content moderation decisions to explicitly partisan advocates strips away any pretense of neutrality, effectively allowing pro-Israel groups to police and silence their critics on a global platform.

The consequences of this policy are not theoretical. Award-winning Palestinian journalist Bisan Owda, who had built an audience of 1.4 million followers on TikTok while documenting the war from Gaza, recently found her account permanently banned. In a video, Owda connected her ban directly to Presser’s remarks and to comments from Israeli Prime Minister Benjamin Netanyahu, who last year called the TikTok purchase “consequential” and stated, “We have to fight with the weapons that apply to the battlefield in which we engage, and the most important ones are social media.”


Social Media Working to Protect ICE Clampdown in Minneapolis

There was a time, not terribly long ago, when the right claimed that the big social media companies weren’t just skewed to the left in terms of moderation, but that they were actually acting in the direct interests of the Democratic administration (House Judiciary Committee, 5/1/24).

When right-wing billionaire Elon Musk bought Twitter, eventually rebranding it as X, the right believed that he’d show the world that the popular site was a tool of the Democratic agenda (New Yorker, 1/11/23). The move increased Musk’s profile as a conservative crusader against social progress and economic populism before his brief stint as President Donald Trump’s federal jobs hatchet man in 2025 (Roosevelt Institute, 5/29/25).

Before a forced sale by its Beijing-based parent company, TikTok was attacked by both Democrats and Republicans because of its ownership, with both sides claiming that this gave the Chinese government the ability not only to spy on Americans, but also to skew political discourse away from Washington’s interests (FAIR.org, 11/13/23, 5/8/24, 1/3/25).

At Meta, founder Mark Zuckerberg quickly tried to distance his company from the notion that it acted in tandem with the Biden administration. Politico (8/26/24) reported:

Mark Zuckerberg says he regrets that Meta bowed to Biden administration pressure to censor content, saying in a letter that the interference was “wrong,” and he plans to push back if it happens again.

Zuckerberg told podcaster Joe Rogan (Joe Rogan Experience, 1/10/25) that the Biden administration had been “calling up the guys on our team and yelling at them and cursing and threatening repercussions if we don’t take down things that are true.” He asserted that Meta, and especially Facebook, had “gone too far” in complying with such requests, and acknowledged that he and others at the company wrongly bought into the idea (Axios, 1/10/25).


Wyoming Introduces First-Ever Foreign Censorship Shield Bill

Wyoming has taken a historic step to insulate American speech from foreign interference with the introduction of the Wyoming Guaranteeing Rights Against Novel International Tyranny and Extortion (GRANITE) Act, House Bill 0070. The measure would be the first US law to create a private right of action against foreign censorship enforcement.

Representative Daniel Singh introduced the bill, declaring that “foreign governments have decided they can threaten American citizens and American companies for speech that is protected by our Constitution…Wyoming is drawing a line in the sand.” The measure aims to establish Wyoming as a refuge for free expression and digital innovation, directly challenging what lawmakers describe as an escalating campaign of transnational censorship pressure.

The legislation provides that any Wyoming resident, business, or US person with servers in the state may sue foreign governments or international organizations that attempt to enforce censorship demands against them for First Amendment protected speech. Each violation could cost the offending entity at least $1 million or 10% of its US revenue, whichever is higher.

The GRANITE Act prohibits Wyoming courts and agencies from recognizing or enforcing foreign censorship judgments. It also forbids any state cooperation with such orders, including extradition requests or data demands linked to speech that is constitutionally protected in the US. Under the bill, no Wyoming authority may help a foreign state investigate, penalize, or prosecute individuals over lawful expression.

We obtained a copy of the bill for you here.
