California man who went by ‘Billy Badass’ busted for synagogue bomb threats, feds say

A California man who went by “Billy Badass” on social media allegedly threatened to bomb every synagogue within a 20-mile radius — then doubled down with more antisemitic rants even after cops threw him in a psychiatric hold.

Elijah Alexander King, 36, of San Luis Obispo, pleaded not guilty Tuesday to federal charges stemming from his August rampage of hate that has him staring down a decade behind bars.

King used his “Billy Badass” X account to spew the hate messages on Aug. 28. “I’m gonna blow up every synagogue in a 20-mile radius,” King threatened, according to federal prosecutors.

“This is a real threat send the police and report me for terrorism,” read a second message sent ten minutes later. King then searched his cellphone for nearby synagogues, prosecutors said.

Police found King and had him check into a psychiatric hospital for monitoring, where he continued to spew hate from the “Billy Badass” handle, prosecutors said.

Keep reading

We Must Resist The Rise Of A Global Censorship Regime

The ordeal of Finnish Parliamentarian Päivi Räsänen, who just stood trial a third time – after being acquitted twice – for a 2019 tweet in which she simply shared a Scripture verse and her faith-based views on marriage and sexuality, is a warning to all who value the right to speak freely across the world.

When governments claim the power to police opinions, even peaceful expressions of faith can be dragged through the courts.

And now this promises to be a much more pervasive reality in Europe as a result of the 2022 Digital Services Act (DSA). Ahead of the European Union’s review of the DSA, 113 international experts committed to free speech wrote to the European Commission highlighting the law’s incompatibility with free expression, citing the possibility of worldwide takedown orders. Räsänen was a signatory to the letter, alongside a former vice president of Yahoo Europe, a former U.S. senator, and politicians, academics, lawyers, and journalists from around the globe.

The DSA gives the E.U. authority to enforce moderation of “illegal content” on platforms and search engines with over 45 million monthly users. It enables bureaucrats to control online speech at scale under the guise of “safety” and “protecting democracy.”

However, E.U. member states may have different definitions of illegal content. Thus, under the law, anything deemed illegal under the speech laws of any one E.U. member state could potentially be removed across all of Europe. That means the harshest censorship laws in Europe could soon govern the entire continent, and possibly the internet worldwide. And if platforms fail to comply, they face billions in fines, thus providing clear incentive to censor and none to promote free speech.

Late last month, the E.U. announced that Meta and TikTok face fines of up to 6 percent of their global sales over accusations that they violated the DSA’s transparency requirements. But the well-founded fear is that this law, which grants sweeping authority to European regulators to control online speech across platforms including X, YouTube, and Facebook, will enable the kind of censorship endured by Räsänen on a global scale.

Further, citizens in countries outside of the E.U., like the United States, are at risk of facing new levels of censorship, because the DSA applies to large online digital platforms and search engines accessed within the E.U. but that have a global presence. It explicitly states its extraterritorial applicability as it covers platforms used by people “that have their place of establishment or are located in the Union, irrespective of where the providers of those intermediary services [the platforms] have their place of establishment.”

Platforms are incentivized to adapt their international content moderation policies to E.U. censorship. If those platforms deem something “illegal” under E.U. rules, that content may be banned everywhere, even in countries with strong free speech protections.

Keep reading

UK Ofcom Pushes Rules Targeting “Misogynistic” Content, Prompting (Even More) Free Speech Concerns

Britain’s communications regulator, Ofcom, has unveiled a new framework urging social media and technology companies to censor so-called “misogynistic” content as part of its A Safer Life Online for Women and Girls campaign.

The initiative, framed as an effort to protect women from online abuse, further weakens the distinction between “harmful” conduct and lawful expression, a tension Ofcom itself acknowledges in its own documentation.

The regulator’s new guidance encourages platforms to adopt a wide range of “safety” measures, many of which would directly influence what users can post, see, and share.

These include inserting prompts that nudge users to “reconsider” certain comments, suppressing “misogynistic” material in recommendation feeds and search results, temporarily suspending users who post repeated “abuse,” and de-monetizing content flagged under this category.

Moderators would also receive special training on “gender-based harms,” while posting rates could be throttled to slow the spread of unwanted speech.

Ofcom’s document also endorses the use of automated scanning systems like “hash-matching” to locate and delete non-consensual intimate imagery.

While intended to prevent the circulation of explicit photos, such systems typically involve the mass analysis of user uploads and can wrongly flag legitimate material.
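The matching step behind such scanning systems can be illustrated with a minimal sketch. This is not Ofcom's actual system, and real deployments (e.g. Microsoft's PhotoDNA or Meta's PDQ) use perceptual hashes that survive resizing and re-encoding; an exact cryptographic hash, shown here for simplicity, is a rough stand-in for the idea of comparing uploads against a database of known-bad fingerprints. The blocklist contents and function names are illustrative assumptions:

```python
import hashlib

# Hypothetical blocklist of fingerprints of known flagged images.
# This entry is the SHA-256 of the bytes b"test", used purely for
# demonstration; real hash-matching systems store perceptual hashes.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_upload(data: bytes) -> str:
    """Return the SHA-256 hex digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(data: bytes) -> bool:
    """Flag an upload if its fingerprint matches a known-bad entry."""
    return hash_upload(data) in KNOWN_HASHES

print(is_flagged(b"test"))   # matches the blocklist entry -> True
print(is_flagged(b"other"))  # no match -> False
```

Because an exact hash changes completely when a single byte changes, real systems trade it for perceptual hashing, and it is that fuzzier matching which introduces the false-positive risk described above.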

Additional proposals include “trusted flagger” partnerships with NGOs, identity verification options, and algorithmic “friction” mechanisms, small design barriers meant to deter impulsive posting.

Some of the ideas, such as warning prompts and educational links, are voluntary.

Yet several major advocacy groups, including Refuge and Internet Matters, are pressing for the government to make them binding on all platforms.

If adopted wholesale, these measures would effectively place Ofcom in a position to oversee the policing of legal speech, with tech firms acting as its enforcement arm.

In a letter announcing the guidance, Ofcom’s Chief Executive Melanie Dawes declared that “the digital world is not serving women and girls the way it should,” describing online misogyny and non-consensual deepfakes as pervasive problems that justify immediate “industry-wide action.”

She stated that Ofcom would “follow up to understand how you are applying this Guidance” and publish a progress report in 2027.

Keep reading

EU Council Approves New “Chat Control” Mandate Pushing Mass Surveillance

European governments have taken another step toward reviving the EU’s controversial Chat Control agenda, approving a new negotiating mandate for the Child Sexual Abuse Regulation in a closed session of the Council of the European Union on November 26.

The measure, presented as a tool for child protection, is once again drawing heavy criticism for its surveillance implications and the way it reshapes private digital communication in Europe.

Unlike earlier drafts, this version drops the explicit obligation for companies to scan all private messages but quietly introduces what opponents describe as an indirect system of pressure.

It rewards or penalizes online services depending on whether they agree to carry out “voluntary” scanning, effectively making intrusive monitoring a business expectation rather than a legal requirement.

Former MEP Patrick Breyer, a long-standing defender of digital freedom and one of the most vocal opponents of the plan, said the deal “paves the way for a permanent infrastructure of mass surveillance.”

According to him, the Council’s text replaces legal compulsion with financial and regulatory incentives that push major US technology firms toward indiscriminate scanning.

He warned that the framework also brings “anonymity-breaking age checks” that will turn ordinary online use into an exercise in identity verification.

The new proposal, brokered largely through Danish mediation, comes months after the original “Chat Control 1.0” regulation appeared to have been shelved following widespread backlash.

It reinstates many of the same principles, requiring providers to assess their potential “risk” for child abuse content and to apply “mitigation measures” approved by authorities. In practice, that could mean pressure to install scanning tools that probe both encrypted and unencrypted communications.

Czech MEP Markéta Gregorová called the Council’s position “a disappointment…Chat Control…opens the way to blanket scanning of our messages.”

Keep reading

The GRANITE ACT: Wyoming Bill Targets Foreign Censors With $10M Penalties

The first cannon shot in a new kind of free speech war came not from Washington or Silicon Valley, but from Cheyenne. Wyoming Representative Daniel Singh last week filed the Wyoming GRANITE Act.

The “Guaranteeing Rights Against Novel International Tyranny & Extortion Act,” if passed, would make Wyoming the first state to let American citizens sue foreign governments that try to police what they say online.

The bill traces back to a blog post by attorney Preston Byrne, the same lawyer representing 4chan and Kiwi Farms in their battles against censorship-driven British regulators.

Byrne’s idea was simple: if the UK’s Ofcom or Brazil’s Alexandre de Moraes wanted to fine or threaten Americans over online speech, the US should hit back hard.

Exactly one month after that idea appeared on his blog, it’s now inked into Wyoming legislative paperwork.

Byrne said:

“This bill has a long way to go until it becomes a law, it’s got to make it through legislative services, then to Committee, and then get introduced on the floor for a vote, but the important thing is, the journey of this concept, the idea of a foreign censorship shield law which also creates a civil cause of action against foreign censors, into law has begun.”

That “journey” may be the kind of slow procedural trudge that usually kills most ideas in committee, but the intent here is anything but mild. With the growing threat of censorship demands from the UK, Brazil, Europe, and Australia, there is real momentum to fight back.

“For the first time, state legislators are moving to implement rules that will allow U.S. citizens to strike back, hard, against foreign countries that want to interfere with Americans’ civil rights online,” Byrne continued.

The Act would let American citizens and companies sue foreign governments or their agents for trying to censor them, and, crucially, it strips away the usual escape hatch of sovereign immunity.

In its legal filing responding to the 4chan and Kiwi Farms lawsuit, Ofcom insisted it has “sovereign immunity” and told the court there were “substantial grounds” for throwing out the case on that basis.

The regulator’s lawyers framed Ofcom as a protected arm of the British state, immune from civil claims even when its decisions target a platform based entirely inside the United States.

Ofcom treats the idea of “sovereign immunity” as something substantial but the First Amendment as something that does not exist at all.

Keep reading

“Non-Binary” Maryland Teacher Sparks Outrage After Posting TikToks Flaunting Pregnancy And Breast Implant “Kinks”

Concerns are mounting in Maryland after a male teacher was found posting TikTok videos flaunting what he describes as his pregnancy and breast-implant fetish. Although the teacher has since locked down his social media accounts, the child-safeguarding organization Gays Against Groomers saved one of the clips and reposted it on X, where it has already racked up more than 30,000 views at the time of writing.

Reduxx has identified the teacher in the controversial video as James Roman Stilipec, who instructs Grade 9 English at REACH! Partnership School 341 in Baltimore.

In the clip shared by Gays Against Groomers, Stilipec is seen wearing an exaggerated breast form and an oversized fake pregnant belly beneath a tight green shirt. The video appears to be a repost from a concerned viewer, who captioned it: “this man @allthatjaz22 is a 9th grade teacher and posts this publicly.”

Keep reading

Top MAGA Influencers Accidentally Unmasked as Foreign Trolls

Elon Musk’s social media site X has rolled out a new feature in an effort to increase transparency—and unwittingly revealed that many of the site’s top MAGA influencers are actually foreign actors.

The new “About This Account” feature, which became available to X users on Friday, lets others see where an account is based, when it joined the platform, how often it has changed its username, and how the X app was downloaded.

Upon rollout, rival factions on the combative social platform began inspecting where their online adversaries were really based, with dozens of major MAGA and right-wing influencer accounts revealed to be located overseas.

“This is easily one of the greatest days on this platform,” wrote Democratic influencer Harry Sisson.

“Seeing all of these MAGA accounts get exposed as foreign actors trying to destroy the United States is a complete vindication of Democrats, like myself and many on here, who have been warning about this.”

Dozens of major accounts masquerading as “America First” or “MAGA” proponents have been identified as originating in places such as Russia, India, and Nigeria.

In one example, the account MAGANationX—with nearly 400,000 followers and a bio reading “Patriot Voice for We The People”—is actually based in Eastern Europe.

Keep reading

UK Government “Resist” Program Monitors Citizens’ Online Posts

Let’s begin with a simple question. What do you get when you cross a bloated PR department with a clipboard-wielding surveillance unit?

The answer, apparently, is the British Government Communications Service (GCS). Once a benign squad of slogan-crafting, policy-promoting clipboard enthusiasts, they’ve now evolved (or perhaps mutated) into what can only be described as a cross between MI5 and a neighborhood Reddit moderator with delusions of grandeur.

Yes, your friendly local bureaucrat is now scrolling through Facebook groups, lurking in comment sections, and watching your aunt’s status update about the “new hotel down the road filling up with strangers” like it’s a scene from Homeland. All in the name of “societal cohesion,” of course.

Once upon a time, the GCS churned out posters with perky slogans such as Stay Alert or Get Boosted Now, like a government-powered BuzzFeed.

But now, under the updated “Resist” framework (yes, it’s actually called that), the GCS has been reprogrammed to patrol the internet for what they’re calling “high-risk narratives.”

Not terrorism. Not hacking. No, according to The Telegraph, the new public enemy is your neighbor questioning things like whether the council’s sudden housing development has anything to do with the 200 migrants housed in the local hotel.

It’s all in the manual: if your neighbor posts that “certain communities are getting priority housing while local families wait years,” this, apparently, is a red flag. An ideological IED. The sort of thing that could “deepen community divisions” and “create new tensions.”

This isn’t surveillance, we’re told. It’s “risk assessment.” Just a casual read-through of what that lady from your yoga class posted about a planning application. The framework warns of “local parental associations” and “concerned citizens” forming forums.

And why the sudden urgency? The new guidance came hot on the heels of a real incident: protests outside hotels housing asylum seekers, following the sexual assault of a 14-year-old girl by Hadush Kebatu, an Ethiopian migrant.

Now, instead of looking at how that tragedy happened or what policies allowed it, the government’s solution is to scan the reaction to it.

What we are witnessing is the rhetorical equivalent of chucking all dissent into a bin labelled “disinformation” and slamming the lid shut.

Keep reading

Court Filings Allege Meta Downplayed Risks to Children and Misled the Public

Sex trafficking on Meta platforms was both difficult to report and widely tolerated, according to a court filing unsealed Friday. In a plaintiffs’ brief filed as part of a major lawsuit against four social media companies, Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified that when she joined Meta in 2020 she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.” 

“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar reportedly testified, adding that “by any measure across the industry, [it was] a very, very high strike threshold.” The plaintiffs claim that this testimony is corroborated by internal company documentation.

The brief, filed by plaintiffs in the Northern District of California, alleges that Meta was aware of serious harms on its platform and engaged in a broad pattern of deceit to downplay risks to young users. According to the brief, Meta was aware that millions of adult strangers were contacting minors on its sites; that its products exacerbated mental health issues in teens; and that content related to eating disorders, suicide, and child sexual abuse was frequently detected, yet rarely removed. According to the brief, the company failed to disclose these harms to the public or to Congress, and refused to implement safety fixes that could have protected young users.  

“Meta has designed social media products and platforms that it is aware are addictive to kids, and they’re aware that those addictions lead to a whole host of serious mental health issues,” says Previn Warren, the co-lead attorney for the plaintiffs in the case. “Like tobacco, this is a situation where there are dangerous products that were marketed to kids,” Warren adds. “They did it anyway, because more usage meant more profits for the company.” 

The following allegations against Meta come from the brief filed in an unprecedented multidistrict litigation. More than 1,800 plaintiffs—including children and parents, school districts, and state attorneys general—have joined together in a suit alleging that the parent companies behind Instagram, TikTok, Snapchat, and YouTube “relentlessly pursued a strategy of growth at all costs, recklessly ignoring the impact of their products on children’s mental and physical health,” according to their master complaint. The newly unsealed allegations about Meta are just one small part of the sprawling suit. 

Keep reading

The Algorithm Accountability Act’s Threat to Free Speech

A new push in Congress is taking shape under the banner of “algorithmic accountability,” but its real effect would be to expand the government’s reach into online speech.

Senators John Curtis (R-UT) and Mark Kelly (D-AZ) have introduced the Algorithm Accountability Act, a bill that would rewrite Section 230 of the Communications Decency Act to remove liability protections from large, for-profit social media platforms whose recommendation systems are said to cause “harm.”


The proposal applies to any platform with more than a million users that relies on algorithms to sort or recommend content.

These companies would be required to meet a “duty of care” to prevent foreseeable bodily injury or death.

If a user or family member claims an algorithm contributed to such harm, the platform could be sued, losing the legal shield that has protected online speech for nearly three decades.

Although the bill’s authors describe it as a safety measure, the structure of the law would inevitably pressure platforms to suppress or downrank lawful content that might later be portrayed as dangerous.

Most major social networks already rely heavily on automated recommendation systems to organize and personalize information. Exposing them to lawsuits for what those systems display invites broad, quiet censorship under the guise of caution.

Keep reading