Federal Jury Finds Florida TikToker Guilty of Interstate Threats for Calling for Trump Supporters to Be Shot 

The U.S. Attorney’s Office, Middle District of Florida, announced that a federal jury has found Desiree Doreen Segari (41, Sarasota) guilty of interstate communication of a threat to injure.

Segari, who was indicted on September 18, 2025, faces a maximum penalty of five years in federal prison.

In one of the videos she shared on TikTok, Segari stated, “So if we all get our guns and use our second amendment right … and you see somebody with a MAGA hat, ‘pew pew’ that’s what we do, that’s the way. It’s the only way.”

U.S. Attorney Gregory W. Kehoe made the announcement:

According to evidence presented at trial, on August 17, 2025, Segari posted a video on TikTok calling for MAGA supporters to be shot on sight. Segari stated, “so if we all get our guns and use our second amendment right…and you see somebody with a MAGA hat, ‘pew pew’ that’s what we do, that’s the way, it’s the only way.”

While saying “pew pew,” Segari used hand gestures mimicking the firing of a gun.

She further stated, “Put them back in their basements, make them scared again to be racist, homophobic, and terrible just awful [expletive],” and “MAGA people deserve to be terrified and scared to walk in the streets because they should know that real Americans are gonna [mouths expletive] kill them.”

When Segari posted the video, she included a caption: “#seemagapewpewmaga starting a new trend, hope it catches on. Please spread the word. Share this video. Repost it. Use the hashtag all over the internet. Let’s go guys. It’s time to fight back in a potentially effective manner.”

The next day, Segari posted another video on TikTok, in which she stated, “See MAGA pew pew MAGA, see MAGA pew pew MAGA, see MAGA pew pew MAGA so these [expletive] know we ain’t here to play” while again using hand gestures to mimic the firing of a gun.

This case was investigated by the Federal Bureau of Investigation. It is being prosecuted by Assistant United States Attorney Michael Sinacore.

The FBI’s Tampa office also shared the verdict.

Keep reading

ADL Orders Advertisers to Bail on Twitter, Calls the Bible an ‘Antisemitic Conspiracy Theory’

The far-left Anti-Defamation League has ordered advertisers to “pause Twitter spending” after accusing the platform’s users, and its new owner Elon Musk, of “antisemitism” and “hate” amidst a surge in free speech. Among the “hate” tweets cited by the ADL is a Bible verse posted to Twitter by Ye, formerly known as Kanye West.

After a week of back and forth between Elon Musk and the ADL regarding free speech on the Twitter platform, the far-left pressure group, increasingly seen as a Jewish supremacist organization, authored a long tweet thread calling on advertisers to ditch the platform.

Allowing free speech, the ADL claimed in the thread, is not only “toxic,” but part of a “hate for profit” scheme.

“Today, we are joining dozens of other groups to ask advertisers to pause Twitter spending because we are profoundly concerned about antisemitism and hate on the platform,” reads the ADL’s initial tweet, authored Friday, November 4th.

“Here’s why we’re asking advertisers to #StopHateForProfit and #StopToxicTwitter,” it went on, introducing the thread that called the Holy Bible an “antisemitic conspiracy theory.”

The ADL’s call for advertisers, collectively worth billions of dollars to Twitter, to leave the platform high and dry came just days after Elon Musk was slammed for groveling to the anti-speech group, which had threatened “dire consequences” for allowing free speech and reinstating accounts it doesn’t like.

On Wednesday, Musk announced that he’d met with ADL leadership following their threat, as well as the NAACP, and even members of the Bush political machine.

Musk said at the time that his Twitter platform would continue enforcing anti-“hate” and “election integrity” policies at the ADL’s behest.

But, “since that meeting,” the ADL claimed in their tweet thread, “Musk permitted @KanyeWest to start posting again,” which, apparently, is a grave offense.

Among Ye’s tweets that the ADL is most vehemently opposed to is his sharing of Holy Bible verse John 19:19, something the ADL claims is equivalent to posting “antisemitic conspiracy theories.”

Keep reading

Crushing the Right to Conscientiously Object

As the U.S. and Israel’s deeply unpopular war with Iran enters its second week, social media platform X is censoring the accounts of people providing information to military servicemembers on how they can refuse to serve. This is particularly relevant as fears have grown that U.S. ground troops may enter the conflict.

The Center on Conscience & War, an 80-year-old nonprofit that, according to its website, “advocates for the rights of conscience, opposes military conscription, and serves all conscientious objectors to war,” was banned on X for 12 hours. The center’s executive director, Mike Prysner, shared a notice that the center received from X which labeled their posts as having “violated X rules” against “illegal and regulated behaviors.”

Prysner wrote: “This is the post @CCW4COs was suspended for, informing service members of their legal right under DoDI 1332.14 to report ‘failure to adapt’ within first 365 days of service and receive an entry-level discharge.”

It remains legal to conscientiously object to military service. The only conceivable way that the post could be framed as encouraging illegal or irregular behavior would be to recast such objections as mutiny, which is exactly what pro-Israeli voices on social media have been frantically doing in the last few days.

After conservative commentator Candace Owens likewise encouraged members of the U.S. military to conscientiously object to serving in Iran, pro-Israel journalist Emily Schrader wrote on X:

“This is illegal. She is literally advocating mutiny. Under 18 U.S.C. § 2387 (Advocating overthrow or disloyalty in the armed forces). It is a crime for any person, including civilians, to willfully advocate or attempt to cause:
• insubordination in the armed forces
• disloyalty among service members
• mutiny or refusal of duty
It also criminalizes distributing materials intended to encourage those outcomes.
The penalty can be up to 10 years in prison and fines.”

Other pro-Israel voices like Bill Ackman, the billionaire hedge-fund manager, reposted Schrader’s sentiments.

The social media ban on the Center on Conscience & War came less than 24 hours after Prysner, its executive director, wrote via social media regarding anecdotal evidence of troops being readied for combat:

“I just spoke with the mother of a service member in this unit. They were given one last call home before having to turn in their phones. He told his mom they were going ‘boots on the ground’ tonight.”

Keep reading

Turkey Blocks 41 Social Media Accounts Over Iran War Posts

Turkey’s government blocked 41 social media accounts on X, Facebook, and Instagram last Friday, deleted content from 75 more, and launched criminal proceedings against account holders, all on the grounds that they spread what officials called “disinformation and provocative content.”

The crackdown followed the start of attacks on Iran. Presidential Communications Director Burhanettin Duran framed the deletions as a national security response, saying the targeted accounts had been “systematically sharing unverified content aimed at creating fear, panic and uncertainty in society.”

Who decided the content was disinformation? The government. Who gets to define “provocative content”? The government. Who determines what threatens “public order, social peace, and our national security”? Also the government: the same government that ordered the blocks.

The operation involved the Turkish Presidency’s Communications Directorate, the cybercrime department of the Security Directorate General, the Information and Communication Technologies Authority, and the chief public prosecutors’ offices. A coordinated state apparatus, mobilized to silence social media accounts during a regional conflict.

Keep reading

Roblox Introduces AI System That Rewrites Users’ Chat Messages in Real Time

Roblox has started rewriting its users’ chat messages in real time using AI, altering what people actually typed into something the platform considers more appropriate.

The feature, rolling out now, goes further than the existing filter that replaces flagged words with “#” symbols. Under the new system, banned language gets silently reworded into what Roblox calls “more respectful language that remains closer to the user’s original intent.”

The platform’s example: type “Hurry TF up!” and the message your recipient sees reads “Hurry up!” Roblox says everyone in the chat is notified when this happens, though the person who typed the original message has no way to stop the substitution before it goes out.

The definition of “banned language” extends beyond profanity. It covers “misspellings, special characters, or other methods to evade detection of profanity,” meaning the AI is also tasked with catching deliberate workarounds and rewriting those too.
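One way a rewrite-style filter of this kind could work is pattern matching with substitution. The sketch below is a toy illustration only: the regex rules and function names are hypothetical, not Roblox’s actual system, which the company has not published. It reproduces the platform’s own “Hurry TF up!” example by catching the flagged token (including simple case and punctuation variants) and returning a rewritten message plus a flag that a real system could use to notify chat participants.

```python
import re

# Hypothetical rewrite rules; Roblox has not published its wordlist or model.
# Each pattern tries to catch simple evasion variants (case, punctuation).
REWRITES = {
    r"\bt+\W*f+\b": "",  # strip the expletive abbreviation entirely
}

def rewrite_message(text: str) -> tuple[str, bool]:
    """Return (possibly rewritten text, whether any rewrite occurred)."""
    changed = False
    for pattern, replacement in REWRITES.items():
        new_text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
        if new_text != text:
            changed = True
            text = new_text
    # Collapse whitespace left behind by removals.
    return re.sub(r"\s+", " ", text).strip(), changed

msg, was_rewritten = rewrite_message("Hurry TF up!")
# msg is now "Hurry up!"; a production system would also surface
# was_rewritten to everyone in the chat, as Roblox says it does.
```

The hard part in practice is exactly what the article describes: the rule set must keep widening to catch misspellings and special-character workarounds, which is why Roblox pairs the rewriter with an expanding detection system.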

Roblox is simultaneously expanding its text filtering system to “detect more variations of language that break its Community Standards,” so the net is getting wider at the same time, and the consequences of being caught in it are changing.

What Roblox has built is a system that goes beyond blocking speech. It replaces it. The message that leaves your keyboard is not the message that arrives. The recipient reads words you didn’t choose, attributed to you, with a notification that your original phrasing was deemed unacceptable. The platform decides what you said.

Keep reading

Spain JAILS Seven Citizens For Calling Migrants ‘SCUM’ On Facebook

Spain’s Supreme Court has upheld prison sentences for seven individuals over Facebook comments criticizing unaccompanied foreign minors in the border enclave of Melilla, marking a chilling escalation in the far-left government’s war on free speech amid skyrocketing migrant-related crime.

The ruling, which imposes terms ranging from eight months to one year and ten months, stems from posts that prosecutors deemed to promote hostility toward the group of mostly North African migrants.

Charges were initially dropped, but an appeal led to convictions under Spain’s hate crime laws.

This case exemplifies the inverted priorities under Prime Minister Pedro Sánchez’s Socialist-led government, which has faced mounting criticism for prioritizing mass migration over native safety and free expression.

Just months ago, Alex Soros heaped praise on Sánchez for granting amnesty to up to 500,000 illegal migrants via royal decree, bypassing parliament entirely. Soros called it “real leadership,” urging more nations to follow suit in flooding their borders.

Keep reading

TikTok Says Privacy Makes Users Less Safe

Over the past five years, the largest social platforms settled on a clear position about private messaging. Lock it down. Facebook turned on end-to-end encryption. Instagram and Messenger did the same. X joined the club. Yes, metadata is still an issue and the protocols used matter; but, generally speaking, the move was toward more privacy of actual messages.

TikTok looked at that trend and made a different choice. Then it scheduled a briefing in London with the BBC to explain the reasoning.

The explanation was safety.

TikTok belongs to ByteDance, a Chinese technology company that operates under Beijing’s jurisdiction. China maintains strict limits on end-to-end encryption inside its borders. TikTok, after its own review of the issue, reached the same policy outcome for its messaging system.

Alan Woodward, a cybersecurity professor at Surrey University, raised that point directly. The company’s “Chinese influence might be behind the decision,” he said, adding that end-to-end encryption is “largely banned in China.”

TikTok declined to engage with that suggestion, of course. The remark hung in the air. However, it’s worth adding that TikTok’s US operation has given no indication that it is moving toward private messaging standards either.

End-to-end encryption is simple in theory. Only the people in a conversation can read the messages. The platform running the service cannot access the content. Governments cannot request it. Engineers inside the company cannot view it.

TikTok’s system operates in a different way. Messages on the platform remain readable to the company. Employees can access them under defined circumstances. Law enforcement agencies can request them through legal channels.
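The difference between the two models can be shown with a toy sketch. The XOR “cipher” below stands in for real authenticated encryption and is not secure; it exists only to show the structural point: under end-to-end encryption the platform relays ciphertext it cannot read, while in a TikTok-style system the platform holds the plaintext itself.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy one-time-pad XOR; illustration only, not real cryptography.
    return bytes(a ^ b for a, b in zip(data, key))

# End-to-end model: only sender and recipient hold the key.
key = os.urandom(32)
msg = b"meet at noon"
ciphertext = xor(msg, key)

server_view_e2ee = ciphertext      # the platform stores ciphertext only
assert xor(ciphertext, key) == msg # the recipient, holding the key, can read it

# Readable model: the platform relays and stores the plaintext,
# so employees and legal requests can access the content directly.
server_view_plain = msg
assert server_view_plain == msg
```

In the first model, scanning message content for “harmful activity” is impossible by construction; in the second, it is trivial. That construction, not any policy promise, is what the encryption debate is actually about.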

TikTok argues that readable messages allow the company to identify harmful activity.

The debate turns on a basic technical fact. “We can read your messages to catch predators” and “we can read your messages” describe the same system.

Keep reading

Britain is Trying to Censor Americans – But America is Fighting Back

Ofcom has confirmed it is referring 4chan to a final enforcement decision under the Online Safety Act. The target is a Delaware company that runs an entirely anonymous imageboard from the United States, with no offices, staff, servers or assets in Britain. The demand: install age-verification systems and content filters so that British children cannot access the site, or face daily fines levied from London on an American platform. This case is not an outlier. It is the clearest real-world demonstration of what the new generation of “online safety” laws requires: private companies must build automated filters that decide, in advance, which legal speech is too harmful for minors to see. The question the regulators never quite answer is simple: what exactly does the filter catch?

In the early 2020s, a political consensus formed on both sides of the Atlantic: social media is harming children and something must be done. The result in Washington was the Kids Online Safety Act (KOSA); in Westminster, the Online Safety Act (OSA), which received Royal Assent in October 2023 and began enforcement in 2025. The political appeal of both measures is genuine. Adolescent mental health deteriorated in the 2010s, parents are alarmed and platforms have appeared indifferent. But good intentions do not make good law, and the form these interventions took is constitutionally and morally indefensible. Both KOSA and the OSA rest on a duty-of-care model: platforms must take “reasonable measures” or implement “proportionate systems” to prevent minors from encountering content associated with depression, anxiety, eating disorders, self-harm and suicide. This is not a regulation of conduct. It is a mandate to suppress speech based on its topic and its predicted emotional effect on a reader: the very definition of content-based regulation.

The American Civil Liberties Union (ACLU) stated the constitutional problem plainly in its July 2023 letter opposing KOSA: the bill “is a content-based regulation of constitutionally protected speech” that “will silence important conversations, limit minors’ access to potentially vital resources and violate the First Amendment”. Under Reed v. Town of Gilbert, a law is content-based if it “applies to particular speech because of the topic discussed or the idea or message expressed”. Content-based regulations are “presumptively unconstitutional”.

Keep reading

UK Consults on Social Media Age Verification While Directing Parents to Report “Hate Speech” to Big Tech

The British government launched a consultation this week that could require age verification for anyone using social media, gaming sites, or AI chatbots.

The consultation, titled “Growing up in the online world,” opened on March 2 and closes on May 26, 2026. It asks the public whether the government should ban under-16s from social media entirely, impose mandatory overnight curfews on platform access, restrict AI chatbot features for minors, and require platforms to disable “addictive design features” like infinite scrolling and autoplay.

The government says it will respond in summer 2026, and Parliament has already handed ministers new legal powers to act on the findings without waiting for fresh primary legislation.

The Prime Minister announced those powers on February 16, weeks before the consultation even opened. The government can now move faster once it decides what it wants. What the public thinks determines the packaging, not the destination.

Technology Secretary Liz Kendall framed it this way: “The path to a good life is a great childhood, one full of love, learning, and play. That applies just as much to the online world as it does to the real one.”

The actual policy tools being considered are a different matter.

Age verification, as a mechanism, works by proving identity. Every user proves who they are.

A social media platform that must exclude under-16s must verify the age of its over-16s. That means collecting identity documents, linking browsing activity to real identities, or building infrastructure that a government can later compel to serve other purposes.

The surveillance architecture required to enforce a children’s safety law is the same architecture required to surveil adults. It gets built for one reason. It gets used for others.

Then there’s the “Help your child stay safe online” campaign site, which the government launched alongside the consultation. The site includes a page directing parents to report “bullying, threats, harassment, hate speech, and content promoting self-harm or suicide” directly to platforms, with links to the reporting tools of Instagram, Snapchat, Facebook, WhatsApp, TikTok, Discord, YouTube, and Twitch.

Keep reading

UK COUNTER-TERROR Police Ad Warns Teens Sharing ‘Funny’ Content Could Be TERRORISM

The UK’s Counter Terrorism Police have released a disturbing advertisement depicting a white teenager facing police seizure of his devices and a potential criminal record simply for sharing a link he found “funny”: content that, we are told, was later deemed terrorist material.

This move, part of the broader Prevent anti-radicalization strategy, underscores the UK regime’s push to police online activity among youth, framing it as a gateway to extremism while ignoring surging real-world dangers from mass migration.

In the ad, a teen laments: “I just got all my device taken away by the police… My mom couldn’t believe it. I might get a criminal record and not be able to go to college.” He then explains: “I only shared a link. I just thought it was funny, but it was terrorist content.”

Counter Terrorism Policing describes itself as “a collaboration of UK police forces working with the UK intelligence community to help protect the public and our national security by preventing, deterring, and investigating terrorist activity.”

A recent academic analysis in the Journal of Policing, Intelligence and Counter Terrorism highlights the escalating involvement of family courts and Prevent in childhood radicalization cases, noting “the number of children referred to Prevent and Channel due to concerns that they might be at risk of, or from, radicalisation has been steadily increasing since 2015.”

It adds that professionals like teachers are “legally obligated to refer that child to the police under the auspices of Prevent” if suspecting risk.

Government guidance on Prevent duty in schools urges communication with parents to spot signs, but also empowers referrals if family members show vulnerability. As one factsheet states, referrals can come from “a family member, friend, colleague, or a professional.”

Keep reading