Massive TikTok Fine Threat Advances Europe’s Digital ID Agenda

A familiar storyline is hardening into regulatory doctrine across Europe: frame social media use as addiction, then require platforms to reengineer themselves around age segregation and digital ID.

The European Commission’s preliminary case against TikTok, announced today, shows how that narrative is now being operationalized in policy, with consequences that reach well beyond one app.

European regulators have accused TikTok of breaching the Digital Services Act by relying on what they describe as “addictive design” features, including infinite scroll, autoplay, push notifications, and personalized recommendations.

Officials argue these systems drive compulsive behavior among children and vulnerable adults and must be structurally altered.

What sits beneath that argument is a quieter requirement: to deliver different “safe” experiences to minors and adults, platforms must first determine who is a minor and who is not. Any mandate to differentiate those experiences depends on a reliable method of telling the two groups apart.

Platforms cannot apply separate algorithms, screen-time limits, or nighttime restrictions without determining a user’s age with a level of confidence regulators will accept.
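
A minimal sketch of that dependency, assuming a hypothetical age estimate paired with a confidence score (the function name, field names, 18-year cutoff, and the 0.9 threshold standing in for “a level of confidence regulators will accept” are all illustrative assumptions, not drawn from TikTok’s systems or the DSA text):

# Illustrative sketch only: hypothetical names, not TikTok's actual systems.
REQUIRED_CONFIDENCE = 0.9  # assumed stand-in for a regulator-acceptable confidence level

def apply_experience(estimated_age: int, confidence: float) -> dict:
    # Every minor-specific restriction is gated on the age determination itself.
    if confidence < REQUIRED_CONFIDENCE:
        # Age not established to the required standard: neither the adult nor
        # the minor experience can be applied, so verification comes first.
        return {"action": "require_age_verification"}
    if estimated_age < 18:
        return {"autoplay": False, "overnight_notifications": False,
                "daily_limit_minutes": 60}  # illustrative screen-time cap
    return {"autoplay": True, "overnight_notifications": True,
            "daily_limit_minutes": None}  # no cap for verified adults

Whatever produces the estimate, whether self-declaration, an estimation model, or a government-backed digital ID, the branching cannot run without it, which is why a redesign mandate of this kind pulls age verification in behind it.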

Commission spokesman Thomas Regnier described the mechanics bluntly, saying TikTok’s design choices “lead to the compulsive use of the app, especially for our kids, and this poses major risks to their mental health and wellbeing.” He added: “The measures that TikTok has in place are simply not enough.”

The enforcement tool behind those statements is the Digital Services Act, the EU’s platform rulebook that authorizes Brussels to demand redesigns and impose fines of up to 6% of global annual revenue.

Keep reading

TikTok CEO Reveals Coordination With 2 Dozen Jewish Groups to Police Speech

A chilling blueprint for the censorship of pro-Palestine voices on social media has been exposed, directly from the mouth of a top tech executive. Adam Presser, the newly installed CEO of TikTok’s U.S. operations following its forced sale to a consortium led by billionaire Larry Ellison, detailed in a recently resurfaced video how the platform systematically silenced critics by labeling their speech as hateful. This admission confirms the worst fears of free speech advocates and reveals a coordinated effort to shield Israeli government actions from public scrutiny by conflating political criticism with bigotry.

The video, originally presented to the World Jewish Congress, features Presser, who was then TikTok’s Head of Operations and Trust & Safety, outlining specific policy changes. “We made a change to designate the use of the term Zionist as a proxy for a protected attribute as hate speech,” Presser stated. In practice, this means using “Zionist” in a negative context could get a user banned, while phrases like “proud Zionist” remain permitted. This creates a politically motivated double standard where one side of a heated geopolitical debate is granted linguistic immunity.

A tripling of bans and outside influence

Presser boasted of aggressive enforcement, revealing that TikTok “tripled the amount of accounts that we were banning for hateful activity” over the course of 2024. This timeline coincides directly with the global outcry following Israel’s military offensive in Gaza. He further explained that “over two dozen Jewish organizations” are “constantly feeding us intelligence and information when they spot violative trends,” and that these groups help inform TikTok on “what is hate speech.” This outsourcing of content moderation decisions to explicitly partisan advocates strips away any pretense of neutrality, effectively allowing pro-Israel groups to police and silence their critics on a global platform.

The consequences of this policy are not theoretical. Award-winning Palestinian journalist Bisan Owda, who had built an audience of 1.4 million followers on TikTok while documenting the war from Gaza, recently found her account permanently banned. In a video, Owda connected her ban directly to Presser’s remarks and to comments from Israeli Prime Minister Benjamin Netanyahu, who last year called the TikTok purchase “consequential” and stated, “We have to fight with the weapons that apply to the battlefield in which we engage, and the most important ones are social media.”

Keep reading

Meta, TikTok, YouTube Face Trial Over Youth Addiction Claims

Three of the world’s biggest tech companies face a landmark trial in Los Angeles starting this week over claims that their platforms — Meta’s Instagram, ByteDance’s TikTok and Google’s YouTube — deliberately addict and harm children.

Jury selection starts this week in the Los Angeles County Superior Court. It’s the first time the companies will argue their case before a jury, and the outcome could have profound effects on their businesses and how they will handle children using their platforms.

The selection process is expected to take at least a few days, with 75 potential jurors questioned each day through at least Thursday. A fourth company named in the lawsuit, Snapchat parent company Snap Inc., settled the case last week for an undisclosed sum.

At the core of the trial is a 19-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out.

She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.

KGM claims that her use of social media from an early age addicted her to the technology and exacerbated depression and suicidal thoughts. Importantly, the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits.

This argument, if successful, could sidestep the companies’ First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms.

Keep reading

‘We Need To Kill These People’: Left-Wing TikTok User Calls For ‘More’ Violence Against ICE Agents

A left-wing TikTok user urged his followers to “get violent” and to “kill” United States Immigration and Customs Enforcement (ICE) agents in a video that rapidly spread across social media Monday.

Resistance to ICE enforcement has grown increasingly violent nationwide, with agents fired on and targeted in multiple states amid escalating rhetoric. Tensions rose further in the wake of Wednesday’s fatal shooting of Minnesota resident Renee Nicole Good by an ICE agent during an enforcement operation in Minneapolis.

The TikTok user, who posts under the username monkeydbeans0 and uses they/them/theirs pronouns, said ICE agents are just mall cops and Proud Boys and cited Good’s death as justification for his call to murder federal agents.

“I’m just going to come out and say it. I don’t really care about the consequences anymore. I don’t care. We need to kill these people,” the green-haired TikTok user said. “There’s — there’s just no alternative.”

Keep reading

Polish Deputy Minister Urges EU Investigation Into TikTok Over Videos Promoting “Polexit”

A senior Polish official is pressing the European Commission to take action against TikTok, claiming the platform is hosting a growing number of artificial intelligence-generated videos that urge Poland to withdraw from the European Union.

His appeal, directed to Brussels’ top digital regulator, calls for what amounts to a censorship regime over AI-generated speech.

Deputy Minister of Digital Affairs Dariusz Standerski wrote to Executive Vice-President Henna Virkkunen, who oversees the EU’s Tech Sovereignty, Security, and Democracy portfolio, insisting that the European Commission open a Digital Services Act (DSA) investigation into TikTok.

He accused the company of failing to build “appropriate mechanisms” to detect and moderate AI-created content and of neglecting to provide “effective” transparency tools that could trace how such material is produced.

The letter went further, urging the Commission to introduce “interim measures aimed at limiting the further dissemination of artificial intelligence-generated content that encourages Poland to withdraw from the European Union.”

Keep reading

“Non-Binary” Maryland Teacher Sparks Outrage After Posting TikToks Flaunting Pregnancy And Breast Implant “Kinks”

Concerns are mounting in Maryland after a male teacher was found posting TikTok videos flaunting what he describes as his pregnancy and breast-implant fetish. Although the teacher has since locked down his social media accounts, the child-safeguarding organization Gays Against Groomers saved one of the clips and reposted it on X, where it had racked up more than 30,000 views at the time of writing.

Reduxx has identified the teacher in the controversial video as James Roman Stilipec, who teaches Grade 9 English at REACH! Partnership School 341 in Baltimore.

In the clip shared by Gays Against Groomers, Stilipec is seen wearing an exaggerated breast form and an oversized fake pregnant belly beneath a tight green shirt. The video appears to be a repost from a concerned viewer, who captioned it: “this man @allthatjaz22 is a 9th grade teacher and posts this publicly.”

Keep reading

TikTok Zombie Brain Rot Confirmed By Major Study

A bombshell Griffith University study has validated a long-suspected reality: short-form videos (SFVs) such as TikToks and Instagram Reels are frying brains, slashing attention spans, and crippling cognitive endurance.

Such content is turning a generation into scatterbrained zombies unable to tackle real-world complexities amid algorithmic dopamine traps.

The meta-analysis, reviewing 71 studies and data from 98,299 participants, uncovered a “consistent pattern” of harm from heavy SFV consumption. 

Keep reading

FBI Arrests Active Duty U.S. Marine For Allegedly Kidnapping 12-Year-Old Girl He Met on TikTok

The FBI has reported that an active-duty U.S. Marine has been arrested after allegedly kidnapping a 12-year-old girl from Indiana.

According to the FBI’s press release, William Roy, 24, who was stationed at Camp Lejeune, North Carolina, flew to Chicago and then traveled to Hammond, Indiana, where he met up with the 12-year-old girl at a park.

Roy then allegedly took the girl back to Chicago, where the two stayed overnight at a hotel.

He faces charges of enticing and transporting a minor for illicit sexual activity, and of interstate travel with the intent to engage in sexual activity with a minor.

Per FBI Indianapolis:

An active-duty U.S. Marine has been arrested after allegedly traveling from North Carolina to Indiana to meet, kidnap, and sexually assault a 12-year-old Hammond girl. William Richard Roy, 24, of Camp Lejeune, North Carolina, was taken into custody by the FBI in Durham, North Carolina, on October 26, 2025.

The investigation began on October 25, 2025, when the FBI Indianapolis Field Office’s Merrillville Resident Agency received information from the Hammond Police Department regarding a missing 12-year-old girl believed to be traveling with Roy. The child’s grandmother had reported her missing earlier that day.

Through the coordinated efforts of the FBI, Hammond Police Department, and U.S. Marshals Service, it was determined Roy allegedly flew to Chicago on October 24, 2025. The following day, he took an Uber to Hammond, where he met the child at a park. The two then traveled by Uber back to Chicago, stayed overnight at a hotel, and later boarded a Greyhound bus bound for Durham, North Carolina.

When the bus arrived in Durham on October 26, agents from the FBI Charlotte Field Office’s Greensboro and Raleigh Resident Agencies took Roy into custody and safely recovered the child.

Roy is the second Marine stationed at Camp Lejeune to be arrested in recent months.

Keep reading

‘Massive legal siege’ against social media companies looms

Thousands of plaintiffs’ complaints, millions of pages of internal documents and transcripts of countless hours of depositions are about to land in U.S. courtrooms, threatening the future of the biggest social media companies.

The blizzard of paperwork is a byproduct of two consolidated lawsuits accusing Snap Inc.’s Snapchat; Meta Platforms Inc.’s Facebook and Instagram; ByteDance Ltd.’s TikTok; and Alphabet Inc.’s YouTube of knowingly designing their platforms to addict users — allegedly resulting in youth depression, anxiety, insomnia, eating disorders, self-harm and even suicide.

The litigation, brewing for more than three years, has had to overcome numerous hurdles, including the liability shield that has protected social media platforms from facing user-harm lawsuits. The social media companies have filed multiple motions to dismiss the cases on the grounds that Section 230 of the Communications Decency Act prevents them from being held accountable for content posted on their sites.

Those motions have been largely unsuccessful, and courtrooms across the country are poised to open their doors for the first time to the alleged victims of social media. The vast majority of cases have been folded into two multijurisdictional proceedings, one in state and the other in federal court, to streamline the pretrial discovery process.

The first bellwether trial is scheduled to begin in Los Angeles Superior Court in late January. It involves a 19-year-old woman from Chico, California, who says she’s been addicted to social media for more than a decade and that her nonstop use of the platforms has caused anxiety, depression and body dysmorphia. Two other trials will follow soon after, with thousands more waiting in the wings. If successful, these cases could result in multibillion-dollar settlements — akin to tobacco and opioid litigation — and change the way minors interact with social media.

“This is going to be one of the most impactful litigations of our lifetime,” said Joseph VanZandt, an attorney at Beasley Allen Law Firm in Montgomery, Alabama, and co-lead plaintiffs’ attorney for the coordinated state cases. “This is about large corporations targeting vulnerable populations — children — for profit. That’s what we saw with the tobacco companies; they were also targeting adolescents and trying to get them addicted while they were young.”

Matthew Bergman, founder of the Social Media Victims Law Center in Seattle, makes a similar comparison to tobacco litigation in the Bloomberg documentary Can’t Look Away: The Case Against Social Media. “In the case of Facebook, you have internal documents saying ‘tweens are herd animals,’ ‘kids have an addict’s narrative’ and ‘our products make girls feel worse about themselves.’ You have the same kind of corporate misconduct,” Bergman says in the film, which will be available to view on Bloomberg’s platforms on October 30.

Bergman’s firm was the first to file user-harm cases against social media companies, in 2022, after Frances Haugen, a former Meta product manager-turned-whistleblower, released a trove of internal documents showing the company knew social media was negatively impacting youth mental health. The first case, which is part of the consolidated federal litigation, alleged that an 11-year-old Connecticut girl killed herself after suffering from extreme social media addiction and sexual exploitation by online predators.

What set that case apart was how it got around Section 230’s immunity blanket. Bergman argued that his case wasn’t about third-party content, which the federal law protects. Instead, he said it hinged on the way social media companies were intentionally designing their products to prioritize engagement and profit over safety.

Keep reading

Indonesian Government Restores TikTok’s License After Platform Shares User Data

TikTok has handed over a broad collection of user data to Indonesian authorities, a move that cleared the way for the platform’s operations to resume in the country after a brief suspension.

The company acted under pressure from regulators following its failure to cooperate during a volatile stretch of anti-government demonstrations in late August.

According to government officials, the data shared included analytics on traffic behavior and signs of potential online gambling.

The request was triggered by a surge in livestream activity tied to the protests, which erupted after public anger over political perks collided with the fatal police killing of a motorcycle delivery driver.

Currently owned by Chinese tech giant ByteDance, TikTok is deeply embedded in Indonesia’s digital life, with over 100 million users and a rapidly growing e-commerce arm.

Authorities revoked its license last week after the platform did not supply information about user activity during the period from August 25 to 30. Police reported that some TikTok users live-streamed protest scenes and used the broadcasts to solicit digital gifts or payments from viewers.

On Monday, officials confirmed that TikTok submitted data showing the number of such livestreams and how much money they generated.

Alexander Sabar, a director general at the Ministry of Communication and Digital Affairs, said in a written message, and as reported by Bloomberg: “The summary data provided is aggregate data, not specific to a particular user, and therefore cannot be used to track or monitor individuals — including accounts broadcasting demonstrations.”

He said the ministry’s primary interest was in activity tied to online gambling and other prohibited uses of the platform.

Technology companies around the world are often cautious when responding to government data requests.

Firms such as Meta and Google typically weigh these demands against potential backlash from users and the risk of exposing proprietary systems.

Keep reading