Chat Control 2.0: EU Moves Toward Ending Private Communication

Between the coffee breaks and the diplomatic niceties of Brussels bureaucracy, a quiet dystopian revolution might be taking place. On November 26, a roomful of unelected officials could nod through one of the most consequential surveillance laws in modern European history, without ever having to face the public.

The plan, better known as Chat Control 2.0, sits on the agenda of the Committee of Permanent Representatives, or Coreper, a club of national ambassadors whose job is to prepare legislation for the Council of the European Union. This Wednesday, they may “prepare” it straight into existence.

According to MEP Martin Sonneborn, Coreper’s diplomats could be ready to endorse the European Commission’s digital surveillance project in secret.

It was already due for approval a week earlier before mysteriously vanishing from the schedule. Now it’s back, with privacy advocates watching like hawks who suspect the farmer’s got a shotgun.

The Commission calls Chat Control 2.0 a child-protection measure. The branding suggests moral urgency; the text suggests mass surveillance. The proposal would let governments compel messaging services such as WhatsApp or Signal to scan users’ messages before they’re sent.

Officials insist that the newest version removes mandatory scanning, which is a bit like saying a loaded gun is safer because you haven’t pulled the trigger yet.
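For readers unfamiliar with the mechanism, “scanning messages before they’re sent” refers to client-side scanning: the content check runs on the user’s own device, ahead of any end-to-end encryption. Below is a minimal, purely illustrative Python sketch of that order of operations; the hash list, function names, and matching method are hypothetical, since the proposal itself does not prescribe an implementation.

```python
import hashlib

# Hypothetical blocklist of known-bad content hashes. Real systems would use
# perceptual hashes or classifiers distributed by an authority, not SHA-256 of
# raw text; this only illustrates the order of operations.
BLOCKED_HASHES: set[str] = {
    hashlib.sha256(b"example banned content").hexdigest(),
}


def client_side_scan(message: bytes) -> bool:
    """Return True if the message matches the blocklist.

    The defining feature of client-side scanning is that this check runs on
    the sender's device, before any end-to-end encryption is applied.
    """
    return hashlib.sha256(message).hexdigest() in BLOCKED_HASHES


def send_message(message: bytes) -> str:
    if client_side_scan(message):
        # Under such proposals a match would typically be reported, not merely blocked.
        raise PermissionError("message flagged before sending")
    # Only after the scan passes would the client encrypt and transmit the message.
    return f"encrypted and sent ({len(message)} bytes)"


if __name__ == "__main__":
    print(send_message(b"hello"))  # passes the hypothetical scan and is "sent"
```

The point of the sketch is simply that whether the scan is labeled “mandatory” or “voluntary,” it sits upstream of encryption, which is why critics argue the privacy guarantee is broken either way.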

Keep reading

The GRANITE ACT: Wyoming Bill Targets Foreign Censors With $10M Penalties

The first cannon shot in a new kind of free speech war came not from Washington or Silicon Valley, but from Cheyenne. Wyoming Representative Daniel Singh last week filed the Wyoming GRANITE Act.

The “Guaranteeing Rights Against Novel International Tyranny & Extortion Act,” if passed, would make Wyoming the first state to let American citizens sue foreign governments that try to police what they say online.

The bill traces back to a blog post by attorney Preston Byrne, the same lawyer representing 4chan and Kiwi Farms in their battles against censorship-driven British regulators.

Byrne’s idea was simple: if the UK’s Ofcom or Brazil’s Alexandre de Moraes wanted to fine or threaten Americans over online speech, the US should hit back hard.

Exactly one month after that idea appeared on his blog, it’s now inked into Wyoming legislative paperwork.

Byrne said:

“This bill has a long way to go until it becomes a law, it’s got to make it through legislative services, then to Committee, and then get introduced on the floor for a vote, but the important thing is, the journey of this concept, the idea of a foreign censorship shield law which also creates a civil cause of action against foreign censors, into law has begun.”

That “journey” may be the kind of slow procedural trudge that usually kills most ideas in committee, but the intent here is anything but mild. With censorship demands mounting from the UK, Brazil, Europe, and Australia, there is plenty of momentum behind the push to fight back.

“For the first time, state legislators are moving to implement rules that will allow U.S. citizens to strike back, hard, against foreign countries that want to interfere with Americans’ civil rights online,” Byrne continued.

The Act would let American citizens and companies sue foreign governments or their agents for trying to censor them, and, crucially, it strips away the usual escape hatch of sovereign immunity.

In its legal filing responding to the 4chan and Kiwi Farms lawsuit, Ofcom insisted it has “sovereign immunity” and told the court there were “substantial grounds” for throwing out the case on that basis.

The regulator’s lawyers framed Ofcom as a protected arm of the British state, immune from civil claims even when its decisions target a platform based entirely inside the United States.

Ofcom, in other words, treats “sovereign immunity” as ironclad while treating the First Amendment as though it does not exist at all.

Keep reading

Top MAGA Influencers Accidentally Unmasked as Foreign Trolls

Elon Musk’s social media site X has rolled out a new feature in an effort to increase transparency—and unwittingly revealed that many of the site’s top MAGA influencers are actually foreign actors.

The new “About This Account” feature, which became available to X users on Friday, lets others see where an account is based, when it joined the platform, how often it has changed its username, and how the X app was downloaded.

Upon rollout, rival factions on the combative platform began inspecting where their online adversaries were really based—with dozens of major MAGA and right-wing influencer accounts revealed to be based overseas.

“This is easily one of the greatest days on this platform,” wrote Democratic influencer Harry Sisson.

“Seeing all of these MAGA accounts get exposed as foreign actors trying to destroy the United States is a complete vindication of Democrats, like myself and many on here, who have been warning about this.”

Dozens of major accounts masquerading as “America First” or “MAGA” proponents have been identified as originating in places such as Russia, India, and Nigeria.

In one example, the account MAGANationX—with nearly 400,000 followers and a bio reading “Patriot Voice for We The People”—is actually based in Eastern Europe.

Keep reading

UK Government “Resist” Program Monitors Citizens’ Online Posts

Let’s begin with a simple question. What do you get when you cross a bloated PR department with a clipboard-wielding surveillance unit?

The answer, apparently, is the British Government Communications Service (GCS). Once a benign squad of slogan-crafting, policy-promoting clipboard enthusiasts, they’ve now evolved (or perhaps mutated) into what can only be described as a cross between MI5 and a neighborhood Reddit moderator with delusions of grandeur.

Yes, your friendly local bureaucrat is now scrolling through Facebook groups, lurking in comment sections, and watching your aunt’s status update about the “new hotel down the road filling up with strangers” like it’s a scene from Homeland. All in the name of “societal cohesion,” of course.

Once upon a time, the GCS churned out posters with perky slogans like Stay Alert or Get Boosted Now, like a government-powered BuzzFeed.

But now, under the updated “Resist” framework (yes, it’s actually called that), the GCS has been reprogrammed to patrol the internet for what they’re calling “high-risk narratives.”

Not terrorism. Not hacking. No, according to The Telegraph, the new public enemy is your neighbor questioning things like whether the council’s sudden housing development has anything to do with the 200 migrants housed in the local hotel.

It’s all in the manual: if your neighbor posts that “certain communities are getting priority housing while local families wait years,” this, apparently, is a red flag. An ideological IED. The sort of thing that could “deepen community divisions” and “create new tensions.”

This isn’t surveillance, we’re told. It’s “risk assessment.” Just a casual read-through of what that lady from your yoga class posted about a planning application. The framework warns of “local parental associations” and “concerned citizens” forming forums.

And why the sudden urgency? The new guidance came hot on the heels of a real incident: protests outside hotels housing asylum seekers, following the sexual assault of a 14-year-old girl by Hadush Kebatu, an Ethiopian migrant.

Now, instead of looking at how that tragedy happened or what policies allowed it, the government’s solution is to scan the reaction to it.

What we are witnessing is the rhetorical equivalent of chucking all dissent into a bin labelled “disinformation” and slamming the lid shut.

Keep reading

Court Filings Allege Meta Downplayed Risks to Children and Misled the Public

Sex trafficking on Meta platforms was both difficult to report and widely tolerated, according to a court filing unsealed Friday. In a plaintiffs’ brief filed as part of a major lawsuit against four social media companies, Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified that when she joined Meta in 2020 she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.” 

“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar reportedly testified, adding that “by any measure across the industry, [it was] a very, very high strike threshold.” The plaintiffs claim that this testimony is corroborated by internal company documentation.

The brief, filed by plaintiffs in the Northern District of California, alleges that Meta was aware of serious harms on its platform and engaged in a broad pattern of deceit to downplay risks to young users. According to the brief, Meta was aware that millions of adult strangers were contacting minors on its sites; that its products exacerbated mental health issues in teens; and that content related to eating disorders, suicide, and child sexual abuse was frequently detected, yet rarely removed. According to the brief, the company failed to disclose these harms to the public or to Congress, and refused to implement safety fixes that could have protected young users.  

“Meta has designed social media products and platforms that it is aware are addictive to kids, and they’re aware that those addictions lead to a whole host of serious mental health issues,” says Previn Warren, the co-lead attorney for the plaintiffs in the case. “Like tobacco, this is a situation where there are dangerous products that were marketed to kids,” Warren adds. “They did it anyway, because more usage meant more profits for the company.” 

The following allegations against Meta come from the brief filed in an unprecedented multidistrict litigation. More than 1,800 plaintiffs—including children and parents, school districts, and state attorneys general—have joined together in a suit alleging that the parent companies behind Instagram, TikTok, Snapchat, and YouTube “relentlessly pursued a strategy of growth at all costs, recklessly ignoring the impact of their products on children’s mental and physical health,” according to their master complaint. The newly unsealed allegations about Meta are just one small part of the sprawling suit. 

Keep reading

The Algorithm Accountability Act’s Threat to Free Speech

A new push in Congress is taking shape under the banner of “algorithmic accountability,” but its real effect would be to expand the government’s reach into online speech.

Senators John Curtis (R-UT) and Mark Kelly (D-AZ) have introduced the Algorithm Accountability Act, a bill that would rewrite Section 230 of the Communications Decency Act to remove liability protections from large, for-profit social media platforms whose recommendation systems are said to cause “harm.”

We obtained a copy of the bill for you here.

The proposal applies to any platform with more than a million users that relies on algorithms to sort or recommend content.

These companies would be required to meet a “duty of care” to prevent foreseeable bodily injury or death.

If a user or family member claims an algorithm contributed to such harm, the platform could be sued, losing the legal shield that has protected online speech for nearly three decades.

Although the bill’s authors describe it as a safety measure, the structure of the law would inevitably pressure platforms to suppress or downrank lawful content that might later be portrayed as dangerous.

Most major social networks already rely heavily on automated recommendation systems to organize and personalize information. Exposing them to lawsuits for what those systems display invites broad, quiet censorship under the guise of caution.

Keep reading

UK woman said she wanted to be tortured and killed on fetish site — her body was found in shallow grave in the US

The boyfriend of a woman who allegedly paid someone to torture and kill her said that she had been suffering from mental illness before the shocking events, according to Florida authorities.

Sonia Exelby was reported missing in October before police traced her to an Airbnb in Reddick and found her remains nearby.

Exelby boarded a flight to Florida and arrived on Oct. 10, according to an investigation by the Florida Department of Law Enforcement. The FDLE said that Exelby posted on a fetish website that she was willing to pay someone to torture and kill her.

A week after she arrived in Florida, her remains were found in a shallow grave.

Investigators linked her to a man named Dwain Hall, who had used her bank card and tried to use her credit cards.

When they interviewed him, he gave conflicting accounts of how they met.

Police said they gathered evidence pointing to Hall as Exelby’s alleged killer.

Authorities said Hall purchased rope and gun cleaner among other items at a Walmart in Gainesville on Oct. 10. After that purchase, he made a second purchase of a shovel. He then allegedly went to pick up Exelby at the airport, and they both went to an Airbnb that he had rented.

The next day, he charged $1,200 to Exelby’s bank card.

Authorities said he recorded a video of Exelby that showed her with cuts and bruises and in which she was asked to say that she had consented to being stabbed.

Exelby sent a message to a friend via the Discord app expressing regret.

“I’m so, so scared. I’m so broken and in so much pain. … I thought he’d do it quick and not give my mind time to stew,” she wrote.

On Oct. 14, Hall allegedly sent a package to a friend that authorities said contained a knife that had traces of Exelby’s blood. It also had a bracelet with DNA from both Exelby and Hall.

Keep reading

Google and US government battle over the future of internet advertising

Google will confront the U.S. government’s latest attempt to topple its internet empire in federal court on Friday as a judge considers how to prevent the abusive tactics that culminated in parts of its digital ad network being branded as an illegal monopoly.

The courtroom showdown in Alexandria, Virginia, will pit lawyers from Google and the U.S. Department of Justice against each other in closing proceedings focused on the complex technology that distributes millions of digital ads across the internet each day.

After a lengthy trial last year, U.S. District Judge Leonie Brinkema ruled in April that pieces of Google’s ad technology had been rigged in a way that made it an illegal monopoly. That set up another 11-day trial earlier this fall to help Brinkema determine how to remedy its anti-competitive practices.

Friday’s closing arguments will give both Google and the Justice Department a final chance to sway Brinkema before she issues a ruling that probably won’t come until early next year.

The Justice Department wants Brinkema to force Google to sell some of the ad technology that it has spent nearly 20 years assembling, contending a breakup is the only way to rein in a company that the agency’s lawyers condemned as a “recidivist monopolist” in filings leading up to Friday’s hearing.

The condemnation refers not only to Google’s practices in digital advertising but also to the illegal monopoly it has maintained through its dominant search engine. Federal prosecutors also sought a breakup in the search monopoly case, but the judge handling that issue rejected a proposal that would have required Google to sell its popular Chrome web browser.

Although Google is still being ordered to make reforms that it’s resisting, the outcome in the search monopoly case has been widely seen as a proverbial slap on the wrist. The belief that Google got off easy in the search case is the main reason the market value of its parent company Alphabet has surged by about $950 billion, or 37%, to nearly $3.5 trillion since U.S. District Judge Amit Mehta’s decision came out in early September.

That setback hasn’t discouraged the Justice Department from arguing for a breakup of an ad tech system that handles 55 million requests per second, according to estimates provided by Google in court filings.

The huge volume of digital ads priced and distributed through Google’s technology is one of the main reasons that the company’s lawyers contend it would be too risky to force a dismantling of the intricate system.

“This is technology that absolutely has to keep working for consumers,” Google argues in documents leading up to Friday’s hearing. The company’s lawyers blasted the Justice Department’s proposal as a package of “legally unprecedented and unsupported divestitures.”

Besides arguing that its own proposed changes will bring more price transparency and foster more competition, Google is also citing market upheaval triggered by artificial intelligence as another reason for the judge to proceed cautiously with her decision.

In his decision in the search monopoly case, Mehta reasoned that AI was already posing more competition to Google.

Keep reading

EU’s Weakened “Chat Control” Bill Still Poses Major Privacy and Surveillance Risks, Academics Warn

On November 19, the European Union stands poised to vote on one of the most consequential surveillance proposals in its digital history.

The legislation, framed as a measure to protect children online, has drawn fierce criticism from a bloc of senior European academics who argue that the proposal, even in its revised form, walks a perilous line. It invites mass surveillance under a veil of voluntarism and does so with little evidence that it will improve safety.

This latest draft of the so-called “Chat Control” law has already been softened from its original form. The Council of the European Union, facing mounting public backlash, stripped out provisions for mandatory on-device scanning of encrypted communications.

But for researchers closely following the legislation, the revised proposal is anything but a retreat.

“The proposal reinstates the option to analyze content beyond images and URLs – including text and video – and to detect newly generated CSAM,” reads the open letter, signed by 18 prominent academics from institutions such as ETH Zurich, KU Leuven, and the Max Planck Institute.

We obtained a copy of the letter for you here.

The argument, in essence, is that the Council’s latest version doesn’t eliminate the risk. It only rebrands it.

Keep reading

Texas: ID Will Be Linked to Every Google Search! New Law Requires Age Verification

Texas SB2420, known as the App Store Accountability Act, requires app stores to verify the age of users and obtain parental consent for those under 18. This law aims to enhance protections for minors using mobile applications and is set to take effect on January 1, 2026.

Texas has joined a multi-state crusade to enforce digital identification in America—marketed as a way to “protect children.”

Yet privacy experts say the real goal isn’t child protection—it’s control. 

Roblox insists its new “age estimation” system improves safety, but it relies on biometric and government data—creating the foundation for permanent digital tracking. With Texas now the fifth state to join the campaign, one question remains: how long before “protecting kids” becomes the excuse to monitor everyone?

From Reclaim the Net:

Texas Sues Roblox Over Child Safety Failures, Joining Multi-State Push for Digital ID

Texas has become the latest state to take legal action against Roblox, joining a growing number of attorneys general who accuse the gaming platform of failing to protect children.

The case also renews attention on the broader push for online age verification, a move that would lead to widespread digital ID requirements.

Attorney General Ken Paxton filed the lawsuit on November 6, alleging that Roblox allowed predators to exploit children while misleading families about safety protections.

We obtained a copy of the lawsuit for you here.

Keep reading