House Report: EU Pushed Tech Giants to Police US Speech

A newly released report from the House Judiciary Committee reveals a coordinated effort by European Union regulators to pressure major technology companies into enforcing censorship standards that extend far beyond Europe’s borders.

The findings, drawn from thousands of internal documents and communications, detail a long-running strategy to influence global content moderation policies through regulatory coercion and the threat of punishment under Europe’s Digital Services Act (DSA).

The Committee’s latest publication, “The EU Censorship Files, Part II,” coincides with a scheduled February 4 hearing titled “Europe’s Threat to American Speech and Innovation: Part II.”

We obtained a copy of the report for you here.

According to the materials, European officials have been meeting privately with social media companies since at least 2015 to “adapt their terms and conditions” to align with EU political priorities, including restricting certain kinds of lawful political expression in the United States.

Internal records from TikTok, then-Twitter, and other firms show that the Commission’s so-called “voluntary” DSA election guidelines were in fact treated as mandatory conditions for doing business in Europe.

Keep reading

France’s Raid on X Opens New Front in Europe’s War Over Online Speech

French prosecutors staged a morning raid at the Paris offices of social media platform X, part of a criminal investigation coordinated with Europol.

The operation, launched in 2025, targets allegations ranging from the distribution of sexual deepfakes to algorithmic manipulation.

The cybercrime division in Paris is exploring whether X’s automated systems may have been used in an “organized structure” to distort data or suppress information.

The alleged offenses are as follows:

  • Denial of crimes against humanity (Holocaust denial)
  • Fraudulent extraction of data from an automated data processing system by an organized group
  • Falsification of the operation of an automated data processing system by an organized group
  • Defamation of a person’s image (deepfakes of a sexual nature, including minors)
  • Operation of an illegal online platform by an organized group

Prosecutors have now summoned Elon Musk and former CEO Linda Yaccarino for questioning in April. “Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,” the office said.

Yaccarino, who left in mid-2025, might find herself reliving the company’s most volatile months, when X faced regulatory crossfire across the continent for refusing to comply with what it called political censorship demands.

The case actually began with two complaints in January 2025, including one from French lawmaker Eric Bothorel, who accused X of narrowing “diversity of voices and options” after Musk’s takeover.

Bothorel cited “personal interventions” in moderation decisions, a line that seemed more about ideology than algorithms.

As the investigation grew, prosecutors took interest in Grok, X’s AI system, which allegedly produced “Holocaust denial content” and “sexual deepfakes.” The Paris prosecutor’s office soon announced it was examining “biased algorithms.”

Musk called the whole affair a “politically-motivated criminal investigation,” and considering Europe’s recent appetite for speech regulation, it’s not a stretch to see why he’d think that.

Keep reading

Microsoft confirms it will give the FBI your Windows PC data encryption key if asked — you can thank Windows 11’s forced online accounts for that

Microsoft has confirmed in a statement to Forbes that the company will provide the FBI access to BitLocker encryption keys when presented with a valid legal order. These keys decrypt the data on a computer running Windows, giving law enforcement the means to unlock a device and read its contents.

The news comes as Forbes reports that, in early 2025, Microsoft gave the FBI the BitLocker encryption keys to access a device in Guam that law enforcement believed held “evidence that would help prove individuals handling the island’s Covid unemployment assistance program were part of a plot to steal funds.”

This was possible because the device in question had its BitLocker encryption key saved in the cloud. By default, Windows 11 forces the use of a Microsoft Account, and the OS will automatically tie your BitLocker encryption key to that online account so you can recover your data if you ever get locked out. This can be disabled, letting you save the key locally instead, but the default behavior is to store it in Microsoft’s cloud when setting up a PC with a Microsoft Account.
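For readers who want to check their own setup, here is a minimal sketch, not drawn from the Forbes report, that wraps Windows’ built-in manage-bde tool to list a volume’s key protectors. Python is used purely for illustration, the tool itself does the work, and it must be run from an elevated prompt:

    import subprocess

    def bitlocker_protectors(drive: str = "C:") -> str:
        """Return manage-bde's listing of key protectors for a volume.

        Wraps the built-in Windows manage-bde tool; run from an elevated
        (administrator) prompt. The listing shows which protector types
        exist locally, e.g. a numerical recovery password, which is the
        kind of key Windows can escrow to a Microsoft Account.
        """
        result = subprocess.run(
            ["manage-bde", "-protectors", "-get", drive],
            capture_output=True,
            text=True,
            check=True,  # raise if manage-bde reports an error
        )
        return result.stdout

    if __name__ == "__main__":
        print(bitlocker_protectors("C:"))

Note that this only shows which protectors exist locally; any recovery keys already uploaded are listed in the Microsoft Account’s online dashboard, which is also where they can be deleted.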

Keep reading

Google agrees to $68m settlement over claims it recorded private conversations

Google is facing a class action from users who claim the company was spying on them.

The company has agreed to pay $68 million to settle the lawsuit, which accuses it of violating users’ personal privacy.

Google’s virtual assistant, AI-powered software available on Android phones and tablets, has been accused of recording private conversations.

The software activates when users say a “wake word,” such as “Hey Google” or “Okay Google,” a verbal cue prompting the device to actively listen for commands.

The assistant is designed to switch from passive monitoring to active listening only when it hears a wake word.
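As a rough illustration of that passive-to-active handoff, here is a minimal sketch of the gating logic, with transcribed text standing in for audio; the names are illustrative, and real assistants run an on-device keyword-spotting model over an audio buffer rather than comparing strings:

    # Minimal sketch of wake-word gating; text stands in for audio.
    WAKE_WORDS = ("hey google", "okay google")

    def is_wake_word(utterance: str) -> bool:
        """Passive stage: compare a snippet against the wake-word list."""
        return utterance.strip().lower() in WAKE_WORDS

    def handle_command(command: str) -> None:
        """Active stage: act on a command captured after waking."""
        print(f"executing: {command}")

    def listen(stream) -> None:
        """Stay passive until a wake word arrives, treat the next
        utterance as a command, then drop back to passive monitoring."""
        awake = False
        for utterance in stream:
            if awake:
                handle_command(utterance)
                awake = False  # return to passive monitoring
            elif is_wake_word(utterance):
                awake = True   # switch to active listening

    if __name__ == "__main__":
        listen(["what time is it", "Hey Google", "set a timer"])

In these terms, the lawsuit’s allegation is that recording and processing happened in the passive branch, before any wake word was spoken.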

Keep reading

EU Targets X (Again) in Grok AI Probe

European regulators have launched a new investigation into Elon Musk’s X, focusing on alleged failures to control sexually explicit imagery generated by the company’s AI chatbot, Grok.

The case is being pursued under the European Union’s Digital Services Act (DSA), a law that grants the European Commission expansive powers to police digital platforms for potential “harms.”

In a statement, the Commission said, “The new investigation will assess whether the company properly assessed and mitigated risks associated with the deployment of Grok’s functionalities into X in the EU.”

The agency added that the review includes “risks related to the dissemination of illegal content in the EU, such as manipulated sexually explicit images, including content that may amount to child sexual abuse material.” Officials stated that these threats “seem to have materialized, exposing citizens in the EU to serious harm.”

Keep reading

Meta, TikTok, YouTube Face Trial Over Youth Addiction Claims

Three of the world’s biggest tech companies face a landmark trial in Los Angeles starting this week over claims that their platforms — Meta’s Instagram, ByteDance’s TikTok and Google’s YouTube — deliberately addict and harm children.

Jury selection starts this week in the Los Angeles County Superior Court. It’s the first time the companies will argue their case before a jury, and the outcome could have profound effects on their businesses and how they will handle children using their platforms.

The selection process is expected to take at least a few days, with 75 potential jurors questioned each day through at least Thursday. A fourth company named in the lawsuit, Snapchat parent company Snap Inc., settled the case last week for an undisclosed sum.

At the core of the trial is a 19-year-old identified only by the initials “KGM,” whose case could determine how thousands of similar lawsuits against social media companies will play out.

She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.

KGM claims that her use of social media from an early age addicted her to the technology and exacerbated depression and suicidal thoughts. Importantly, the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits.

This argument, if successful, could sidestep the companies’ First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms.

Keep reading

A Man Bought Meta’s AI Glasses, And Ended Up Wandering The Desert Searching For Aliens To Abduct Him

At age 50, Daniel was “on top of the world.”

“I turned 50, and it was the best year of my life,” he told Futurism in an interview. “It was like I finally figured out so many things: my career, my marriage, my kids, everything.”

It was early 2023, and Daniel — who asked to be identified by only his first name to protect his family’s privacy — and his wife of over three decades were empty nesters, looking ahead to the next chapter of their lives. They were living in an affluent Midwestern suburb, where they’d raised their four children. Daniel was an experienced software architect who held a leadership role at a large financial services company, where he’d worked for more than 20 years. In 2022, he leveraged his family’s finances to realize a passion project: a rustic resort in rural Utah, his favorite place in the world.

“All the kids were out of the house, and it was like, ‘oh my gosh, we’re still young. We’ve got this resort. I’ve got a good job. The best years of our lives are in front of us,’” Daniel recounted, sounding melancholy. “It was a wonderful time.”

That all changed after Daniel purchased a pair of Ray-Ban Meta smart glasses, the AI-infused eyewear that Meta CEO Mark Zuckerberg has made central to his vision for the future of AI and computing. Daniel says the glasses opened the door to a six-month delusional spiral that played out across Meta platforms through extensive interactions with the company’s AI, culminating in dangerous journeys into the desert to await alien visitors and a belief that he was tasked with ushering in a “new dawn” for humanity.

And though his delusions have since faded, his journey into a Meta AI-powered reality left his life in shambles — deep in debt, reeling from job loss, isolated from his family, and struggling with depression and suicidal thoughts.

“I’ve lost everything,” Daniel, now 52, told Futurism, his voice dripping with fatigue. “Everything.”

In many ways, Daniel was Meta’s target customer. He was an experienced tech worker and AI enthusiast who had worked on machine learning projects in the past and had purchased the Meta glasses because he was intrigued by their AI features.

“I used Meta [AI] because they were integrated with these glasses,” said Daniel. “And I could wear glasses — which I wore all the time — and then I could speak to AI whenever I wanted to. I could talk to my ear.”

Today, however, as he continues to recover from his mental health breakdown, Daniel describes himself as a “shell” of who he “used to be.”

“My kids don’t talk to me because I got weird. They don’t know how to talk to me,” said the father of four. “I was a cook… I played the guitar. I love music. I love learning.”

But now, he says, he’s “just trying to survive day to day.”

According to Daniel and multiple family members, the 52-year-old had no history of mania or psychosis before encountering Meta AI. He’d struggled with alcoholism, but quit drinking in early 2023, months before he purchased the Meta smart glasses.

Keep reading

The $134 Billion Betrayal: Inside Elon Musk’s Explosive Lawsuit With OpenAI

Elon Musk’s lawsuit against OpenAI and Microsoft has evolved into a high-stakes dispute over whether OpenAI stayed true to the mission it was founded on or quietly outgrew it while relying on that original promise.

Musk is seeking between $79 billion and $134 billion in damages, a figure derived from an expert valuation that treats his early funding and contributions as foundational to what OpenAI later became. While the number is enormous, the heart of the case is simpler: Musk argues he helped create and fund a nonprofit dedicated to AI for the public good, and that OpenAI later abandoned that commitment in a way that amounted to fraud.

According to Musk’s filings, his roughly $38 million in early funding was not just a donation but the financial backbone of OpenAI’s formative years, supplemented by recruiting help, strategic guidance, and credibility. His damages theory, prepared by financial economist C. Paul Wazzan, ties those early inputs to OpenAI’s current valuation of around $500 billion.

The claim is framed as disgorgement rather than repayment, with Musk arguing that the vast gains realized by OpenAI and Microsoft flowed from a nonprofit story that attracted support and trust, only to be discarded once the company reached scale, according to TechCrunch.

Much of the public attention has centered on internal documents uncovered during discovery, particularly private notes from OpenAI co-founder Greg Brockman in 2017.

One line has become central to Musk’s argument: “I cannot believe that we committed to non-profit if three months later we’re doing b-corp then it was a lie.”

Keep reading

“Emergency Intervention”: Trump To Cap Residential Electric Bills By Forcing Tech Giants To Pay For Soaring Power Costs

Back in August, when the American population was just waking up to the dire consequences that the exponentially growing army of data centers spawned across the country was having on residential electricity bills, we said that the chart of US CPI would soon become the most popular (not in a good way) chart in the financial realm.

One month later we added that it was only a matter of time before Trump, realizing that soaring electricity costs would almost certainly cost Republicans the midterms, would enforce price caps.

Turns out we were right.

And while Trump obviously cannot pull a communist rabbit out of his hat and centrally plan the entire US power grid, what he can do is precisely what he is about to announce.

According to Bloomberg, Trump and the governors of several US Northeastern states agreed to push for an emergency wholesale electricity auction that would compel technology companies to fund new power plants, effectively putting a cap on residential power prices at the expense of hyperscalers and data centers. Which, come to think of it, we also proposed back in October.

Keep reading

Florida’s “App Store Accountability Act” Would Deputize Big Tech to Verify User IDs for App Access

In Florida, Senator Alexis Calatayud has introduced a proposal that could quietly reshape how millions of Americans experience the digital world.

The App Store Accountability Act (SB 1722), presented as a safeguard for children, would require every app marketplace to identify users by age category, verify that data through “commercially available methods,” and secure recurring parental consent whenever an app’s policies change.

The legislation is ambitious. If enacted, it would take effect in July 2027, with enforcement beginning the following year.

Each violation could carry penalties of up to $7,500, along with injunctions and attorney fees.

On its surface, this is a regulatory measure aimed at strengthening parental oversight and protecting minors from online harms. Yet it runs up against a larger philosophical struggle over rights.

For much of modern political thought, the relationship between authority and liberty has revolved around who decides what constitutes protection. Florida’s proposal situates that question in the hands of private corporations. The bill effectively deputizes Big Tech app store operators, such as Apple and Google, as arbiters of digital identity, compelling them to verify user ages and manage parental permissions across every platform.

Millions of Floridians could be required to submit identifying details or official documents simply to access or update apps. This process, while justified as a measure of security, reintroduces the age-old tension between the protective role of the state and the autonomy of the citizen.

By making identity verification the gateway to digital access, the law risks creating an infrastructure in which surveillance becomes a condition of participation. It is a move from voluntary oversight to systemic authentication, merging the roles of government and corporation in a single mechanism of control.

The proposal may collide with long-established constitutional principles. One of the objections lies in the concept of prior restraint. By conditioning minors’ ability to download or continue using apps on verified Big Tech platforms, the bill requires permission before access, effectively placing all expressive content behind a regulatory gate.

Apps today are not mere entertainment; they are conduits of news, art, religion, and political discourse. Restricting that access risks transforming a parental safeguard into a systemic filter for speech.

The burden falls most heavily on minors, whose First Amendment protections are often ignored in public debate.

Keep reading