Congress Takes Another Step Toward Enabling Broad Internet Censorship

The House Energy and Commerce Committee on Tuesday advanced the TAKE IT DOWN Act (S. 146), a bill that seeks to speed up the removal of certain kinds of troubling online content. While the bill is meant to address a serious problem—the distribution of non-consensual intimate imagery (NCII)—the notice-and-takedown system it creates is an open invitation for powerful people to pressure websites into removing content they dislike.

As we’ve written before, while protecting victims of these heinous privacy invasions is a legitimate goal, good intentions alone are not enough to make good policy. 

This bill mandates a notice-and-takedown system that threatens free expression, user privacy, and due process, without meaningfully addressing the problem it claims to solve. The “takedown” provision applies to a much broader category of content—potentially any images involving intimate or sexual content at all—than the narrower NCII definitions found elsewhere in the bill. The bill contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. 

The legislation’s 48-hour takedown deadline means that online service providers, particularly smaller ones, will have to comply quickly to avoid legal risks. That time crunch will make it impossible for services to verify the content is in fact NCII. Instead, services will rely on automated filters—infamously blunt tools that frequently flag legal content, from fair-use commentary to news reporting.

Keep reading

Two Million User International Paedophile Child Porn Ring Busted by Europol

A massive international pedophile network spanning more than 30 countries was shut down, and dozens of arrests were made relating to child pornography, the Europol police agency announced on Wednesday.

The European Union Agency for Law Enforcement Cooperation said that the Darknet child porn platform “Kidflix” has been shut down under the direction of the Bavarian State Criminal Police Office in Germany.

Europol said that the platform had an estimated 1.8 million users throughout the world, with investigations being coordinated across 35 countries.

So far, 79 people have been arrested and nearly 1,400 further suspects have been identified, over one hundred of whom were located in Germany, broadcaster NTV reports.

The platform, which was created in 2021, is said to have been one of the largest of its kind in the world and the largest to date to have existed in Europe.

According to the investigators, the site had over 91,000 child porn videos before it was shut down, with an average of three and a half videos being uploaded to the site every hour. In exchange for paying a fee to the site, users were able to stream and upload videos of child sex abuse.

Keep reading

Which AI Chatbots Collect The Most Data About You?

The harbinger of the AI revolution, ChatGPT, remains the most popular AI tool on the market, with more than 200 million weekly active users.

But amongst all its competitors, which AI chatbots are collecting the most user data? And why does that matter?

Visual Capitalist’s Marcus Lu visualizes data from Surfshark, which identified the most popular AI chatbots and analyzed their privacy details on the Apple App Store.

In first place, Google’s Gemini (released March 2023) collects 22 different data points across 10 categories from its users.

Data collected ranges from general diagnostics (which all bots in this study collect) to access to contacts (which no other bot in the study collects).

xAI’s Grok (released November 2023) collects the fewest unique data points (7).

China’s DeepSeek (released January 2025) sits comfortably in the middle of the pack at 11 points.

The kind of data collected by each of these AI tools varies. All of them collect general diagnostics information. However, only Gemini and Perplexity look at purchases.

Meanwhile, all of the bots except Perplexity.ai and Grok collect user content.

Keep reading

Pentagon Kills Off HR IT Project After 780% Budget Overrun, Years Of Delays

After blowing deadlines and budgets for years, the Pentagon has finally pulled the plug on a troubled project to overhaul its outdated civilian HR IT systems.

Like many government projects before it, the US Defense Civilian Human Resources Management System (DCHRMS) promised big things when it was kicked off nearly a decade ago. According to a memo [PDF] signed by Secretary of Defense Pete Hegseth late last week, the program was intended to streamline a large portion of the DoD’s legacy HR IT systems, but it’s being axed after officials concluded pouring more funds into it would be “throwing more good taxpayer money after bad.”

DCHRMS started in 2018 with a planned development timeline of one year and a budget of $36 million, “but instead it’s taken eight years and is currently $280 million over budget – that’s 780 percent over budget,” Hegseth said in a video announcing the DCHRMS and other spending cuts. “We’re not doing that anymore.”
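The "780 percent" figure holds up as a quick sanity check, assuming the overrun is measured against the original $36 million budget:

```python
# Sanity check on the memo's "780 percent over budget" claim:
# $280M over an original $36M budget.
original_budget = 36_000_000  # planned DCHRMS budget, USD
overrun = 280_000_000         # reported amount over budget, USD

overrun_pct = overrun / original_budget * 100
print(f"{overrun_pct:.0f}% over budget")  # roughly 778%, i.e. about 780%
```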

That’s not to say the DoD is giving up on modernizing its civilian HR systems – the memo noted that the Pentagon still wants a new solution, with Hegseth directing officials to develop a fresh plan within 60 days to achieve the project’s original goals.

While the headline item in the memo is the cancellation of DCHRMS, Hegseth ordered cuts to additional programs, contracts, and grants too.

The memo mentioned the cancellation of more than $360 million in grant programs “in areas of Diversity, Equity, and Inclusion and related social programs, climate change, social science, COVID-19 pandemic response” and the like, stating these efforts were not aligned with the DoD’s current priorities.

We’ve reached out to the Defense Department to get a more complete list of the programs being terminated, but Hegseth did single out a couple in the video. In particular, he pointed to a $6 million grant for decarbonizing the emissions from US Navy ships and a $9 million university grant to develop “equitable AI and machine learning models.”

“I need lethal machine learning models,” Hegseth said. “Not equitable machine learning models.” 

The memo also directed the cancellation of $30 million in contracts with Gartner and McKinsey for analysis products and what Hegseth described as “unused licenses” from “external consulting services.” The move echoes the ongoing scrutiny of federal consulting contracts, such as reviews of deals involving Accenture, IBM, and Deloitte.

Keep reading

“MyTerms” wants to become the new way we dictate our privacy on the web

Author, journalist, and long-time Internet freedom advocate Doc Searls wants us to stop asking for privacy from websites, services, and AI and start telling these things what we will and will not accept.

Draft standard IEEE P7012, which Searls has nicknamed “MyTerms” (akin to “Wi-Fi”), is formally titled the Draft Standard for Machine Readable Personal Privacy Terms. Searls writes on his blog that MyTerms has been in the works since 2017, and a fully readable version should be ready later this year, following conference presentations at VRM Day and the Internet Identity Workshop (IIW).

The big concept is that you are the first party to each contract you have with online things. The websites, apps, or services you visit are the second party. You arrive with either a pre-set contract you prefer on your device or pick one when you arrive, and it tells the site what information you will and will not offer up for access to content or services. Presumably, a site can work with that contract, modify itself to meet the terms, or perhaps tell you it can’t do that.

The easiest way to set your standards, at first, would be to pick something from Customer Commons, which is modeled on the copyleft concept of Creative Commons. Right now, there’s just one example up: #NoStalking, which allows for ads but not with data usable for “targeted advertising or tracking beyond the primary service for which you provided it.” Ad blocking is not addressed in Searls’ post or IEEE summary, but it would presumably exist outside MyTerms—even if MyTerms seems to want to reduce the need for ad blocking.

Searls and his group are putting up the standards and letting the browsers, extension-makers, website managers, mobile platforms, and other pieces of the tech stack craft the tools. So long as the human is the first party to a contract, the digital thing is the second, a “disinterested non-profit” provides the roster of agreements, and both sides keep records of what they agreed to, the function can take whatever shape the Internet decides.
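To make the concept concrete, here is a minimal sketch of what a machine-readable term like #NoStalking might look like in practice. The field names and the matching logic are illustrative assumptions; IEEE P7012 has not yet published a final schema.

```python
# A hypothetical machine-readable MyTerms record. The user (first party)
# presents this to a site (second party), which either agrees or declines.
NO_STALKING = {
    "term_id": "#NoStalking",
    "first_party": "user",   # the person sets the terms
    "second_party": "site",  # the website/app/service agrees (or not)
    "allows": ["contextual_ads", "session_cookies"],
    "forbids": ["targeted_advertising", "cross_site_tracking"],
}

def site_accepts(term: dict, site_practices: set[str]) -> bool:
    """Return True if a site's data practices are compatible with the user's term."""
    return not (site_practices & set(term["forbids"]))

# A site serving only contextual ads can agree to #NoStalking;
# one doing cross-site tracking cannot.
print(site_accepts(NO_STALKING, {"contextual_ads"}))       # True
print(site_accepts(NO_STALKING, {"cross_site_tracking"}))  # False
```

Under the standard as described, both sides would also keep a record of which term was agreed to, so that the contract is auditable later.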

Keep reading

Rosie O’Donnell floats bizarre conspiracy theory about Elon Musk after fleeing US following election

Rosie O’Donnell has suggested the 2024 presidential election may have been stolen during her first Irish television interview since leaving the United States for Ireland.

The famously outspoken comedian and former talk show host spoke with Patrick Kielty on Friday’s The Late Late Show on the Irish channel RTE One, during which she implicated tech billionaire Elon Musk without naming him directly.

‘I question why for the first time in American history, a president has won every swing state and his largest donor was a man who owns and runs the internet,’ O’Donnell said.  

‘I would hope that would be investigated,’ she added. ‘Whether or not it was an anomaly… or something else that happened on election night in America.’

O’Donnell told Kielty how she found Trump’s win strange because then-Vice President Kamala Harris was ‘filling up stadiums with people who supported her and Donald Trump was not able to do that.’ 

Now self-exiled in Ireland, O’Donnell explained how she fled the US following what she calls a ‘terrifying’ sequence of political events. 

‘The president of the United States has it out for me,’ she said, referring to Donald Trump, with whom she’s had a public feud spanning decades. 

She explained how her critique of Trump’s bankruptcies and sexual assault allegations on TV show The View had made her a long-term target.

Keep reading

Google imports ex-Israeli spies who automated Gaza genocide

On Mar. 18, Google bought Israeli cybersecurity company Wiz for $32 billion. The acquisition will mark the single largest transfer of former Israeli spies into an American company. This is because Wiz is run and staffed by dozens of ex-members of Unit 8200, the specialist cyber-spying arm of the IDF.

Unit 8200 wrote the programming and designed the algorithms that automated the genocide of Gaza and was also responsible for the pager attack in Lebanon. Now the men and women who helped design the architecture of apartheid are being swallowed by the US tech-surveillance complex.

The identity of the Wiz founders, all former Unit 8200, is fairly well-documented (by Israeli media at least). One of the founders, Ami Luttwak, boasts on his LinkedIn profile that he led a “mission critical R&D team” for Unit 8200 which won them the “Israel Defence Award 2012.” Less well-documented, however, is the fact that a huge chunk of the Wiz workforce, from office managers to software engineers to product analysts, is also former Unit 8200. Following my investigation earlier this year into the former Unit 8200 members working in key AI positions for tech companies, I have identified nearly fifty Wiz employees as ex-Unit 8200 operatives.

Keep reading

Top Streamer Says Violent Threats on Reddit Will Lead to Real ID For the Internet

Top streamer Asmongold predicts that the sheer amount of violent threats being posted on Reddit will grease the skids for an Internet ID system that will end online anonymity.

Since Donald Trump took office, the far-left website has seen a massive uptick in threats of violence targeting Trump, people in his administration and conservatives in general.

Last month, Reddit temporarily banned multiple pages after users began posting threats aimed at staff working for the Department of Government Efficiency (DOGE).

“Time to hunt,” one user posted, while another asserted, “Lets drag their necks up by a large coil up rope.”

According to popular streamer Asmongold, the deluge of threats will provide a pretext for the government to mandate tying a person’s real identification to their online user accounts.

Keep reading

FCC Chair Brendan Carr Wants More Control Over Social Media

In his short time as chairman of the Federal Communications Commission (FCC), Brendan Carr has been no stranger to using his power against disfavored entities. The chairman’s targets have primarily included broadcast networks and social media companies.

Recently, Carr revealed a fundamental misunderstanding about one of the most important laws governing the internet and social media.

On February 27, digital news outlet Semafor held a summit in Washington, D.C., titled “Innovating to Restore Trust in News,” which culminated in a conversation between Semafor editor-in-chief Ben Smith and Carr.

“The social media companies got more power over more speech than any institution in history” in recent years, Carr told Smith. “And I think they’re abusing that power. I think it’s appropriate for the FCC to say, let’s take another look at Section 230.”

Section 230 of the Communications Act effectively protects websites and platforms from civil liability for content posted by others. It also protects a platform’s decision to moderate content it finds “objectionable, whether or not such material is constitutionally protected.”

Like many conservatives, Carr looks askance at social media’s latitude to moderate content with what he perceives as impunity. “The FCC should issue an order that interprets Section 230 in a way that eliminates the expansive, non-textual immunities that courts have read into the statute” and “remind courts how the various portions of Section 230 operate,” he wrote in a chapter of The Heritage Foundation’s Mandate for Leadership, more popularly known as Project 2025.

Keep reading

US Tech companies, Including X and Google, Threaten To Leave Starmer’s Leftist Britain Over the Cost of Funding Online ‘Safety’ Censorship

As the ‘Trump Tornado’ is forcefully rearranging things all over Europe, there’s a justified expectation about the Donald J. Trump administration’s reaction to the ill-disguised push for censorship in the upcoming ‘Online Safety Act.’

As of now, tech companies including Elon Musk’s X and Google have warned that businesses could leave PM Keir Starmer’s leftist Britain over the cost of funding the online safety crackdown.

Google said the fees charged to internet companies will drive services out of the UK, while X says it will ‘disincentivize’ global companies from entering the British market.

The Telegraph reported:

“Ofcom [British Office of Communications] has laid out plans to raise around £70m a year to cover the costs of enforcing the new laws, which take effect in the coming months. They will require tech companies to introduce age checks and limit exposure to harmful content. The bill would almost entirely be borne by the largest five providers – believed to be Meta, Google, Microsoft, Apple and TikTok – [that] would face charges equal to 0.02pc of global revenue.”

Keep reading