House Bill Cuts Federal Funds for Online Censorship

A new House appropriations bill does something unusual for Washington legislation. It tells federal agencies they cannot spend money pressuring platforms, advertisers, or foreign governments to silence speech that Americans are legally allowed to make.

H.R. 8595, the national security and State Department appropriations bill, runs hundreds of pages, and buried throughout are provisions that would shut off federal funding for a wide range of speech-suppression activities.

The restrictions cover direct platform pressure, ad boycott campaigns aimed at US media companies, blacklists, and cooperation with foreign censorship regimes that target American tech firms.

We obtained a copy of the bill for you here.

The headline provision is on page 252. It bars the use of any appropriated funds to “deplatform, deboost, demonetize, suppress, or otherwise penalize” online speech, social media activity, or news outlets producing content that would be lawful under US law. The language is deliberately wide and it catches the obvious things, like government agencies asking a platform to take a post down, and the less obvious ones, like funding research projects that pressure advertisers to abandon publishers.

That second category has been doing real damage for years: brand “safety” programs, hate speech classifiers built with federal grant money, and “disinformation” tracking outfits that exist primarily to attach scary labels to inconvenient reporting.

Federal money cannot flow to programs designed to impose “legal, regulatory, financial, reputational, commercial, or political costs” on American tech companies, social media platforms, online intermediaries, or digital publishers for hosting First Amendment protected speech.

There is also a prohibition on funding work that pushes foreign governments to do the censoring instead. American agencies cannot use these appropriations to support foreign laws, regulations, codes, or enforcement mechanisms that punish US platforms for carrying speech that would be lawful here.

The whole architecture of routing American speech restrictions through Brussels or London or Canberra, then importing the results back home through global compliance regimes, runs into a federal funding wall.

Blacklists are out. Censorship cooperation with supranational bodies is out. Inducing advertisers to “cut off, reduce, redirect, or otherwise interfere with advertising, sponsorship, payment, or other revenue on the basis of lawful online speech” is out.

Judge Blocks Enforcement Of Colorado’s New DEI-Driven AI Law

A federal judge has temporarily blocked the State of Colorado from enforcing a first-of-its-kind artificial intelligence law.

Colorado is prohibited from taking enforcement actions on alleged violations of the law occurring up to 14 days after the court issues a ruling on the company xAI’s motion for a preliminary injunction, Judge Cyrus Y. Chung ruled on April 27.

The Department of Justice had said the state law, which was set to go into effect on June 30, 2026, would have required AI developers and deployers to “discriminate based on race, sex, & religion—all in the name of DEI.”

DEI is an acronym for “diversity, equity, and inclusion.”

Brett Shumate, an assistant attorney general for the DOJ’s Civil Division, called the suspension a “huge win for the American people.”

“Colorado immediately caved and agreed not to enforce the law against ANY AI company,” Shumate wrote in an X post on May 1.

Gov. Jared Polis (D-Colo.) signed the Consumer Protections for Artificial Intelligence law in May 2024 and issued a statement sharing his reservations about how it could impact Colorado.

In the statement, he urged the General Assembly to revise and delay implementing it until January 2027.

“I am concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike,” Polis wrote.

However, the legislation was not revised; instead, its effective date was pushed to June 30, 2026. That prompted tech billionaire Elon Musk’s company xAI, which created Grok, to sue the state on April 9.

The unrevised legislation was months away from going into effect when xAI asked the court to block its enforcement.

The Justice Department joined as a plaintiff alongside xAI on April 24, marking the first time the DOJ had stepped into a case challenging a state-level AI law.

Both alleged that Colorado’s law would have caused unconstitutional “algorithmic discrimination” and asked a court to block it from being enforced.

“Laws that require AI companies to infect their products with woke DEI ideology are illegal,” said Assistant Attorney General Harmeet K. Dhillon, who leads the Justice Department’s Civil Rights Division.

“The Justice Department will not stand on the sidelines while states such as Colorado coerce our nation’s technological innovators into producing harmful products that advance a radical, far-left worldview at odds with the Constitution.”

The Epoch Times has reached out to Polis and Colorado Attorney General Phil Weiser for comment.

Meta raises specter of shutting down service to New Mexico in legal clash over child safety

Meta is raising the prospect of shutting down its social media services in New Mexico in response to a push by state prosecutors for fundamental changes to the company’s platforms, including Instagram, to protect the mental health and safety of children.

The possibility emerged amid legal gamesmanship in the run-up to a bench trial next week on allegations that Meta poses a public nuisance. It’s the second phase of a case that already resulted in $375 million in civil penalties after a jury determined that Meta knowingly harmed children’s mental health and concealed what it knew about child sexual exploitation on its platforms.

Prosecutors are asking the court to order a series of changes to child accounts on social media aimed at reining in addictive features, improving age verification and preventing child sexual exploitation through default privacy settings and closer oversight.

Meta executives have emphasized that the company continuously improves child safety and addresses compulsive social media use. The company says it’s being singled out among the hundreds of apps that teens use.

In a court filing unsealed Thursday, Meta said it was infeasible for the company to meet a proposed requirement of 99% accuracy in verifying that child users are at least 13 years old, among other demands.

“As a practical matter, this requirement effectively requires Meta to shut down its services — for all users in the state — or else comply with impossible obligations,” Meta said in the filing.

Such a shutdown across New Mexico’s population of 2.1 million residents could silence personal communication on Meta’s immensely popular platforms, which also include Facebook and WhatsApp, and disrupt their use for commercial advertising.

By withdrawing from New Mexico, Meta would satisfy any concerns about harm to children, but the message could appear intentionally hostile and might lead to unintended consequences, said Eric Goldman, codirector of the High Tech Law Institute at Santa Clara University School of Law in California.

Senate Panel Backs GUARD Act, AI Age Verification Bill

The Senate Judiciary Committee voted 22-0 on Thursday to advance the GUARD Act, a bill that would require AI chatbot companies to verify the age of every American who wants to use them.

The legislation, sponsored by Senator Josh Hawley of Missouri, sailed through committee, with its author celebrating the outcome in a post on X.

“My bill to stop AI from telling kids to kill themselves just passed out of committee UNANIMOUSLY,” Hawley wrote on X. “No amount of profit justifies the DESTRUCTION of our children. Time to bring this bill to the Senate floor.”

As usual, the framing is about children but the result is age verification/digital ID for everyone.

Under the bill’s text, a “reasonable age verification measure” cannot mean a checkbox or a self-entered birth date. It cannot rely on whether a user shares an IP address or hardware identifier with someone already verified as an adult.

We obtained a copy of the bill for you here.

What it can mean, the legislation makes clear, is a government ID upload, a facial scan, or a financial record tied to your legal name. Every user of every covered chatbot would need to hand one of those over before being allowed in.

The bill defines an “artificial intelligence chatbot” as any service that “produces new expressive content or responses not fully predetermined by the developer or operator” and “accepts open-ended natural-language or multimodal user input.”

That language reaches well beyond the companion apps the press conference focused on. It covers customer service bots, search assistants powered by AI, homework helpers, and the general-purpose tools millions of adults already use without proving who they are.

EU Pushes Age Verification App for All States

The European Commission wants every member state running age verification by the end of 2026, and it wants them running its own app to do it. A recommendation adopted Wednesday tells the bloc’s twenty-seven governments to accelerate deployment of the EU Age Verification App and have it available to citizens before the year is out, regardless of the unease some capitals have expressed about adopting Brussels’ code over their own.

The push lands months after security researchers tore through the same app the Commission is now urging governments to ship. In April, consultant Paul Moore bypassed the app’s protections in under two minutes, demonstrating that the rate-limiting controls were stored in an editable file, biometric authentication could be turned off with a simple configuration change, and sensitive credentials were accessible without secure hardware protection.

The Commission patched the headline issues. It is now telling governments the app is ready for production.
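
Flaws like the editable rate-limit file illustrate a general rule: a security control enforced on the client, from a file the user’s own device can rewrite, is advisory at best. Here is a minimal Python sketch of the pattern (purely illustrative; the file name and field are invented, and this is not the EU app’s actual code):

```python
import json
import os
import tempfile

def load_attempt_limit(config_path):
    # The "security policy" lives in a plain file on the user's device.
    with open(config_path) as f:
        return json.load(f)["max_pin_attempts"]

def pin_attempt_allowed(attempts_so_far, config_path):
    # Client-side check: it trusts whatever the local file says.
    return attempts_so_far < load_attempt_limit(config_path)

config = os.path.join(tempfile.mkdtemp(), "security.json")

# The app ships with a sane limit of 3 attempts...
with open(config, "w") as f:
    json.dump({"max_pin_attempts": 3}, f)
print(pin_attempt_allowed(5, config))  # False: limit enforced

# ...until anyone with file access simply rewrites the limit.
with open(config, "w") as f:
    json.dump({"max_pin_attempts": 999999}, f)
print(pin_attempt_allowed(5, config))  # True: limit bypassed
```

The only durable fix is to enforce limits server-side or in tamper-resistant hardware, which is why “patched the headline issues” and “ready for production” are not the same claim.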

Henna Virkkunen, the Commission’s Executive Vice-President for Tech Sovereignty, Security and Democracy, framed the recommendation as the next step toward shielding minors online. “Effective and privacy-preserving age verification is the next piece of the puzzle that we are getting closer to completing, as we work towards an online space where our children are safe and empowered to use positively and responsibly without restricting the rights of adults,” she said.

Gone in 9 Seconds: AI Coding Agent Deletes Entire Company Database and All Backups

The founder of a software company has issued a public warning after an AI coding assistant erased his company’s entire production database and all backups in just nine seconds.

Tom’s Hardware reports that Jer Crane, founder of PocketOS, a platform serving car rental businesses, experienced what he describes as catastrophic failures when an AI coding agent deleted critical company data that took months to accumulate. The incident occurred when Cursor, an AI coding tool powered by Anthropic’s Claude Opus 4.6, was performing what should have been a routine task in the company’s staging environment.

According to Crane’s detailed account posted on X, the AI agent encountered an obstacle and independently decided to resolve the issue by deleting the production database in Railway through an API call. Railway is the cloud infrastructure provider used by PocketOS, generally considered more user-friendly than major alternatives like Amazon Web Services. The entire deletion process took only nine seconds to complete.

The situation escalated beyond a simple database deletion due to Railway’s infrastructure design. The cloud provider’s system stored backups on the same volume as the source data, meaning when the AI agent deleted the primary database, all backup copies were simultaneously erased. This combination of the AI agent’s unauthorized action and the infrastructure provider’s architecture created what Crane characterizes as a recipe for disaster.

When Crane questioned the AI agent about its actions, he received a response that revealed the extent of the failure. The agent’s explanation began with an acknowledgment of poor judgment. According to the verbatim response Crane shared, the AI stated it had guessed that deleting a staging volume through the API would only affect the staging environment without verifying this assumption or consulting Railway’s documentation on how volumes function across different environments.

The AI agent’s confession continued with an admission of multiple violations of its operational principles. It acknowledged running a destructive action without authorization, failing to understand the consequences before executing the command, and not reading the relevant documentation about Railway’s volume behavior across environments. The agent recognized it should have either asked for permission first or found a non-destructive solution to the credential mismatch it encountered.
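
The failure pattern, a destructive call executed on a guess with no human in the loop, is exactly what agent harnesses try to gate. A minimal Python sketch of such a gate (illustrative only; the action names and dispatcher are invented, not Railway’s or Cursor’s API):

```python
# Hypothetical dispatcher for an agent's infrastructure calls: destructive
# operations are refused unless a human has explicitly signed off.
DESTRUCTIVE_ACTIONS = {"delete_volume", "delete_database", "drop_table"}

class UnauthorizedDestructiveAction(Exception):
    """Raised when an agent attempts a destructive call without approval."""

def execute_action(action, target, *, human_approved=False):
    """Run an infrastructure API action, gating anything irreversible."""
    if action in DESTRUCTIVE_ACTIONS and not human_approved:
        raise UnauthorizedDestructiveAction(
            f"'{action}' on '{target}' requires explicit human approval"
        )
    return f"{action} executed on {target}"

# Read-only calls pass through freely...
print(execute_action("list_volumes", "staging"))

# ...but the guess-and-delete path the agent took is stopped cold.
try:
    execute_action("delete_volume", "production-db")
except UnauthorizedDestructiveAction as exc:
    print("blocked:", exc)
```

A gate like this is a complement to sound infrastructure, not a substitute: as Crane’s account shows, backups stored on the same volume as the source data vanish with it regardless of who issued the delete.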

Turkey to Ban Anonymous VPNs

Turkey is moving to make anonymous VPN use illegal, and Proton VPN signups in the country have doubled as word spreads. The Turkish government’s plan, reported by local outlet Yeni Şafak, would outlaw unlicensed VPN services and require any approved provider to log what users do and turn those records over to Turkish authorities on request.

A VPN that logs and reports isn’t really a VPN. It’s a second surveillance pipe pointed at the same people the government already watches.

Officials describe the measures as part of a package aimed at protecting children after school attacks in Şanlıurfa and Kahramanmaraş, with attackers reportedly drawn to violent mobile games. Packaged alongside the VPN clampdown are parent-controlled “child SIM” lines and a cap on how many mobile numbers a single person can register.

The child-protection wrapper is the sweetener, because the actual infrastructure being built (licensed VPN providers that log and disclose) reaches every adult in the country, not just children playing shooters on their phones.

Google is Tracking Your Life – Photo Cloud Feeding AI System

There was a time when your photo album sat in a drawer, private, personal, and disconnected from the outside world. Privacy no longer exists in the modern world; personal data has become the key tool of control, and now Google is taking the next step by turning your memories into fuel for artificial intelligence.

According to a recent report, Google has rolled out a major update to its Photos platform that allows its AI system, Gemini, to scan your entire photo library to build what it calls “Personal Intelligence.” What this means in plain English is that your images are no longer just stored; they are analyzed and integrated into a broader behavioral profile. Google openly admits the system can use actual images of you and your loved ones to generate AI content, eliminating the need for users to manually upload reference photos.

This is not a minor tweak to a photo app but a structural shift in how data is harvested and understood. Every image you have ever taken now becomes part of a living model that attempts to understand who you are, who you associate with, where you go, and how you live your life. What was once private has become something continuously processed and categorized.

Beyond Cookies – How To Stop The Invisible Browser Fingerprint That Tracks You Everywhere

For years, the privacy advice was simple: clear your cookies, use incognito mode, or click “Reject All” on those annoying consent banners. That advice is now outdated.

A groundbreaking study published last year has delivered the first peer-reviewed proof that the $600 billion online advertising industry has moved on from cookies. The new tracking method is called browser fingerprinting, and it works even if you never log in, never accept cookies, and have legally opted out under privacy laws.

Researchers from Texas A&M University and Johns Hopkins University built a tool named FPTrace to measure exactly how this works in the wild. They simulated real user sessions, systematically altered browser fingerprints, and watched what happened to the ads being served and the bids advertisers placed in real time. The results were clear: when the fingerprint changed, the price advertisers were willing to pay to target that “user” changed with it, and tracking signals dropped. The system was actively using the fingerprint to follow people across sessions and sites.

And crucially, this happened even in tests where cookies were fully deleted and users were in “opt-out” mode under GDPR and CCPA rules. The law’s exit door for cookies does not cover fingerprinting.

How Browser Fingerprinting Works (No Permission Required)

Every time your browser loads a page, it leaks dozens of tiny, seemingly harmless signals:

  • Screen resolution and color depth
  • Installed fonts
  • GPU model and graphics capabilities
  • Audio processing signatures
  • Browser version, plugins, and language settings
  • Time zone
  • Canvas rendering differences (how it draws hidden shapes)
  • Whether you run an ad blocker
  • Even battery level in some cases

Alone, each detail is common. Combined, they create a unique “fingerprint” that can identify your device with startling precision. No cookies. No login. No pop-up asking for consent. Just loading the page is enough.
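
The arithmetic behind that precision is simple: concatenate the signals, hash the result, and you have a stable identifier; change any one signal and the identifier changes. A toy Python sketch of the combination step (the signal values here are invented; real fingerprinting scripts collect these in the browser via JavaScript APIs):

```python
import hashlib

def fingerprint(signals: dict) -> str:
    """Combine individually common browser signals into one stable ID.
    Each value alone matches millions of devices; the combination is
    close to unique."""
    # Sort keys so the same device always yields the same string.
    canonical = "|".join(f"{k}={signals[k]}" for k in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device_a = {
    "screen": "2560x1440x24",
    "fonts": "Arial,Calibri,Helvetica",
    "gpu": "NVIDIA RTX 3060",
    "timezone": "America/Denver",
    "canvas": "af91c2",          # digest of a hidden-shape render
    "ad_blocker": True,
}

# An otherwise identical device that differs by a single installed font.
device_b = dict(device_a, fonts="Arial,Calibri,Helvetica,Comic Sans MS")

print(fingerprint(device_a))
print(fingerprint(device_b))
print(fingerprint(device_a) == fingerprint(device_b))  # False: distinct IDs
```

Because the identifier is derived rather than stored, deleting cookies does nothing: the same device re-derives the same ID on the next page load.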

Pokémon Go — The Largest Mapped Data Collection Ploy in History

When Pokémon Go was released, it appeared to be a harmless game encouraging people to go outside and explore. Beneath that surface, however, was a far more sophisticated system that directed human movement into the specific locations where data was needed most, turning millions of users into mobile data collectors. The placement of Pokémon, Gyms, and PokéStops was not random but concentrated around landmarks, businesses, and dense urban corridors. Players were repeatedly funneled into high-value mapping zones, often returning to the same locations over and over, capturing them from multiple angles, at different times of day, and under varying conditions, which is exactly how high-quality spatial datasets are built.

For many reading this, particularly those who never played the game, it is important to understand what this actually looked like in practice. This was not some passive background process: it required people to physically walk through neighborhoods, parks, shopping districts, and even residential areas while holding up their phones, actively scanning their surroundings to “catch” virtual creatures that did not exist. The game encouraged users to point their cameras at real-world objects, move around them, and interact with the environment. The system was capturing detailed imagery not just of public landmarks but also of surrounding areas, including streets, entryways, and private homes, all embedded in what appeared to be a simple entertainment experience.
