Data centers encounter local pushback amid major growth

At least 16 data center projects, worth a combined $64 billion, have been blocked or delayed as local opposition mounts to the developments, according to a new study.

Research collected by Data Center Watch shows that residents and politicians across seven states have stopped or stalled the data center projects.

In Arizona’s West Valley, development company Tract withdrew plans for a $14 billion project after city officials declined to approve required rezoning. Tract eventually announced a similar project in Buckeye, Ariz., where the development is proceeding.

In Peculiar, Mo., and Chesterton, Ind., residents and local officials also blocked data center developments worth billions.

In total, the study found that six data center developments have been fully blocked since May 2024. The backlash has also delayed 10 other data centers, including two from Amazon.

Nine of the documented data center blockages and delays have occurred in Virginia, the world’s unofficial data center capital, according to the research firm.

The study’s authors also found growing bipartisan aversion to the behemoth data center projects: about 55 percent of Republicans and 45 percent of Democrats in districts with large data center projects have taken public positions against the developments, according to the study.

“This cross-party resistance defies expectations and marks a rare area of bipartisan alignment in infrastructure politics,” the authors wrote.

The report also found that data centers were becoming an intensifying issue in local politics. As energy costs soar and affordability takes center stage, it’s likely more candidates and elected officials will take sides on the projects.

Keep reading

CEO of Palantir Says He Spends a Large Amount of Time Talking to Nazis

While you were busy wasting your time listening to podcasts and doomscrolling on your phone, one of America’s leading AI overlords was educating himself by talking to Nazis.

This was the startling admission made by Alex Karp, cofounder and CEO of the software company Palantir, a company that’s come under increasingly heavy scrutiny for its growing role as a provider of AI-powered surveillance technology to the military and government.

In an interview with podcaster Molly O’Shea published this week, Karp, who has Jewish heritage, was discussing German culture and his time in the country before going on a tangent about how outrageous it is that people online “laud the Nazis.” Then he fessed up to something even more eyebrow-raising.

“I spend a lot of time talking to Nazis,” Karp said, implying that this is an ongoing pastime of his. “Like, real Nazis,” he emphasized.

Karp explained that it was his way of “understanding what made them tick,” before making an ironic observation.

“Part of the crazy thing about people who laud the Nazis nowadays is there’s not a single Nazi that would ever have included them in their movement and would have shipped them off to the camps quicker maybe than they shipped me off to the camps!” he chuckled.

He then pulled off the smoothest segue of all time.

“Uh, but, um, and uh, and it’s like, it’s uh but” — the interview mercifully jump-cuts — “the thing that’s crazy unique about America,” Karp began to muse.

Beyond his role as Palantir’s head honcho, Karp is known for his philosophical ramblings, his “eccentric” personality, and his affinity for German culture. He has a PhD in philosophy from Goethe University Frankfurt, and draws on his background to defend Western values — in particular American ones — as especially good for the world.

This year, for instance, he published a book about how the US needed to embrace having the most technologically advanced weapons possible to preserve its dominance. An excerpt of that book was published online as an essay under the headline “We Need a New Manhattan Project.”

Keep reading

A MASSIVE 97% of Listeners Fooled: Can YOU Tell If This Hit Song Is Human… or AI?

In an era where the boundaries between the synthetic and the sentient blur with alarming rapidity, a sobering revelation has emerged from the sonic realm: humanity’s capacity to discern the hand of the artist from the algorithm has all but evaporated. 

A recent survey commissioned by the French streaming platform Deezer, polling 9,000 individuals across eight nations, laid bare this disquieting truth. 

Respondents were tasked with listening to two clips of music wholly conjured by artificial intelligence and one crafted by human hands; astonishingly, 97 percent failed to differentiate between them. 

Deezer’s chief executive, Alexis Lanternier, observed, “The survey results clearly show that people care about music and want to know if they’re listening to AI or human made tracks or not.” 

Keep reading

The Disguised Return of the EU’s Private Message Scanning Plot

A major political confrontation over online privacy is approaching as European governments prepare to decide on “Chat Control 2.0,” the European Commission’s revised proposal for monitoring private digital communications.

The plan, which could be endorsed behind closed doors, has drawn urgent warnings from Dr. Patrick Breyer, a jurist and former Member of the European Parliament, who says the draft conceals sweeping new surveillance powers beneath misleading language about “risk mitigation” and “child protection.”

In a release sent to Reclaim The Net, Breyer, long a defender of digital freedom, argues that the Commission has quietly reintroduced compulsory scanning of private messages after it was previously rejected.

He describes the move as a “deceptive sleight of hand,” insisting that it transforms a supposedly voluntary framework into a system that could compel all chat, email, and messaging providers to monitor users.

“This is a political deception of the highest order,” Breyer said.

“Following loud public protests, several member states, including Germany, the Netherlands, Poland, and Austria, said ‘No’ to indiscriminate Chat Control. Now it’s coming back through the back door disguised, more dangerous, and more comprehensive than ever. The public is being played for fools.”

Under the new text, providers would be obliged to take “all appropriate risk mitigation measures” to prevent abuse on their platforms. While the Commission presents this as a flexible safety requirement, Breyer insists it is a loophole that could justify forcing companies to scan every private message, including those protected by end-to-end encryption.

“The loophole renders the much-praised removal of detection orders worthless and negates their supposed voluntary nature,” he said.

He warns that it could even lead to the introduction of “client-side scanning,” where users’ devices themselves perform surveillance before messages are sent.

Unlike the current temporary exemption known as “Chat Control 1.0,” which allows voluntary scanning of photos and videos, the new draft would open the door to text and metadata analysis. Algorithms and artificial intelligence could be deployed to monitor conversations and flag “suspicious” content.

Keep reading

Why We Have a Surveillance State

It is the inevitable consequence of our prevailing governing philosophy.

“Gentlemen do not read each other’s mail.” Henry Stimson, Secretary of State, 1929

I was upbraided recently by a dear friend for my frequent praise of outcast investor Peter Thiel over Thiel’s involvement with big data company Palantir. He forwarded me a Bloomberg article titled “Peter Thiel’s data-mining company is using War on Terror tools to track American citizens,” adding: “Really scary. Not good for democracy; a better version of the Stasi’s filing system and way cheaper and more efficient.”

Increasingly, we live under the kind of comprehensive surveillance predicted by science fiction writers. But Palantir is just an arms merchant, not the architect of our brave new world. Like gun manufacturers, its products can be used for good or evil. I have always believed that moral responsibility lies with the wielder of weapons, not the manufacturer. (This is often expressed as “Guns don’t kill people, people kill people.”)

Why Peter Thiel chose to become an arms merchant rather than invest his considerable talents and fortune elsewhere is a fair question, given his libertarian leanings. I have no insight into the answer. I would guess that he founded Palantir as an act of patriotism after 9/11, and that it metastasized by following the money, cash being the mother’s milk of the state, something the celebrated Alexander Hamilton deeply understood.

Surveillance Is Not the Problem, but It Is a Symptom

The real threat to the republic, however, lies not in the weapons available but in the unlimited and unaccountable bureaucracy in Washington that deploys them, both at home and abroad. Having broken free of constitutional constraints, America’s political class now directs an all-powerful state that naturally adopts every tool technology has to offer.

Because our prevailing governing philosophy acknowledges no limits to the doing of good or the thwarting of evil, any means necessary may be employed as long as worthy ends can be plausibly asserted. Evil must be discouraged, taxed, or outlawed; good must be encouraged, subsidized, or made mandatory. This progressive government mission must be implemented in the public square, in the marketplace, in our educational institutions, around the world, and in our homes until all forms of social injustice are eliminated.

Keep reading

ChatGPT’s Use Of Song Lyrics Violates Copyright, Munich Court Finds

  • Judges found GEMA’s claims valid, ordering OpenAI to cease reproduction and provide damages and disclosure.
  • The court said GPT-4 and GPT-4o “memorized” lyrics, amounting to reproduction under EU copyright rules.
  • The decision, not yet final, could set a major European precedent on AI training data.

Germany’s national music rights organization secured a partial but decisive win against OpenAI after a Munich court ruled that ChatGPT’s underlying models unlawfully reproduced copyrighted German song lyrics.

The ruling orders OpenAI to cease reproduction, disclose relevant training details, and compensate rights holders.

It is not yet final, and OpenAI may appeal.

If upheld, the decision could reshape how AI companies source and license creative material in Europe, as regulators weigh broader obligations for model transparency and training-data provenance.

The case marks the first time a European court has found that a large language model violated copyright by memorizing protected works.

In its decision, the 42nd Civil Chamber of the Munich I Regional Court said that GPT-4 and GPT-4o contained “reproducible” lyrics from nine well-known songs, including Kristina Bach’s “Atemlos” and Rolf Zuckowski’s “Wie schön, dass du geboren bist.”

The court held that such memorization constitutes a “fixation” of the original works in the model’s parameters, satisfying the legal definition of reproduction under Article 2 of the EU InfoSoc Directive and Germany’s Copyright Act.

“At least in individual cases, when prompted accordingly, the model produces an output whose content is at least partially identical to content from the earlier training dataset,” a translated copy of the written judgement provided by the Munich court to Decrypt reads.

The model “generates a sequence of tokens that appears statistically plausible because, for example, it was contained in the training process in a particularly stable or frequently recurring form,” the court wrote, adding that because this “token sequence appeared on a large number of publicly accessible websites” it meant that it was “included in the training dataset more than once.”

In the pleadings, GEMA argued that the model’s output lyrics were almost verbatim when prompted, proving that OpenAI’s systems had retained and reproduced the works.

OpenAI countered that its models do not store training data directly and that any output results from user prompts, not from deliberate copying.

The company also invoked text-and-data-mining exceptions, which allow temporary reproductions for analytical use.

“We disagree with the ruling and are considering next steps,” a spokesperson for OpenAI told Decrypt. “The decision is for a limited set of lyrics and does not impact the millions of people, businesses, and developers in Germany that use our technology every day.” 

Keep reading

German States Expand Police Powers to Train AI Surveillance Systems with Personal Data

Several German states are preparing to widen police powers by allowing personal data to be used in the training of surveillance technologies.

North Rhine-Westphalia and Baden-Württemberg are introducing legislative changes that would let police feed identifiable information such as names and facial images into commercial AI systems.

Both drafts permit this even when anonymization or pseudonymization is bypassed because the police consider it “impossible” or achievable only with “disproportionate effort.”

Hamburg adopted similar rules earlier this year, and its example appears to have encouraged other regions to follow. These developments together mark a clear move toward normalizing the use of personal information as fuel for surveillance algorithms.

The chain reaction began in Bavaria, where police in early 2024 tested Palantir’s surveillance software with real personal data.

The experiment drew objections from the state’s data protection authority, but still served as a model for others.

Hamburg used the same idea in January 2025 to amend its laws, granting permission to train “learning IT systems” on data from bystanders. Now Baden-Württemberg and North Rhine-Westphalia plan to adopt nearly identical language.

In North Rhine-Westphalia, police would be allowed to upload clear identifiers such as names or faces into commercial systems like Palantir’s and to refine behavioral or facial recognition programs with real, unaltered data.

Bettina Gayk, the state’s data protection officer, warned that “the proposed regulation addresses significant constitutional concerns.”

She argued that using data from people listed as victims or complainants was excessive and added that “products from commercial providers are improved with the help of state-collected and stored data,” which she found unacceptable.

The state government has embedded this expansion of surveillance powers into a broader revision of the Police Act, a change initially required by the Federal Constitutional Court.

The court had previously ruled that long-term video monitoring under the existing law violated the Basic Law.

Instead of narrowing these powers, the new draft introduces a clause allowing police to “develop, review, change or train IT products” with personal data.

This wording effectively enables continued use of Palantir’s data analysis platform while avoiding the constitutional limits the court demanded.

Across North Rhine-Westphalia, Baden-Württemberg, and Hamburg, the outcome will be similar: personal data can be used for training as soon as anonymization is judged to be disproportionately difficult, with the assessment left to police discretion.

Gayk has urged that the use of non-anonymized data be prohibited entirely, warning that the exceptions are written so broadly that “they will ultimately not lead to any restrictions in practice.”

Baden-Württemberg’s green-black coalition plans to pass its bill this week.

Keep reading

ICE to Deploy Palantir’s ImmigrationOS AI to Track Migrants’ Movements

U.S. Immigration and Customs Enforcement is moving forward with ImmigrationOS, a new AI system built by Palantir Technologies to give officers near real-time visibility into immigrants’ movements and sharpen enforcement priorities nationwide. The agency awarded Palantir a $30 million contract in early 2025, with a working prototype due by September 25, 2025, and an initial operating period of at least two years, according to agency planning documents and contract disclosures. ICE frames the system as a way to speed removals of people already prioritized for enforcement, better track self-deportations, and coordinate federal data that now sits in disconnected silos.

What ImmigrationOS is meant to do

ImmigrationOS is designed to pull together a wide range of government-held records to sort, flag, and route cases to officers in the field. ICE officials say the tool will help them focus on individuals linked to transnational criminal organizations, violent offenders, documented gang members, and those who have overstayed visas.

The system is also built to register when people leave the United States on their own, so field offices can avoid wasted detention and travel costs on cases that no longer require action. While the agency describes the platform as a needed modernization step, civil liberties groups warn that an AI-driven system with sweeping data inputs risks mistakes that could touch the lives of lawful residents and even U.S. citizens.

Keep reading

Billboard Country Chart-Topper Is Completely AI-Generated for the First Time

An AI-generated country song has claimed the top spot on Billboard’s Country Digital Song Sales chart.

“Walk My Walk,” by Breaking Rust, rocketed to No. 1 last week, becoming the first fully AI-produced track to achieve such a feat in the country genre.

According to Billboard, Breaking Rust is an artificial intelligence creation that burst onto the scene via Instagram in mid-October, complete with an AI-generated cowboy avatar and folksy video clips.

The act’s AI slop, including the chart-topper, features bland, interchangeable lyrics that critics say scream “machine-made”: hollow verses about walking life’s path without a shred of authentic twang or soul.

Breaking Rust debuted at No. 9 on Billboard’s Emerging Artists chart and racked up 1.6 million official U.S. streams. Songwriting credits go to Aubierre Rivaldo Taylor, but it is actually a faceless algorithm behind it all.

Keep reading

Cloned Foods Are Coming To A Grocer Near You

Cloned-animal foods could soon enter Canada’s food supply with no labels identifying them as cloned and no warning to consumers – a move that risks eroding public trust.

According to Health Canada’s own consultation documents, Ottawa intends to remove foods derived from cloned animals from its “novel foods” list, a designation that triggers a pre-market safety review and public disclosure. Health Canada defines “novel foods” as products that haven’t been commonly consumed before or that use new production processes requiring extra safety checks.

From a regulatory standpoint, this looks like an efficiency measure. From a consumer-trust standpoint, it’s a miscalculation.

Health Canada argues that cloned animals and their offspring are indistinguishable from conventional ones, so they should be treated the same. The problem isn’t the science—it’s the silence. Canadians are not being told that the rules for a controversial technology are about to change. No press release, no public statement, just a quiet update on a government website most citizens will never read.

Cloning in agriculture means producing an exact genetic copy of an animal, usually for breeding purposes. The clones themselves rarely end up on dinner plates, but their offspring do, showing up in everyday products such as beef, milk, or pork. The benefits are indirect: steadier production, fewer losses from disease, or more uniform quality.

But consumers see no gain at checkout. Cloning is expensive and brings no visible improvement in taste, nutrition, or price. Shoppers could one day buy steak from the offspring of a cloned cow without any way of knowing, and still pay the same, if not more, for it.

Without labels identifying the cloned origin, potential efficiencies stay hidden upstream. When products born of new technologies are mixed in with conventional ones, consumers lose their ability to differentiate, reward innovation, or make an informed choice. In the end, the industry keeps the savings while shoppers see none.

And it isn’t only shoppers who are left in the dark. Exporters could soon pay the price too. Canada exports billions in beef and pork annually, including to the EU. If cloned-origin products enter the supply chain without labelling, Canadian exporters could face additional scrutiny or restrictions in markets where cloning is not accepted. A regulatory shortcut at home could quickly become a market barrier abroad.

This debate comes at a time when public trust in Canada’s food system is already fragile. A 2023 survey by the Canadian Centre for Food Integrity found that only 36 percent of Canadians believe the food industry is “heading in the right direction,” and fewer than half trust government regulators to be transparent. Inserting cloned foods quietly into the supply without disclosure would only deepen that skepticism.

This is exactly how Canada became trapped in the endless genetically modified organism (GMO) debate. Two decades ago, regulators and companies quietly introduced a complex technology without giving consumers the chance to understand it. By denying transparency, they also denied trust. The result was years of confusion, suspicion, and polarization that persist today.

Keep reading