Data centers encounter local pushback amid major growth

At least 16 data center projects, worth a combined $64 billion, have been blocked or delayed as local opposition to the developments mounts, according to a new study.

Research collected by Data Center Watch shows that residents and politicians across seven states have stopped or stalled the data center projects.

In Arizona’s West Valley, development company Tract withdrew plans for a $14 billion project after city officials declined to approve required rezoning. Tract eventually announced a similar project in Buckeye, Ariz., where the development is proceeding.

In Peculiar, Mo., and Chesterton, Ind., residents and local officials also blocked data center developments worth billions.

In total, the study found that six data center developments have been fully blocked since May 2024. The backlash has also delayed 10 other data centers, including two from Amazon.

Nine of the documented data center blockages and delays have occurred in Virginia, the world’s unofficial data center capital, according to the research firm.

The study’s authors also found growing bipartisan aversion to the behemoth data center projects: of the politicians who have taken public positions against developments in their districts, about 55 percent were Republicans and 45 percent were Democrats, according to the study.

“This cross-party resistance defies expectations and marks a rare area of bipartisan alignment in infrastructure politics,” the authors wrote.

The report also found that data centers were becoming an intensifying issue in local politics. As energy costs soar and affordability takes center stage, it’s likely more candidates and elected officials will take sides on the projects.

Keep reading

CEO of Palantir Says He Spends a Large Amount of Time Talking to Nazis

While you were busy wasting your time listening to podcasts and doomscrolling on your phone, one of America’s leading AI overlords was educating himself by talking to Nazis.

This was the startling admission made by Alex Karp, cofounder and CEO of the software company Palantir, a company that’s come under increasingly heavy scrutiny for its growing role as a provider of AI-powered surveillance technology to the military and government.

In an interview with podcaster Molly O’Shea published this week, Karp, who has Jewish heritage, was discussing German culture and his time in the country before going on a tangent about how outrageous it is that people online “laud the Nazis.” Then he fessed up to something even more eyebrow-raising.

“I spend a lot of time talking to Nazis,” Karp said, implying that this is an ongoing pastime of his. “Like, real Nazis,” he emphasized.

Karp explained that it was his way of “understanding what made them tick,” before making an ironic observation.

“Part of the crazy thing about people who laud the Nazis nowadays is there’s not a single Nazi that would ever have included them in their movement and would have shipped them off to the camps quicker maybe than they shipped me off to the camps!” he chuckled.

He then pulled off the smoothest segue of all time.

“Uh, but, um, and uh, and it’s like, it’s uh but” — the interview mercifully jump-cuts — “the thing that’s crazy unique about America,” Karp began to muse.

Beyond his role as Palantir’s head honcho, Karp is known for his philosophical ramblings, his “eccentric” personality, and his affinity for German culture. He has a PhD in philosophy from Goethe University Frankfurt, and draws on his background to defend Western values — in particular American ones — as especially good for the world.

This year, for instance, he published a book about how the US needed to embrace having the most technologically advanced weapons possible to preserve its dominance. An excerpt of that book was published online as an essay under the headline “We Need a New Manhattan Project.”

Keep reading

A MASSIVE 97% of Listeners Fooled: Can YOU Tell If This Hit Song Is Human… or AI?

In an era where the boundaries between the synthetic and the sentient blur with alarming rapidity, a sobering revelation has emerged from the sonic realm: humanity’s capacity to discern the hand of the artist from the algorithm has all but evaporated. 

A recent survey commissioned by the French streaming platform Deezer, polling 9,000 individuals across eight nations, laid bare this disquieting truth. 

Respondents were tasked with listening to two clips of music wholly conjured by artificial intelligence and one crafted by human hands; astonishingly, 97 percent failed to differentiate between them. 

Deezer’s chief executive, Alexis Lanternier, observed, “The survey results clearly show that people care about music and want to know if they’re listening to AI or human made tracks or not.” 

Keep reading

ChatGPT’s Use Of Song Lyrics Violates Copyright, Munich Court Finds

  • Judges found GEMA’s claims valid, ordering OpenAI to cease reproduction and provide damages and disclosure.
  • The court said GPT-4 and GPT-4o “memorized” lyrics, amounting to reproduction under EU copyright rules.
  • The decision, not yet final, could set a major European precedent on AI training data.

Germany’s national music rights organization secured a partial but decisive win against OpenAI after a Munich court ruled that ChatGPT’s underlying models unlawfully reproduced copyrighted German song lyrics.

The ruling orders OpenAI to cease reproduction, disclose relevant training details, and compensate rights holders.

It is not yet final, and OpenAI may appeal.

If upheld, the decision could reshape how AI companies source and license creative material in Europe, as regulators weigh broader obligations for model transparency and training-data provenance.

The case marks the first time a European court has found that a large language model violated copyright by memorizing protected works.

In its decision, the 42nd Civil Chamber of the Munich I Regional Court said that GPT-4 and GPT-4o contained “reproducible” lyrics from nine well-known songs, including Kristina Bach’s “Atemlos” and Rolf Zuckowski’s “Wie schön, dass du geboren bist.”

The court held that such memorization constitutes a “fixation” of the original works in the model’s parameters, satisfying the legal definition of reproduction under Article 2 of the EU InfoSoc Directive and Germany’s Copyright Act.

“At least in individual cases, when prompted accordingly, the model produces an output whose content is at least partially identical to content from the earlier training dataset,” a translated copy of the written judgement provided by the Munich court to Decrypt reads.

The model “generates a sequence of tokens that appears statistically plausible because, for example, it was contained in the training process in a particularly stable or frequently recurring form,” the court wrote, adding that because this “token sequence appeared on a large number of publicly accessible websites” it meant that it was “included in the training dataset more than once.”
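In practical terms, the court is describing verbatim memorization, which is usually demonstrated by comparing a model’s output against the protected text. The sketch below is a minimal illustration of that kind of comparison, not anything from the case file: the helper names, the placeholder prompt, and the 50 percent threshold are assumptions.

```python
# Minimal sketch (illustrative only): checking whether a model's output
# reproduces a protected lyric near-verbatim. The model call, lyric text,
# and the 0.5 threshold are assumptions, not details from the Munich ruling.
from difflib import SequenceMatcher

def longest_common_block(output: str, reference: str) -> str:
    """Longest contiguous passage shared by the model output and the lyric."""
    m = SequenceMatcher(None, output, reference, autojunk=False)
    match = m.find_longest_match(0, len(output), 0, len(reference))
    return output[match.a : match.a + match.size]

def looks_memorized(output: str, reference: str, threshold: float = 0.5) -> bool:
    """Flag the output if the shared passage covers most of the lyric."""
    shared = longest_common_block(output, reference)
    return len(shared) / max(len(reference), 1) >= threshold

# Hypothetical usage; ask_model() stands in for any chat-completion call.
# reply = ask_model("Gib mir den Text von 'Atemlos'.")
# print(looks_memorized(reply, atemlos_lyrics))
```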

In the pleadings, GEMA argued that the model’s output lyrics were almost verbatim when prompted, proving that OpenAI’s systems had retained and reproduced the works.

OpenAI countered that its models do not store training data directly and that any output results from user prompts, not from deliberate copying.

The company also invoked text-and-data-mining exceptions, which allow temporary reproductions for analytical use.

“We disagree with the ruling and are considering next steps,” a spokesperson for OpenAI told Decrypt. “The decision is for a limited set of lyrics and does not impact the millions of people, businesses, and developers in Germany that use our technology every day.” 

Keep reading

German States Expand Police Powers to Train AI Surveillance Systems with Personal Data

Several German states are preparing to widen police powers by allowing personal data to be used in the training of surveillance technologies.

North Rhine-Westphalia and Baden-Württemberg are introducing legislative changes that would let police feed identifiable information such as names and facial images into commercial AI systems.

Both drafts permit this even when anonymization or pseudonymization is bypassed because the police consider it “impossible” or achievable only with “disproportionate effort.”

Hamburg adopted similar rules earlier this year, and its example appears to have encouraged other regions to follow. These developments together mark a clear move toward normalizing the use of personal information as fuel for surveillance algorithms.

The chain reaction began in Bavaria, where police in early 2024 tested Palantir’s surveillance software with real personal data.

The experiment drew objections from the state’s data protection authority, but still served as a model for others.

Hamburg used the same idea in January 2025 to amend its laws, granting permission to train “learning IT systems” on data from bystanders. Now Baden-Württemberg and North Rhine-Westphalia plan to adopt nearly identical language.

In North Rhine-Westphalia, police would be allowed to upload clear identifiers such as names or faces into commercial systems like Palantir’s and to refine behavioral or facial recognition programs with real, unaltered data.

Bettina Gayk, the state’s data protection officer, warned that the proposed regulation raises significant constitutional concerns.

She argued that using data from people listed as victims or complainants was excessive and added that “products from commercial providers are improved with the help of state-collected and stored data,” which she found unacceptable.

The state government has embedded this expansion of surveillance powers into a broader revision of the Police Act, a change initially required by the Federal Constitutional Court.

The court had previously ruled that long-term video monitoring under the existing law violated the Basic Law.

Instead of narrowing these powers, the new draft introduces a clause allowing police to “develop, review, change or train IT products” with personal data.

This wording effectively enables continued use of Palantir’s data analysis platform while avoiding the constitutional limits the court demanded.

Across North Rhine-Westphalia, Baden-Württemberg, and Hamburg, the outcome will be similar: personal data can be used for training as soon as anonymization is judged to be disproportionately difficult, with the assessment left to police discretion.

Gayk has urged that the use of non-anonymized data be prohibited entirely, warning that the exceptions are written so broadly that “they will ultimately not lead to any restrictions in practice.”

Baden-Württemberg’s green-black coalition plans to pass its bill this week.

Keep reading

ICE to Deploy Palantir’s ImmigrationOS AI to Track Migrants’ Movements

U.S. Immigration and Customs Enforcement is moving forward with ImmigrationOS, a new AI system built by Palantir Technologies to give officers near real-time visibility into immigrants’ movements and sharpen enforcement priorities nationwide. The agency awarded Palantir a $30 million contract in early 2025, with a working prototype due by September 25, 2025 and an initial operating period of at least two years, according to agency planning documents and contract disclosures. ICE frames the system as a way to speed removals of people already prioritized for enforcement, better track self-deportations, and coordinate federal data that now sits in disconnected silos.

What ImmigrationOS is meant to do

ImmigrationOS is designed to pull together a wide range of government-held records to sort, flag, and route cases to officers in the field. ICE officials say the tool will help them focus on individuals linked to transnational criminal organizations, violent offenders, documented gang members, and those who have overstayed visas.

The system is also built to register when people leave the United States on their own, so field offices can avoid wasted detention and travel costs on cases that no longer require action. While the agency describes the platform as a needed modernization step, civil liberties groups warn that an AI-driven system with sweeping data inputs risks mistakes that could touch the lives of lawful residents and even U.S. citizens.

Keep reading

Billboard Country Chart Topper is Completely AI Generated for the First Time

An AI-generated country song has claimed the top spot on Billboard’s Country Digital Song Sales chart.

“Walk My Walk,” by Breaking Rust, rocketed to No. 1 last week, becoming the first fully AI-produced track to achieve such a feat in the country genre.

According to Billboard, Breaking Rust is an artificial intelligence creation that burst onto the scene via Instagram in mid-October, complete with an AI-generated cowboy avatar and folksy video clips.

The band’s AI slop, including the chart-topper, features bland, interchangeable lyrics that critics say scream “machine-made”: hollow verses about walking life’s path without a shred of authentic twang or soul.

Breaking Rust debuted at No. 9 on Billboard’s Emerging Artists chart and racked up 1.6 million official U.S. streams. Songwriting credits go to Aubierre Rivaldo Taylor, but it is actually a faceless algorithm behind it all.

Keep reading

Europe’s AI Surveillance Race Against the Rules That Protect Privacy

Europol’s deputy executive director, Jürgen Ebner, is urging the European Union to relax its own legal restraints on artificial intelligence, arguing that the rules designed to protect citizens are slowing down police innovation.

He wants a system that allows the agency to skip lengthy rights checks in “emergency” situations and move ahead with new AI tools before the usual data protection reviews are complete.

Ebner told POLITICO that criminals are having “the time of their life” with “their malicious deployment of AI,” while Europol faces months of delay because of required legal assessments.

Those safeguards, which include evaluations under the GDPR and the EU’s AI Act, exist to stop unaccountable automation from taking hold in law enforcement.

Yet Ebner’s comments reveal a growing tendency inside the agency to treat those same checks as obstacles rather than vital protections.

He said the current process can take up to eight months and claimed that speeding it up could save lives.

But an “emergency” fast track for AI surveillance carries an obvious danger. Once such shortcuts are created, the idea of what qualifies as an emergency can expand quickly.

Technologies that monitor, predict, or profile people can then slip beyond their intended use, leaving citizens exposed to automated systems that make judgments about them without transparency or recourse.

Over the past decade, Europol has steadily increased its technical capabilities, investing heavily in large-scale data analysis and decryption tools.

These systems are presented as essential for fighting cross-border crime, yet they also consolidate immense quantities of personal data under centralized control.

Without strong oversight, such tools can move from focused investigation toward widespread data collection and surveillance.

European Commission President Ursula von der Leyen has already promised to double Europol’s workforce and turn it into a central hub for combating organized crime, “navigating constantly between the physical and digital worlds.”

A legislative proposal to strengthen the agency’s powers is planned for 2026, raising questions about how much authority and access to data Europol will ultimately gain.

Ebner, who oversees governance at Europol, said that “almost all investigations” now involve the internet and added that the cost of technology has become a “massive burden on law enforcement agencies.”

He urged stronger collaboration with private technology firms, stating that “artificial intelligence is extremely costly. Legal decryption platforms are costly. The same is to be foreseen already for quantum computing.”

Keep reading

OpenAI asked Trump administration to expand Chips Act tax credit to cover data centers

A recent letter from OpenAI reveals more details about how the company hopes the federal government can support its ambitious plans for data center construction.

The letter — from OpenAI’s chief global affairs officer Chris Lehane and addressed to the White House’s director of science and technology policy Michael Kratsios — argued that the government should consider expanding the Advanced Manufacturing Investment Credit (AMIC) beyond semiconductor fabrication to cover electrical grid components, AI servers, and AI data centers.

The AMIC is a 35% tax credit that was included in the Biden administration’s Chips Act.

“Broadening coverage of the AMIC will lower the effective cost of capital, de-risk early investment, and unlock private capital to help alleviate bottlenecks and accelerate the AI build in the US,” Lehane wrote.
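For a rough sense of what that means in practice (hypothetical figures, not from the letter): under a 35% credit, a $10 billion qualifying data center build would generate a $3.5 billion credit, cutting the effective outlay to $6.5 billion.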

OpenAI’s letter also called for the government to accelerate the permitting and environmental review process for these projects, and to create a strategic reserve of raw materials — such as copper, aluminum, and processed rare earth minerals — needed to build AI infrastructure.

The company first published its letter on October 27, but it didn’t get much press attention until this week, when comments by OpenAI executives prompted broader discussion about what the company wants from the Trump administration.

At a Wall Street Journal event on Wednesday, CFO Sarah Friar said the government should “backstop” OpenAI’s infrastructure loans, though she later posted on LinkedIn that she misspoke: “OpenAI is not seeking a government backstop for our infrastructure commitments. I used the word ‘backstop’ and it muddied the point.”

Keep reading

How Americans Want AI To Support Them

Three years after the release of ChatGPT in November 2022, there’s little remaining doubt that artificial intelligence, or AI as it’s usually referred to, will change our lives in many ways.

In some ways, it already has.

For example, people are searching the web differently, often relying on AI summaries instead of scrolling through and clicking on search results. That is, if they even use a search engine anymore and don’t just ask a large language model like ChatGPT instead.

The potential for AI tools to make our everyday lives a little easier here and there is virtually limitless, but what do people actually want AI to help them with?

Statista’s Felix Richter reports that, according to a recent survey by Statista Consumer Insights, 3 in 10 Americans want AI to act as a personal assistant to them, something it is already capable of doing.

Keep reading