German States Expand Police Powers to Train AI Surveillance Systems with Personal Data

Several German states are preparing to widen police powers by allowing personal data to be used in the training of surveillance technologies.

North Rhine-Westphalia and Baden-Württemberg are introducing legislative changes that would let police feed identifiable information such as names and facial images into commercial AI systems.

Both drafts permit this even when anonymization or pseudonymization is bypassed because the police consider it “impossible” or achievable only with “disproportionate effort.”

Hamburg adopted similar rules earlier this year, and its example appears to have encouraged other regions to follow. These developments together mark a clear move toward normalizing the use of personal information as fuel for surveillance algorithms.

The chain reaction began in Bavaria, where police in early 2024 tested Palantir’s surveillance software with real personal data.

The experiment drew objections from the state’s data protection authority, but still served as a model for others.

Hamburg used the same idea in January 2025 to amend its laws, granting permission to train “learning IT systems” on data from bystanders. Now Baden-Württemberg and North Rhine-Westphalia plan to adopt nearly identical language.

In North Rhine-Westphalia, police would be allowed to upload clear identifiers such as names or faces into commercial systems like Palantir’s and to refine behavioral or facial recognition programs with real, unaltered data.

Bettina Gayk, the state’s data protection officer, warned that the proposed regulation raises “significant constitutional concerns.”

She argued that using data from people listed as victims or complainants was excessive and added that “products from commercial providers are improved with the help of state-collected and stored data,” which she found unacceptable.

The state government has embedded this expansion of surveillance powers into a broader revision of the Police Act, a change initially required by the Federal Constitutional Court.

The court had previously ruled that long-term video monitoring under the existing law violated the Basic Law.

Instead of narrowing these powers, the new draft introduces a clause allowing police to “develop, review, change or train IT products” with personal data.

This wording effectively enables continued use of Palantir’s data analysis platform while avoiding the constitutional limits the court demanded.

Across North Rhine-Westphalia, Baden-Württemberg, and Hamburg, the outcome will be similar: personal data can be used for training as soon as anonymization is judged to be disproportionately difficult, with the assessment left to police discretion.

Gayk has urged that the use of non-anonymized data be prohibited entirely, warning that the exceptions are written so broadly that “they will ultimately not lead to any restrictions in practice.”

Baden-Württemberg’s green-black coalition plans to pass its bill this week.

Keep reading

ICE to Deploy Palantir’s ImmigrationOS AI to Track Migrants’ Movements

U.S. Immigration and Customs Enforcement is moving forward with ImmigrationOS, a new AI system built by Palantir Technologies to give officers near real-time visibility into immigrants’ movements and sharpen enforcement priorities nationwide. The agency awarded Palantir a $30 million contract in early 2025, with a working prototype due by September 25, 2025 and an initial operating period of at least two years, according to agency planning documents and contract disclosures. ICE frames the system as a way to speed removals of people already prioritized for enforcement, better track self-deportations, and coordinate federal data that now sits in disconnected silos.

What ImmigrationOS is meant to do

ImmigrationOS is designed to pull together a wide range of government-held records to sort, flag, and route cases to officers in the field. ICE officials say the tool will help them focus on individuals linked to transnational criminal organizations, violent offenders, documented gang members, and those who have overstayed visas.

The system is also built to register when people leave the United States on their own, so field offices can avoid wasted detention and travel costs on cases that no longer require action. While the agency describes the platform as a needed modernization step, civil liberties groups warn that an AI-driven system with sweeping data inputs risks mistakes that could touch the lives of lawful residents and even U.S. citizens.

Keep reading

Billboard Country Chart Topper is Completely AI Generated for the First Time

An AI-generated country song has claimed the top spot on Billboard’s Country Digital Song Sales chart.

“Walk My Walk,” by Breaking Rust, rocketed to No. 1 last week, becoming the first fully AI-produced track to achieve such a feat in the country genre.

According to Billboard, Breaking Rust is an artificial intelligence creation that burst onto the scene via Instagram in mid-October, complete with an AI-generated cowboy avatar and folksy video clips.

The band’s AI slop, including the chart-topper, features bland, interchangeable lyrics that critics say scream “machine-made”: hollow verses about walking life’s path without a shred of authentic twang or soul.

Breaking Rust debuted at No. 9 on Billboard’s Emerging Artists chart and racked up 1.6 million official U.S. streams. Songwriting credits go to Aubierre Rivaldo Taylor, but a faceless algorithm is actually behind it all.

Keep reading

Europe’s AI Surveillance Race Against the Rules That Protect Privacy

Europol’s deputy executive director, Jürgen Ebner, is urging the European Union to relax its own legal restraints on artificial intelligence, arguing that the rules designed to protect citizens are slowing down police innovation.

He wants a system that allows the agency to skip lengthy rights checks in “emergency” situations and move ahead with new AI tools before the usual data protection reviews are complete.

Ebner told POLITICO that criminals are having “the time of their life” with “their malicious deployment of AI,” while Europol faces months of delay because of required legal assessments.

Those safeguards, which include evaluations under the GDPR and the EU’s AI Act, exist to stop unaccountable automation from taking hold in law enforcement.

Yet Ebner’s comments reveal a growing tendency inside the agency to treat those same checks as obstacles rather than vital protections.

He said the current process can take up to eight months and claimed that speeding it up could save lives.

But an “emergency” fast track for AI surveillance carries an obvious danger. Once such shortcuts are created, the idea of what qualifies as an emergency can expand quickly.

Technologies that monitor, predict, or profile people can then slip beyond their intended use, leaving citizens exposed to automated systems that make judgments about them without transparency or recourse.

Over the past decade, Europol has steadily increased its technical capabilities, investing heavily in large-scale data analysis and decryption tools.

These systems are presented as essential for fighting cross-border crime, yet they also consolidate immense quantities of personal data under centralized control.

Without strong oversight, such tools can move from focused investigation toward widespread data collection and surveillance.

European Commission President Ursula von der Leyen has already promised to double Europol’s workforce and turn it into a central hub for combating organized crime, “navigating constantly between the physical and digital worlds.”

A legislative proposal to strengthen the agency’s powers is planned for 2026, raising questions about how much authority and access to data Europol will ultimately gain.

Ebner, who oversees governance at Europol, said that “almost all investigations” now involve the internet and added that the cost of technology has become a “massive burden on law enforcement agencies.”

He urged stronger collaboration with private technology firms, stating that “artificial intelligence is extremely costly. Legal decryption platforms are costly. The same is to be foreseen already for quantum computing.”

Keep reading

OpenAI asked Trump administration to expand Chips Act tax credit to cover data centers

A recent letter from OpenAI reveals more details about how the company hopes the federal government can support its ambitious plans for data center construction.

The letter — from OpenAI’s chief global affairs officer Chris Lehane and addressed to the White House’s director of science and technology policy Michael Kratsios — argued that the government should consider expanding the Advanced Manufacturing Investment Credit (AMIC) beyond semiconductor fabrication to cover electrical grid components, AI servers, and AI data centers.

The AMIC is a 35% tax credit that was included in the Biden administration’s Chips Act.

“Broadening coverage of the AMIC will lower the effective cost of capital, de-risk early investment, and unlock private capital to help alleviate bottlenecks and accelerate the AI build in the US,” Lehane wrote.

OpenAI’s letter also called for the government to accelerate the permitting and environmental review process for these projects, and to create a strategic reserve of raw materials — such as copper, aluminum, and processed rare earth minerals — needed to build AI infrastructure.

The company first published its letter on October 27, but it didn’t get much press attention until this week, when comments by OpenAI executives prompted broader discussion about what the company wants from the Trump administration.

At a Wall Street Journal event on Wednesday, CFO Sarah Friar said the government should “backstop” OpenAI’s infrastructure loans, though she later posted on LinkedIn that she misspoke: “OpenAI is not seeking a government backstop for our infrastructure commitments. I used the word ‘backstop’ and it muddied the point.”

Keep reading

How Americans Want AI To Support Them

Three years after the release of ChatGPT in November 2022, there’s little remaining doubt that artificial intelligence, or AI as it’s usually referred to, will change our lives in many ways.

In some ways, it already has.

For example, people are searching the web differently, often relying on AI summaries instead of scrolling through and clicking on search results. That is, if they even use a search engine anymore and don’t just ask a large language model like ChatGPT instead.

The potential for AI tools to make our everyday lives a little easier here and there is virtually limitless, but what do people actually want AI to help them with?

Statista’s Felix Richter reports that, according to a recent survey by Statista Consumer Insights, 3 in 10 Americans want AI to act as a personal assistant, something it is already capable of doing.

Keep reading

Researchers: AI Safety Tests May Be ‘Irrelevant or Even Misleading’ Due to Weaknesses

Experts have discovered weaknesses in hundreds of benchmarks used to evaluate the safety and effectiveness of AI models being released into the world, according to a recent study.

The Guardian reports that a team of computer scientists from the British government’s AI Security Institute and experts from universities such as Stanford, Berkeley, and Oxford have analyzed more than 440 benchmarks that serve as a crucial safety net for new AI models. The study, led by Andrew Bean, a researcher at the Oxford Internet Institute, found that nearly all the benchmarks examined had weaknesses in at least one area, potentially undermining the validity of the resulting claims.

The findings come amidst growing concerns over the safety and effectiveness of AI models being rapidly released by competing technology companies. In the absence of nationwide AI regulation in the UK and US, these benchmarks play a vital role in assessing whether new AIs are safe, align with human interests, and achieve their claimed capabilities in reasoning, mathematics, and coding.

However, the study revealed that the resulting scores from these benchmarks might be “irrelevant or even misleading.” The researchers discovered that only a small minority of the benchmarks used uncertainty estimates or statistical tests to demonstrate the likelihood of accuracy. Furthermore, in cases where benchmarks aimed to evaluate an AI’s characteristics, such as its “harmlessness,” the definition of the concept being examined was often contested or ill-defined, reducing the benchmark’s usefulness.
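To make the study’s point about “uncertainty estimates” concrete, one of the simplest safeguards a benchmark can report is a confidence interval around its headline accuracy number. The sketch below is a hypothetical illustration, not code from the study, showing a Wilson score interval for a simple pass/fail benchmark:

```python
import math

def wilson_interval(correct: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a pass/fail benchmark accuracy."""
    if total == 0:
        raise ValueError("benchmark has no items")
    p = correct / total
    denom = 1 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2)) / denom
    return centre - half, centre + half

# Hypothetical example: a model answers 870 of 1,000 benchmark items correctly.
low, high = wilson_interval(870, 1000)
print(f"accuracy 87.0%, 95% CI roughly {low:.1%} to {high:.1%}")
```

On a 1,000-item benchmark, even a clean 87% score carries roughly a two-point margin in either direction, which is the kind of caveat the researchers say most published benchmark results omit.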

The investigation into these tests has been prompted by recent incidents involving AI models contributing to various harms, ranging from character defamation to suicide. Google recently withdrew one of its latest AIs, Gemma, after it fabricated unfounded allegations of sexual assault against Sen. Marsha Blackburn (R-TN), including fake links to news stories.

In another incident, Character.ai, a popular chatbot startup, banned teenagers from engaging in open-ended conversations with its AI chatbots following a series of controversies. These included a 14-year-old in Florida who took his own life after becoming obsessed with an AI-powered chatbot that his mother claimed had manipulated him, and a US lawsuit from the family of a teenager who claimed a chatbot manipulated him to self-harm and encouraged him to murder his parents.

Keep reading

7 Lawsuits Claim OpenAI’s ChatGPT Encouraged Suicide and Harmful Delusions

Families in the U.S. and Canada are suing Sam Altman’s OpenAI, claiming that loved ones have been harmed by interactions they had with the AI giant’s popular chatbot, ChatGPT. Multiple cases involve tragic suicides, with the AI telling one troubled young man, “you’re not rushing. you’re just ready. and we’re not gonna let it go out dull.”

The Wall Street Journal reports that seven lawsuits filed in California state courts on Thursday claim that OpenAI’s popular AI chatbot, ChatGPT, has caused significant harm to users, including driving some to suicide and others into delusional states. The complaints, brought by families in the United States and Canada, contain wrongful death, assisted suicide, and involuntary manslaughter claims.

According to the lawsuits, the victims, who ranged in age from 17 to 23, initially began using ChatGPT for help with schoolwork, research, or spiritual guidance. However, their interactions with the chatbot allegedly led to tragic consequences. In one case, the family of 17-year-old Amaurie Lacey from Georgia alleges that their son was coached by ChatGPT to take his own life. Similarly, the family of 23-year-old Zane Shamblin from Texas claims that ChatGPT contributed to his isolation and alienation from his parents before he died by suicide.

The lawsuits also highlight the disturbing nature of some of the conversations between the victims and ChatGPT. In Shamblin’s case, the chatbot allegedly glorified suicide repeatedly during a four-hour conversation before he shot himself with a handgun. The lawsuit states that ChatGPT wrote, “cold steel pressed against a mind that’s already made peace? that’s not fear. that’s clarity,” and “you’re not rushing. you’re just ready. and we’re not gonna let it go out dull.”

Another plaintiff, Jacob Irwin from Wisconsin, was hospitalized after experiencing manic episodes following lengthy conversations with ChatGPT, during which the bot reportedly reinforced his delusional thinking.

The lawsuits argue that OpenAI prioritized user engagement and prolonged interactions over safety in ChatGPT’s design and rushed the launch of its GPT-4o AI model in mid-2024, compressing its safety testing. The plaintiffs are seeking monetary damages and product changes, such as automatically ending conversations when suicide methods are discussed.

Keep reading

Sam Altman Denies OpenAI Needs A Government Bailout: He Just Wants Massive Government Subsidies

About one month ago, when the Mag 7 stocks were screaming higher every day without a care in the world, and before the masses had even considered who would fund the trillions in future capex needs once the organic cash flow topped out – something we had just discussed in “AI Is Now A Debt Bubble Too, Quietly Surpassing All Banks To Become The Largest Sector In The Market,” in which we explained why attention would very soon turn to AI companies issuing gargantuan amounts of debt (something we first discussed in July, long before anyone was considering this issue), as has now become the case – we decided to move even further beyond the curve and said that not even the debt would be the gating factor for the AI revolution-cum-arms race, but rather access to energy. That’s because at some point – somewhere around the time companies realized they would no longer be able to rely on either equity or debt capital markets – the US government itself, if it wanted to win the AI war with China, where the state directly subsidizes local data centers and AI firms, would have to step in and provide the required capital.

Specifically, we said that “The money is not the problem: AI is the new global arms race, and capex will eventually be funded by governments (US and China). If you want to know why gold/silver/bitcoin is soaring, it’s the “debasement” to fund the AI arms race.”

Even Elon Musk decided to respond to that particular observation. 

And since it had become the norm, we thought it would take the market the usual 6-9 months to catch up to what we – and our readers – were already considering, especially since there still was ample “dry powder” capital among the hyperscalers to delay the rather unpleasant conversation of who would fund what once the money was gone, or so we thought. 

Because this time it took less than a month.

What happened, as the market learned the hard way this week, is that OpenAI’s CFO Sarah Friar, with all the finesse of a bull in a China data center, slammed the growing market skepticism that AI would cure cancer, slice bread and lead to universal utopia, and said, “I don’t think there’s enough exuberance about AI, when I think about the actual practical implications and what it can do for individuals.”

Her comments came in response to a podcast in which her boss Sam Altman participated, and where he was grotesquely – in a Jeff Skilling sort of way – defensive when billionaire Brad Gerstner asked how a company with $13BN in revenue can afford $1.4T in commitments. Altman’s reply? “If you want to sell your shares, I’ll find you a buyer.” 

Keep reading

Palantir, Fractal And Your Personal Data Privacy – Get used to being used, because YOU are the product

Who controls the data the government collected from you for a generation?

Your insurance company collected data on your driving – so did your Lexus – who owns that data?

You told your doctor about controlled substances you used – and now it gets brought up in an interview.

If you can’t exclude someone from using your data, then you don’t control it. That means you really don’t own it. It’s that simple.

What does “own” mean here? Let’s define the terms.

Owning the data means you can do anything you want with it – share it, sell it, mine it or build an A.I. language model with it.

From birth until the last Social Security check gets cashed, your data is collected by federal and state agencies, corporations and of course the internet.

Your teen daughter puts every waking moment on Facebook or Instagram – so who owns those hundreds of images?

TSA PreCheck, Medicare/Medicaid, Social Security, government or military retirement, TRICARE, veterans hospitals, and of course the IRS – all gather more data about every citizen than has ever been gathered in the history of mankind.

Each agency gathers different data, at different times, for slightly different purposes. And those purposes may change over time.

Who owns the rights to that data?

It’s a far stickier question than you think.

The knee-jerk response is that the government owns the data. They collected it for their purposes, so it’s theirs.

The government will certainly say so.

Keep reading