Russia’s AI-Enabled Drones: The Next Evolution of Warfare

Russia is now fielding long-range, fiber-optic–controlled drones while simultaneously accelerating development of fully autonomous, AI-driven attack drones that mark a dramatic shift in its battlefield strategy. Ukrainian officials confirm that Moscow is deploying jam-proof fiber-optic FPV platforms with a 50-kilometer range, forcing Ukrainian units to cover supply routes with netting to protect vehicles from incoming strikes.

At the same time, Russia is rolling out a new generation of AI-enabled systems powered by smuggled NVIDIA Jetson processors, which were supposed to be blocked by sanctions but are now appearing inside multiple Russian drones.

Ukrainian drone specialist Serhii “Flash” Beskrestnov reports that Russia has adapted Ukrainian innovations for its V2U autonomous strike drone, while other captured systems, like the Shahed MS001 and Tyuvik, show that Russia is rapidly building an arsenal of “digital predators” capable of finding and attacking targets without human control.

The V2U remains Russia’s most advanced autonomous platform. First deployed in Ukraine’s Sumy region in February 2025, the drone was being used 30–50 times per day by mid-May across multiple fronts. It navigates by using machine vision to compare live imagery with digital terrain maps stored on a solid-state drive, upgraded to 128 GB in later versions. A 14-megapixel camera, laser rangefinder, and NVIDIA Jetson Orin processor enable the drone to identify targets, conduct terrain analysis, and strike without GPS or human commands.
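
How that image-to-map comparison works is not public, but scene-matching navigation of this general kind can be sketched with off-the-shelf computer vision tools. The following is a minimal illustration using OpenCV feature matching; every function choice, threshold, and data format is an assumption made for the sketch, not reconstructed V2U code.

```python
# Illustrative sketch of vision-based terrain matching (scene-matching
# navigation). All names, thresholds, and the map format are assumptions;
# nothing here reflects the V2U's actual software.
import cv2
import numpy as np

def locate_on_map(live_frame: np.ndarray, map_tile: np.ndarray):
    """Estimate where a live camera frame sits inside a stored map tile
    by matching ORB keypoints and fitting a homography."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_live, des_live = orb.detectAndCompute(live_frame, None)
    kp_map, des_map = orb.detectAndCompute(map_tile, None)
    if des_live is None or des_map is None:
        return None  # too little texture to match

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_live, des_map), key=lambda m: m.distance)
    good = matches[:200]
    if len(good) < 10:
        return None  # not enough correspondences for a reliable fix

    src = np.float32([kp_live[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_map[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    # Project the frame center into map coordinates: a GPS-free position fix.
    h, w = live_frame.shape[:2]
    center = cv2.perspectiveTransform(np.float32([[[w / 2, h / 2]]]), H)
    return tuple(center[0, 0])  # (x, y) in map-tile pixels
```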

Russian forces update the drone’s onboard code almost weekly, training the AI directly through battlefield experience. The AI is powerful enough to fly the drone up and down roads searching for targets, though its limited ability to distinguish them has led to mistakes, including striking civilian infrastructure such as a public toilet instead of a vehicle.

To extend range, Russian forces deploy large “mother drones” that transport smaller V2Us deep into contested airspace before releasing them for individual strikes. Russia is also experimenting with coordinated swarms of seven or eight V2Us, each painted with distinct wing colors to visually identify one another.

According to Ukrainian analysts, these drones can maintain formation, coordinate attack order, and perform anti-air evasive maneuvers if a member of the group is shot down. This behavior resembles early machine-driven swarm intelligence and may be adapted for Russia’s long-range Shahed drones, creating fully autonomous loitering-munitions swarms capable of saturating Ukrainian defenses.
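
None of this has been verified at the code level, but the formation-keeping and loss reaction the analysts describe resemble textbook decentralized swarm rules. A toy sketch of such rules, with every constant and behavior invented for illustration:

```python
# Toy sketch of decentralized formation-keeping and loss reaction of the
# kind described above. Every rule, constant, and structure is a generic
# swarm-robotics illustration, not reconstructed code from any drone.
from dataclasses import dataclass

import numpy as np

@dataclass
class SwarmMember:
    ident: int
    pos: np.ndarray   # 2D position
    vel: np.ndarray   # 2D velocity
    alive: bool = True

def step(swarm: list, spacing: float = 30.0, dt: float = 0.1) -> None:
    """One update: each member holds spacing with living neighbors and
    jinks randomly for a step if any member of the group has been lost."""
    living = [m for m in swarm if m.alive]
    member_lost = any(not m.alive for m in swarm)
    for m in living:
        steer = np.zeros(2)
        for other in living:
            if other.ident == m.ident:
                continue
            offset = m.pos - other.pos
            dist = float(np.linalg.norm(offset)) + 1e-9
            # Repel when closer than `spacing`, attract when farther:
            # the equilibrium distance is what keeps the formation.
            steer += (offset / dist) * (spacing - dist) * 0.05
        if member_lost:
            steer += np.random.uniform(-5.0, 5.0, size=2)  # crude evasion
        m.vel = 0.9 * m.vel + steer
        m.pos = m.pos + m.vel * dt

def attack_order(swarm: list) -> list:
    """Attack sequencing can be as simple as a pre-agreed shared ranking."""
    return [m.ident for m in sorted(swarm, key=lambda m: m.ident) if m.alive]
```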

Captured drones reveal the scale of Russia’s dependence on foreign components. The V2U relies on an NVIDIA Jetson Orin for AI processing, Intel wireless adapters, Sony optical sensors, and numerous Chinese-made motors, drives, and batteries. The MS001, an upgraded Shahed variant, also carries a Jetson Orin paired with a thermal imager and digital modem, allowing it to recognize ground objects and strike moving targets rather than simply flying to preset coordinates.

A third autonomous drone, the Tyuvik, resembles a miniature Shahed and is now reportedly in mass production. Tyuvik can locate and strike moving armored vehicles despite using inexpensive commercial hardware, suggesting it also depends on smuggled Western or Chinese AI processors.

UK Tech Secretary Urges Ofcom to Fast-Track Censorship Law Enforcement

UK Technology Secretary Liz Kendall is pressing Ofcom to accelerate the rollout of the controversial censorship law, the Online Safety Act, warning that delays could weaken protections for vulnerable users. In a letter to the communications regulator, she said:

“I remain deeply concerned that delays in implementing duties, such as user empowerment, could hinder our work to protect women and girls from harmful content and protect users from antisemitism.”

Kendall is determined to enforce the controversial law quickly, even as more people have finally realized that the Online Safety Act grants excessive power to regulators over what citizens can say or share online.

Ofcom has confirmed that it expects to publish, by July next year, a register identifying which companies will face the strictest obligations, including mandatory age verification.

That schedule is roughly a year later than initially promised. The regulator said the delay was due to “factors beyond its control,” citing a legal challenge that raised “complex issues.”

One challenge involves 4chan and Kiwi Farms, platforms often targeted by politicians seeking tighter online speech regulation.

Reclaim The Net recently reported that 4chan’s legal team had rejected Ofcom’s attempt to impose fines under the Act, arguing that the regulator’s enforcement powers overreach.

The law has also drawn criticism abroad.

The US State Department condemned the UK’s online censorship laws, including the Online Safety Act, warning that the powers granted to Ofcom could restrict the open exchange of ideas.

We also covered the growing concern among technology companies that the Act’s broad language and compliance costs could force them to reconsider their presence in the UK.

Data Centers Encounter Local Pushback Amid Major Growth

At least 16 data center projects, worth a combined $64 billion, have been blocked or delayed as local opposition mounts to the developments, according to a new study.

Research collected by Data Center Watch shows that residents and politicians across seven states have stopped or stalled the data center projects.

In Arizona’s West Valley, development company Tract withdrew plans for a $14 billion project after city officials declined to approve required rezoning. Tract eventually announced a similar project in Buckeye, Ariz., where the development is proceeding.

In Peculiar, Mo., and Chesterton, Ind., residents and local officials also blocked data center developments worth billions.

In total, the study found that six data center developments have been fully blocked since May 2024. The backlash has also delayed 10 other data centers, including two from Amazon.

Nine of the documented data center blockages and delays have occurred in Virginia, the world’s unofficial data center capital, according to the research firm.

The study’s authors also found growing bipartisan aversion to the behemoth data center projects: about 55 percent of Republicans and 45 percent of Democrats in districts with large data center projects have taken public positions against the developments, according to the study.

“This cross-party resistance defies expectations and marks a rare area of bipartisan alignment in infrastructure politics,” the authors wrote.

The report also found that data centers were becoming an intensifying issue in local politics. As energy costs soar and affordability takes center stage, it’s likely more candidates and elected officials will take sides on the projects.

CEO of Palantir Says He Spends a Large Amount of Time Talking to Nazis

While you were busy wasting your time listening to podcasts and doomscrolling on your phone, one of America’s leading AI overlords was educating himself by talking to Nazis.

This was the startling admission made by Alex Karp, cofounder and CEO of Palantir, a software company that’s come under increasingly heavy scrutiny for its growing role as a provider of AI-powered surveillance technology to the military and government.

In an interview with podcaster Molly O’Shea published this week, Karp, who has Jewish heritage, was discussing German culture and his time in the country before going on a tangent about how outrageous it is that people online “laud the Nazis.” Then he fessed up to something even more eyebrow-raising.

“I spend a lot of time talking to Nazis,” Karp said, implying that this is an ongoing pastime of his. “Like, real Nazis,” he emphasized.

Karp explained that it was his way of “understanding what made them tick,” before making an ironic observation.

“Part of the crazy thing about people who laud the Nazis nowadays is there’s not a single Nazi that would ever have included them in their movement and would have shipped them off to the camps quicker maybe than they shipped me off to the camps!” he chuckled.

He then pulled off the smoothest segue of all time.

“Uh, but, um, and uh, and it’s like, it’s uh but” — the interview mercifully jump-cuts — “the thing that’s crazy unique about America,” Karp began to muse.

Beyond his role as Palantir’s head honcho, Karp is known for his philosophical ramblings, his “eccentric” personality, and his affinity for German culture. He has a PhD in philosophy from Goethe University Frankfurt, and draws on his background to defend Western values — in particular American ones — as especially good for the world.

This year, for instance, he published a book about how the US needed to embrace having the most technologically advanced weapons possible to preserve its dominance. An excerpt of that book was published online as an essay under the headline “We Need a New Manhattan Project.”

A MASSIVE 97% of Listeners Fooled: Can YOU Tell If This Hit Song Is Human… or AI?

In an era where the boundaries between the synthetic and the sentient blur with alarming rapidity, a sobering revelation has emerged from the sonic realm: humanity’s capacity to discern the hand of the artist from the algorithm has all but evaporated. 

A recent survey commissioned by the French streaming platform Deezer, polling 9,000 individuals across eight nations, laid bare this disquieting truth. 

Respondents were tasked with listening to two clips of music wholly conjured by artificial intelligence and one crafted by human hands; astonishingly, 97 percent failed to differentiate between them. 

Deezer’s chief executive, Alexis Lanternier, observed, “The survey results clearly show that people care about music and want to know if they’re listening to AI or human made tracks or not.” 

The Disguised Return of the EU’s Private Message Scanning Plot

A major political confrontation over online privacy is approaching as European governments prepare to decide on “Chat Control 2.0,” the European Commission’s revised proposal for monitoring private digital communications.

The plan, which could be endorsed behind closed doors, has drawn urgent warnings from Dr. Patrick Breyer, a jurist and former Member of the European Parliament, who says the draft conceals sweeping new surveillance powers beneath misleading language about “risk mitigation” and “child protection.”

In a release sent to Reclaim The Net, Breyer, long a defender of digital freedom, argues that the Commission has quietly reintroduced compulsory scanning of private messages after it was previously rejected.

He describes the move as a “deceptive sleight of hand,” insisting that it transforms a supposedly voluntary framework into a system that could compel all chat, email, and messaging providers to monitor users.

“This is a political deception of the highest order,” Breyer said.

“Following loud public protests, several member states, including Germany, the Netherlands, Poland, and Austria, said ‘No’ to indiscriminate Chat Control. Now it’s coming back through the back door disguised, more dangerous, and more comprehensive than ever. The public is being played for fools.”

Under the new text, providers would be obliged to take “all appropriate risk mitigation measures” to prevent abuse on their platforms. While the Commission presents this as a flexible safety requirement, Breyer insists it is a loophole that could justify forcing companies to scan every private message, including those protected by end-to-end encryption.

“The loophole renders the much-praised removal of detection orders worthless and negates their supposed voluntary nature,” he said.

He warns that it could even lead to the introduction of “client-side scanning,” where users’ devices themselves perform surveillance before messages are sent.

Unlike the current temporary exemption known as “Chat Control 1.0,” which allows voluntary scanning of photos and videos, the new draft would open the door to text and metadata analysis. Algorithms and artificial intelligence could be deployed to monitor conversations and flag “suspicious” content.
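
Client-side scanning, the mechanism Breyer warns about, means checking content against a blocklist on the user’s own device before end-to-end encryption is applied. A minimal sketch of the principle follows; the hash function, blocklist entry, and flow are placeholders (deployed proposals typically use perceptual hashing for images), and the point of contention is the pre-encryption scan itself, not any particular implementation.

```python
# Minimal sketch of client-side scanning: content is checked against a
# blocklist on the user's device *before* end-to-end encryption. The hash
# choice and blocklist are illustrative placeholders only.
import hashlib

BLOCKLIST = {
    # Hashes of prohibited content, pushed to the device by the provider.
    # (Placeholder value: this is just the SHA-256 of the string "test".)
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def scan_before_send(plaintext: bytes) -> bool:
    """Return True if the message may be sent, False if it is flagged.
    Runs locally, before encryption, which is why critics describe it as
    surveillance built into the endpoint itself."""
    digest = hashlib.sha256(plaintext).hexdigest()
    return digest not in BLOCKLIST

def send_message(plaintext: bytes, encrypt, transmit):
    if not scan_before_send(plaintext):
        # In proposed designs a match would be reported to the provider
        # or authorities rather than silently dropped.
        raise PermissionError("content flagged by client-side scanner")
    transmit(encrypt(plaintext))
```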

Why We Have a Surveillance State

It is the inevitable consequence of our prevailing governing philosophy.

“Gentlemen do not read each other’s mail.” Henry Stimson, Secretary of State, 1929

I was upbraided recently by a dear friend over my frequent praise of outcast investor Peter Thiel, given Thiel’s involvement with the big data company Palantir. He forwarded me a Bloomberg article titled “Peter Thiel’s data-mining company is using War on Terror tools to track American citizens,” adding: “Really scary. Not good for democracy; a better version of the Stasi’s filing system and way cheaper and more efficient.”

Increasingly, we live under the kind of comprehensive surveillance predicted by science fiction writers. But Palantir is just an arms merchant, not the architect of our brave new world. Like guns, its products can be used for good or evil. I have always believed that moral responsibility lies with the wielder of weapons, not the manufacturers. (This is often expressed as “Guns don’t kill people, people kill people.”)

Why Peter Thiel chose to become an arms merchant rather than invest his considerable talents and fortune elsewhere is a fair question given his libertarian leanings. I have no insight into the answer. I would guess that he founded Palantir as an act of patriotism after 9/11, and that it metastasized by following the money, cash being the mother’s milk of the state, something the celebrated Alexander Hamilton deeply understood.

Surveillance Is Not the Problem, but It Is a Symptom

The real threat to the republic, however, lies not in the weapons available but in the unlimited and unaccountable bureaucracy in Washington that deploys them, both at home and abroad. Having broken free of constitutional constraints, America’s political class now directs an all-powerful state that naturally adopts every tool technology has to offer.

Because our prevailing governing philosophy acknowledges no limits to the doing of good or the thwarting of evil, any means necessary may be employed as long as worthy ends can be plausibly asserted. Evil must be discouraged, taxed, or outlawed; good must be encouraged, subsidized, or made mandatory. This progressive government mission must be implemented in the public square, in the marketplace, in our educational institutions, around the world, and in our homes until all forms of social injustice are eliminated.

ChatGPT’s Use of Song Lyrics Violates Copyright, Munich Court Finds

  • Judges found GEMA’s claims valid, ordering OpenAI to cease reproduction and provide damages and disclosure.
  • The court said GPT-4 and GPT-4o “memorized” lyrics, amounting to reproduction under EU copyright rules.
  • The decision, not yet final, could set a major European precedent on AI training data.

Germany’s national music rights organization secured a partial but decisive win against OpenAI after a Munich court ruled that ChatGPT’s underlying models unlawfully reproduced copyrighted German song lyrics.

The ruling orders OpenAI to cease reproduction, disclose relevant training details, and compensate rights holders.

It is not yet final, and OpenAI may appeal.

If upheld, the decision could reshape how AI companies source and license creative material in Europe, as regulators weigh broader obligations for model transparency and training-data provenance.

The case marks the first time a European court has found that a large language model violated copyright by memorizing protected works.

In its decision, the 42nd Civil Chamber of the Munich I Regional Court said that GPT-4 and GPT-4o contained “reproducible” lyrics from nine well-known songs, including Kristina Bach’s “Atemlos” and Rolf Zuckowski’s “Wie schön, dass du geboren bist.”

The court held that such memorization constitutes a “fixation” of the original works in the model’s parameters, satisfying the legal definition of reproduction under Article 2 of the EU InfoSoc Directive and Germany’s Copyright Act.

“At least in individual cases, when prompted accordingly, the model produces an output whose content is at least partially identical to content from the earlier training dataset,” a translated copy of the written judgement provided by the Munich court to Decrypt reads.

The model “generates a sequence of tokens that appears statistically plausible because, for example, it was contained in the training process in a particularly stable or frequently recurring form,” the court wrote, adding that because this “token sequence appeared on a large number of publicly accessible websites,” it meant that it was “included in the training dataset more than once.”
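
That reasoning, that sequences repeated often enough in training become fixed in a model’s parameters, is the same property researchers demonstrate with extraction probes: prompt a model with the opening of a text and measure how closely its continuation matches the original. A minimal sketch of such a probe, using a placeholder public model and placeholder text rather than anything from the case:

```python
# Sketch of a memorization probe: feed a model the opening of a protected
# text and measure how much of its continuation matches the original.
# The model name and reference text are placeholders, not the works or
# systems at issue in the Munich case.
from difflib import SequenceMatcher

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

def memorization_score(opening: str, reference_continuation: str) -> float:
    """Return the overlap ratio between the model's continuation and the
    known original; values near 1.0 suggest verbatim memorization."""
    out = generator(opening, max_new_tokens=60, do_sample=False)
    continuation = out[0]["generated_text"][len(opening):]
    return SequenceMatcher(None, continuation.strip(),
                           reference_continuation.strip()).ratio()

# Usage (with placeholder text):
# score = memorization_score("First line of a well-known song, ",
#                            "the line that actually follows in the original")
# print(f"overlap: {score:.2f}")
```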

In the pleadings, GEMA argued that the model’s output lyrics were almost verbatim when prompted, proving that OpenAI’s systems had retained and reproduced the works.

OpenAI countered that its models do not store training data directly and that any output results from user prompts, not from deliberate copying.

The company also invoked text-and-data-mining exceptions, which allow temporary reproductions for analytical use.

“We disagree with the ruling and are considering next steps,” a spokesperson for OpenAI told Decrypt. “The decision is for a limited set of lyrics and does not impact the millions of people, businesses, and developers in Germany that use our technology every day.” 

German States Expand Police Powers to Train AI Surveillance Systems with Personal Data

Several German states are preparing to widen police powers by allowing personal data to be used in the training of surveillance technologies.

North Rhine-Westphalia and Baden-Württemberg are introducing legislative changes that would let police feed identifiable information such as names and facial images into commercial AI systems.

Both drafts permit this even when anonymization or pseudonymization is bypassed because the police consider it “impossible” or achievable only with “disproportionate effort.”
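
Pseudonymization, the safeguard the drafts would allow police to skip, typically means replacing direct identifiers with keyed tokens so records remain linkable without exposing who they describe. A minimal sketch of the idea, with key handling and field selection simplified for illustration:

```python
# Minimal sketch of pseudonymization: direct identifiers are replaced with
# keyed hashes so records stay linkable for analysis without exposing who
# they describe. Key management and field choice are simplified placeholders.
import hashlib
import hmac

SECRET_KEY = b"held-separately-from-the-dataset"  # placeholder key

def pseudonymize(record: dict, identifier_fields=("name", "face_image_id")) -> dict:
    safe = dict(record)
    for f in identifier_fields:
        if f in safe:
            safe[f] = hmac.new(SECRET_KEY, str(safe[f]).encode(),
                               hashlib.sha256).hexdigest()[:16]
    return safe

# The same input always maps to the same token, so cases can still be
# cross-referenced; only someone holding the key can link a token back to
# a person by re-hashing candidate names. Training AI systems on the raw
# record instead skips exactly this step.
print(pseudonymize({"name": "Jane Doe", "case": "2025-0417"}))
```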

Hamburg adopted similar rules earlier this year, and its example appears to have encouraged other regions to follow. These developments together mark a clear move toward normalizing the use of personal information as fuel for surveillance algorithms.

The chain reaction began in Bavaria, where police in early 2024 tested Palantir’s surveillance software with real personal data.

The experiment drew objections from the state’s data protection authority, but still served as a model for others.

Hamburg used the same idea in January 2025 to amend its laws, granting permission to train “learning IT systems” on data from bystanders. Now Baden-Württemberg and North Rhine-Westphalia plan to adopt nearly identical language.

In North Rhine-Westphalia, police would be allowed to upload clear identifiers such as names or faces into commercial systems like Palantir’s and to refine behavioral or facial recognition programs with real, unaltered data.

Bettina Gayk, the state’s data protection officer, warned that “the proposed regulation addresses significant constitutional concerns.”

She argued that using data from people listed as victims or complainants was excessive and added that “products from commercial providers are improved with the help of state-collected and stored data,” which she found unacceptable.

The state government has embedded this expansion of surveillance powers into a broader revision of the Police Act, a change initially required by the Federal Constitutional Court.

The court had previously ruled that long-term video monitoring under the existing law violated the Basic Law.

Instead of narrowing these powers, the new draft introduces a clause allowing police to “develop, review, change or train IT products” with personal data.

This wording effectively enables continued use of Palantir’s data analysis platform while avoiding the constitutional limits the court demanded.

Across North Rhine-Westphalia, Baden-Württemberg, and Hamburg, the outcome will be similar: personal data can be used for training as soon as anonymization is judged to be disproportionately difficult, with the assessment left to police discretion.

Gayk has urged that the use of non-anonymized data be prohibited entirely, warning that the exceptions are written so broadly that “they will ultimately not lead to any restrictions in practice.”

Baden-Württemberg’s green-black coalition plans to pass its bill this week.

ICE to Deploy Palantir’s ImmigrationOS AI to Track Migrants’ Movements

U.S. Immigration and Customs Enforcement is moving forward with ImmigrationOS, a new AI system built by Palantir Technologies to give officers near real-time visibility into immigrants’ movements and sharpen enforcement priorities nationwide. The agency awarded Palantir a $30 million contract in early 2025, with a working prototype due by September 25, 2025, and an initial operating period of at least two years, according to agency planning documents and contract disclosures. ICE frames the system as a way to speed removals of people already prioritized for enforcement, better track self-deportations, and coordinate federal data that now sits in disconnected silos.

What ImmigrationOS Is Meant to Do

ImmigrationOS is designed to pull together a wide range of government-held records to sort, flag, and route cases to officers in the field. ICE officials say the tool will help them focus on individuals linked to transnational criminal organizations, violent offenders, documented gang members, and those who have overstayed visas.

The system is also built to register when people leave the United States on their own, so field offices can avoid wasted detention and travel costs on cases that no longer require action. While the agency describes the platform as a needed modernization step, civil liberties groups warn that an AI-driven system with sweeping data inputs risks mistakes that could touch the lives of lawful residents and even U.S. citizens.
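
Palantir and ICE have not published how ImmigrationOS scores or routes cases, but the sorting and routing described above can be pictured as rule-based prioritization. A toy sketch in which every category, weight, and field name is invented for illustration:

```python
# Toy sketch of rule-based case prioritization and routing of the kind the
# agency describes. Categories, weights, and field names are invented for
# illustration; the actual ImmigrationOS logic is not public.
from dataclasses import dataclass

@dataclass
class CaseRecord:
    case_id: str
    flags: set            # e.g. {"visa_overstay", "gang_documented"}
    departed: bool = False  # confirmed self-deportation

PRIORITY_WEIGHTS = {
    "tco_link": 100,        # transnational criminal organization link
    "violent_offense": 90,
    "gang_documented": 80,
    "visa_overstay": 20,
}

def route_case(case: CaseRecord) -> tuple:
    """Return (queue, score): closed cases drop out, everything else is
    scored by its flags and sent to the matching enforcement queue."""
    if case.departed:
        return ("close_no_action", 0)  # avoid wasted detention/travel costs
    score = sum(PRIORITY_WEIGHTS.get(f, 0) for f in case.flags)
    queue = "priority_enforcement" if score >= 80 else "routine_review"
    return (queue, score)

print(route_case(CaseRecord("A-123", {"visa_overstay"})))
print(route_case(CaseRecord("A-456", {"tco_link", "violent_offense"})))
```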
