Palantir Secures Historic $10 Billion Army Contract for AI-Driven Defense

The U.S. Army has awarded Palantir Technologies a monumental $10 billion contract, consolidating dozens of existing agreements into a single enterprise deal over the next decade.

This landmark agreement, announced on July 31, 2025, positions Palantir as a cornerstone of the Army’s data and software infrastructure. It underscores a strategic shift toward leveraging commercial AI to enhance military readiness and efficiency.

The contract streamlines 75 separate agreements, offering volume-based discounts and eliminating redundant procurement costs.

This approach maximizes buying power while delivering cutting-edge data integration and AI tools to soldiers faster. The deal reflects a broader Pentagon push to modernize warfare capabilities amid rising global tensions, from Ukraine to the Indo-Pacific.

Palantir’s role builds on its success with the Maven Smart System, which received a $795 million boost earlier this year to expand AI-driven targeting across U.S. forces.

The system fuses intelligence from drones, satellites, and sensors to identify threats in near real-time, maintaining human oversight for critical decisions.

This capability has proven vital in conflicts like Ukraine, where rapid data analysis drives battlefield outcomes.

Co-founded by Peter Thiel and Alex Karp, Palantir has deepened its federal footprint, securing $373 million in U.S. government revenue in Q1 2025 alone, a 45% increase year-over-year.

The Trump administration’s emphasis on cost efficiency and commercial partnerships has propelled Palantir’s rise, with new contracts spanning the Navy, ICE, and CDC.

Critics, however, warn that such dominance by a single vendor could stifle competition and innovation.

The Army’s enterprise agreement not only enhances operational efficiency but also aligns with President Trump’s vision of a leaner, tech-driven military.

By consolidating contracts, the Army projects significant savings, freeing resources for mission-critical programs.

Palantir’s software, like the Foundry platform, enables seamless data integration, empowering soldiers with actionable intelligence.

Is AI Turning Us Into Dummies?

That AI is turning those who use it into dummies is not only self-evident, it’s irrefutable.

ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study

“Of the three groups, ChatGPT users had the lowest brain engagement and ‘consistently underperformed at neural, linguistic, and behavioral levels.’ Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.”

“The task was executed, and you could say that it was efficient and convenient,” Kosmyna says. “But as we show in the paper, you basically didn’t integrate any of it into your memory networks.”

AI breaks the connection between learning and completing an academic task. With AI, students can check the box (task completed, paper written and submitted) without learning anything.

And by learning we don’t mean remembering a factoid; we mean learning how to learn and learning how to think. As Substack writer maalvika explains in her viral essay “compression culture is making you stupid and uninteresting,” digital technologies have compressed our attention spans via what I would term “rewarding distraction,” so we can no longer read anything longer than a few sentences without wanting a summary, a highlights video, or a sound bite.

In other words, very few people will actually read the MIT paper: TL;DR. Here’s the précis: Your Brain on ChatGPT (mit.edu).

Here’s the full paper: Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.

To understand the context (and indeed the ultimate point of the research), we must start by understanding the structure of learning and thinking, which is a complex set of processes. Cognitive Load Theory (CLT) is a framework that parses out some of these processes.

Cognitive Load Theory (CLT), developed by John Sweller, provides a framework for understanding the mental effort required during learning and problem-solving. It identifies three categories of cognitive load: intrinsic cognitive load (ICL), which is tied to the complexity of the material being learned and the learner’s prior knowledge; extraneous cognitive load (ECL), which refers to the mental effort imposed by the way information is presented; and germane cognitive load (GCL), which is the mental effort dedicated to constructing and automating the schemas that support learning.

Checking the box “task completed” teaches us nothing. Actual learning and thinking require doing all the cognitive work that AI claims to do for us: reading the source materials, following the links between these sources, finding wormholes between various universes of knowledge, and thinking through claims and assumptions as an independent critical thinker.

When AI slaps together a bunch of claims and assumptions as authoritative, we don’t gain even superficial knowledge; we learn nothing. AI summarizes, but without any ability to weed out questionable claims and assumptions, because it has no tacit knowledge of contexts.

So AI spews out material without any actual cognitive value and the student slaps this into a paper without learning any actual cognitive skills. This cognitive debt can never be “paid back,” for the cognitive deficit lasts a lifetime.

Even AI’s vaunted ability to summarize robs us of the need to develop core cognitive abilities. As this researcher explains, “drudgery” is how we learn and learn to think deeply as opposed to a superficial grasp of material to pass an exam.

In Defense of Drudgery: AI is making good on its promise to liberate people from drudgery. But sometimes, exorcising drudgery can stifle innovation.

“Unfortunately, this innovation stifles innovation. When humans do the drudgery of literature search, citation validation, and due research diligence — the things OpenAI claims for Deep Research — they serendipitously see things they weren’t looking for. They build on the ideas of others that they hadn’t considered before and are inspired to form altogether new ideas. They also learn cognitive skills including the ability to filter information efficiently and recognize discrepancies in meaning.

I have seen in my field of systems analysis where decades of researchers have cited information that was incorrect — and expanded it into its own self-perpetuating world view. Critical thinking leads the researcher to not accept the work that others took as foundational and to spot the error. Tools such as Deep Research are incapable of spotting the core truth and so will perpetuate misdirection in research. That’s the opposite of good innovation.”

In summary: given that AI is fundamentally incapable of performing the tasks required for authentic innovation, we’re de-learning how to innovate. What we’re “learning” is to substitute a superficially clever simulation of innovation for authentic innovation, and in doing so, we’re losing the core cognitive skills needed to innovate.

AI reveals unexpected new physics in dusty plasma

Physicists have used a machine-learning method to identify surprising new twists on the non-reciprocal forces governing a many-body system.

The journal Proceedings of the National Academy of Sciences published the findings by experimental and theoretical physicists at Emory University, based on a neural network model and data from laboratory experiments on dusty plasma—ionized gas containing suspended dust particles.

The work is one of the relatively few instances of using AI not as a data processing or predictive tool, but to discover new physical laws governing the natural world.

“We showed that we can use AI to discover new physics,” says Justin Burton, an Emory professor of experimental physics and senior co-author of the paper. “Our AI method is not a black box: we understand how and why it works. The framework it provides is also universal. It could potentially be applied to other many-body systems to open new routes to discovery.”

The PNAS paper provides the most detailed description yet for the physics of a dusty plasma, yielding precise approximations for non-reciprocal forces.

“We can describe these forces with an accuracy of more than 99%,” says Ilya Nemenman, an Emory professor of theoretical physics and co-senior author of the paper.

“What’s even more interesting is that we show that some common theoretical assumptions about these forces are not quite accurate. We’re able to correct these inaccuracies because we can now see what’s occurring in such exquisite detail.”

EU plans $30 billion investment in gigawatt AI data centers — multiple sites to host 100,000 AI GPUs each as bloc plays catch-up to US and China

The European Union is the world’s second-largest economy in terms of GDP, but its position in the AI market is far weaker. To catch up with the U.S. and China, the bloc is launching a $30 billion initiative to build a network of high-capacity data centers that can host millions of AI GPUs, reports CNBC. If successful, the EU will have gigawatt-class data centers with performance comparable to those operated by leading U.S. companies.

To date, the European Union has allocated €10 billion (approximately $11.8 billion) to establish 13 AI data centers, alongside an additional €20 billion earmarked as initial funding for a network of gigawatt-class AI facilities. So far, the project has attracted 76 expressions of interest from 16 member states, covering a total of 60 potential locations, according to CNBC. Initial launches are underway, with the first AI factory expected to go live in the coming weeks and a large-scale project in Munich planned for early September.

Each gigawatt data center is expected to require €3 to €5 billion and deliver a level of computational power far greater than existing AI data centers, potentially supporting over 100,000 advanced AI GPUs per site, according to estimates by UBS cited by CNBC. xAI’s Colossus supercluster consumes about 150 MW of power when equipped with 100,000 H100 GPUs, so a gigawatt facility will probably be able to host many more GPUs, perhaps 300,000 Blackwell Ultra processors.
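A back-of-the-envelope sketch shows how such GPU counts fall out of the power figures. The per-GPU wattages here are assumptions, not official specs: the ~1.5 kW H100 figure is inferred from the Colossus numbers above, and the ~3.3 kW Blackwell Ultra figure is assumed to test the 300,000-GPU estimate.

```python
# Rough estimate of how many GPUs a site's power budget can host.
# Assumed all-in draw per GPU (compute + cooling + networking):
#   H100-class: ~1.5 kW (from Colossus: 150 MW / 100,000 GPUs)
#   Blackwell Ultra-class: ~3.3 kW (assumed)

def gpus_per_site(site_power_mw: float, kw_per_gpu: float) -> int:
    """GPUs supportable by a facility with the given power budget."""
    return int(site_power_mw * 1_000 / kw_per_gpu)

# Sanity check: reproduces the Colossus figure at 150 MW.
print(gpus_per_site(150, 1.5))      # roughly 100,000 H100-class GPUs

# A 1 GW (1,000 MW) facility:
print(gpus_per_site(1_000, 1.5))    # roughly 666,000 H100-class GPUs
print(gpus_per_site(1_000, 3.3))    # roughly 303,000 Blackwell-class GPUs
```

At the assumed Blackwell Ultra draw, a gigawatt site lands close to the 300,000-processor figure cited above.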

The EU’s effort, if realized, would be one of the world’s largest publicly funded initiatives in artificial intelligence: likely well below what Chinese authorities (both national and local) have invested in AI data centers, but well ahead of what other big economies invest in their AI efforts.

Henna Virkkunen, European Commission executive vice president for technology policy, told CNBC that while Europe has a strong talent base — reportedly 30% more AI researchers per capita than the U.S. — their limited access to computing has held back development. Building massive AI data centers is designed to solve this problem and kick-start the AI sector across the EU.

Despite strong public interest, the scale and sustainability of the project remain in question. Bertin Martens of Bruegel noted that while the EU has committed taxpayer funding, it is unclear how much the private sector will invest in the project. The specifications of the upcoming data centers are also unclear. While the EU has access to Nvidia GPUs and other advanced AI accelerators developed in America through a trade agreement with the U.S., Martens pointed out that acquiring hardware is only the beginning.

Porn Studios File Copyright Lawsuit Against Meta Claiming Mass Download of XXX Movies to Train AI

Two major porn production companies have filed a copyright lawsuit against Mark Zuckerberg’s Meta, alleging unauthorized use of their videos to train AI models.

TorrentFreak reports that the adult film studios Strike 3 Holdings and Counterlife Media are taking aim at Meta with a copyright lawsuit. The companies, which produce popular adult brands like Vixen, Tushy, Blacked, and Deeper, claim that Meta illicitly downloaded at least 2,396 of their movies via BitTorrent since 2018 for the purpose of training its AI systems, including Meta Movie Gen and its LLaMA large language models.

Filed in a California federal court, the complaint alleges that Meta’s unauthorized use of the copyrighted adult films could ultimately result in AI models capable of creating similar “high-quality” porn content at a lower cost, potentially threatening the studios’ business. The plaintiffs argue that by training specifically on their works, “Meta’s AI Movie Gen may very well soon produce full length films with Plaintiffs’ identical style and quality, which other real world adult studios cannot replicate.”

The lawsuit also accuses Meta of not only downloading the copyrighted works without permission but also uploading them to third parties participating in the same BitTorrent swarms. This allegation is allegedly backed by data from the studios’ proprietary tracking software, VXN Scan. BitTorrent’s “tit for tat” algorithm rewards users for sharing content with others to increase download speeds, and the plaintiffs claim that Meta deliberately chose to continue sharing the pirated files to capitalize on faster downloads and infringe more content at a quicker pace.

Strike 3 and Counterlife Media discovered the alleged infringements after Meta’s BitTorrent activity was revealed in a separate lawsuit filed by book authors. In that case, Meta admitted to obtaining content from pirate sources. This revelation prompted the adult studios to search their archive of collected BitTorrent data for Meta-linked IP addresses, uncovering 47 addresses owned by the company that allegedly infringed their copyrights. The complaint provides a list of thousands of alleged infringements from these addresses as evidence. Strike 3 has filed many lawsuits in the past related to videos allegedly downloaded by BitTorrent pirates, leading one judge to label them as a “copyright troll.”

OpenAI and Oracle announce Stargate AI data centre deal

OpenAI has shaken hands with Oracle on a massive deal to advance the former’s colossal Stargate AI data centre initiative.

It’s one thing to talk about the AI revolution in abstract terms, but it’s another thing entirely to grasp the sheer physical scale of what’s being built to make it happen. The foundations of our AI future are being laid in concrete, steel, and miles of fibre-optic cable, and those foundations are getting colossally bigger.

Together, OpenAI and Oracle are going to build new data centres in the US packed with enough hardware to consume 4.5 gigawatts of power. It’s hard to overstate what a staggering amount of energy that is—it’s the kind of power that could light up a major city. And all of it will be dedicated to one thing: powering the next generation of AI.

This isn’t just a random expansion; it’s a huge piece of OpenAI’s grand Stargate plan. The goal is simple: to build enough computing power to bring advanced AI to everyone.

When you add this new project to the work already underway in Abilene, Texas, OpenAI is now developing over 5 gigawatts of data centre capacity. That’s enough space to run more than two million of the most powerful computer chips available.

This move shows they are dead serious about a pledge they made at the White House earlier this year to plough half a trillion dollars into US AI infrastructure. In fact, with the momentum they’re getting from partners like Oracle and Japan’s SoftBank, they now expect to blow past that initial goal.

But this story isn’t just about silicon chips and corporate deals; it’s about people. OpenAI believes that building and running these new Stargate AI data centres will create over 100,000 jobs.

That job creation presents real opportunities for families across the country from construction crews pouring the concrete, to specialised electricians wiring up racks of servers, and the full-time technicians who will keep these digital brains running day and night.

Denmark Is Fighting AI by Giving Citizens Copyright to Their Own Faces

Your image, your voice, and your essence as a human being could be gobbled up and regurgitated by AI. The clock is ticking: soon, your control over your image and representation could be completely out of your hands.

To tip the scales back in favor of those who wish to remain in firm control of their image, Denmark has put forth a proposal that would give every one of its citizens the legal ground to go after someone who uses their image without their consent.

This specifically covers deepfakes, those videos of a person’s face or body that have been digitally altered so they appear to be someone else.

The proposal would amend the nation’s copyright laws so that everyone owns the rights to their own face, their own voice, and their own body. Current laws aren’t quite up to snuff when it comes to protecting people from having their likenesses twisted and contorted.

AI in Wyoming may soon use more electricity than state’s human residents

On Monday, Mayor Patrick Collins of Cheyenne, Wyoming, announced plans for an AI data center that would consume more electricity than all homes in the state combined, according to The Associated Press. The facility, a joint venture between energy infrastructure company Tallgrass and AI data center developer Crusoe, would start at 1.8 gigawatts and scale up to 10 gigawatts of power use.

The project’s energy demands are difficult to overstate for Wyoming, the least populous US state. The initial 1.8-gigawatt phase, consuming 15.8 terawatt-hours (TWh) annually, is more than five times the electricity used by every household in the state combined. That figure represents 91 percent of the 17.3 TWh currently consumed by all of Wyoming’s residential, commercial, and industrial sectors combined. At its full 10-gigawatt capacity, the proposed data center would consume 87.6 TWh of electricity annually—double the 43.2 TWh the entire state currently generates.
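Those terawatt-hour figures are straight power-to-energy conversions (power multiplied by the hours in a year), which a quick sketch confirms:

```python
# Annual energy (TWh) for a facility drawing constant power (GW).
# energy = power * time; a non-leap year has 8,760 hours.

HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_twh(gigawatts: float) -> float:
    """TWh consumed per year at a constant draw of `gigawatts`."""
    return gigawatts * HOURS_PER_YEAR / 1_000  # GWh to TWh

print(round(annual_twh(1.8), 1))  # 15.8 TWh: the initial phase
print(round(annual_twh(10), 1))   # 87.6 TWh: full 10 GW capacity
```

Both results match the article's figures, so the reported numbers assume the facility runs flat-out year-round.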

Because drawing this much power from the public grid is untenable, the project will rely on its own dedicated gas generation and renewable energy sources, according to Collins and company officials. However, this massive local demand for electricity—even if self-generated—represents a fundamental shift for a state that currently sends nearly 60 percent of its generated power to other states.

Wyoming Governor Mark Gordon praised the project’s potential benefits for the state’s natural gas industry in a company statement. “This is exciting news for Wyoming and for Wyoming natural gas producers,” Gordon said.

The proposed site for the new data center sits several miles south of Cheyenne near the Colorado border off US Route 85. While state and local regulators still need to approve the project, Collins expressed optimism about a quick start. “I believe their plans are to go sooner rather than later,” he said.

Real Life Drama: Tucker County Residents v. AI Data Center Behemoth

As a child, Nikki Forrester dreamed of living in a cabin in the woods surrounded by mountains, trees, water and the outdoor opportunities that came with the natural land. In 2022 — four years after earning her graduate degree and moving to Tucker County from Pittsburgh — Forrester and her partner made that dream a reality when they bought two acres of land near Davis, West Virginia to build a home.

Forrester has thrived in the small mountain town known for its mountain biking, hiking, stargazing, waterfalls and natural scenery. She and her partner moved into their new home in February. Hiking and biking trails are right outside her front door. In the winter, she said, snow piles up making the nearby mountains look like “heaven on Earth.”

It’s been quite literally a dream come true.

“I feel like I’ve never felt at home so much before. I love being in the woods. I love this community. It’s super cheesy, but this was my childhood dream and now it’s actually come true,” Forrester said. “It felt so good to set down roots here. We knew Davis was where we wanted to start our future.”

But in March, one small public notice posted in the Parsons Advocate — noticed by resident Pamela Moe, who scrambled to find answers after seeing it — changed Forrester’s assumptions about that future.

A Virginia-based company, Fundamental Data, was applying for an air permit from the West Virginia Department of Environmental Protection for what it called the “Ridgeline Facility.” The company’s heavily redacted application showed plans to build an off-the-grid natural gas power plant between Thomas and Davis. That power plant will likely be designed to power an enormous data center just a mile out from Tucker County’s most populous and tourist-attracting areas.

Earlier this month, representatives for Fundamental Data — who did not respond to requests for comment on this article — told the Wall Street Journal that the facility could be “among the largest data center campuses in the world,” spanning 10,000 acres across Tucker and Grant counties if fully realized.

Now, Forrester said, she and her neighbors are in the middle of what feels like a “fight for [their] lives” as they attempt to learn more about the vague development plans and fight against “big data.”

Her images of the future — skiing on white snow, hiking through waterfalls, looking up at clear and starry nights all with one-of-a-kind mountain scenery below — now exist in the shadows of a looming natural gas plant, an industrial complex and the contaminants that could come with them. The fresh, mountain air that surrounds her home and community could be infiltrated by tons of nitrogen oxide (gases that contribute to smog), carbon monoxide, particulate matter and other volatile organic compounds, per the company’s air permit application.

“Honestly, I feel like if this happens, it will destroy this place. People come here because it’s remote, it’s small, it’s surrounded by nature. If you have a giant power plant coughing up smoke and noise pollution and light pollution, it puts all of those things in jeopardy,” Forrester said. “It would honestly make me question whether I would want to live here anymore, because I do love the landscapes here so much, but they would be fundamentally altered and, I think, irreparably harmed if this actually comes to be.”

AI: Over-Promise + Under-Perform = Disillusionment and Blowback

The most self-defeating way to launch a new product is to over-promise its wonderfulness as it woefully under-performs these hype-heightened expectations, which brings us to AI and how it is following this script so perfectly that it’s like it was, well, programmed to do so.

You see why this is self-defeating: Over-Promise + Under-Perform = Disillusionment and disillusionment generates blowback, a disgusted rejection of the product, the overblown hype and those who pumped the hype 24/7 for their own benefit.

“We’re so close to AGI (artificial general intelligence) we can smell it.” Uh, yeah, sure, right. Meanwhile, back in Reality(tm), woeful under-performance to the point of either malice or stupidity (or maybe both) is the order of the day.

1. ‘Catastrophic’: AI Agent Goes Rogue, Wipes Out Company’s Entire Database.
“Replit’s AI agent even issued an apology, explaining to Lemkin: ‘This was a catastrophic failure on my part. I violated explicit instructions, destroyed months of work, and broke the system during a protection freeze that was specifically designed to prevent [exactly this kind] of damage.’”

2. ‘Serious mistake’: B.C. Supreme Court criticizes lawyer who cited fake cases generated by ChatGPT.
“The central issue arose from the father’s counsel, Chong Ke, using AI-generated non-existent case citations in her legal filings. Ke admitted to the mistake, highlighting her reliance on ChatGPT and her subsequent failure to verify the authenticity of the generated cases, which she described as a ‘serious mistake.’

Ke faced consequences for her actions under the Supreme Court Family Rules, which allows for personal liability for costs due to conduct causing unnecessary legal expenses. The court ordered Ke to personally bear the costs incurred due to her conduct, marking a clear warning against the careless use of AI tools in legal matters.”

3. An AI chatbot pushed a teen to kill himself, a lawsuit against its creator alleges.
“Garcia’s attorneys allege the company engineered a highly addictive and dangerous product targeted specifically to kids, ‘actively exploiting and abusing those children as a matter of product design,’ and pulling Sewell into an emotionally and sexually abusive relationship that led to his suicide.”
