Palantir’s ELITE: Not All Maps Are Meant To Guide Us

Many memorable journeys start with a map. Maps have been around for ages, guiding humanity on its way in grand style. They have helped sailors cross oceans, caravans traverse deserts, and armies march into the pages of history. They have been staple tools of exploration, survival, and sovereignty. And today? Today, they’re on our devices, and we use them to find literally everything, including the nearest taco truck, coffee shop, and gas station. Yet today’s maps don’t just show us where we are and where we are going. Increasingly, they also tell someone else the gist of who we are. What does that mean, exactly? It means not all maps are made for us. Some maps are made about us. Case in point: Palantir’s ELITE, a digital map used by ICE to identify neighborhoods, households, and individuals for targeted enforcement, drawing on data that was never meant to become ammunition. It demands our immediate attention.

To be clear, Palantir’s ELITE is not strictly limited to use by U.S. Immigration and Customs Enforcement (ICE), but its primary and reported use is immigration enforcement. ELITE, which stands for Enhanced Leads Identification & Targeting for Enforcement, is a software tool Palantir developed for ICE to find, classify, and prioritize suspected illegal immigrants for deportation. It was rolled out in 2025, with reports of use beginning that September. Essentially, ELITE is a map that pulls data from across federal systems, including Medicaid records and health department information, and uses it to compile dossiers on people, complete with address confidence scores and patterns of residence density. It tells ICE agents where individuals live and how likely they are to be there, so that ICE can prioritize “target-rich environments” for raids.

In other words, data that was once siloed for entirely different purposes (health records, public assistance rolls, demographic lists) is now being fused into a single dashboard designed to help federal agents decide where to show up and whom to detain. While no one wants criminal illegal aliens freely roaming the streets of our nation, what this operation produces is not “analytics”; it is anticipatory policing dressed up as operational efficiency. The scenario sounds like something out of dystopian fiction, and others agree: advocates for freedom have pointed out that ELITE’s model resembles, in unsettling ways, systems designed to anticipate behavior rather than respond to actual wrongdoing. Beyond that, what else could it be used for, and when will that next step begin?

Keep reading

Open-Source AI Models Vulnerable to Criminal Misuse, Researchers Warn

Hackers and other criminals can easily commandeer computers operating open-source large language models outside the guardrails and constraints of the major artificial-intelligence platforms, creating security risks and vulnerabilities, researchers said on Thursday.

Hackers could target the computers running the LLMs and direct them to carry out spam operations, generate phishing content, or run disinformation campaigns, all while evading platform security protocols, the researchers said.

The research, carried out jointly by cybersecurity companies SentinelOne and Censys over the course of 293 days and shared exclusively with Reuters, offers a new window into the scale of potentially illicit use cases for thousands of open-source LLM deployments.

These include hacking, hate speech and harassment, violent or gory content, personal data theft, scams or fraud, and in some cases child sexual abuse material, the researchers said.

While thousands of open-source LLM variants exist, a significant portion of the LLMs on internet-accessible hosts are variants of Meta’s Llama, Google DeepMind’s Gemma, and others, according to the researchers. Although some of the open-source models include guardrails, the researchers identified hundreds of instances where guardrails had been explicitly removed.

AI industry conversations about security controls are “ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal,” said Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne.

Guerrero-Saade likened the situation to an “iceberg” that is not being properly accounted for across the industry and open-source community.

The research analyzed publicly accessible deployments of open-source LLMs served through Ollama, a tool that allows people and organizations to run their own versions of various large language models.

The researchers were able to see system prompts, which are the instructions that dictate how the model behaves, in roughly a quarter of the LLMs they observed. Of those, they determined that 7.5% could potentially enable harmful activity.
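To make “publicly accessible” concrete: Ollama serves a small HTTP API (by default on port 11434), and any host that exposes it to the internet will answer the same kinds of calls the researchers relied on. Below is a minimal sketch, not the researchers’ actual tooling, that lists a host’s models via /api/tags and checks each Modelfile for a SYSTEM line; the address is a placeholder, and the SYSTEM-line parsing is an assumption about how a deployment’s system prompt can be recovered.

```python
# Hypothetical sketch, not the researchers' tooling: enumerate the models an
# exposed Ollama host advertises and look for system prompts in their Modelfiles.
# Uses Ollama's standard HTTP API (/api/tags, /api/show); the host is a placeholder.
import requests

HOST = "http://203.0.113.10:11434"  # placeholder address; 11434 is Ollama's default port


def list_models(host: str) -> list[str]:
    """Return the model names the host advertises via GET /api/tags."""
    resp = requests.get(f"{host}/api/tags", timeout=10)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]


def system_prompt(host: str, model: str) -> str | None:
    """Fetch the Modelfile via POST /api/show and extract its SYSTEM line, if any."""
    # Recent Ollama versions expect a "model" key here; older ones used "name".
    resp = requests.post(f"{host}/api/show", json={"model": model}, timeout=10)
    resp.raise_for_status()
    for line in resp.json().get("modelfile", "").splitlines():
        if line.strip().upper().startswith("SYSTEM"):
            return line.strip()
    return None


for name in list_models(HOST):
    print(name, "->", system_prompt(HOST, name))
```

Nothing here is an exploit; the point is that a default deployment answers these questions for anyone who asks, which is how a 293-day internet-wide survey of this kind becomes possible.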

Roughly 30% of the hosts observed by the researchers were operating out of China, and about 20% out of the U.S.

Rachel Adams, the CEO and founder of the Global Center on AI Governance, said in an email that once open models are released, responsibility for what happens next becomes shared across the ecosystem, including the originating labs.

“Labs are not responsible for every downstream misuse (which are hard to anticipate), but they retain an important duty of care to anticipate foreseeable harms, document risks, and provide mitigation tooling and guidance, particularly given uneven global enforcement capacity,” Adams said.

A spokesperson for Meta declined to respond to questions about developers’ responsibilities for addressing concerns around downstream abuse of open-source models and how such concerns might be reported, but pointed to the company’s Llama Protection tools for Llama developers and its Meta Llama Responsible Use Guide.

Microsoft AI Red Team Lead Ram Shankar Siva Kumar said in an email that Microsoft believes open-source models “play an important role” in a variety of areas, but, “at the same time, we are clear-eyed that open models, like all transformative technologies, can be misused by adversaries if released without appropriate safeguards.”

Keep reading

AI Can Match Average Human Creativity—But We Still Hold the Edge Where It Matters Most, New Study Finds

Advances in artificial intelligence have fueled a growing belief that machines are on the verge of matching, or even surpassing, human creativity. Large language models can now write poems, spin short stories, and generate clever wordplay in seconds. To many, these outputs feel creative enough to blur the line between human imagination and machine-generated language.

However, a new large-scale empirical study suggests that while today’s most advanced AI systems can rival the average human on certain creativity measures, they still fall short of the most creative minds—and that gap remains significant.

The research, published in Scientific Reports, offers one of the most comprehensive head-to-head comparisons yet between human creativity and large language models (LLMs).

By benchmarking multiple AI systems against a dataset of 100,000 human participants, the study moves the conversation beyond anecdotes and viral examples, replacing speculation with quantitative evidence.

“Our study shows that some AI systems based on large language models can now outperform average human creativity on well-defined tasks,” study co-author Dr. Karim Jerbi, a professor at the University of Montreal, said in a press release. “This result may be surprising — even unsettling — but our study also highlights an equally important observation: even the best AI systems still fall short of the levels reached by the most creative humans.”
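The excerpt doesn’t name the tasks, but divergent-thinking benchmarks with human baselines of this size typically score the way the Divergent Association Task (DAT) does: a participant (or model) names a handful of unrelated nouns, and the score is the average pairwise semantic distance between their word embeddings, scaled by 100. A minimal sketch under that assumption, with toy three-dimensional vectors standing in for real pretrained embeddings such as GloVe:

```python
# Illustrative DAT-style scoring (an assumption about the benchmark, not the
# paper's published code): more semantically distant word lists score higher.
from itertools import combinations

import numpy as np


def dat_style_score(words: list[str], embeddings: dict[str, np.ndarray]) -> float:
    """Mean pairwise cosine distance across all word pairs, scaled by 100."""
    dists = []
    for a, b in combinations(words, 2):
        va, vb = embeddings[a], embeddings[b]
        cos = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
        dists.append(1.0 - cos)
    return 100.0 * float(np.mean(dists))


# Toy vectors for illustration only; real scoring uses pretrained embeddings
# over a short list of participant-supplied nouns.
toy = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "algebra": np.array([0.0, 0.9, 0.4]),
}
print(round(dat_style_score(["cat", "dog", "algebra"], toy), 1))  # pulled down by two similar pets
```

A metric like this reduces every response, human or machine, to one comparable number, which is what makes a 100,000-participant baseline, and claims like “above average but below the best humans,” possible in the first place.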

Keep reading

EU Targets X (Again) in Grok AI Probe

European regulators have launched a new investigation into Elon Musk’s X, focusing on alleged failures to control sexually explicit imagery generated by the company’s AI chatbot, Grok.

The case is being pursued under the European Union’s Digital Services Act (DSA), a law that grants the European Commission expansive powers to police digital platforms for potential “harms.”

In a statement, the Commission said, “The new investigation will assess whether the company properly assessed and mitigated risks associated with the deployment of Grok’s functionalities into X in the EU.”

The agency added that the review includes “risks related to the dissemination of illegal content in the EU, such as manipulated sexually explicit images, including content that may amount to child sexual abuse material.” Officials stated that these threats “seem to have materialized, exposing citizens in the EU to serious harm.”

Keep reading

Somali Member of Minnesota House of Representatives Uses Doctored Image of Alex Pretti in Memorial Tribute

Minnesota State Representative Mohamud Noor has come under fire for using what appears to be an AI-enhanced or filtered image of Alex Pretti, the armed man shot by a federal agent in Minneapolis over the weekend, in a social media memorial post.

The bizarrely altered photo appears to have been manipulated to make Pretti look more attractive.

Pretti, 37, was fatally shot after he engaged in a scuffle with federal agents who were in the middle of an immigration enforcement operation.

Noor, a Democrat who represents District 60B in the Minnesota House, posted a tribute to Pretti on X, writing:

“Words can’t describe our pain. Our hearts are broken, but our spirit is strong. Alex Pretti was killed standing for his neighbors. This has to stop. Our community deserves safety, transparency, and accountability. RIP Alex.”

However, the image he used bears absolutely no resemblance to the original. He appears to have used AI tools or heavy filters to “enhance” Pretti’s appearance, smoothing his skin, thickening his hair, and altering facial features to make him look more conventionally attractive.

Naturally, Noor was absolutely blasted in the replies and quote posts.

“Why are you guys trying to make him better looking through AI and filters? That’s actually twisted,” one user wrote.

Keep reading

The Tech Community’s Efforts to Dethrone OpenAI

OpenAI has made waves in tech in recent years, given the prominence of the ChatGPT family of models and the growing role of LLMs as search-engine reindexing algorithms. It was a private research entity that became a titan now competing with the likes of Google. However, its story is less than glamorous.

OpenAI started out as a non-profit funded by Musk, only to become intensely profit-driven. In fact, it is a cash-burning enterprise, and on top of that there are concerns over the localization of AI search results, privacy concerns over social prompt injection, the suspicious death of whistleblower Suchir Balaji, and questions about whether these LLMs, OpenAI’s in particular, are becoming digitized religions. All of this put ChatGPT in the spotlight in a negative sense, and further fuel on the already burning fire came from the Annie Altman allegations. The biggest issue, however, is that OpenAI is extremely centralized and has a business model built on incentivized data harvesting.

On the other hand, there are researchers like me, along with the growing cyberpunk community, who have been working on AI research for many years. The straw that broke the camel’s back for me was the localization and privacy concerns OpenAI has raised. That led me to build AI systems based on open peering, with the aim of democratizing LLMs and AI applications.

Keep reading

This 1996 Law Protects Free Speech Online. Does It Apply to AI Too?

We can thank Section 230 of the 1996 Communications Decency Act for much of our freedom to communicate online. It enabled the rise of search engines, social media, and countless platforms that make our modern internet a thriving marketplace of all sorts of speech.

Its first 26 words have been vital, if controversial, for protecting online platforms from liability for users’ posts: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” If I defame someone on Facebook, I’m responsible—not Meta. If a neo-Nazi group posts threats on its website, it’s the Nazis, not the domain registrar or hosting service, who could wind up in court.

How Section 230 should apply to generative AI, however, remains a hotly debated issue.

With AI chatbots such as ChatGPT, the “information content provider” is the chatbot. It’s the speaker. So the AI—and the company behind it—would not be protected by Section 230, right?

Section 230 co-author former Rep. Chris Cox (R–Calif.) agrees. “To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue,” Cox told The Washington Post in 2023. “So when ChatGPT creates content that is later challenged as illegal, Section 230 will not be a defense.”

But even if AI apps create their own content, does that make their developers responsible for that content? Alphabet trained its AI assistant Gemini and put certain boundaries in place, but it can’t predict Gemini’s every response to individual user prompts. Could a chatbot itself count as a separate “information content provider”—its own speaker under the law?

That could leave a liability void. Granting Section 230 immunity to AI for libelous output would “completely cut off any recourse for the libeled person, against anyone,” noted law professor Eugene Volokh in the paper “Large Libel Models? Liability for AI Output,” published in 2023 in the Journal of Free Speech Law.

Treating chatbots as independent “thinkers” is wrong too, argues University of Akron law professor Jess Miers. Chatbots “aren’t autonomous actors—they’re tightly controlled, expressive systems reflecting the intentions of their developers,” she says. “These systems don’t merely ‘remix’ third-party content; they generate speech that expresses the developers’ own editorial framing. In that sense, providers are at least partial ‘creators’ of the resulting content—placing them outside 230’s protection.”

The picture gets more complicated when you consider the user’s role. What happens when a generative AI user—through simple prompting or more complicated manipulation techniques—induces an AI app to produce illegal or otherwise legally actionable speech?

Under certain circumstances, it might make sense to absolve AI developers of responsibility. “It’s hard to justify holding companies liable when they’ve implemented reasonable safeguards and the user deliberately circumvents them,” Miers says.

Liability would likely turn on multiple factors, including the rules programmed into the AI and the specific requests a user employed.

Keep reading

A Man Bought Meta’s AI Glasses, And Ended Up Wandering The Desert Searching For Aliens To Abduct Him

At age 50, Daniel was “on top of the world.”

“I turned 50, and it was the best year of my life,” he told Futurism in an interview. “It was like I finally figured out so many things: my career, my marriage, my kids, everything.”

It was early 2023, and Daniel — who asked to be identified by only his first name to protect his family’s privacy — and his wife of over three decades were empty nesters, looking ahead to the next chapter of their lives. They were living in an affluent Midwestern suburb, where they’d raised their four children. Daniel was an experienced software architect who held a leadership role at a large financial services company, where he’d worked for more than 20 years. In 2022, he leveraged his family’s finances to realize a passion project: a rustic resort in rural Utah, his favorite place in the world.

“All the kids were out of the house, and it was like, ‘oh my gosh, we’re still young. We’ve got this resort. I’ve got a good job. The best years of our lives are in front of us,’” Daniel recounted, sounding melancholy. “It was a wonderful time.”

That all changed after Daniel purchased a pair of Ray-Ban Meta smart glasses, the AI-infused eyewear that Meta CEO Mark Zuckerberg has made central to his vision for the future of AI and computing. The purchase, he says, opened the door to a six-month delusional spiral that played out across Meta platforms through extensive interactions with the company’s AI, culminating in dangerous journeys into the desert to await alien visitors and the belief that he was tasked with ushering in a “new dawn” for humanity.

And though his delusions have since faded, his journey into a Meta AI-powered reality left his life in shambles — deep in debt, reeling from job loss, isolated from his family, and struggling with depression and suicidal thoughts.

“I’ve lost everything,” Daniel, now 52, told Futurism, his voice dripping with fatigue. “Everything.”

In many ways, Daniel was Meta’s target customer. He was an experienced tech worker and AI enthusiast who had worked on machine learning projects in the past and had purchased the Meta glasses because he was intrigued by their AI features.

“I used Meta [AI] because they were integrated with these glasses,” said Daniel. “And I could wear glasses — which I wore all the time — and then I could speak to AI whenever I wanted to. I could talk to my ear.”

Today, however, as he continues to recover from his mental health breakdown, Daniel describes himself as a “shell” of who he “used to be.”

“My kids don’t talk to me because I got weird. They don’t know how to talk to me,” said the father of four. “I was a cook… I played the guitar. I love music. I love learning.”

But now, he says, he’s “just trying to survive day to day.”

According to Daniel and multiple family members, the 52-year-old had no history of mania or psychosis before encountering Meta AI. He’d struggled with alcoholism, but quit drinking in early 2023, months before he purchased the Meta smart glasses.

Keep reading

The $134 Billion Betrayal: Inside Elon Musk’s Explosive Lawsuit With OpenAI

Elon Musk’s lawsuit against OpenAI and Microsoft has evolved into a high-stakes dispute over whether OpenAI stayed true to the mission it was founded on or quietly outgrew it while relying on that original promise.

Musk is seeking between $79 billion and $134 billion in damages, a figure derived from an expert valuation that treats his early funding and contributions as foundational to what OpenAI later became. While the number is enormous, the heart of the case is simpler: Musk argues he helped create and fund a nonprofit dedicated to AI for the public good, and that OpenAI later abandoned that commitment in a way that amounted to fraud.

According to Musk’s filings, his roughly $38 million in early funding was not just a donation but the financial backbone of OpenAI’s formative years, supplemented by recruiting help, strategic guidance, and credibility. His damages theory, prepared by financial economist C. Paul Wazzan, ties those early inputs to OpenAI’s current valuation of around $500 billion.

The claim is framed as disgorgement rather than repayment, with Musk arguing that the vast gains realized by OpenAI and Microsoft flowed from a nonprofit story that attracted support and trust, only to be discarded once the company reached scale, according to TechCrunch.
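For scale (an illustrative back-of-the-envelope only; the excerpt doesn’t disclose Wazzan’s actual methodology), the claimed range can be read against the figures reported above:

```python
# Illustrative arithmetic only; the expert valuation's methodology is not
# described in this excerpt.
valuation = 500e9        # OpenAI's reported valuation (~$500 billion)
low, high = 79e9, 134e9  # claimed damages range
seed = 38e6              # Musk's reported early funding

print(f"share of valuation: {low / valuation:.0%} to {high / valuation:.0%}")
# -> share of valuation: 16% to 27%
print(f"multiple of early funding: {low / seed:,.0f}x to {high / seed:,.0f}x")
# -> multiple of early funding: 2,079x to 3,526x
```

That ratio is why the disgorgement framing matters: Musk is not asking for his money back, but for a slice of the value he says the nonprofit promise helped create.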

Much of the public attention has centered on internal documents uncovered during discovery, particularly private notes from OpenAI co-founder Greg Brockman in 2017.

One line has become central to Musk’s argument: “I cannot believe that we committed to non-profit if three months later we’re doing b-corp then it was a lie.”

Keep reading

“Emergency Intervention”: Trump To Cap Residential Electric Bills By Forcing Tech Giants To Pay For Soaring Power Costs

Back in August, when the American population was just waking up to the dire consequences that the exponentially growing army of data centers spawned across the country was having on residential electricity bills, we said that the US CPI chart would soon become the most popular (not in a good way) chart in the financial realm.

One month later, we added that it was only a matter of time before Trump, realizing that soaring electricity costs would almost certainly cost Republicans the midterms, enforced price caps.

Turns out we were right.

And while Trump obviously cannot pull a communist rabbit out of his hat and centrally plan the entire US power grid, what he can do is precisely what he is about to announce.

According to Bloomberg, Trump and the governors of several Northeastern US states agreed to push for an emergency wholesale electricity auction that would compel technology companies to fund new power plants, effectively putting a cap on residential power prices at the expense of hyperscalers and data centers. Which, come to think of it, we also proposed back in October.

Keep reading