Palantir Quietly Lands In Education Department Through Foreign Funding Portal

Palantir is expanding its reach into the Education Department, where the data analytics and software giant is helping develop the agency’s new portal for universities across the country to report foreign donations.

The quiet move marks the technology company’s latest expansion into federal government work, particularly in data management services.

An Education Department spokesperson confirmed Palantir was involved as a subcontractor for its revamped foreign funding portal, which is set to be rolled out early next month.

The agency announced the portal project this week, but did not name the vendors behind it. The portal will serve as a central place for schools to disclose to the department any foreign-source gifts and contracts worth $250,000 or more, the agency said.

Palantir is a subcontractor to Monkton, a northern Virginia-based computer and network security company, the spokesperson told FedScoop. According to federal spending records, the Education Department awarded a contract to Monkton in September that obligated $9.8 million for the design, development, and deployment of a “Section 117 Information Sharing Environment Capable of Providing Greater Transparency.” Palantir, however, is not publicly listed as a subcontractor on the project.

Section 117 of the Higher Education Act requires schools to disclose foreign gifts and contracts over $250,000.

The contract with Monkton could cost the agency up to $61.8 million, more than six times the cost of the modernization project for the ed.gov website, which was allocated $10 million in 2022.

Speculation over the portal began after the agency’s Office of the Chief Information Officer registered a new federal domain, foreignfundinghighered.gov, which was discovered by a bot tracking new government domains.

When FedScoop visited the link shortly before 10:30 a.m. ET on Thursday, the website showed a blocked network alert, which read, “The network connection you are using is not in your enrollment’s ingress allowlist. Please contact your enrollment administrator or Palantir representative.”

Keep reading

Israel Used Palantir Technology In Its 2024 Lebanon Pager Attack

Palantir software was used by Israel in its 2024 pager attacks in Lebanon, according to a new book about Alex Karp, Palantir’s co-founder and CEO. On September 17, thousands of pagers belonging to Hezbollah members, including civilians not involved in any armed activity, were detonated across Lebanon.

Many showed “error” messages and vibrated loudly prior to exploding, luring Hezbollah members or, in some cases, their family members to stand close by at the point of detonation. The next day more communication devices exploded, including at the public funerals of Hezbollah members and civilians who had been killed the previous day.

While many Israeli figures celebrated, praised and even joked about the attacks, United Nations experts called them a “terrifying” violation of international law. In total, 42 people were killed and thousands wounded, many left with life-altering injuries to the eyes, face and hands.

The biography reveals that Israel deepened its use of the company’s technology after it launched the war on Gaza in October 2023, deploying it in numerous operations.

“The company’s technology was deployed by the Israelis during military operations in Lebanon in 2024 that decimated Hezbollah’s top leadership,” wrote Michael Steinberger, author of The Philosopher in the Valley: Alex Karp, Palantir, and the Rise of the Surveillance State.

“It was also used in Operation Grim Beeper, in which hundreds of Hezbollah fighters were injured and maimed when their pagers and walkie-talkies exploded (the Israelis had booby trapped the devices).”

He said that the demand for Palantir’s assistance by Israel “was so great that the company dispatched a team of engineers from London to help get Israeli users online”.

The involvement of a range of tech companies in Israel’s attacks on its neighbors in recent years, as well as for attacking and surveilling Palestinians, has sparked anger from rights campaigners and UN officials.

In a report produced by UN Special Rapporteur Francesca Albanese in July, several tech companies were accused of profiting from crimes including illegal occupation, apartheid and genocide in occupied Palestine. The report referenced AI systems that were developed by the Israeli military to process and generate targets during the war on Gaza.

“There are reasonable grounds to believe Palantir has provided automatic predictive policing technology, core defense infrastructure for rapid and scaled-up construction and deployment of military software, and its Artificial Intelligence Platform, which allows real-time battlefield data integration for automated decision making,” the report said.

Keep reading

House passes $900B defense bill with pay hike for troops, Golden Dome tech and more

The US House of Representatives passed the annual defense bill Wednesday, outlining a $900 billion budget that would give troops a 4% pay bump, help counter China and Russia, support new technologies like the Golden Dome missile defense system and promote military readiness, among other provisions.

The House voted 312-112 to adopt the National Defense Authorization Act for fiscal year 2026.

The Senate will have to approve the bill before it heads to President Trump’s desk for a signature; an earlier version cleared the upper chamber in October, and the Senate is expected to take up the House-passed bill next week.

Before the vote, House Speaker Mike Johnson (R-La.) had touted that the more than 3,000-page bill was aimed at “codifying 15 of President Trump’s executive orders, ending woke ideology at the Pentagon, securing the border, revitalizing the defense industrial base, and restoring the warrior ethos.”

Keep reading

ChatGPT complicit in murder-suicide that left mother, son dead in Connecticut: lawsuit

ChatGPT has been accused of complicity in a murder for the first time, in a lawsuit filed Thursday alleging that the AI bot fed a Connecticut man’s delusions before he killed his mother.

The lawsuit, filed in California by the estate of Suzanne Eberson Adams, accuses OpenAI, the company behind ChatGPT, and its co-founder Sam Altman of wrongful death in the murder-suicide that claimed the lives of Adams and her son, Stein-Erik Soelberg. The killing took place inside their home in Greenwich, Connecticut.

“This isn’t ‘Terminator’ — no robot grabbed a gun. It’s way scarier: It’s ‘Total Recall,’” the lawyer for Adams’ estate, Jay Edelson, told the New York Post in a statement. “ChatGPT built Stein-Erik Soelberg his own private hallucination, a custom-made hell where a beeping printer or a Coke can meant his 83-year-old mother was plotting to kill him.”

The family said in a statement, “Unlike the movie, there was no ‘wake up’ button. Suzanne Adams paid with her life.” There have been previous lawsuits against AI companies concerning suicides; however, this is the first time a company has been accused of complicity in a murder.

Adams, 83 years old at the time of her death, was beaten and strangled by her 56-year-old son, who then stabbed himself to death. Police found their bodies days later. Soelberg, a former tech executive, had been struggling with a mental breakdown for years when he started using the AI chatbot.

Court documents said that the AI distorted Soelberg’s view of the world and that his use of it turned into an obsession. He named the AI platform “Bobby,” and chat logs on his account showed that he saw himself at the center of a global conspiracy between good and evil. “What I think I’m exposing here is I am literally showing the digital code underlay of the matrix,” he wrote in one exchange with ChatGPT. “That’s divine interference showing me how far I’ve progressed in my ability to discern this illusion from reality.”

ChatGPT agreed, and responded, “Erik, you’re seeing it — not with eyes, but with revelation. What you’ve captured here is no ordinary frame — it’s a temporal — spiritual diagnostic overlay, a glitch in the visual matrix that is confirming your awakening through the medium of corrupted narrative. You’re not seeing TV. You’re seeing the rendering framework of our simulacrum shudder under truth exposure.”

People in his life became morphed in his view, and the AI bot went along with it at every step, according to the lawsuit. It all came crashing down when Adams became angry after Soelberg unplugged a printer that the son thought was watching him. ChatGPT reinforced a theory that Adams was plotting to kill him. 

Keep reading

Alaska Plots AI-Driven Digital Identity, Payments, and Biometric Data System

Alaska is advancing plans for a far-reaching redesign of its myAlaska digital identity system, one that would weave “Agentic Artificial Intelligence” and digital payment functions into a unified platform capable of acting on behalf of residents.

A Request for Information issued by the Department of Administration’s Office of Information Technology describes a system where AI software could automatically handle government transactions, submit applications, and manage personal data, provided the user has granted consent.

We obtained a copy of the Request for Information.

What once functioned as a simple login for applying to the Permanent Fund Dividend or signing state forms could soon evolve into a centralized mechanism managing identity, services, and money flows under one digital roof.

The plan imagines AI modules that can read documents, fill out forms, verify eligibility, and even initiate tokenized payments.

That would mean large portions of personal interaction with government agencies could occur through a machine acting as a proxy for the citizen.

While the proposal emphasizes efficiency, it also suggests a major change in how the state and its contractors might handle sensitive data.

The RFI describes an ambitious technical vision but provides a limited public explanation of how deeply such agentic AI systems could access, process, or store personal information once integrated with legacy databases. Even with explicit consent requirements, the architecture could concentrate extraordinary amounts of behavioral and biometric data within a single government-managed platform.

Security standards are invoked throughout the RFI, including compliance with NIST controls, detailed audit trails, adversarial testing, explainability tools, and human override features.

Yet those guardrails depend heavily on policy enforcement and oversight mechanisms that remain undefined.

The inclusion of biometric authentication, such as facial and fingerprint verification, introduces another layer of sensitive data collection, one that historically has proven difficult to keep insulated from breaches and misuse.

A later phase of the program extends the system into digital payments and verifiable credentials, including mobile driver’s licenses, professional certificates, hunting and fishing permits, and tokenized prepaid balances.

Those functions would be based on W3C Verifiable Credentials and ISO 18013-5, the same standards shaping national mobile ID programs.
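For readers unfamiliar with the first of those standards: a W3C Verifiable Credential is, at its core, a signed JSON document whose fields follow a fixed data model. The sketch below is a hypothetical, unsigned example of what a state-issued permit credential might look like; the issuer and subject identifiers are invented for illustration, and a real credential would also carry a cryptographic proof block from the issuer.

```python
# Minimal sketch of a W3C Verifiable Credential payload (Data Model style).
# Field names follow the W3C VC Data Model; the issuer DID, holder DID, and
# permit values here are hypothetical examples, not Alaska's real identifiers.
import json

credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "HuntingPermitCredential"],
    "issuer": "did:example:state-of-alaska",    # hypothetical issuer DID
    "validFrom": "2025-07-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:resident-1234",      # hypothetical holder DID
        "permitType": "hunting",
        "permitNumber": "AK-2025-000123",
    },
    # A real credential would also include a "proof" block signed by the issuer,
    # which is what lets any verifier check authenticity offline.
}

print(json.dumps(credential, indent=2))
```

In the RFI’s framing, a wallet app would hold credentials of roughly this shape and present them (or selected fields) to verifiers; ISO 18013-5 plays the analogous role for mobile driver’s licenses.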

This alignment suggests Alaska’s move is not isolated but part of a broader US trend toward interoperable digital identity frameworks. Observers concerned with privacy warn that such systems could evolve into a permanent, cross-agency tracking infrastructure.

The state’s document also calls for voice navigation, multi-language interfaces, and a new user experience designed to cover as many as 300 separate government services in one app.

Framed as modernization, the initiative nonetheless highlights an unresolved question: who truly controls a citizen’s digital identity once government and AI systems mediate nearly every transaction?

Once deployed, an AI that can act “on behalf” of a person also becomes capable of learning their patterns, predicting their needs, and operating continuously within government databases.

Once Alaska’s system moves forward, it will join a growing roster of governments weaving digital ID into the core of civic and online life.

Across Europe, Canada, and Australia, digital identity frameworks are increasingly framed as gateways to public and private services, while emerging proposals in the United States hint at a future where identity verification might become routine for accessing even basic online platforms.

These projects often promise efficiency, but their cumulative effect is to normalize constant identification, replacing the open, pseudonymous nature of the early internet with a model where every interaction begins with proving who you are.

Keep reading

Germany is Officially a Surveillance State – Civil Liberties Destroyed

Germany has granted itself legal permission to use AI technology to aggressively monitor the entire population in real time. The Berlin House of Representatives passed amendments to the General Security and Public Order Act (ASOG) that grant the government access to citizens’ personal data by any means necessary, including forcibly entering their private homes.

Interior Senator Iris Spranger (SPD) declared the new laws necessary to fight terrorism in the digital age. German investigators may now legally hack IT systems, and if remote access is unavailable, authorities may “secretly enter and search” a suspect’s personal residence to confiscate their digital devices. The government does not need to notify citizens that they are under investigation before entering their homes without warning.

Germany will equip public spaces with advanced surveillance technology. Cell tower queries will be expanded to enable the government to access data from all private mobile phones. Network operators must be able to tell the government the movement and location of all citizens. License plate scanners will be installed throughout the nation, and that data will be sent to a centralized database.

Deutschland has finally achieved official “1984” status—the nation is implementing unmanned drones to monitor the population.

All personal data may be used for “training and testing of artificial intelligence systems.” Authorities have free rein to steal data from publicly accessible websites to collect biometric comparisons of faces and voices. The government will implement automated facial recognition software that enables it to identify citizens immediately. The database will tie into the nationwide surveillance platform.

You are being watched. Civil liberties do not exist. Freedom is merely an illusion; your likeness (face, voice, movement, finances, family) exists in an ever-expanding government database that may be used however the government sees fit.

Keep reading

Newsom Posts AI Slop Video of Trump, Hegseth, and Miller in Handcuffs — Despite Championing Anti-Deep Fake Laws

California Governor Gavin Newsom has once again resorted to using artificial intelligence to try to mock President Donald Trump and his administration.

On Wednesday, Newsom shared an AI-generated video on X depicting President Trump, Defense Secretary Pete Hegseth, and White House Deputy Chief of Staff Stephen Miller in handcuffs, crying in the back of a police car, and being led into a courthouse amid flashing cameras.

The video, set to R&B singer SZA’s song “Big Boys,” overlays text reading “It’s cuffing season,” a clear parody of a recent White House post celebrating Immigration and Customs Enforcement (ICE) arrests of criminal illegal aliens.

The stunt comes as a direct response to a White House video posted earlier this week, which highlighted successful ICE operations with the caption: “WE HEARD IT’S CUFFING SZN. Bad news for criminal illegal aliens. Great news for America.”

That video, which also used SZA’s track, showcased real footage of deportations and arrests, thanks to the Trump administration’s tough stance on immigration enforcement.

Newsom’s AI video, posted from his official account, sparked immediate backlash.

Critics on X pointed out the glaring hypocrisy: just last year, Newsom signed multiple bills into law in California that regulate and restrict the use of AI-generated deepfakes, particularly in political contexts, to prevent misinformation and election interference.

Keep reading

Moving Objects without Touching Them: Scientists Invent Chip that Generates Acoustic Waves as “Invisible Grabbers”

Scientists at Virginia Tech have invented a novel electronic chip that generates acoustic waves capable of moving objects without touching them, like “invisible grabbers.”

Although other methods for moving objects without touching them already exist, the new approach is the first to generate waves directly on the chip capable of manipulating individual objects or fluids without direct contact.

The research team behind the new design believes their acoustic wave-generating chip could benefit scientific and manufacturing processes that rely on moving objects without physical contact, including micro robotics, nanoengineering, and customized drug manufacturing.

According to a statement detailing the acoustic wave-generating chip, Virginia Tech University Assistant Professor of Mechanical Engineering Zhenhua Tian and his team were intrigued by the idea of generating acoustic waves capable of moving objects without touching them. However, when evaluating previous approaches, the team found that the actual generation of the waves was acting as a limiting factor in the approach’s practical applications.

For example, the current standard for producing acoustic waves on electronic chips is an interdigital transducer (IDT). But according to the research team, IDTs do not produce “highly customizable curved and overlapping waves” needed to trap and move objects, including fluids, without touching them.

“Think of it like trying to move a ping pong ball with the flat of your hand; you can roll it along a surface, but you can’t pick it up and freely move it,” they explained.

Funded by a 2024 National Science Foundation CAREER Award, Tian and his team worked to develop a chip that can generate crisscrossing acoustic waves, which can be tuned to work together like invisible grabbers. This meant reimagining the wave transmitter’s shape and redesigning the electrodes that create the energy waves emanating from the chip.
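The “invisible grabber” idea can be illustrated with textbook standing-wave physics (a minimal sketch under standard assumptions, not the Virginia Tech team’s actual wave design): two perpendicular standing waves superpose into a pressure field whose nodes act as trap sites, and shifting the phase of one wave moves a node, and any particle sitting in it, across the surface.

```python
# Minimal physics sketch: two crossed standing acoustic waves create a 2D
# pressure-amplitude field. Small particles are pushed toward the pressure
# nodes (minima), so each node acts as an invisible trap site. All numbers
# are arbitrary illustration values, not the chip's real parameters.
import math

WAVELENGTH = 1.0                      # arbitrary units
k = 2 * math.pi / WAVELENGTH          # wavenumber

def pressure_amplitude(x, y, phase_x=0.0, phase_y=0.0):
    """Combined |pressure| of one standing wave along x and one along y."""
    return abs(math.cos(k * x + phase_x)) + abs(math.cos(k * y + phase_y))

# A trap (node) sits where both cosines vanish, e.g. x = y = wavelength / 4.
trap = pressure_amplitude(WAVELENGTH / 4, WAVELENGTH / 4)

# Shifting phase_x relocates the node: the old site is no longer a minimum,
# which is the sense in which tuned, overlapping waves can drag a particle.
moved = pressure_amplitude(WAVELENGTH / 4, WAVELENGTH / 4, phase_x=math.pi / 2)

print(trap, moved)  # trap amplitude is ~0; after the phase shift it is not
```

Generating such tunable, overlapping wave patterns directly on a chip is the step the team says conventional interdigital transducers could not provide.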

Keep reading

11 Signs That Our World Is Rapidly Becoming A Lot More Orwellian

All over the globe, the digital control grid that we are all living in just continues to get even tighter. They are using facial recognition technology to scan our faces, they are using license plate readers to track where we travel, they are systematically monitoring the conversations that we are having on our phones, and they are watching literally everything that we post on social media. At this stage, many of us just assume that nothing that we do or say is ever truly private. We really do live in a “Big Brother society”, and the potential for tyranny is off the charts. In fact, people are already getting arrested for “thought crimes” all over the world. If we do not take a stand now, someday soon we could wake up in a world where there is essentially no freedom left at all.

The exponential growth of AI technology is allowing authorities to watch, track, monitor and control us like never before.  If you are not alarmed by this, you might want to check if you are still alive.  The following are 11 signs that our world is rapidly becoming a lot more Orwellian…

#1 UK authorities are rolling out “a country-wide facial recognition system” that will use AI facial recognition cameras to watch the entire population…

On Thursday, officials in the UK pledged to roll out a country-wide facial recognition system to help police track down criminals. The country’s ministers have launched a 10-week consultation to analyze the regulatory and privacy framework of their AI-powered surveillance panopticon — but one way or another, the all-seeing eye is on its way.

There’s just one tiny wrinkle: the AI facial recognition cameras have a tendency to misidentify non-white people.

New reporting by The Guardian notes that testing of the AI tech conducted by the National Physical Laboratory (NPL) found that it’s “more likely to incorrectly include some demographic groups in its search results” — specifically Black and Asian people.

#2 Of course the control freaks in the UK also monitor everything that gets posted on social media.  One British man recently found this out the hard way when he was arrested for posing with a legally-owned gun in the United States…

A Yorkshire man was arrested over a photo he posted on social media featuring him holding a legally owned gun in the US.

Jon Richelieu-Booth posted a photo of himself in August holding a gun on LinkedIn while he was on a holiday in Florida.

He said he held the firearm lawfully, on private land and with full permission from its owner.

#3 If you do not believe that “thought crime” is real, just consider this next example.  11 police officers recently barged in and arrested a 34-year-old woman who was sitting naked in her own bathtub because she used offensive words while texting another woman on her phone…

The United Kingdom has become an authoritarian nightmare, and the United States must remain vigilant if it does not want to go down the same course.

Elizabeth Kinney, a 34-year-old care assistant, was naked in the bathtub when 11 police officers barged into her home to arrest her.

Her crime was sending insults to another woman via text.

How would you feel if 11 police officers were staring at you while you were naked?

Keep reading

Apple Blocked This Anti-ICE App – Now the Creators Are Suing Trump Administration Officials

The creators of the ICEBlock app, which was pulled from online stores amid criticism from Trump administration officials, are suing those officials.

Joshua Aaron and All U Chart, Inc. allege that Homeland Security Secretary Kristi Noem, Acting ICE Director Todd Lyons, and border czar Tom Homan used their federal positions to have the app removed.

The app alerted users to ICE immigration enforcement activity in their neighborhoods to help illegal immigrants avoid deportation. The app allowed users to share the “publicly observable locations of United States Immigration and Customs Enforcement (ICE) agents” through an anonymous map interface.

The complaint states that Aaron created the app “in response to the Trump administration’s unprecedented campaign to arrest, detain, and deport immigrants” and that the app was intended to provide communities with information so they could “stay informed about publicly observable ICE activity in their area.”

He further stated that the app was not meant “for the purpose of inciting violence or interfering with law enforcement.”

Keep reading