DOJ joins Musk’s AI company in suing Colorado for new ‘DEI’ regulatory law

The U.S. Department of Justice (DOJ) has announced its support for Elon Musk’s artificial intelligence company, xAI, as it sues the state of Colorado over a new law set to go into effect in June that would regulate AI technology.

The company filed a suit against Colorado Attorney General Phil Weiser on Thursday to prevent the enforcement of the law, which would impose new requirements on AI programs to protect users from “algorithmic discrimination” in education, employment, healthcare, housing and financial services, and other sectors.

xAI argued that the statute “severely burdens the development and use of AI” and infringes on First Amendment free speech protections.

“Its provisions prohibit developers of AI systems from producing speech that the State of Colorado dislikes, while compelling them to conform their speech to a State-enforced orthodoxy on controversial topics of great public concern,” the lawsuit reads.

It also claims that the law would force Musk’s company to rework its AI chatbot called Grok, which can be found on the social media platform X, to “conform to a controversial, highly politicized viewpoint” instead of maintaining its objectivity.

The DOJ’s Civil Rights Division announced on Friday that it partnered with the Civil Division to file a motion to intervene in the suit.

Assistant Attorney General for Civil Rights Harmeet Dhillon said in a video posted to social media that the state law in question requires companies to comply with its “crazy, woke, DEI goals,” referring to the “Diversity, Equity and Inclusion” policies instituted widely across left-leaning and liberal organizations.

Keep reading

Apple, Google Caught ‘Helping Users’ Find Apps That Can Deepfake Nude Pictures of Real People, and Worse: Kids Are Vulnerable Too

Tech companies Apple and Google were found to have been leading users — specifically children — to apps that could effectively pornify images through artificial intelligence.

Last Wednesday, 9to5Mac reported the findings from January published by the Tech Transparency Project, which concluded both the Apple App Store and Google Play “are helping users to find apps that create deepfake nude images of women.”

The stores were even found promoting these apps and autocompleting search results for them.

About 40 percent of the top 10 apps appearing in searches for “nudify,” “undress,” and “deepnude” could “render women nude or scantily clad.”

These are apps where users can take two different images — one normal and one sexually explicit — and generate an image where components of both are used, sexualizing the person from the normal one.

9to5Mac reached out to the developer of one of these apps and was told they “had no idea it was capable of producing such extreme content.”

On Thursday, Apple responded to the outlet, saying the apps were not allowed on its store, as its review guidelines prohibit sexual content.

The company said it has removed 15 apps, with others receiving notice they will be removed if they continue to be in violation.

In January, California Democratic Gov. Gavin Newsom went after social media platform X with a similar allegation.

“xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile,” he said.

“I am calling on the Attorney General to immediately investigate the company and hold xAI accountable.”

Keep reading

‘Our candidates are fat Jewish Zionists!’ Trump aide Paul Ingrassia under fire again as leaked group chat reveals slur-filled rant targeting fellow Republican with vile AI-altered photo

A Trump administration official with a history of inflammatory remarks is once again under fire after newly leaked texts appear to show him ranting about fellow Republicans for supporting ‘fat Jewish Zionists’, the Daily Mail can exclusively reveal.

In the explosive messages obtained by the Daily Mail, Paul Ingrassia, the then White House liaison to the Department of Homeland Security, lashes out in a group chat with other Trump aides, titled ‘Team DOJ/DHS/WH’.

The exchange – which took place around April 2025 – escalated after a Justice Department employee sent a text fuming: ‘How the f**k are we losing Wisconsin’, before adding: ‘Republicans are so stupid and lazy.’

The complaint was likely referring to Wisconsin’s 2025 Supreme Court election, in which liberal circuit judge Susan Crawford defeated Catholic, Republican circuit judge and former state attorney general Brad Schimel.

Ingrassia, a 30-year-old attorney and Trump loyalist, then chimed in with an anti-Semitic comment taking aim at Florida congressman Randy Fine, a pro-Israel Republican who had been elected that same day. 

‘It’s because our candidates are fat Jewish Zionist f**ks,’ Ingrassia replied, after sending an altered, unflattering image of Fine speaking at an event. 

The photo showed Fine, a proudly Jewish candidate, addressing a crowd with a grotesquely bulging stomach and baggy jeans.

‘That’s our candidate?!?’ the DOJ staffer replied incredulously, apparently not recognizing the newly elected congressman representing Florida’s 6th congressional district. 

Ingrassia’s lawyer Edward Andrew Paltzik said: ‘These accusations against Mr. Ingrassia are false and fabricated. No such group chat called ‘Team DOJ/DHS/WH’ exists on his phone.’

The Daily Mail has verified that the original photo of Fine was taken at an RNC meeting in Marion County, Florida in January 2025, but had been altered with AI to make him appear significantly larger. 

In a statement to the Daily Mail, Fine said: ‘I hope these text messages are fake. But if they are not, I know President Trump has a zero tolerance for antisemitism and will fire those involved immediately.’ 

Ingrassia’s text tirade, leaked to the Daily Mail, is not the first example of Ingrassia getting embroiled in controversy over his remarks.

Keep reading

It’s an Underreported Story, But Also a Glaring National Security Issue

Artificial intelligence is the next frontier, and there’s a glaring national security matter that must be addressed. We also can’t kick this can down the road, a hallmark habit of Congress. Our enemies are engaged in what could become the 21st-century arms race.

This issue sometimes leans into science fiction, but the entertainment has been excellent. The Terminator franchise and The Matrix both depict humanity’s destruction by artificial intelligence. For our purposes, Skynet nuking us is probably the closest to what could become reality, but right now, it’s about protecting this technology from our enemies. How do we do it when everyone here has the same foundation in this area? We cannot rely on the policy of détente here because of that. 

Sen. Tim Sheehy (R-MT) told Townhall, “From the race for nuclear capability to the space race, the United States has consistently led the world by incentivizing innovation and investing in key infrastructure to achieve dominance. At a time when peer adversaries like China or non-state actors can leverage AI for their own ends, it’s critical we apply that same approach to not just lead but win the AI race in the 21st century.”

No doubt, but Sen. Steve Daines (R-MT) said that legislative guardrails should be codified to ensure advancements are protected, and that we’ve got a long way to go.

“While the U.S. has made critical advances in AI, we still have a long way to go. I will continue to urge my colleagues to think years ahead as we work to remain the global leader in AI research and development, and that starts with ensuring a strong American AI tech stack,” Sen. Daines said. 

Keep reading

Google is Tracking Your Life – Photo Cloud Feeding AI System

There was a time when your photo album sat in a drawer: private, personal, and disconnected from the outside world. Privacy barely exists in the modern world, as personal data becomes the key tool of control, and now Google is taking the next step by turning your memories into fuel for artificial intelligence.

According to a recent report, Google has rolled out a major update to its Photos platform that allows its AI system, Gemini, to scan your entire photo library to build what it calls “Personal Intelligence.” What this means in plain English is that your images are no longer just stored, they are analyzed and integrated into a broader behavioral profile. Google openly admits the system can use actual images of you and your loved ones to generate AI content, eliminating the need for users to manually upload reference photos.

This is not a minor tweak to a photo app but a structural shift in how data is harvested and understood: every image you have ever taken now becomes part of a living model that attempts to understand who you are, who you associate with, where you go, and how you live your life. What was once private has become something continuously processed and categorized.

Keep reading

Zodiac Killer may be tied to Black Dahlia case after ‘code cracked,’ new suspect emerges

The Zodiac Killer’s cryptic messages weren’t just taunts to police — they were a twisted throwback to his first victim, according to an independent investigator who says he’s cracked the code and uncovered new evidence suggesting the infamous serial killer began his career 23 years earlier with the California murder of Elizabeth Short, also known as the Black Dahlia.

Alex Baber, co-founder of Cold Case Consultants of America, said that after nine months of work, he cracked a double-layered encryption that involved transposition and substitution in a 2 by 7 grid.

“Currently, for the first time in history, LAPD detectives approached the family of a suspect to obtain DNA,” he told Fox News Digital in an interview on the sidelines of the Hamptons Whodunit event in East Hampton over the weekend. “That’s never happened for the Black Dahlia case… we got a pretty good feeling that we’re sitting in the right seat.”

The Los Angeles Police Department did not immediately respond to a request for comment. An FBI spokesperson declined to weigh in.

Baber’s finding, that the Zodiac’s “Z13” cipher spells out the name of a prime suspect in the 1947 Black Dahlia murder, was first revealed in the Daily Mail, and he presented it publicly Saturday at the East Hampton Library.

With help from proprietary artificial intelligence software and self-taught knowledge of cryptography, he said, the 13-character message decodes to read “Marvin Merrill.” After further digging into Social Security records, he said he discovered that is an alias for Marvin Margolis, who he said dated Short in the 1940s and had been on the LAPD’s suspect list after her murder and dismemberment. His AI software flagged the connection between the two cases, he added.
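The article describes Baber’s solution only at a high level: a double-layered scheme combining transposition and substitution over a 2-by-7 grid. His actual key and method have not been published, so the following is a purely illustrative sketch of how that general kind of two-layer cipher works, with an invented Caesar shift standing in for the unknown substitution layer and a single pad character filling the grid’s 14th cell:

```python
# Illustrative only: the real Z13 key is not public. This shows the general
# shape of a two-layer cipher (grid transposition + letter substitution).

def transpose_2x7(text):
    """Write 13 letters plus one pad into a 2x7 grid row by row,
    then read the letters back out column by column."""
    padded = (text + "X")[:14]
    rows = [padded[0:7], padded[7:14]]
    return "".join(rows[r][c] for c in range(7) for r in range(2))

def untranspose_2x7(text):
    """Invert the column-wise read: letters alternate between the two rows."""
    rows = ["", ""]
    for i, ch in enumerate(text):
        rows[i % 2] += ch
    return rows[0] + rows[1]

def shift(text, k):
    """Toy substitution layer: a fixed alphabet shift (Caesar cipher)."""
    return "".join(chr((ord(c) - 65 + k) % 26 + 65) for c in text)

plaintext = "MARVINMERRILL"  # 13 letters, matching the reported solution
cipher = shift(transpose_2x7(plaintext), 3)       # transpose, then substitute
recovered = untranspose_2x7(shift(cipher, -3))    # undo both layers in reverse
print(recovered.rstrip("X"))  # MARVINMERRILL
```

The key property this demonstrates is that a transposition scrambles letter positions while a substitution disguises letter identities; breaking a cipher that layers both requires undoing them in the right order, which is why such solutions take months and remain hard to verify.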

Keep reading

“An Occupied Nation”: Whistleblower Says Palantir Has Taken Over The US Government

A former Palantir executive recently confirmed what many have long suspected. In a public statement, the whistleblower said it plainly: Palantir intended to take over the US government, and many of his former colleagues are now installed inside the federal apparatus. He called it an occupied nation. He is not alone. Thirteen former Palantir employees—engineers, managers, and a member of the company’s own privacy team—signed a letter shared with NPR warning that guardrails meant to prevent discrimination, disinformation, and abuse of power have been violated and are being rapidly dismantled.

What Palantir represents is something unprecedented: the convergence of American imperialism, Zionism, technofascism, and surveillance capitalism into a single instrument of control. Understanding how we got here requires looking at the machine Palantir has built, who built it, and what they believe.

Palantir was founded in 2004 by Peter Thiel and Alex Karp. Its first major investor was In-Q-Tel, the CIA’s venture capital arm, which seeded the company with millions and opened the door to every major intelligence and defense agency. The logic was deliberate: The American ruling class recognized decades ago that the state’s coercive power—surveillance, targeting, data harvesting—could be run more effectively and more profitably through private contractors. When a government agency surveils its own citizens, there are hearings, FOIA requests, oversight committees. When a private company does it, it is a trade secret.

That strategy has paid off enormously. Palantir now holds contracts worth over $10 billion with the US Army alone. The Trump regime tapped Palantir to build a master database on American citizens. The Pentagon expanded its Maven Smart System contract by $795 million to deploy AI-powered battlefield intelligence across the empire. In June, the military swore in four tech executives as Army Reserve lieutenant colonels—including Palantir’s CTO—in a program that embeds Silicon Valley directly into military planning. Immigration and Customs Enforcement (ICE) signed a $30 million contract for Palantir’s ImmigrationOS platform, which provides near real-time tracking of people targeted for deportation. Thousands of American police departments use Palantir’s Gotham platform for domestic surveillance.

Abroad, the consequences are even more devastating. Palantir’s AI platforms have been deployed by Israel’s military to systematically prosecute the assault on Gaza. AI targeting systems built on Palantir’s architecture—known by names like Lavender, The Gospel, and Where’s Daddy—have enabled the kind of automated killing that produces mass civilian casualties at scale. Palantir’s own executives have been recorded discussing how bombing densely populated areas generates the movement data their algorithms need to train on. When people flee, make phone calls, search for loved ones, rush to hospitals that no longer exist—that movement becomes fuel for the machine. Palantir’s platforms were deployed in the illegal capture of Venezuelan President Nicolás Maduro, Israel’s terrorist pager attack against Lebanon, and the US carpet bombing of Iran at the behest of Israel—the same campaign that destroyed a girls’ elementary school in Minab.

Keep reading

Palantir’s Draft Push Collides with Washington’s Automatic Registration Machine

In 1777, Thomas Jefferson warned John Adams that a national military draft would rank among the most hated measures imaginable. Colonists had rebelled against British press gangs. That grievance made it into the Declaration of Independence. Nearly 250 years on, a $350 billion data giant echoes the idea. Palantir Technologies, fresh off zero federal taxes on $1.5 billion in U.S. income, just called for universal national service. Timing? Perfect. Or ominous.

The company’s manifesto hit X last Sunday. It boils down 22 points from CEO Alex Karp’s 2025 book, The Technological Republic, co-written with Nicholas W. Zamiska. One line stands out: “National service should be a universal duty. We should, as a society, seriously consider moving away from an all-volunteer force and only fight the next war if everyone shares in the risk and the cost.” (Fortune)

Palantir didn’t invent the draft. America tried it first in the Civil War. Then World War I. World War II. Korea. Vietnam. The last call came December 7, 1972. Jimmy Carter mandated male registration in 1980. Now comes the shift. Starting December 18, 2026, Selective Service goes automatic for men 18 to 26. No forms. No opt-out nudge. Government databases do the work. President Donald Trump’s National Defense Authorization Act locked it in. (Time)

Why now? Compliance dipped. Selective Service says automation streamlines everything, frees staff for readiness. It pulls from Social Security, DMV, student loans, immigration records. Citizens. Immigrants. Undocumented. Dual nationals. Green card holders. All in, within 30 days of turning 18. “This statutory change transfers responsibility for registration from individual men to SSS through integration with federal data sources,” the agency states. (Newsweek)

Palantir stays silent on direct ties. No contract announced for Selective Service. Yet speculation swirls. The firm holds a $10 billion U.S. Army deal for software and analytics. (U.S. Army) Its platforms run Project Maven, the Pentagon’s AI targeting tool. Reports link it to Gaza strike lists for Israel. (Mother Jones) Over half its revenue flows from government. 2026 guidance? $7.18 billion to $7.2 billion, up 70%.

And taxes. Zero federal in 2025, thanks to the One Big Beautiful Bill Act. (ITEP) Karp once framed the mission bluntly: “scare enemies and, on occasion, kill them.” (The Guardian) The manifesto adds layers. Silicon Valley owes a “moral debt.” Remilitarize Germany, Japan.

This lands amid the seventh week of U.S. action in Iran. Tensions simmer. Automatic registration isn’t a draft. But it builds the list. Critics see a data grab. Edward Hasbrouck, draft researcher, warns it props up war planning. Selective Service seeks broader data sharing with law enforcement, even abroad. (Hasbrouck.org)

On X, reactions mix alarm and shrugs. One user ties Palantir directly: “They will use existing gov databases (think Palantir) to find and register them.” (X post by @allenanalysis) Another calls it fearmongering: “This has always been a thing… now it is automatic. That is the only change.” (X post by @CarmineSabia) Palantir’s post drew shares, but no company reply to Fortune.

Keep reading

Palantir’s Technological Republic is a blueprint for digital tyranny

Walking through the glass-and-steel corridors of the modern tech-security apparatus reveals that the telescreen is a tireless processor of our very souls.

Palantir Technologies’ vision of a “Technological Republic” arrives as a manual for the refinement of the boot, the one destined to remain on the human face, provided the boot remains equipped with the latest predictive sensors. In the spirit of a clear-eyed look at the clock striking thirteen, we must dissect the alliance between corporate algorithmic power and the Zionist state. This is a new Newspeak, where “defense” is a moral debt and “deterrence” is the silent humming of an algorithm deciding who shall disappear.

The foundation of this digital fortress is built upon the claim of a “moral debt” that the engineering elite owes to the State. In George Orwell’s world of 1984, this represents the ultimate synthesis: the Party and the Corporation becoming indistinguishable. This “affirmative obligation” to participate in national defense is literalized in Palantir’s “strategic partnership” with the Israeli Ministry of Defense. Finalized in early 2024 during a high-stakes visit by co-founders Peter Thiel and Alex Karp to Tel Aviv, this pact seeks to harness advanced data mining for “war-related missions.” The software engineers of Palo Alto have been drafted as the new Inner Party: high priests of a digital armory. Their corporate identity is so entwined with the Zionist project that Palantir held its first board meeting of 2024 in Israel, signaling that their “Technological Republic” transcends borders when it comes to the enforcement of state power.

We are told that the age of “soaring rhetoric” and atomic deterrence is fading, replaced by a “hard power” built entirely on software. Here is the transition from the clumsy violence of the truncheon to the invisible violence of the code. Reports from Gaza suggest that Palantir provides the underlying scaffolding for a system where human intuition is replaced by mathematical certainty. By synthesizing massive datasets – surveillance footage, intercepted communications, and biometric records – the software assists in the production of targeting databases that function as automated “kill lists.”

Keep reading

Florida AG Launches Landmark Criminal Investigation into ChatGPT and OpenAI for ‘Offering Significant Advice’ to Alleged FSU School Shooter, ‘If It Was a Person, We’d Charge Them with Murder’

Florida Attorney General James Uthmeier announced Tuesday that the state has opened a full criminal investigation into OpenAI and its popular chatbot ChatGPT over allegations it provided “significant advice” to the man accused of carrying out the deadly 2025 Florida State University shooting.

The investigation centers on Phoenix Ikner, the 21-year-old charged with two counts of first-degree murder for the April 2025 attack on FSU’s campus in Tallahassee.

Ikner allegedly opened fire near the student union, killing two people, Robert Morales and Tiru Chabba, and injuring six others.

According to investigators and court documents, Ikner engaged in more than 200 messages with ChatGPT in the hours and days leading up to the massacre.

The conversations reportedly included detailed questions about school shootings, the busiest times on campus, operational details on firearms and ammunition, and strategies for maximizing media attention.

Uthmeier did not mince words when announcing the criminal investigation:

“My prosecutors have looked at this, and they’ve told me if it was a person on the other end of the screen, we would be charging them with murder.”

“This criminal investigation will determine whether OpenAI bears criminal responsibility for ChatGPT’s actions in the shooting at Florida State University last year,” he added.

Uthmeier explained that Florida law treats anyone who aids, abets, or counsels the commission of a crime as a principal to that crime, equally responsible as the actual perpetrator.

Attorneys for the family of victim Robert Morales first raised the alarm in early April, revealing they had evidence of “constant communication” between Ikner and ChatGPT right up to the shooting.

In a statement, the lawyers said they have “reason to believe that ChatGPT may have advised the shooter how to commit these heinous crimes.”

The family plans to file a civil lawsuit against ChatGPT and OpenAI’s ownership structure “very soon” to hold them accountable for Morales’ death, according to a report from WCTV.

Uthmeier has previously highlighted ChatGPT’s links to child sexual abuse material, encouragement of self-harm, and other criminal uses.

“We support innovation, but that doesn’t give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies or threaten our national security,” Uthmeier stated.

OpenAI has stated it will cooperate with the investigation.

The company has not yet issued a detailed public response specifically addressing the FSU chats, but it has previously maintained that its safety guardrails are designed to prevent harmful outputs.

Keep reading