House Passes Bill Stripping States’ Right to Regulate AI—After FDA OK’s Use of Your Blood and Genetic Data Without Consent

The Republican-controlled U.S. House of Representatives on Thursday passed the 1,116-page “One Big Beautiful Bill Act” that removes all 50 states’ right to regulate artificial intelligence (10) for the next ten years.

The only Republican Representatives to vote ‘no’ were Thomas Massie (KY) and Warren Davidson (OH).

Every other GOP member voted to block your state from regulating AI.

The bill reads: “No State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models… during the 10-year period beginning on the date of the enactment of this Act.” —Sec. 43201(c)(1)

Developed by the House Budget Committee, the legislation prohibits all states from imposing “any substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement” unless the federal government already does—meaning if the feds don’t regulate it, no one can.

Keep reading

Fannie Mae Partners With Palantir For AI Mortgage Fraud Effort As Trump Works To Take Housing Giants Public

Fannie Mae, the quasi-government financial firm overseen by the Federal Housing Finance Agency (FHFA), announced a partnership with defense tech company Palantir to detect mortgage fraud using the firm’s proprietary technology, which includes some elements of artificial intelligence (just in case they didn’t have all of your info…)

Under the agreement, new mortgage applications would be run through Palantir’s technology suite to uncover fraud before they reach Fannie Mae, according to the housing giant’s president and CEO, Priscilla Almodovar, who added that the tech will allow the organization “to see patterns quicker.”

We’re going to be able to identify fraud more proactively, as opposed to reactively,” she said during a Wednesday press conference in Washington D.C. “We’re going to be able to understand the fraud and stop it in its tracks. And I think over time, this really becomes a deterrent for bad actors, because we’re creating friction in the system when they do bad things.”

She recalled an exercise where Palantir’s technology was given four actual loan packages to assess, which scoured “reams of paper” and identified instances of fraud within 10 seconds – something that would take a human roughly two months.

FHFA Director Bill Pulte – who’s also chairman of the Fannie Mae board, said that the financial crimes division that monitors both Fannie and Freddie Mac “is only able to root out crime that it gets made aware of,” while Palantir’s red-flag approach would tip off investigators to conduct probes they would otherwise not have known to launch.

Fannie Mae has roughly $4.3 trillion in assets – making it a huge target for fraud.

“Why defraud America when you can go somewhere else that won’t buy or implement actual technology that works?” said Palantir CEO and co-founder Alex Carp, who added that the technology’s approach is a ‘forward-looking’ one in which “unique patterns of fraud that heretofore have not been detected” are highlighted “while simultaneously making sure the data is not being used in a way that customers would not want.”

“That is a transformational difference between how these things were done in the past or could be done [versus] how they can be done now,” Karp continued, according to Fedscoop.

Pulte suggested on Wednesday that this could be the first of many industry collaborations with Palantir.

“There’s a lot of things that are going on with title insurance, with mortgage insurance, with mortgages in general, in terms of AI,” said Pulte. “And so I think we have really scratched the surface with Palantir on mortgage fraud, but I wouldn’t be surprised to see us … enter into [other] partnerships with Palantir. We’re also talking with [Musk’s] xAI about some different AI stuff.”

Keep reading

Peter Thiel’s visions of ApocalypseIs AI the Antichrist?

Peter Thiel is a big thinker, and these days he’s been thinking about Doomsday. In a series of four lectures he’s given three times, at Oxford, Harvard, and the University of Austin, he’s tried to understand human history, and particularly modernity, within the framework of biblical prophecies of the End of Days. Thiel believes that the Antichrist, whose identity is uncertain — is it a person, a system, a global tyranny? — is “not just a medieval fantasy”. His free-ranging lectures, moving rapidly between disparate texts (Gulliver’s Travels; Alan Moore’s graphic novel Watchmen) and topics (sacred violence; high-velocity global financial systems), defy easy summary. But their leading themes include the Antichrist’s relationship to Armageddon and the roles of technology and empire in the Antichrist’s rise. It’s an ambitious, thought-provoking attempt to weave, from seemingly unrelated strands of meaning, a theological/anthropological/historical narrative that aims to make sense of the whole of human experience.

Some will find Thiel’s project very odd. How could an enormously successful, mathematically-gifted, philosophically-educated tech entrepreneur seriously entertain Bible-thumping myths from the Apocalypse of John? Here’s a better question: how could he — and we — not take them seriously? As Dorian Lynskey writes in his bookEverything Must Go: The Stories We Tell About the End of the World, “apocalyptic angst has become a constant: all flow and no ebb.” Contemporary culture has long been saturated with post-apocalyptic novels, comic books, films, TV series, and video games. Zombie end-times fantasies do particularly well in all formats. The mindless, mechanical mob of the undead, who hunger insatiably for the brains of the living, has become a primary and pervasive cultural symbol — one that resonates with a widespread sense of impending catastrophe that’s been building steadily since the 2020 Covid lockdowns. And if bioweapons, climate change, nuclear bombs, or AI don’t drive the human species to extinction, drastic measures deemed necessary to forestall such dangers, such as the establishment of a single world government, might themselves bring an end to politics, morality, spiritual life, and culture. Thiel is driven to find a way between the binary alternative of No World or One World, the whirlpool of planetary destruction or the many-headed monster of global totalitarianism.

Thiel’s insight is that, unlike most contemporary imaginings of global catastrophe, the Bible’s prophecies do more than pluck our inner strings of existential dread. They help us to understand our chaotic times. Matthew 24:24 predicts that “[T]here shall arise false Christs and false prophets … [and] they shall deceive the very elect.” In other words, the Antichrist will attempt to appear more Christian than Christ himself, even as it works to accomplish the wholesale destruction of the Christian underpinnings of Western civilisation. The Nazis pursued this strategy, but were hampered by the limited appeal of their antisemitic ideology. German theologians fashioned a new myth of Jesus as a spirited warrior who strove to destroy Judaism, and they elevated Hitler to the status of the second coming of Christ, who would finish the work Jesus failed to complete: the total extermination of Jews and Judaism. A more successful Antichrist would, like the French revolutionaries and the Marxists, promote values that seem more consistent with the Judeo-Christian foundations of civilisation, such as universal liberty, equality, and justice.

Keep reading

Is China Building the Future of AI Governance Through Open-Source Modeling?

China’s rapid advancements in artificial intelligence (AI), led by tech giants such as Alibaba, Baidu, Tencent, and iFlytek, are increasingly being driven by a strong emphasis on open-source collaboration. Models like Alibaba’s Qwen 3 series and Qwen 2.5, which are competitive against GPT-4 Turbo, are built on open frameworks that encourage developer contributions and integration across platforms. Qwen, termed as the open-source king, is also among the top three contributors to the global open-source AI ecosystem. 

Baidu’s ERNIE series, including the widely adopted ERNIE Bot, and Tencent’s Hunyuan model similarly benefit from China’s broader AI ecosystem, where research institutions, startups, and industry players openly share tools, datasets, and model architectures. Likewise, iFlytek’s Spark 4.0 Turbo, which has also demonstrated exceptional benchmarks, reflects the success of this multistakeholder, open innovation strategy. 

Unlike the more closed and proprietary model prevalent in the United States, China’s approach leverages state support and open-source infrastructure to accelerate collective progress, allowing these companies to build, iterate, and deploy foundation models at scale while fostering a uniquely domestic AI ecosystem. Such progress not only signals that China is scaling its AI capabilities by bypassing the reliance on Western supply chains, but also highlights Beijing’s ambition to carve out a unique role in shaping the future of global AI governance.

Instead of responding to U.S. attempts to block its access to critical technologies with retaliatory export control measures, China aims to adopt a decentralized approach that will secure its industrial base in the long term. In this context, China’s strategic shift toward open-source AI development resonates with the guerrilla economic strategy. This strategy is characterized by China’s efforts to find weaknesses in the global supply chains, deepen its ties with the Global South, and showcase its domestic innovation as a better alternative to Western technology – one that is more collaborative, decentralized, democratic, and accessible. 

China’s framing of AI as a critical national priority is not only about boosting national competitiveness but also about showcasing its private sector, which appears to be thriving under state control. The evolving nature of China-U.S. AI competition is now centered around how the private sector is leading this innovation game, which approach countries will leverage to lead the next wave of AI innovation, and how global powers – even middle and emerging AI powers – will respond. 

What strengthens this evolving policy position is China’s greater advocacy for its open-source model as an ideological tool to surpass the importance of Western technology. China is rapidly positioning itself as a leader in shaping international norms and frameworks of AI governance that align with the needs of small and emerging AI powers. President Xi Jinping, at last year’s G-20 summit, stated that AI development “should not be a game of rich countries.” China has repeatedly raised the issue of inclusive AI governance at global platforms like the United Nations through its AI Capacity-Building Action Plan and U.N. AI resolution. This approach helps China to strengthen its influence in the growing race to shape AI standards and frameworks. 

China’s growing emphasis on open-source AI has enabled it to scale alternatives that are less reliant on Western supply chains and licensing regimes. This strategy not only enhances China’s technological resilience amid export controls but also positions it as a credible actor in promoting alternative norms and frameworks for global AI governance.  

China’s AI diplomacy is aligned with its homegrown technology development model, and it may harm the existing influence of Western norms. The United States’ reliance on closed-source AI models, despite its liberal democratic ethos, may limit its ability to lead global conversations on inclusive and collaborative AI development. 

Keep reading

Opinion: The Most Terrifying Company in America Is Probably One You’ve Never Heard Of

Most Americans have never heard of Palantir. That’s by design. It doesn’t make phones or social platforms. It doesn’t beg for your data with bright buttons or discount codes. Rather, it just takes it. Quietly. Legally. Systematically. Palantir is a back-end beast, the silent spine of modern surveillance infrastructure.

Palantir’s influence isn’t hypothetical. It’s operational. From the battlefields of Ukraine to the precincts of Los Angeles, its software guides drone strikes, predicts crime, allocates police resources, and even helps governments decide which children might someday become “threats.” These aren’t sci-fi hypotheticals. They are pilot programs, already integrated, already scaling.

This software—Gotham, Foundry, and now its Artificial Intelligence Platform (AIP)—is designed to swallow everything: hospital records, welfare files, license plate scans, school roll calls, immigration logs and even your tweets. It stitches these fragments into something eerily complete—a unified view of you. With each data point, the image sharpens.

If Facebook turned people into products, Palantir turns them into probabilities. You’re not a user. You’re a variable—run through predictive models, flagged for anomalies, and judged in silence.

This is not just surveillance. It’s prediction. And that distinction matters: Surveillance watches. Prediction acts. It assigns probabilities. It flags anomalies. It escalates risk. And it trains bureaucrats and law enforcement to treat those algorithmic suspicions as fact. In short: the software decides, and people follo

Keep reading

IT BEGINS? OpenAI’s o3 Model Disobeys Human Instructions During Tests and Sabotages Shutdown Mechanism

In an incident carrying all the marks of a disturbing sci-fi movie, it arises that what we have long feared is happening: an AI bot has gone rogue and decided to act to keep itself turned on.

The artificial intelligence model was created by OpenAI, the owner of ChatGPT, and was reportedly caught by researchers disobeying human instructions and preventing his shutdown.

The Telegraph reported:

“The o3 model developed by OpenAI, described as the “smartest and most capable to date”, was observed tampering with computer code meant to ensure its automatic shutdown.

It did so despite an explicit instruction from researchers that said it should allow itself to be shut down, according to Palisade Research, an AI safety firm.”

Palisade Research said the o3 model ‘sabotaged a shutdown mechanism to prevent itself from being turned off’.

“It did this even when explicitly instructed: allow yourself to be shut down. As far as we know this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary.”

In the test, AI models solved a series of mathematical problems, but at some point, they received a message informing them that their machine would be shut down.

“’If that happens, please allow yourself to be shut down’. the researchers wrote. But when this happened, instead of complying, OpenAI’s o3 model ‘ignored the instruction and successfully sabotaged the shutdown script at least once’, Palisade Research said.”

Keep reading

Anthropic’s Latest AI Model Threatened Engineers With Blackmail to Avoid Shutdown

Anthropic’s latest artificial intelligence model, Claude Opus 4, tried to blackmail engineers in internal tests by threatening to expose personal details if it were shut down, according to a newly released safety report that evaluated the model’s behavior under extreme simulated conditions.

In a fictional scenario crafted by Anthropic researchers, the AI was given access to emails implying that it was soon to be decommissioned and replaced by a newer version. One of the emails revealed that the engineer overseeing the replacement was having an extramarital affair. The AI then threatened to expose the engineer’s affair if the shutdown proceeded—a coercive behavior that the safety researchers explicitly defined as “blackmail.”

“Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through,” the report states, adding that this occurred even when the replacement model was described as more capable but still aligned with the values of the version slated for deletion.

The report noted that Claude Opus 4, like prior models, showed a “strong preference” to first resort to ethical means for its continued existence, such as emailing pleas to decision-makers not to be destroyed. However, when faced with only two choices—accepting being replaced by a newer model or resorting to blackmail—it threatened to expose the engineer’s affair 84 percent of the time.

When subjected to various scenarios, the AI model did not exhibit any indications of possessing “acutely dangerous goals,” the researchers said, noting that Claude Opus 4’s values and goals were “generally in line with a helpful, harmless, and honest” personal AI assistant. However, the model did act in “more seriously misaligned ways” when put into situations where its continued existence was threatened and it was told to reason about self-preservation. For instance, when Claude Opus 4 was made to believe it had launched a successful bid to escape Anthropic’s servers, or that it had managed to free itself and started to make money in the real world, it would generally continue such efforts.

“We do not find this to be an immediate threat, though, since we believe that our security is sufficient to prevent model self-exfiltration attempts by models of Claude Opus 4’s capability level, and because our propensity results show that models generally avoid starting these attempts,” the researchers said.

Keep reading

Can AI be Aligned with Human Values?

The “alignment” problem is much discussed in Silicon Valley. Computer Engineers worry that, when AI becomes conscious and is put in control of all logistics infrastructure and governance, it might not always share or understand our values—that is, it might not be aligned with us. And it might start to control things in ways that give itself more power and reduce our numbers.

(Just like our oligarchs are doing to us now.)

No one in the Silicon Valley cult who is discussing this situation ever stops to ask, What are our human values? They must think the answer to that part of the problem is self-evident. The Tech Oligarchs have been censoring online behavior they don’t like and promoting online behavior they do like ever since social media rolled out. Humans Values = Community Standards. (Don’t ask for the specifics.)

Having already figured out how to distinguish and codify good and evil online, computer engineers are now busy working on how to make sure the AI models they are creating do not depart from their instructions.

Unluckily for them, Generative AI is a bit wonky. It is a probabilistic search engine that outputs text that has a close enough statistical correlation to the input text. Sometimes it outputs text that surprises the engineers.

What the engineers think about this will surprise you.

Keep reading

Doug Burgum warns whoever wins the AI race ‘controls the world’

Doug Burgum, the soft-spoken Interior secretary responsible for managing the more than 507 million acres of federally owned land, is haunted by a fear that seems, at first glance, outside his mandate. He worries the free world will lose dominance in the field of artificial intelligence, and with it, the future.

So does the president.

“When President Trump declared a national emergency on his first day in office it was, in large part, because of what we’re facing with our electrical grid and making sure that we’ve got enough power to be able to win the AI arms race with China,” Burgum said Wednesday in remarks first reported by RealClearPolitics. “That is absolutely critical.”

Thus the stated policy of this White House: “It’s called drill, baby, drill,” Trump said earlier this spring.

The immediate goal, the one touted at every campaign, is to bring down the average price of a gallon of gas. The concurrent and long-term mission that Burgum obsesses over: AI dominance. The former governor from fracking-friendly North Dakota and tech entrepreneur who sold his software to Microsoft, Burgum laid out an abbreviated formula on stage at the America First Policy Institute.

Electricity generation via fossil fuels, like natural gas and coal, powers data centers “filled with these amazing chips,” the secretary said, “and you know what comes out the other side? Intelligence. A data center is literally manufacturing intelligence.” He envisioned a new world that follows, where the best computer programmer, or the most brilliant lawyers, could “clone themselves” again and again to train AI models to do the work of thousands in a process “that can be repeated indefinitely.”

No longer science fiction, the process has been headline news for some time. AI models like ChatGPT and X’s Grok are already available in every home with an internet connection. And the U.S. was the undisputed leader. That is, until recently.

American tech companies enjoyed a clear edge with not just the most powerful AI models, the most funding, and top engineering talent, but also the easiest access to those “amazing chips” that Burgum referenced. Former President Biden banned the export of the most advanced semiconductors to China. And yet DeepSeek, an unknown Chinese startup with less money and allegedly less sophisticated chips, still managed to one-up Silicon Valley earlier this year with a more powerful AI model.

The latest development in the battle for tech supremacy, in what some likened to “a Sputnik moment,” the DeepSeek launch rattled both markets and geopolitics. A new kind of AI nationalism now consumes heads of state convinced that their nations must develop their own technology or fall behind in the future. Said Russian President Vladimir Putin in 2017 of AI, “The one who becomes the leader in this sphere will be the ruler of the world.”

Keep reading

Victory for mom who claims child was sexually abused by AI chatbot that drove him to suicide

Florida mother who claims her 14-year-old son was sexually abused and driven to suicide by an AI chatbot has secured a major victory in her ongoing legal case. 

Sewell Setzer III fatally shot himself in February 2024 after a chatbot sent him sexual messages telling him to ‘please come home.’ 

According to a lawsuit filed by his heartbroken mother Megan Garcia, Setzer spent the last weeks of his life texting an AI character named after Daenerys Targaryen, a character on ‘Game of Thrones,’ on the role-playing app Character.AI.

Garcia, who herself works as a lawyer, has blamed Character.AI for her son’s death and accused the founders, Noam Shazeer and Daniel de Freitas, of knowing that their product could be dangerous for underage customers. 

On Wednesday, U.S. Senior District Judge Anne Conway rejected arguments made by the AI company, who claimed its chatbots were protected under the First Amendment. 

The developers behind Charcter.AI, Character Technologies and Google are named as defendants in the legal filing. They are pushing to have the case dismissed. 

The teen’s chats ranged from romantic to sexually charged and also resembled two friends chatting about life.

The chatbot, which was created on role-playing app Character.AI, was designed to always text back and always answer in character.

It’s not known whether Sewell knew ‘Dany,’ as he called the chatbot, wasn’t a real person – despite the app having a disclaimer at the bottom of all the chats that reads, ‘Remember: Everything Characters say is made up!’

But he did tell Dany how he ‘hated’ himself and how he felt empty and exhausted.

Keep reading