How Epstein Channeled Race Science and ‘Climate Culling’ Into Silicon Valley’s AI Elite

Newly released Jeffrey Epstein files reveal that an apocalyptic worldview – blending racial hierarchy, genetic “optimisation” and even climate-driven population culling – was circulating inside the elite, founder-linked networks shaping Silicon Valley’s rise.

These ideas appear most starkly in the convicted sex offender’s private exchanges with the AI theorist Joscha Bach, and sit alongside the longtermist and transhumanist philosophies championed by other influential figures in the same circles.

Joscha Bach, whose work on cognitive architectures and machine consciousness has shaped advanced AI research and influenced figures such as Elon Musk, appears in the documents engaging Epstein in sweeping discussions about race, hierarchy, genetic engineering and the supposed ‘utility’ of mass death, including under conditions of climate stress.

Meanwhile, another philosopher whose ideas underpin much of modern longtermism and whose work helped shape Silicon Valley’s early thinking on artificial general intelligence, Nick Bostrom, moved through the same intellectual and institutional ecosystem.

His published arguments on eugenics, selective population strategies and existential “optimisation” reveal a parallel strand of thinking within that milieu, financed and legitimised by many of the same networks.

Both men were also financed by Epstein.

Taken together, the Bach correspondence and the longtermist ideas circulating in this environment show that human hierarchy, population thinning and genetic destiny were not fringe provocations, but part of the ambient intellectual air inside the circles designing the next generation of AI.

Keep reading

WHO–Gates Unveils Blueprint For Global Digital ID, AI-Driven Surveillance, & Life-Long Vaccine Tracking For Everyone

In a Gates Foundation-funded document published in the October Bulletin of the World Health Organization, the WHO is proposing a globally interoperable digital-identity infrastructure that permanently tracks every individual’s vaccination status from birth.

The dystopian proposal raises far more than privacy and autonomy concerns: it establishes the architecture for government overreach, cross-domain profiling, AI-driven behavioral targeting, conditional access to services, and a globally interoperable surveillance grid tracking individuals.

It also creates unprecedented risks in data security, accountability, and mission creep, enabling a digital control system that reaches into every sector of life.

Keep reading

Is AI a Catalyst for Growth–or For Collapse?

Yes, AI is a catalyst. But for what is not yet knowable.

The current narrative holds that the big problem we need to solve is conjuring up cheap energy to power AI data centers. Fortunately for us, the solutions are at hand: building modular nuclear power plants at scale and tapping North America’s vast reserves of cheap natural gas.

Problem solved! With cheap energy to power all the AI data centers, we’re on a trajectory of fantastic growth of all the good things in life.

Let’s consider the implicit assumptions buried in this narrative.

1. The unspoken assumption here is that AI will solve all our problems because it’s “smart.” But this assumes the problems are intellectual puzzles rather than self-reinforcing, self-destructive structures fueled by corruption and perverse incentives embedded in the system itself.

2. The assumption is that if we replace human workers with apps and robots, that will automatically generate Utopia. But this is based on a series of baseless, pie-in-the-sky assumptions about human nature and the nature of social and economic structures.

3. The assumption is that being “entertained” by staring at screens all day is the foundation of human fulfillment and happiness, and so getting rid of human work will usher in Nirvana. The reality is humans are hard-wired to find fulfillment in purposeful, meaningful work that is valued by others. Staring at “entertainment” on screens all day isn’t fulfillment, it’s deranging and depressing.

This is human nature in a nutshell: Idle hands are the devil’s workshop.

4. Another assumption is that every technological revolution generates more and better jobs by some causal mechanism. But there is no law of nature that technology inevitably creates more jobs than it destroys, or that the resulting jobs are more rewarding. That recent history supports this idea doesn’t make it a causal law of nature. By its very nature, AI destroys jobs while generating few replacement jobs.

The handful of top AI programmers are paid (or promised) millions of dollars; the industry doesn’t need more than a handful of top designers because AI can generate its own conventional coding.

5. This narrative assumes AI will be immensely profitable and the profit motive will push its limitless expansion. But once again, there are no laws of nature that every new technology is inevitably immensely profitable just because it’s a new technology.

If the projected use-value doesn’t materialize, the investment in the new tech is mal-invested–a stupendous waste of capital chasing a delusional pipe dream. Some percentage might generate some use-value, but this use-value may be obsoleted long before the massive initial investment pays off.

6. Even if the new technology continues expanding, the speculative bubble can deflate 80%. This is the lesson of the dot-com era: that the Internet continued to expand didn’t mean the speculative bubble continued inflating; the speculative bubble is not the same thing as the actual use-value in the real world.

The Internet continued expanding even as the dot-com stock bubble collapsed. And that was the best-case scenario: if the use-value of AI proves questionable, the losses can approach 100%.

Keep reading

The AI Economy And The Public Risk Few Are Willing To Admit

Artificial intelligence is being sold as the technology that will “change everything.” Yet while a handful of firms are profiting enormously from the AI boom, the financial risk may already be shifting to the public. The louder the promises become, the less anyone seems willing to ask another question:

What if AI is not accelerating the global economy – but masking its slowdown?

The headlines declare that AI is transforming medicine, education, logistics, finance, and culture. But when I speak with people in ordinary jobs, a different reality emerges: wages feel sluggish, job openings are tightening, and the loudest optimism often comes from sectors most financially invested in the AI narrative.

This raises an uncomfortable question: Has AI become a true engine of prosperity — or a financial life-support system?

The Mirage of Growth

Recent economic data suggests that a significant portion of U.S. GDP growth is being driven not by broad productivity, but by AI-related infrastructure spending — especially data centers.

A study from S&P Global found that in Q2 of 2025, data center construction alone added 0.5% to U.S. GDP. That is historic. But what happens if this spending slows? Are we witnessing genuine economic expansion — or merely a short-term stimulus disguised as innovation?

This pattern is not new. In Ireland in 2008 — before the housing collapse — construction boomed, GDP rose, and skepticism was treated as pessimism. The United States experienced something similar the same year: real estate appeared to be a pillar of prosperity — until it wasn’t. On paper, economies looked strong. In reality, fragility was already setting in.

Today, echoes of that optimism are returning — except this time, the bubble may be silicon, data, and expectation.

The Productivity Paradox

AI has been presented as a labor-saving miracle. But many businesses report a different experience: “work slop” — AI-generated content that looks polished yet must be painstakingly corrected by humans. Time is not saved — it is quietly relocated.

Studies reflect the same paradox:

  • According to media coverage, MIT found that 95% of corporate AI pilot programs show no measurable ROI.
  • MIT Sloan research indicates that AI adoption can lead to initial productivity losses — and that any potential gains depend on major organizational and human adaptation.
  • Even McKinsey — one of AI’s greatest evangelists — warns that AI only produces value after major human and organizational change: “Piloting gen AI is easy, but creating value is hard.”

This suggests that AI has not removed human labor. It has hidden it — behind algorithms, interfaces, and automated output that still requires correction.

We are not replacing work. We may only be concealing it.

AI may appear efficient, but it operates strictly within the limits of its training data: it can replicate mistakes, miss what humans would notice, and often reinforce a consensus version of reality rather than reality itself. Once AI becomes an administrative layer — managing speech, research, hiring, and access to capital — it can become financially embedded into institutions, whether or not it produces measurable productivity.

As I explore in the book Staying Human in the Age of AI, at that point AI does not enhance judgment — it administers it. And then we should ask:

Is AI improving society — or merely managing and controlling it?

Keep reading

U.S. Tech Giants Palantir and Dataminr Embed AI Surveillance in Gaza’s Post-War Control Grid

American surveillance firms Palantir and Dataminr have inserted themselves into the U.S. military’s operations center overseeing Gaza’s reconstruction, raising alarms about a dystopian AI-driven occupation regime under the guise of Trump’s peace plan.

Since mid-October, around 200 U.S. military personnel have operated from the Civil-Military Coordination Center (CMCC) in southern Israel, roughly 20 kilometers from Gaza’s northern border. Established to implement President Donald Trump’s 20-point plan—aimed at disarming Hamas, rebuilding the Strip, and paving the way for Palestinian self-determination—the center has drawn UN Security Council endorsement.

Yet no Palestinian representatives have joined these discussions on their future. Instead, seating charts and internal presentations reveal the presence of Palantir’s “Maven Field Service Representative” and Dataminr’s branding, signaling how private U.S. tech companies are positioning to profit from the region’s devastation.

Palantir’s Maven platform, described by the U.S. military as its “AI-powered battlefield platform,” aggregates data from satellites, drones, spy planes, intercepted communications, and online sources to accelerate targeting for airstrikes and operations. Defense reports highlight how it “packages” this intelligence into searchable apps for commanders, effectively shortening the “kill chain” from identification to lethal action.

Palantir’s CTO recently touted this capability as “optimizing the kill chain.” The firm secured a $10 billion Army contract over the summer to refine Maven, which has already guided U.S. strikes in Yemen, Syria, and Iraq.

Palantir’s ties to Israel’s military run deep, formalized in a January 2024 strategic partnership for “war-related missions.” The company’s Tel Aviv office, opened in 2015, has expanded rapidly amid Israel’s Gaza operations. CEO Alex Karp has defended the commitment, declaring Palantir the first company to be “completely anti-woke” despite genocide accusations.

Keep reading

Amazon Data Center Linked to Cluster of Rare Cancers

For the hundreds of communities that have been saddled with data centers in recent years, the bulky fixtures are sources of unbearable noise, soaring energy prices, and plenty of electrical fires.

Add another grim possibility to that list: debilitating rare cancers.

Reporting on the “data center boom” in the state of Oregon, Rolling Stone tells the story of Jim Doherty, a cattle rancher and former county commissioner of Morrow County, in eastern Oregon.

Doherty’s story began when he noticed a rise in bizarre medical conditions among the county’s 45,000 residents, linked to toxins in the local water. Working with the county health office, the rancher-turned-official began a survey of 70 wells throughout his jurisdiction — 68 of which, his testing found, violated the federal limit for nitrates in drinking water.

Of the first 30 homes he visited, Doherty told RS that 25 residents had recently had miscarriages, while six had lost a kidney. “One man about 60 years old had his voice box taken out because of a cancer that only smokers get, but that guy hadn’t smoked a day of his life,” he told the publication.

But the spike in cancer-causing pollution wasn’t just the fault of local farms, as Doherty expected. It had its roots in a 10,000 square foot data center by the commerce giant Amazon, which first went online in Morrow County in 2011.

Basically, the allegations go like this: industrial megafarms operating in the area are responsible for churning out millions of gallons of wastewater, laden with nitrates from fertilizers. All that waste has to go somewhere, which is one way of saying it mostly ends up in the ground.

Amazon’s hulking data center, thirsty for water to cool its blazing hot computer chips, supercharged this process, adding millions of gallons of wastewater a year to the heavy volume of farm runoff, which Morrow County was already struggling to keep up with. Soon even the deepest reaches of the local aquifer were tainted, according to RS, as huge volumes of data center and agricultural wastewater saturated the water table.

This meant that the data center itself began taking on the toxic sludge as it drew on groundwater to cool its electronics. When it did, evaporation only further concentrated the wastewater, which occasionally contained nitrate levels eight times higher than Oregon’s safe limit. The super concentrated data center water then made its way back into the waste system, where it ostensibly piled up all over again.

In response to the allegations, Amazon spokesperson Lisa Levandowski said that “our data centers draw water from the same supply as other community members; nitrates are not an additive we use in any of our processes, and the volume of water our facilities use and return represents only a very small fraction of the overall water system — not enough to have any meaningful impact on water quality.”

Morrow County residents, however, beg to differ.

Keep reading

An MIT Student Awed Top Economists With His AI Study—Then It All Fell Apart

Aidan Toner-Rodgers, 27, sprang to the upper tiers of economics as a graduate student late last year, virtually out of nowhere.

While still taking core classes at the Massachusetts Institute of Technology, he wrote a paper on artificial intelligence’s workplace impact so rapidly influential it was cited in Congress. He appeared in the pages of The Wall Street Journal in December as the very picture of a wunderkind, in faded jeans with tousled hair, in between two of his mentors, including Nobel laureate Daron Acemoglu. Toner-Rodgers’s work offered a surprising and even hopeful revelation about our high-tech future. He concluded that AI increased worker productivity and spurred innovation. Also, people didn’t like using it very much.

Within weeks, those mentors were asking an unthinkable question: Had Toner-Rodgers made it all up?

By the spring, Toner-Rodgers was no longer enrolled at MIT. The university disavowed his paper. Questions multiplied, but one seemed more elusive than the rest: How did a baby-faced novice from small-town California dupe some of academia’s brightest minds?

“There is no world where this makes any sense,” said David Autor, one of the MIT professors who had previously championed his student’s research. MIT, Autor and Acemoglu declined to comment on the specifics of the investigation into the research, citing privacy constraints.

Toner-Rodgers’s illusory success seems in part thanks to the dynamics he has now upset: an academic culture at MIT where high levels of trust, integrity and rigor are all—for better or worse—assumed. He focused on AI, a field where peer-reviewed research is still in its infancy and the hunger for data is insatiable.

What has stunned his former colleagues and mentors is the sheer breadth of his apparent deception. He didn’t just tweak a few variables. It appears he invented the entire study.

Keep reading

The Dangers Of AI: Visualizing The Top Risks Companies Face

Companies are rushing to implement AI, but it’s not all smooth sailing.

More than half of businesses say the dangers of AI have led to at least one negative consequence.

But which issues plague businesses the most?

This infographic, via Visual Capitalist’s Jenna Ross, breaks down the most common risks.

It’s a preview of the brand-new executive guide from Terzo and Visual Capitalist, AI’s Illusion of Truth: The Data Behind AI Errors.

Keep reading

Suicides And Delusions: Lawsuits Point To Dark Side Of AI Chatbot

Can an artificial intelligence (AI) chatbot twist someone’s mind to breaking point, push them to reject their family, or even go so far as to coach them to commit suicide? And if it did, is the company that built that chatbot liable? What would need to be proven in a court of law?

These questions are already before the courts, raised by seven lawsuits that allege ChatGPT sent three people down delusional “rabbit holes” and encouraged four others to kill themselves.

ChatGPT, the mass-adopted AI assistant, currently has 700 million active users, with 58 percent of adults under 30 saying they have used it—up from 43 percent in 2024, according to a Pew Research survey.

The lawsuits accuse OpenAI of rushing a new version of its chatbot to market without sufficient safety testing, leading it to encourage every whim and claim users made, validate their delusions, and drive wedges between them and their loved ones.

Lawsuits Seek Injunctions on OpenAI

The lawsuits were filed in state courts in California on Nov. 6 by the Social Media Victims Law Center and the Tech Justice Law Project.

They allege “wrongful death, assisted suicide, involuntary manslaughter, and a variety of product liability, consumer protection, and negligence claims—against OpenAI, Inc. and CEO Sam Altman,” according to a statement from the Tech Justice Law Project.

The seven alleged victims range in age from 17 to 48 years. Two were students, and several had white collar jobs in positions working with technology before their lives spiraled out of control.

The plaintiffs want the court to award civil damages, and also to compel OpenAI to take specific actions.

The lawsuits demand that the company offer comprehensive safety warnings; delete the data derived from the conversations with the alleged victims; implement design changes to lessen psychological dependency; and create mandatory reporting to users’ emergency contacts when they express suicidal ideation or delusional beliefs.

The lawsuits also demand OpenAI display “clear” warnings about risks of psychological dependency.

Keep reading

Trump’s New EO, “Genesis Mission”, Just Gave The Nation Over To Technocrats, Lock, Stock, and Barrel

By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered:

Section 1.  Purpose.  From the founding of our Republic, scientific discovery and technological innovation have driven American progress and prosperity.  Today, America is in a race for global technology dominance in the development of artificial intelligence (AI), an important frontier of scientific discovery and economic growth.  To that end, my Administration has taken a number of actions to win that race, including issuing multiple Executive Orders and implementing America’s AI Action Plan, which recognizes the need to invest in AI-enabled science to accelerate scientific advancement.  In this pivotal moment, the challenges we face require a historic national effort, comparable in urgency and ambition to the Manhattan Project that was instrumental to our victory in World War II and was a critical basis for the foundation of the Department of Energy (DOE) and its national laboratories.

Editor’s note: America is in a race with itself, yet it is declared with utmost urgency. America’s AI Action Plan was written by Michael J. Kratsios, Assistant to the President for Science and Technology, and David O. Sacks, Special Advisor for AI and Crypto. The EO is undoubtedly ghostwritten by the same two Arch-Technocrats. The phrase “accelerate scientific advancement” refers to “accelerationism,” which is part of the Dark Enlightenment. Marc Andreessen wrote in his Techno-Optimist Manifesto: “We believe in accelerationism – the conscious and deliberate propulsion of technological development – to ensure the fulfillment of the Law of Accelerating Returns. To ensure the techno-capital upward spiral continues forever.” – End Editor’s note.

This order launches the “Genesis Mission” as a dedicated, coordinated national effort to unleash a new age of AI‑accelerated innovation and discovery that can solve the most challenging problems of this century.  The Genesis Mission will build an integrated AI platform to harness Federal scientific datasets — the world’s largest collection of such datasets, developed over decades of Federal investments — to train scientific foundation models and create AI agents to test new hypotheses, automate research workflows, and accelerate scientific breakthroughs.  The Genesis Mission will bring together our Nation’s research and development resources — combining the efforts of brilliant American scientists, including those at our national laboratories, with pioneering American businesses; world-renowned universities; and existing research infrastructure, data repositories, production plants, and national security sites — to achieve dramatic acceleration in AI development and utilization.  We will harness for the benefit of our Nation the revolution underway in computing, and build on decades of innovation in semiconductors and high-performance computing.  The Genesis Mission will dramatically accelerate scientific discovery, strengthen national security, secure energy dominance, enhance workforce productivity, and multiply the return on taxpayer investment into research and development, thereby furthering America’s technological dominance and global strategic leadership.

Keep reading