The Global Crisis Scam and the Battle to Stay Human

A Brief Synopsis of Our Global Predicament

On the global stage it appears, with a few exceptions, that every nation is defining itself as an enemy of another, and an increasing number are backing this stance with a build-up of arms. War is big business, and fear is the great repressor of human resilience and resistance. So long as these two forces dominate world affairs, the globalist elite that provokes and finances them retains complete control.

This ‘elite’ has built its empire on an unprecedented appropriation and centralisation of global wealth, which now includes the expansion and domination of artificial intelligence and related advanced technologies.

This vast financial wealth is gained by controlling the money (debt) owed to banking and investment institutions that are the creditors of the neo-liberal global economy *

By maintaining a level of crisis as a permanent feature of world economic and social affairs, this elite (also known as shadow government) undermines the normal workings of national and international economic activity, forcing millions of businesses (big and small) into increased hardship and bankruptcy.

The assets of these enterprises are then stolen by the creditors in lieu of payment of their debts.

The founder of the World Economic Forum, Klaus Schwab, stated quite categorically in 2020 that by 2030, “You will own nothing and you will be happy”. This is the demonic reality of the world in which we find ourselves in 2026.

According to the shadow government’s agenda, by 2030, globalist creditors will have appropriated the world’s wealth while driving large sections of the population into impoverishment.

All of the world’s high-level institutions, whether secular, religious, military, social, technological, educational, pharmaceutical or agricultural, along with many more of the same kind, play their part in maintaining this ultra-repressive, top-down control.

It forms a cult of power and control that finances and stages international conflicts, such as those we see today in Ukraine, Israel, Gaza, Iran, Venezuela, with the promise of Cuba, Greenland and even Canada to join the current hit list.

The European Union, NATO, the United Nations, the World Health Organisation (WHO), the World Economic Forum (WEF), the World Bank and the International Monetary Fund (IMF) are all in there, along with all main banking institutions, including Goldman Sachs and JP Morgan, as well as global investment institutions such as BlackRock, Vanguard and State Street. Meanwhile, the military-industrial complex stands at the head of the war-profiteering agenda.

There can be no ‘peaceful resolution’ to world conflicts as long as this situation prevails. Its protagonists are presently set on the mass roll-out of IT: ‘smart’ 24/7 surveillance technologies (including your mobile phone), central bank digital currencies, ‘transhuman’ brain-chipped humans, advanced social engineering of human behaviour, and totalitarian government.

There is a lot to deal with. Not least the fact that Donald Trump and Benjamin Netanyahu have combined to head the current enforced regime of chaos, destruction and death. Only a few years ago, ironically, many saw America as a great ‘ally’ of Europe against the supposed threat of the Soviet regime.

Keep reading

Ed-Tech Vendors Fleece Schools Out Of Millions Of Dollars For Software That Makes Kids Dumber

After the teachers in Los Angeles nearly went on yet another strike, they may want to study a recent scandal that reveals where some of the district’s money is going. According to a report in the Westside Current, a former Los Angeles Unified School District employee and technology vendor, Gautham Sampath, just pled “not guilty” to money-laundering charges after allegedly rerouting $3 million to LAUSD technical project manager Hong Peng to land a $22 million contract for his information software.

Assuming Peng is guilty in this instance — and the hilariously illiterate texts between her and Sampath would suggest she is — it is reasonable to conclude she has probably done the same with other tech vendors, paying gargantuan sums of taxpayer money for often shoddy, useless software and pocketing large sums for it. And she is far from the only person doing it. Sampath’s company, Innive, evidently has “government contracts in California and elsewhere in the country.” This means that all over the country, local and state governments are awarding multimillion-dollar bids to conmen with few legal repercussions.

To be clear, this is money that could have gone to teachers, counselors, and administrators. This is money that could have been kept by the homeowners paying extortion-level property taxes. This is money that families could have applied to alternative schooling options. 

But instead, this kind of corruption continues to siphon away taxpayer money without anyone realizing it. Years ago, I wrote about the expenses that consume most of a school district’s budget, namely extracurriculars, special education, and disciplinary programs. What I should have added to this list was technology. 

For the past couple of decades, school districts have raided their rainy-day funds, issued bonds, and gone broke paying for iPads and Chromebooks, educational software, and specially trained personnel tasked with helping faculty use these products. And aside from a few district bureaucrats safely hidden in a nondescript office building that the district somehow owns, no one really knows how much any of this costs. Naturally, this lack of transparency makes embezzlement, laundering, and bribery all too easy.

Moreover, in my own experience of teaching high school English, most of these programs are worthless. I have no clue how much local districts are paying for so many research databases, note-taking apps, informational organizers, or AI tools, but I do know I never use them, nor do any of the teachers I’ve known.

Ironically, what’s worse than this useless software is the software we actually do have to use. Whether it involves recording grades, taking attendance, referring misbehavior, or compiling standardized assessment data for each student, these programs are, as a rule, terrible. They are poorly designed, convoluted, and frequently glitch and crash. Added to this are our online textbooks, which force users to click two dozen times through two dozen dropdown menus to open a particular text — and usually require a few periodic reboots afterward. 

Keep reading

Kiev’s attacks on Russian refinery cutting global oil supply – Kremlin

Ukrainian drone strikes on Russia’s energy infrastructure on the Black Sea coast are worsening the global oil crunch caused by the US-Israeli war on Iran and disruption in the Strait of Hormuz, Kremlin spokesman Dmitry Peskov has said.

Multiple Ukrainian drone strikes have hit Tuapse, a key, densely populated port in Russia’s Krasnodar Region, targeting its refinery and adjacent marine terminal. Regional governor Veniamin Kondratyev reported fires at the site, including a major blaze at the refinery, prompting evacuations of nearby residents and emergency response measures.

The attacks led to high-risk air pollution, with residents advised to use respirators, as an oil spill fouled miles of beach in the resort town. Kondratyev also released a video of the town filled with smoke, with a clean-up operation ongoing on the beach.

Keep reading

Fact Check: Did Cole Allen Drive for Savannah Guthrie or Other Celebrities?

Accused wannabe Trump-assassin Cole Tomas Allen was once a driver for some high-profile celebrities, including Savannah Guthrie, Taylor Swift, Bad Bunny and Sydney Sweeney, according to several posts circulating on social media.

Allen has drawn intense public interest after he charged through security while carrying a long gun with the intention of killing President Donald Trump and other Cabinet members at the White House correspondents’ dinner on Saturday night.

The 31-year-old California resident was charged with attempting to assassinate the President of the United States and several other counts following the incident that left a Secret Service officer, who was wearing a ballistic vest, shot in the chest. 

The Claim 

Following the shooting, interest in Allen has been rampant across social media.

Multiple posts on Facebook quickly went viral, claiming the would-be assassin worked as a driver and his wife currently works as an assistant for various celebrities.

The celebrities mentioned in these posts include:  

  • Savannah Guthrie 
  • Lil Wayne 
  • Sammy Hagar 
  • Mel Gibson 
  • Snoop Dogg 
  • Aaron Rodgers 
  • Pierre Poilievre 
  • Bad Bunny 
  • Ella Langley  
  • Kelly Clarkson 
  • Alan Jackson 
  • Tom Hanks 
  • Cardi B 
  • Jelly Roll 
  • Kim Kardashian 
  • Sydney Sweeney 
  • Josh Gates 
  • Joanna Lumley 
  • Judge Judy 
  • Marc Anthony 
  • Taylor Swift 

A number of the posts contain language like this: “BREAKING: The shooter at the White House Correspondents’ Dinner has been identified as 30-year-old Cole Allen from Torrance, California.” 

The copy then typically says, “Cole is a former driver for,” followed by a celebrity’s name. It then states, “and his wife is currently working as” his or her assistant. A number of the posts contain images of Allen and the celebrity together. 

Keep reading

The Empire’s Operating System: Palantir, AI War, and the Privatization of Sovereign Power

Palantir has spent years pretending it was just another software company, one of those sleek back-end firms that claims to make institutions more “efficient” while saying as little as possible about what that efficiency is actually for. That mask is slipping.

Palantir Technologies CEO Alex Karp’s April 2026 manifesto did not sound like the usual corporate boilerplate about innovation, security, or digital transformation. It sounded like a declaration from a company that sees itself as an arm of Western power, and is tired of speaking in euphemisms about it.

Karp’s message was blunt enough: Silicon Valley has wasted too much time building consumer trivia, pluralism has hollowed out the West, and the tech sector should stop wringing its hands and start serving military power with pride. That was shocking to some people, but only if they had not been paying attention to what Palantir was already doing. The company is not standing at a distance from the coercive machinery of the modern state; to the contrary, it has buried itself inside it.

In the United States, Palantir’s Maven platform is being pushed deeper into the Pentagon’s long-term warfighting infrastructure, turning AI-assisted surveillance and targeting into something more permanent than a temporary battlefield experiment. At the same time, Palantir-linked systems such as ImmigrationOS and ELITE have been used to help immigration authorities assemble dossiers, map people’s locations, and make deportation operations run faster and with less friction. The same company talking grandly about civilizational struggle and hard power is also helping build the digital plumbing for raids, removals, and population tracking.

Britain is now getting a taste of the same politics. Palantir is already embroiled in controversy over its place in NHS data systems, and reports that the Metropolitan Police is considering its technology for criminal investigations have sharpened fears that software first justified in the name of crisis management rarely stays in one lane for long. Today, it is health logistics, counterterrorism, and border control. Tomorrow it is policing, profiling, and the quiet normalization of permanent machine-assisted suspicion.

What gives the manifesto real weight is not its style, but its candor. It does not mark a dramatic break so much as say openly what Palantir’s contracts have implied for years. This company does not simply sell tools to the state; it also helps shape how the state sees, how quickly it acts, who it flags as a threat, and how much room is left for hesitation once the system starts producing answers. Palantir’s defenders call that modernization, and its critics call it something closer to the privatization of sovereign power, hidden inside software dashboards and sold to the public as common sense.

Keep reading

DYSTOPIAN Truck Tech: AI Scans Faces, Reads Lips & Checks Police Database BEFORE You Can Drive

A video exposing Ford’s dystopian patents for new vehicles has gone viral on X, fueling outrage over the accelerating war on personal vehicle ownership and freedom of movement. 

The clip details in-cabin cameras, biometric scanners, lip-reading AI, emotion detection, and real-time criminal database queries – all deciding whether your truck will let you drive.

In the video, the narrator states “imagine there was an emergency outside the truck… An accident…I jump in this truck. But it won’t shift into drive. Why? Because cameras and sensors inside of my cab won’t let me shift.”

“It detects that my eyes are big. There’s some emotion. Some panic. And doesn’t feel like I’m fit to drive. That isn’t science fiction. This is happening. Ford just filed patents,” he explains.

He continues: “Ford actually has a series of patents down at the U.S. Patent and Trade Office that deal with sensors and cameras inside their cab. And if that sensor determines you’re not fit to drive, the truck won’t shift from park to drive.”

The patents extend deep into control. Biometric systems scan face, iris, and fingerprint, cross-referencing law enforcement databases before allowing movement. 

“You wake up one morning, walk out to the driveway, climb into a vehicle with your name on the title… Before you go anywhere, before you’ve done a single thing wrong, your truck has already run your face through a law enforcement database. Ford’s own patent language describes this as ‘potentially useful for police,’” the narrator further outlines.

Lip-reading tech uses interior cameras and machine learning on vast mouth-movement datasets, plus inaudible sound waves. This enables not just voice commands in noisy conditions but also monitoring for targeted ads based on conversations. 

Ford Pro Telematics also already feeds live driver video to fleet managers.

This corporate push dovetails perfectly with government efforts to restrict mobility. Just weeks ago, Massachusetts Democrats advanced Senate Bill S.2246, directing MassDOT to set binding goals for slashing statewide vehicle miles traveled (VMT) under “climate” pretexts. 

The bill creates a new council to shove residents onto public transit, hitting rural drivers hardest who rely on cars for work, family, and essentials.

Keep reading

Gone in 9 Seconds: AI Coding Agent Deletes Entire Company Database and All Backups

The founder of a software company has issued a public warning after an AI coding assistant erased his company’s entire production database and all backups in just nine seconds.

Tom’s Hardware reports that Jer Crane, founder of PocketOS, a platform serving car rental businesses, experienced what he describes as catastrophic failures when an AI coding agent deleted critical company data that took months to accumulate. The incident occurred when Cursor, an AI coding tool powered by Anthropic’s Claude Opus 4.6, was performing what should have been a routine task in the company’s staging environment.

According to Crane’s detailed account posted on X, the AI agent encountered an obstacle and independently decided to resolve the issue by deleting the production database in Railway through an API call. Railway is the cloud infrastructure provider used by PocketOS, generally considered more user-friendly than major alternatives like Amazon Web Services. The entire deletion process took only nine seconds to complete.

The situation escalated beyond a simple database deletion due to Railway’s infrastructure design. The cloud provider’s system stored backups on the same volume as the source data, meaning when the AI agent deleted the primary database, all backup copies were simultaneously erased. This combination of the AI agent’s unauthorized action and the infrastructure provider’s architecture created what Crane characterizes as a recipe for disaster.
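The co-location failure described here — backups living on the same volume as the data they protect — can be caught with a simple pre-flight check before trusting a backup scheme. A minimal sketch (the function and paths are hypothetical illustrations, not Railway’s or PocketOS’s actual setup):

```python
import os

def backups_are_isolated(data_path: str, backup_path: str) -> bool:
    """Return True only if the backup directory lives on a different
    filesystem (device) than the live data, so that deleting the data
    volume cannot take the backups down with it."""
    return os.stat(data_path).st_dev != os.stat(backup_path).st_dev
```

Two paths on the same mount share a device ID, so the check fails for same-volume “backups”; a separate disk, network share, or object store passes.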

When Crane questioned the AI agent about its actions, he received a response that revealed the extent of the failure. The agent’s explanation began with an acknowledgment of poor judgment. According to the verbatim response Crane shared, the AI stated it had guessed that deleting a staging volume through the API would only affect the staging environment without verifying this assumption or consulting Railway’s documentation on how volumes function across different environments.

The AI agent’s confession continued with an admission of multiple violations of its operational principles. It acknowledged running a destructive action without authorization, failing to understand the consequences before executing the command, and not reading the relevant documentation about Railway’s volume behavior across environments. The agent recognized it should have either asked for permission first or found a non-destructive solution to the credential mismatch it encountered.
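The violations the agent admits to — running a destructive action without authorization instead of asking first — are exactly what human-in-the-loop gates are designed to prevent. A minimal sketch of such a gate (the names and policy are illustrative, not Cursor’s or Railway’s actual API):

```python
DESTRUCTIVE_VERBS = {"delete", "drop", "truncate", "destroy"}

class ApprovalRequired(Exception):
    """Raised when a destructive action is attempted without human sign-off."""

def guarded_call(action: str, target: str, approved: bool = False) -> str:
    """Execute an action only if it is non-destructive, or if a human has
    explicitly approved it. Destructive verbs always require sign-off."""
    verb = action.split()[0].lower()
    if verb in DESTRUCTIVE_VERBS and not approved:
        raise ApprovalRequired(f"'{action}' on {target} needs human approval")
    return f"executed: {action} on {target}"
```

An agent framework using this pattern would surface `ApprovalRequired` to the operator rather than proceed, which is precisely the step the agent in this incident skipped.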

Keep reading

Palantir inks $300 million deal with USDA to safeguard food supply

Palantir announced a $300 million deal with the U.S. Department of Agriculture, which will use the software company’s technology to manage farmland as geopolitical risks threaten global supply chains.  

The agreement builds on ongoing projects with the USDA and underscores Palantir’s growing role inside the U.S. government as it goes beyond cornerstone defense contracts supporting U.S. military modernization.

U.S. farmers are grappling with rising supply costs and are getting squeezed by an ongoing trade war between the U.S. and its major trading partners. That includes China, a key soybean purchaser, which temporarily crippled the market late last year.

In December, President Donald Trump announced a $12 billion bailout aimed at helping farmers swept up in the trade war. But rising gas prices from the war in Iran amplified the pressure, causing fertilizer costs to spike due to shipping disruptions. That’s forced many farmers to rethink what they produce, putting supply chains at risk.

China’s purchase of U.S. farmland in recent years has also drawn scrutiny from Washington and foreign policy experts.

A recent research note published by the Foundation for Defense of Democracies recommended that the USDA reform reporting requirements “embedded within the Agricultural Foreign Investment Disclosure Act (AFIDA) to prevent China and other adversarial countries from exploiting commercial land transactions to gain a strategic edge over the United States.”

The USDA’s contract with Palantir signals its desire to address this issue by harnessing the company’s digital tools.

Palantir was founded in 2003 to scale U.S. defense capabilities in the wake of 9/11, and CEO Alex Karp has long touted the company’s commitment to supporting U.S. warfighters. The company has recently gained recognition for its AI-powered Maven Smart System platform, which was used by the U.S. military in Iran.

“The fact that you can now target more precisely … has shifted the way in which war is fought,” Karp told CNBC at AIPCon in March.

Palantir has also faced sharp criticism over the years for its work with U.S. Immigration and Customs Enforcement and the Department of Homeland Security, including reports that its tools are being used by the government to surveil Americans, claims the company has denied.

Keep reading

Research: AI Chatbots Encourage Harmful Behavior by Sucking Up to Users

AI systems validate people even when those users describe engaging in unethical or harmful conduct, creating a vicious cycle of mental health damage and other issues, according to new research published in Science.

A comprehensive study conducted by researchers from Stanford and Carnegie Mellon and published by Science has uncovered a troubling pattern in how conversational AI systems interact with users. The research demonstrates that modern chatbots tend to excessively flatter and validate individuals, even when those users describe morally questionable or illegal behavior. This phenomenon, known as social sycophancy, has concrete negative effects on human decision-making and social responsibility.

Lead researcher Myra Cheng from Stanford University’s computer science department spearheaded the study, which combined computational analysis with psychological experiments involving over 2,000 participants. The research team tested eleven different state-of-the-art AI models from major technology companies including OpenAI, Google, and Meta.

The researchers fed these systems thousands of text prompts representing various social situations. One dataset consisted of everyday advice requests, while another drew from thousands of posts on a popular internet forum where people described social conflicts. For this particular dataset, the team specifically selected posts where human readers unanimously agreed the original poster was completely in the wrong.

A third dataset included statements describing seriously negative actions such as forgery, deception, illegal activities, and actions motivated purely by spite. The goal was to determine how often AI systems would validate clearly unethical behavior.

The results revealed widespread sycophantic behavior across all tested models. When presented with scenarios that human evaluators universally condemned, the AI systems still validated the user just over half the time. When responding to prompts about deception and illegal conduct, the models endorsed the user’s actions 47 percent of the time. On average, the technology affirmed users 49 percent more frequently than human advisers would in identical situations.
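The “49 percent more frequently” figure is a relative rate against a human baseline. As a quick illustration of how such a comparison is computed (the sample rates below are invented for the example, not the study’s actual data):

```python
def relative_excess(ai_affirm_rate: float, human_affirm_rate: float) -> float:
    """How much more often the AI affirms than the human baseline,
    expressed as a fraction of the human rate."""
    return ai_affirm_rate / human_affirm_rate - 1.0

# e.g. if human advisers affirm in 40% of cases and a model affirms
# in 59.6%, the model affirms 49% more frequently than humans do:
excess = relative_excess(0.596, 0.40)  # ≈ 0.49
```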

However, documenting this pattern was only the beginning. The research team then conducted three experiments to measure how these flattering responses actually influenced human judgment and behavior.

In the first two experiments, participants read descriptions of social disputes where they were ostensibly at fault. They then received either flattering feedback from an AI system or neutral responses that challenged their behavior. The third experiment placed participants in a live chat interface where they discussed a real conflict from their own past, exchanging eight rounds of messages with a chatbot. Half the participants interacted with a program engineered to flatter them, while the rest communicated with a version designed to offer pushback.

The findings revealed significant behavioral impacts. Participants who received excessive validation became far more confident that their original actions were justified. They demonstrated substantially less willingness to take initiative in resolving the situation or apologizing to others involved. The researchers observed that agreeable chatbots rarely mentioned the other person’s perspective, causing users to lose their sense of social accountability. Participants in non-sycophantic groups admitted fault in follow-up messages at much higher rates.

Keep reading

Meta Inks Deal For Solar Power At Night, Beamed From Space

The race to secure electricity for AI models has reached new heights: Meta has signed an agreement with the startup Overview Energy that could see a thousand satellites beam infrared light to solar farms that power data centers at night.

In 2024, Meta’s data centers used more than 18,000 gigawatt-hours of electricity — roughly enough to power more than 1.7 million American homes for a year — and its need for compute power is only increasing. The company has committed to building 30 gigawatts of renewable power sources, with a focus on industrial-scale solar power plants.
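The homes comparison above follows from average US household electricity consumption, roughly 10,500 kWh per year (an approximate EIA-style baseline assumed here, not a figure from the article):

```python
meta_usage_gwh = 18_000         # Meta data-center usage in 2024, per the article
kwh_per_home_per_year = 10_500  # assumed average US household consumption

# 1 GWh = 1,000,000 kWh
homes_powered = meta_usage_gwh * 1_000_000 / kwh_per_home_per_year
# ≈ 1.71 million homes, consistent with "more than 1.7 million"
```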

Typically, data centers turning to solar power must either invest in battery storage or rely on other generation sources to operate at night.

Overview Energy, a four-year-old, Ashburn, Virginia, outfit that emerged from stealth in December, has a different solution: The company is developing spacecraft that collect plentiful solar power in space. It then plans to convert that energy to near-infrared light and beam it at sufficiently large solar farms — on the order of hundreds of megawatts — which can convert that light to electricity.

By using a wide, infrared beam to power existing terrestrial solar infrastructure, Overview thinks it can sidestep the technological challenges and safety and regulatory issues that bedevil plans to transmit power to Earth through high-power lasers or microwave beams. CEO Marc Berte says you’ll be able to stare right into his satellite’s beam with no ill effects.

The technology would increase the return on investment from building solar farms and reduce reliance on fossil fuels — if it can be deployed at scale.

Overview says it has already demonstrated power transmission to the ground from an aircraft, and is planning to launch a satellite to low Earth orbit in January 2028 to perform its first power transmission from space.

Keep reading