One of NHS’s biggest AI projects is halted after fears it used health data of 57 MILLION people without proper permissions

NHS England has paused a ground-breaking AI project designed to predict an individual’s risk of health conditions after concerns were raised that data from 57 million people was being used without the right permissions.

Foresight, which uses Meta’s open-source AI model, Llama 2, was being tested by researchers at University College London and King’s College London as part of a national pilot scheme exploring how AI could be used to tailor healthcare plans for patients based on their medical history.

But the brakes were applied to the pioneering scheme after experts warned even anonymised records could contain enough information to identify individuals, The Observer reported.

A joint IT committee between the British Medical Association (BMA) and the Royal College of General Practitioners (RCGP) also said they had not been made aware that data collected for research into Covid was now being used to train the AI model.

The bodies have also accused the research consortium, led by Health Data Research UK, of failing to consult an advisory body of doctors before feeding the health data of tens of millions of patients into Foresight.

Both the BMA and the RCGP have asked NHS England to refer itself to the Information Commissioner over the matter.

Professor Kamila Hawthorne, chair of the RCGP, said the issue was one of ‘fostering patient trust’ that their data was not being used ‘beyond what they’ve given permission for.’

She said: ‘As data controllers, GPs take the management of their patients’ medical data very seriously, and we want to be sure data isn’t being used beyond its scope, in this case to train an AI programme.

Keep reading

‘The Agenda: Their Vision, Your Future’: The digital prison that awaits us all

“The prophecies made in 1931 [in ‘Brave New World’] are coming true much sooner than I thought they would … The nightmare of total organisation … has emerged … and is now awaiting us, just around the next corner.”—Aldous Huxley, Brave New World Revisited, 1958

‘The Agenda: Their Vision, Your Future’ is a feature-length independent documentary produced by Mark Sharman, a former UK broadcasting executive at ITV and Sky (formerly BSkyB).

In fiction and fact, there have always been people and organisations with ambitions to control the world. And now the oligarchs who pull the strings of finance and power finally have the tools to achieve their global objectives; omnipresent surveillance, artificial intelligence, digital currency and ultimately digital identities. The potential for social control of our lives and minds is alarmingly real.

The plan has been decades in the making and has seen infiltration of Governments, local councils, big business, civil society, the media and, crucially, education. A ceaseless push for a new reality, echoing Aldous Huxley’s ‘Brave New World’, or George Orwell’s ‘1984’.

‘The Agenda: Their Vision, Your Future’ examines the digital prison which awaits us if we do not push back right now. How your food, energy, money, travel and even your access to the internet could be limited and controlled; how financial power is strangling democracy and how global institutions like the World Health Organisation are commandeered to champion ideological and fiscal objectives.

The centrepiece is man-made climate change and with it, the race to Net Zero. Both are encapsulated in the United Nations and its Agenda 2030. A force for good? Or “a blank cheque for totalitarian global control”?

The Agenda presents expert views from the UK, the USA and Europe.

Keep reading

Palantir’s Value Soars With Dystopian Spy Tool that Will Centralize Data on Americans

During an end-of-year investor call this February, Palantir CEO, co-founder and militant Zionist Alex Karp bragged that his company was making a financial killing by enabling mass murder.

“Palantir is here to disrupt and make the institutions we partner with the very best in the world and, when it’s necessary, to scare enemies,” he stated, adding: “And on occasion, kill them.”  

On this front, Karp claimed Palantir was “crushing it,” and he professed to be “super-proud of the role we play, especially in places we can’t talk about.” 

Karp went on to predict social “disruption” ahead that would be “very good for Palantir.”

“There’s a revolution. Some people are going to get their heads cut off,” he warned, suggesting that his firm was producing the most vital technology enabling elites to restore control during the coming unrest.

Denver-based Palantir [which specializes in software platforms for big-data analytics] is already playing a decisive role in the besieged Gaza Strip, where its products assist Israel’s application of a ferocious AI targeting system known as Lavender which directs its ongoing genocide.

In the face of public protest, Karp has acknowledged that he is directly involved in killing Palestinians in Gaza, but insisted the dead were “mostly terrorists.”

Keep reading

Trump’s Palantir-Powered Surveillance Is Turning America Into A Digital Prison

“We are fast approaching the stage of the ultimate inversion: the stage where the government is free to do anything it pleases, while the citizens may act only by permission.” — Ayn Rand

Call it what it is: a panopticon presidency.

President Trump’s plan to fuse government power with private surveillance tech to build a centralized, national citizen database is the final step in transforming America from a constitutional republic into a digital dictatorship armed with algorithms and powered by unaccountable, all-seeing artificial intelligence.

This isn’t about national security. It’s about control.

According to news reports, the Trump administration is quietly collaborating with Palantir Technologies—the data-mining behemoth co-founded by billionaire Peter Thiel—to construct a centralized, government-wide surveillance system that would consolidate biometric, behavioral, and geolocation data into a single, weaponized database of Americans’ private information.

This isn’t about protecting freedom. It’s about rendering freedom obsolete.

What we’re witnessing is the transformation of America into a digital prison—one where the inmates are told we’re free while every move, every word, every thought is monitored, recorded, and used to assign a “threat score” that determines our place in the new hierarchy of obedience.

This puts us one more step down the road to China’s dystopian system of social credit scores and Big Brother surveillance.

The tools enabling this all-seeing surveillance regime are not new, but under Trump’s direction, they are being fused together in unprecedented ways—with Palantir at the center of this digital dragnet.

Palantir, long criticized for its role in powering ICE (Immigration and Customs Enforcement) raids and predictive policing, is now poised to become the brain of Trump’s surveillance regime.

Under the guise of “data integration” and “public safety,” this public-private partnership would deploy AI-enhanced systems to comb through everything from facial recognition feeds and license plate readers to social media posts and cellphone metadata—cross-referencing it all to assess a person’s risk to the state.

Keep reading

To Outrun the Complacent Class

“The emails showed the world’s leading climatologists busily working to organize a research cartel. Peer review was a legitimate source of authority when the process supported their positions. It was compromised, if not malicious, when it offered critics of the orthodoxy a platform. The wish to crush dissenting views, in their minds, had become indistinguishable from the pursuit of truth.”  

– Martin Gurri

Over the last two decades, exafloods of Internet content have educated and entertained beyond imagination. Exponentially growing communications bandwidth and data transparency empowered regular people, elevated previously unknown geniuses, and helped expose deep dysfunction among many existing “experts.” A tsunami of social media also generated psychedelic confusion, not least among the experts themselves, leading to, in Martin Gurri’s words, a “crisis of authority.”

Now, artificial intelligence is about to amplify this infowarp a million-fold, for good and ill, producing both unprecedented knowledge and wealth and new epistemic challenges. 

If you thought the battles over social media “misinformation” were intense, just wait for the A.I. era. 

Lots of failed experts are engaged in a tactical retreat, regrouping for the coming battles. They passively admit “mistakes were made” but dodge specific accountability and refuse to acknowledge those who got the big questions right. 

At the same time, they are busy establishing new gatekeepers, taboos, and approved voices. The very people who got so many giant questions so very wrong over the last two decades are attempting to build a new information fortress for the next 20 years.

Journalist Douglas Murray, who once backed free speech but also celebrated America’s forever-war disasters, is threatening misbehavers with excommunication. After a dismal recent appearance on the Joe Rogan Experience, the prolific podcast guest warned against listening to the wrong podcasts. 

What the standards are in the new media — especially on podcasts — is still being worked out.

But there must be some.

Otherwise the new media will lead people into errors and evils far greater than the old media could ever dream of.

What a turn of events. In recent years, on varied topics from Covid to Ukraine, the highly imperfect and diverse new media demolished the lockstep old media. Years from now, we might conclude the new media helped save Western civilization. Murray himself owes much of his impressive influence to new media. But now, suddenly, if Joe Rogan and alternative outlets don’t bow to Murray and his friends, they might be more “evil” than the people who lied about Joe Biden’s health, the origin of the virus, Iraqi WMD, Russia collusion, climate apocalypse, and so much more.

But doesn’t Murray have a point? Don’t standards matter? 

Of course, standards matter. Credibility matters. Expertise, editorial judgment, and curation are all important – perhaps more so in an era of information overload. No single person can navigate the infowarp alone. We need trusted sources and guides. 

When Murray balks at defining any standards, however, his tsk-tsk-ing is exposed as an arbitrary threat. If you don’t toe the party line, he and his friends will smear you out of polite society. 

The question is not whether we value standards and expertise. Obviously we do. The question is: at what layers of the stack are these judgments made? 

Keep reading

OpenAI Is Ordered to Save Every ChatGPT Chat — Even the Ones You Delete

A federal court order requiring OpenAI to retain all ChatGPT conversations, including those users have deleted, has raised strong concern among privacy advocates and added pressure to a growing legal battle over the use of copyrighted material in AI systems.

On May 13, US Magistrate Judge Ona T. Wang directed OpenAI to “preserve and segregate all output log data that would otherwise be deleted on a going-forward basis until further order of the Court.” Although the order was issued several weeks ago, it only came to wider attention this week as OpenAI began taking formal steps to challenge it.

The ruling stems from multiple lawsuits filed by media organizations, including The New York Times, that accuse OpenAI of unlawfully using their copyrighted content to train and operate ChatGPT.

In response, OpenAI submitted a filing urging US District Judge Sidney H. Stein to overturn what it described as a “sweeping, unprecedented order.”

The company argued that the directive forces it to ignore user choices about data deletion, jeopardizing the privacy of millions. OpenAI also pointed to a statement from The New York Times editorial board asserting that Americans “should be able to control what happens to their personal data.”

Keep reading

Welcome to the Palantir World Order

How does a company with CIA ties and two steering committee members of the secretive Bilderberg Group as founders end up in the White House?

This question should be on the minds of every free-thinking person regardless of political affiliation or lack thereof. The answer to this question cuts to the heart of understanding the future direction of the American experiment, and the impact it will have on the rest of the world.

Starting in 2019 I began warning that we were witnessing the creation of a Technocratic State, with Big Tech CEOs amassing exorbitant wealth and unfathomable data about the world. This collection of financial wealth and data has allowed these Technocrats to gain power equivalent to that of many nations, and beyond that of smaller ones. Palantir is a perfect example of the merging of corporate and state power.

Palantir was co-founded by Peter Thiel and Alex Karp, long before they were made Steering Committee members of the secretive Bilderberg Group. Karp and Thiel launched Palantir with seed funding from the CIA’s venture capital firm In-Q-Tel. The CIA aimed to use Palantir to relaunch the controversial post-9/11 program known as Total Information Awareness. TIA had been shuttered after public outcry and concerns around surveillance. However, after meeting with intelligence officials, Thiel and Karp positioned Palantir to do privately what the government could not get permission from the American people to do publicly.

Over the first 120 days of the second Trump administration it has become clear that Palantir is on the way to becoming the U.S. government’s new Military Industrial Complex contractor of choice. A quick search reveals numerous headlines detailing the recent rapid rise of Palantir’s stock.

This should come as no surprise given the abundant contracts and projects Palantir is reportedly developing with the U.S. government. Here’s a brief look at the ways in which Palantir is becoming more deeply connected to the MIC.

Keep reading

‘Trump Flipped On Us’: MAGA Reacts to Potential National Citizen Database

Supporters of President Donald Trump expressed anger and disbelief online following reports that his administration had advanced plans to create a national citizen database with technology firm Palantir.

Newsweek reached out to Palantir for comment.

Why It Matters

The White House has contracted Palantir, a Colorado-based analytics company co-founded by Trump supporter Peter Thiel, to assist in compiling a database of personal information on American citizens, according to unnamed government officials and Palantir employees who spoke with The New York Times. The purported deal follows project talks Palantir had with the Social Security Administration (SSA) and the Department of Education.

The reaction from Trump’s supporters reflected growing unease within conservative circles, indicating a rare rupture between the president and key segments of his constituency. The controversy underscores nationwide anxieties around privacy, civil liberties, and the growing influence of technology firms over personal information management.

What To Know

The Palantir deal marks a significant development in government data collection, drawing sharp concern from privacy advocates and Trump’s own core base, otherwise known as “MAGA.” Detractors compared the centralized database effort to surveillance initiatives in authoritarian regimes.

Numerous pro-Trump voices expressed dismay and feelings of betrayal across social media platforms like X.

“People are so quick to suggest that I flipped on Trump…No, no, no…I didn’t flip on Trump. TRUMP FLIPPED ON US. I’m just not willing to continue living in a LIE, and I will tell you the unfortunate TRUTH about it,” The Patriot Voice wrote on X to his 158,000 followers.

Keep reading

Trump’s ‘Big Beautiful Bill’ Locks Down AI for a Decade – Welcome to the Golden Age of AI Tyranny

President Donald J. Trump’s One Big Beautiful Bill Act, specifically Section 43201, imposes a 10-year moratorium on state-level AI regulations.

This move, part of H.R.1, raises concerns about a future dominated by unchecked AI power.

The bill allocates $500 million to modernize federal IT systems with AI, but it also bars states from enforcing AI laws. Now, critics warn of an “AI tyranny” era.

The One Big Beautiful Bill Act’s Section 43201 outlines a significant shift in AI governance. Trump actively pushed for this bill, which the House passed. Section 43201 states,

“Except as provided in paragraph (2), no State or political subdivision thereof may enforce, during the 10-year period beginning on the date of the enactment of this Act, any law or regulation limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems.”

This moratorium aims to prevent state interference in AI development. However, it allows exceptions for laws that facilitate AI deployment or impose federal requirements.

Therefore, this section prioritizes federal control over AI. The bill also funds AI modernization within the Department of Commerce, allocating $500 million until 2034. Thus, this dual approach sparks debate over innovation versus regulation.

Keep reading

Will Human Error Hand AI The Key To Our Destruction?

By now, the apocalyptic whispers that once belonged solely to science fiction are starting to sound more like realistic forecasts. Artificial intelligence, once hailed as the great liberator of human productivity and ingenuity, is now moonlighting as a con artist, data thief, and spy.

The machines are rising, yes—but they’re not doing it alone. As we embrace AI with reckless abandon, it’s not the code that’s dooming us. It’s the carbon-based lifeforms behind the keyboard making forehead-slapping mistakes. If civilization does collapse under the weight of digital warfare, it’ll be a joint project between rogue AI and good old-fashioned human idiocy.

Let’s talk about the Rise of the Machines, 2025 edition—not in the form of Terminators with glowing eyes, but as lines of sophisticated code hell-bent on manipulation, infiltration, and destruction. Whether we are willing to accept it or not, AI-powered cyberattacks are becoming disturbingly common and alarmingly sophisticated.

We’re seeing the proliferation of deepfake scams, hyper-personalized phishing attacks, and AI-assisted password cracking that make traditional defenses look as flimsy as a paper umbrella in a hurricane.

Take the case of deepfake fraud, where criminals now impersonate CEOs and executives with astonishing accuracy. These aren’t your cousin’s sloppy Photoshop jobs. These are full-motion, pitch-perfect, AI-generated replicas of real people, used in schemes to authorize fraudulent wire transfers, manipulate employees, or simply throw entire organizations into chaos. It’s not just unsettling. It’s an outright weaponization of trust—an erosion of reality itself.

And don’t forget AI-generated phishing emails. These aren’t the hilariously broken English scams from 2006. AI now writes flawless prose, mirroring the tone and style of your boss, your bank, or your kid’s school, tricking you into clicking that one wrong link that detonates ransomware across your organization like a digital IED. The machines aren’t playing chess anymore—they’re playing you.

But even as AI’s capabilities soar into dystopian territory, the greatest cybersecurity threat isn’t machine intelligence. It’s human incompetence. You could hand someone the most secure system in the world, and they’ll still manage to set it on fire with a reused password or a click on an “urgent invoice” from a Nigerian prince.

A report by NinjaOne drives this point home with a sledgehammer: nearly 95% of cybersecurity breaches are caused by human error. Think about that. Not Skynet, not Chinese cyber commandos or North Korean hackers in basements—but Steve in Accounting, who uses “123456” as his password and clicks on pop-ups promising free iPhones.

The attack vectors are depressingly mundane: downloading unsafe software, failing to update systems, weak passwords, falling for phishing scams, and misconfigured security settings.

Keep reading