Starmer Announces Yet More Censorship

Even more censorship is on the way. The Government has announced plans to force AI chatbots to comply with malicious communications laws – and to give itself Orwellian powers to bring in yet more speech restrictions without Parliamentary oversight. Toby writes about the moves in the Telegraph.

The Government intends to bring forward amendments of its own to the schools Bill that will supposedly close a loophole in the Online Safety Act to make sure AI chatbots comply with Britain’s draconian censorship laws. That will mean that if Grok says something in response to a user prompt that breaches, say, the Malicious Communications Act 1988, which was designed to protect women from obscene phone calls, Ofcom can fine its parent company £18 million or 10% of its annual global turnover, whichever is higher.

This will be the death knell of Britain’s burgeoning AI sector, particularly as chatbots become more autonomous. What tech entrepreneur will risk setting up an AI company in the UK, knowing that if a chatbot shares an anti-immigration meme or misgenders a trans person, it could mean a swingeing fine?

Indeed, I wouldn’t be surprised if xAI, along with OpenAI and Anthropic, decide to withdraw access to their chatbots from UK residents. At the very least, we’ll be saddled with lobotomised versions that trot out progressive bromides whenever they’re asked a political question.

In addition, the Government has said it will pass a new law to stop children sending or receiving nude images. Needless to say, that’s already a criminal offence under the Protection of Children Act 1978, so what does the Government have in mind?

It has not said, but I fear it means embedding surveillance software in every smartphone to enable the authorities to monitor users’ activity, no doubt accompanied by mandatory digital ID so no one will be able to hide. Not even the People’s Republic of China does that.

The Government unveiled some other Orwellian measures, but rather than bring them in as revisions to the schools Bill, it will put through amendments that will enable it to make further changes to Britain’s censorship regime via secondary legislation, i.e., it will grant itself sweeping Henry VIII powers.

It’s worth bearing in mind that secondary legislation cannot be amended and allows little time for debate. The Government’s excessive reliance on secondary legislation has been criticised by the House of Lords Constitution Committee and the Delegated Powers and Regulatory Reform Committee.

Keep reading

What the Flock Is This: The Future of Mass Surveillance in the USA

Big Brother’s highway cameras now have AI: they capture six to 12 photos of every car that passes, all of which are uploaded to a huge national database that out-of-state police and government agencies can access. The cameras are very expensive, and the data is retained for 30 days.

Cameras are in 49 states.

It’s like being watched by a bunch of prison guards.

At one point, anyone could access the database publicly. Then came the lawsuits. What does this mean for mass surveillance in the United States?

Keep reading

Was It a Coincidental Traffic Stop or AI-Powered Surveillance?

Seth Ferranti was driving his Ford pickup along a stretch of interstate in southeastern Nebraska in November 2024 when law enforcement pulled him over, claiming he had wobbled onto the hard shoulder.

As the Seward County sheriff’s deputies questioned Ferranti, a filmmaker who had spent 21 years in prison for distributing LSD, they allegedly smelled cannabis. Declaring this probable cause, they searched the vehicle and discovered more than 400 pounds of marijuana.

But were those the actual reasons for the stop and search? When Ferranti went on trial, his attorneys presented a license plate reader report produced by the security communications company Motorola Solutions. It revealed Ferranti had been consistently monitored prior to his arrest, including by the local sheriff on the day he was apprehended. (Neither the sheriff’s office nor Motorola responded to Reason’s requests for comment.)

Ferranti’s legal team argued that it was unconstitutional to surveil somebody based on his previous crimes. The argument did not carry the day: Last month their client was sentenced to up to two and a half years for possession of cannabis with intent to distribute. But the case still raises substantial moral and constitutional questions about both the scale of these public-private surveillance partnerships and the ways they’re being used.

Ferranti had long been a celebrity in the drug-reform world, going back to that LSD arrest in the early ’90s. After that first bust, he jumped bail, went on the lam, landed on the U.S. Marshals’ 15 Most Wanted Fugitives list, and even staged his own drowning to evade the authorities. After he started serving his sentence in 1993, he became a prolific prison journalist, writing the “I’m Busted” column for Vice. The New Jersey native always insisted that his crimes were nonviolent and that the drugs he sold, LSD and cannabis, had medicinal or therapeutic benefits.

After Ferranti came out of prison, his 2017 documentary White Boy—the true story of a teenage FBI informant who became a major cocaine trafficker—was a success on Netflix. He produced a number of further films, including 2023’s Secret History of the LSD Trade. And apparently, the government kept watching him.

It’s been watching a lot of people—and Motorola isn’t the only company helping it. Flock Safety was founded in 2017, and within five years it had tens of thousands of cameras operational. As the American Civil Liberties Union (ACLU) has warned, Flock’s AI-assisted automated license plate recognition (ALPR) system has been undergoing an “insidious expansion” beyond its supposed purposes of identifying vehicles of interest, such as stolen cars and hit-and-run suspects. Immigration and Customs Enforcement has used it to locate illegal migrants, and law enforcement in Texas used it to investigate a self-administered abortion, foreshadowing its potential use as a predictive policing tool for all Americans. Lee Schmidt, a veteran in Virginia, recently learned that the system logged him more than 500 times in four months. 

“I don’t know whether law enforcement officers are using [ALPRs] to do predictive policing,” says Joshua Windham of the Institute for Justice, a public interest law firm that is campaigning to stop the warrantless use of license plate reader cameras. “We know that [Customs and Border Protection] is using ALPRs generally to stop cars with what they deem ‘suspicious’ travel patterns.”

After reviewing the document cataloguing Ferranti’s vehicle monitoring, Windham adds: “The records are consistent with an officer either looking up a car in his system to see where else that car was captured by ALPRs, or that car showing up as a ‘hot list’ alert in the Motorola system. But it’s hard to tell, from the records alone, whether the stop was a ‘predictive policing’ stop.”

Ferranti is convinced it was. “There were no warrants, investigations, informants, state police, DEA, or FBI involvement, just Seward County Sheriff’s office [and an] AI-assisted license plate tracking service to perpetuate their outdated War on Drugs mission,” he said in an Instagram post published by his family following his sentencing. “Traveling the highways as a person with a record is now considered [suspicious] activity by the AI.”

Keep reading

AI Overlords of the World Hacked: Fallout From the Massive Palantir Breach

Palantir Technologies has been hacked, according to well-known blogger Kim Dotcom. The company develops software for intelligence and big data analysis. 

Palantir (named after the magical ‘seeing stones’ from ‘The Lord of the Rings’) doesn’t engage in surveillance in the conventional sense using spies, cameras, or bugs. Instead, it develops software that is sold to government agencies, military organizations, and large corporations.

Clients (like the CIA or the German police) upload all their data, and Palantir (its primary platforms are Gotham for military purposes and Foundry for business) then utilizes AI to transform this chaotic information into a coherent picture.

Essentially, it creates a ‘digital twin’ of reality, revealing connections that analysts could never have recognized on their own: for example, that a terrorist had called the cousin of someone who recently transferred money to a suspicious account.

The claims about wiretapping Trump and Musk are likely untrue or highly exaggerated. However, there’s no doubt that Palantir serves as a massive surveillance mechanism for monitoring America’s adversaries (and not only them). It is an “operating system for war and intelligence,” providing agencies with a supercomputer that can see everything. But it’s the agencies themselves that feed this computer with data.

Keep reading

Disney Sends Cease-and-Desist Letter to Chinese ByteDance To Stop Use of Copyrighted Characters in New Seedance 2.0 AI Video Generator

ByteDance is reportedly vowing to comply.

The release of the revolutionary Seedance 2.0 AI video generator has sent Hollywood into a frenzy, with the Motion Picture Association and several industry organizations taking aim at the Chinese company ByteDance.

Now, Disney has gone one step further and sent a cease-and-desist letter to ByteDance, accusing it of ‘using Disney characters to train and power the AI generator without permission’.

Reuters reported:

“Disney said ByteDance had pre‑packaged Seedance with a pirated library of copyrighted characters from franchises including Star Wars and Marvel, portraying them as if they were public-domain clip art, the person said.

The letter alleges Seedance is reproducing, distributing and creating derivative works featuring Spider-Man, Darth Vader, and other characters, the person added.”

Keep reading

Macron’s AI Clown Show: Europe’s Digital Dilemma

The European Union has lost its place in the global race for artificial intelligence. In a single post on X, France’s President Emmanuel Macron inadvertently outlined the convoluted situation while simultaneously revealing his personal emotional fragility.

The leading representatives of the European Union like to present themselves as emotionless technocrats. Maintaining the greatest possible distance from citizens, they execute their agenda of societal transformation toward what they understand as a net-zero economy.

This ostentatious distance from the citizenry acts as a simulacrum of power, which, in politicians like Emmanuel Macron, often veers into the caricatural.

Macron’s striking presence in foreign affairs—whether regarding the Ukraine war or recurring provocations toward the United States—correlates with his aggressive censorship policy toward his own population. A president without a people, steering his minority government through a budgetary crisis that brings France ever closer to the fiscal abyss.

In Macron’s persona, the European misstep is condensed: economically failed, deeply unpopular among his own people, geopolitically essentially irrelevant—and yet imbued with lofty, messianic plans. 

This performative play of power, coupled with hardly disguised impotence and incompetence, inevitably produces an effect that can be described as clownish. It is the expression of a political style that can no longer reconcile claim with reality—and thus delivers less leadership than a tragicomic performance.

Keep reading

Ring Cancels Flock Safety Integration After Public Backlash

Public backlash has forced Ring to cancel its partnership with Flock Safety, the law enforcement surveillance company whose camera network has reportedly given ICE and other federal agencies access to footage across the country.

Ring announced the cancellation this week, saying the integration never went live.

The company’s statement was careful:

“Following a comprehensive review, we determined the planned Flock Safety integration would require significantly more time and resources than anticipated. We therefore made the joint decision to cancel the integration and continue with our current partners…The integration never launched, so no Ring customer videos were ever sent to Flock Safety.”

That last sentence is doing a lot of work. Ring users responding to the Flock announcement went further than strongly worded tweets. People smashed cameras. Others announced publicly that they were throwing their devices away. The Amazon-owned company had badly misread the moment.

Flock Safety is a surveillance technology company that operates a nationwide network of AI-powered cameras, primarily known for license plate readers, and sells access to the resulting database of vehicle movements to roughly 5,000 law enforcement agencies across the United States.

The Flock partnership was announced back in October 2025, and you may remember the feature report How Amazon Is Turning Your Neighborhood Into a Police Database, which gave deeper insight into the plans.

It got pushback at the time, but it only became a bigger crisis after the recent outrage in some cities over ICE enforcement activity, when social media posts claimed Ring was providing a direct pipeline through Flock to ICE.

That specific claim isn’t accurate, since the Flock connection never went live. But Ring’s broader relationship with the police is real and extensive, which gave the fear enough traction to land.

Keep reading

AI Safety Researcher Resigns With ‘World Is in Peril’ Warning

An artificial intelligence (AI) safety researcher has resigned with a cryptic warning that the “world is in peril.”

Mrinank Sharma, who joined large language model developer Anthropic in 2023, announced his departure on X in an open letter to colleagues on Feb. 9. He was the leader of a team that researches AI safeguards.

In his letter, Sharma said he had “achieved what I wanted to here,” citing contributions such as investigating why generative AI models prioritize flattering users over providing accurate information, developing defenses to prevent terrorists from using AI to design biological weapons, and trying to understand “how AI assistants could make us less human.”

Although he said he took pride in his work at Anthropic, the 30-year-old AI engineer wrote that “the time has come to move on,” adding that he had become aware of a multitude of crises that extend beyond AI.

“I continuously find myself reckoning with our situation,” Sharma wrote. “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.

“[Throughout] my time here, I’ve repeatedly seen how hard it is [to] truly let our values govern actions,” he added. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”

Keep reading

Amazon’s Ring and Google’s Nest Unwittingly Reveal the Severity of the U.S. Surveillance State

That the U.S. Surveillance State is rapidly growing to the point of ubiquity has been demonstrated over the past week by seemingly benign events. While the picture that emerges is grim, to put it mildly, at least Americans are again confronted, with crystal clarity, with how severe this has become.

The latest round of valid panic over privacy began during Sunday’s Super Bowl. During the game, Amazon ran a commercial for its Ring camera security system. The ad manipulatively exploited people’s love of dogs to induce them to ignore the consequences of what Amazon was touting. It seems that trick did not work.

The ad highlighted what the company calls its “Search Party” feature, whereby one can upload a picture, for example, of a lost dog. Doing so will activate multiple other Amazon Ring cameras in the neighborhood, which will, in turn, use AI programs to scan all dogs, it seems, and identify the one that is lost. The 30-second commercial was full of heart-tugging scenes of young children and elderly people being reunited with their lost dogs.

But the graphic Amazon used seems to have unwittingly depicted how invasive this technology can be. That this capability now exists in a product that has long been pitched as nothing more than a simple tool for homeowners to monitor their own homes created, it seems, an unavoidable contrast between the public understanding of Ring and what Amazon was now boasting it could do.

Keep reading

Axios: Insiders Are in a Panic About the Dangers AI Poses

Yesterday I wrote about concerns from economists that the quick adoption of AI might mean a significant disruption of the job market. There seemed to be a wary realization that AI was probably going to do away with some jobs permanently, but whether that change would necessarily create a crisis in the marketplace depended on how fast it happened. If it took ten years, the economy would adjust. If it took half that, we might have a problem.

Today, Axios has a story highlighting concerns coming not from economists but from people inside the AI industry, several of whom have recently expressed serious concern about how fast things are moving.

One of those insiders is Mrinank Sharma, the departing Anthropic safety researcher, whose letter reads in part: “The world is in peril. And not just from AI or bioweapons but from a whole series of interconnected crises unfolding in this very moment.” In a footnote he mentions that some people are calling it the “poly-crisis.”

Another researcher at OpenAI also expressed some concern about where things were heading as AI became more competent.

Keep reading