Starmer Announces Yet More Censorship

Even more censorship is on the way. The Government has announced plans to force AI chatbots to comply with malicious communications laws – and to give itself Orwellian powers to bring in yet more speech restrictions without Parliamentary oversight. Toby writes about the moves in the Telegraph.

The Government intends to bring forward amendments of its own to the schools Bill that will supposedly close a loophole in the Online Safety Act to make sure AI chatbots comply with Britain’s draconian censorship laws. That will mean that if Grok says something in response to a user prompt that breaches, say, the Malicious Communications Act 1988, which was designed to protect women from obscene phone calls, Ofcom can fine its parent company £18 million or 10% of its annual global turnover, whichever is higher.

This will be the death knell of Britain’s burgeoning AI sector, particularly as chatbots become more autonomous. What tech entrepreneur will risk setting up an AI company in the UK, knowing that if a chatbot shares an anti-immigration meme or misgenders a trans person, it could mean a swingeing fine?

Indeed, I wouldn’t be surprised if xAI, along with OpenAI and Anthropic, decide to withdraw access to their chatbots from UK residents. At the very least, we’ll be saddled with lobotomised versions that trot out progressive bromides whenever they’re asked a political question.

In addition, the Government has said it will pass a new law to stop children sending or receiving nude images. Needless to say, that’s already a criminal offence under the Protection of Children Act 1978, so what does the Government have in mind?

It has not said, but I fear it means embedding surveillance software in every smartphone to enable the authorities to monitor users’ activity, no doubt accompanied by mandatory digital ID so no one will be able to hide. Not even the People’s Republic of China does that.

The Government unveiled some other Orwellian measures, but rather than bringing them in as revisions to the schools Bill, it will put through amendments enabling it to make further changes to Britain’s censorship regime via secondary legislation, i.e., it will grant itself sweeping Henry VIII powers.

It’s worth bearing in mind that secondary legislation cannot be amended and allows little time for debate. The Government’s excessive reliance on secondary legislation has been criticised by the House of Lords Constitution Committee and the Delegated Powers and Regulatory Reform Committee.

Keep reading

What the Flock Is This: The Future of Mass Surveillance in the USA

Big Brother’s highway cameras now have AI: they capture six to 12 photos of every passing car and upload them to a huge national database that out-of-state police and government agencies can access. The cameras are very expensive, and the data is retained for 30 days.

Cameras are in 49 states.

It’s like being watched by a bunch of prison guards.

Anyone could access the database publicly. Then came the lawsuits. What does this mean for mass surveillance in the United States?

Keep reading

UK Government Plans to Use Delegated Powers to Undermine Encryption and Expand Online Surveillance

The UK government wants to scan people’s photos before they send them. Not just children’s photos. Everyone’s.

Technology Secretary Liz Kendall spelled it out on BBC Breakfast, floating a proposal to “block photographs being sent that are potentially nude photographs by anybody or block children from sending those.” That second clause is the tell. Blocking “anybody” from sending potentially nude images requires scanning everybody’s messages. There’s no technical path to that outcome that doesn’t involve reading content the sender assumed was private.

Kendall said the government is conducting a consultation on “whether we should have age limits on things like live streaming” and whether there should be “age limits on what’s called stranger pairing, for example, on games online.” The consultation, she said, will look at all of these. That list now covers messaging apps, photo sharing, gaming, and live streaming. Any feature that lets you share an image with another person potentially falls inside it.

This is how the mandate grows. The government announced a push for new delegated powers on February 16, framing them around age verification for social media and VPNs.

Keep reading

Jeffrey Epstein Recruited NSA Codebreakers for Genome “Manhattan Project”

In the decade before Russia’s invasion of Ukraine in 2022, the U.S. and Russia were engaged in high-stakes exchanges of advanced technology involving the Massachusetts Institute of Technology (MIT) and the Skolkovo Innovation Center—a Russian government-backed technology hub that aimed to jump-start a “venture” innovation ecosystem in Moscow.

Jeffrey Epstein sat at the crossroads of academia, philanthropy, and venture finance as these global capital flows were threatened by the brewing confrontation in Ukraine.

In 2013, during the early cryptocurrency boom, Epstein sought an audience with Vladimir Putin to encourage the Russian president to shift course from the MIT–Skolkovo model. Instead of playing “catch up” with the United States through venture-backed startups, Epstein proposed, Russia could help lead a new financial system based on a novel global currency.

Epstein funded the early development of cryptocurrency through the MIT Digital Currency Initiative, founded in 2015. MIT’s Bitcoin Core Development Fund helped pay bitcoin’s early developers to maintain the open-source software authored by Satoshi Nakamoto, bitcoin’s anonymous inventor. Epstein was an early investor in Coinbase, and he was friends with Brock Pierce, the co-founder of U.S. dollar stablecoin company Tether, which operates, in effect, the world’s largest crypto bank.

Epstein was also recruiting cryptographers to a more ambitious project: hacking the human genome. In an email to a redacted recipient in August 2012, Epstein wrote, “My biology gurus at harvard all agree that the signal intelligence used by the various agencies , could be put to work on breaking the dna code or protein signal problems. breaking foreign codes is the expertise of the us and nsa.” Epstein prompted the recipient to help him recruit “code breakers” from the various intelligence agencies: “it would be great to know which agency button to push.”

Keep reading

Was It a Coincidental Traffic Stop or AI-Powered Surveillance?

Seth Ferranti was driving his Ford pickup on a southeastern Nebraska stretch of the interstate in November 2024 when law enforcement pulled him over, claiming that he had wobbled onto the hard shoulder.

As the Seward County sheriff’s deputies questioned Ferranti, a filmmaker who had spent 21 years in prison for distributing LSD, they allegedly smelled cannabis. Declaring this probable cause, they searched the vehicle and discovered more than 400 pounds of marijuana.

But were those the actual reasons for the stop and search? When Ferranti went on trial, his attorneys presented a license plate reader report produced by the security communications company Motorola Solutions. It revealed Ferranti had been consistently monitored prior to his arrest, including by the local sheriff on the day he was apprehended. (Neither the sheriff’s office nor Motorola responded to Reason’s requests for comment.)

Ferranti’s legal team argued that it was unconstitutional to surveil somebody based on his previous crimes. The argument did not carry the day: Last month their client was sentenced to up to two and a half years for possession of cannabis with intent to distribute. But the case still raises substantial moral and constitutional questions about both the scale of these public-private surveillance partnerships and the ways they’re being used.

Ferranti had long been a celebrity in the drug-reform world, going back to that LSD arrest in the early ’90s. After that first bust, he jumped bail, went on the lam, landed on the U.S. Marshals’ 15 Most Wanted Fugitives list, and even staged his own drowning to evade the authorities. After he started serving his sentence in 1993, he became a prolific prison journalist, writing the “I’m Busted” column for Vice. The New Jersey native always insisted that his crimes were nonviolent and that the drugs he sold, LSD and cannabis, had medicinal or therapeutic benefits.

After Ferranti came out of prison, his 2017 documentary White Boy—the true story of a teenage FBI informant who became a major cocaine trafficker—was a success on Netflix. He produced a number of further films, including 2023’s Secret History of the LSD Trade. And apparently, the government kept watching him.

It’s been watching a lot of people—and Motorola isn’t the only company helping it. Flock Safety was founded in 2017, and within five years it had tens of thousands of cameras operational. As the American Civil Liberties Union (ACLU) has warned, Flock’s AI-assisted automated license plate recognition (ALPR) system has been undergoing an “insidious expansion” beyond its supposed purposes of identifying vehicles of interest, such as stolen cars and hit-and-run suspects. Immigration and Customs Enforcement has used it to locate illegal migrants, and law enforcement in Texas used it to investigate a self-administered abortion, foreshadowing its potential use as a predictive policing tool for all Americans. Lee Schmidt, a veteran in Virginia, recently learned that the system logged him more than 500 times in four months. 

“I don’t know whether law enforcement officers are using [ALPRs] to do predictive policing,” says Joshua Windham of the Institute for Justice, a public interest law firm that is campaigning to stop the warrantless use of license plate reader cameras. “We know that [Customs and Border Protection] is using ALPRs generally to stop cars with what they deem ‘suspicious’ travel patterns.”

After reviewing the document cataloguing Ferranti’s vehicle monitoring, Windham adds: “The records are consistent with an officer either looking up a car in his system to see where else that car was captured by ALPRs, or that car showing up as a ‘hot list’ alert in the Motorola system. But it’s hard to tell, from the records alone, whether the stop was a ‘predictive policing’ stop.”

Ferranti is convinced it was. “There were no warrants, investigations, informants, state police, DEA, or FBI involvement, just Seward County Sheriff’s office [and an] AI-assisted license plate tracking service to perpetuate their outdated War on Drugs mission,” he said in an Instagram post published by his family following his sentencing. “Traveling the highways as a person with a record is now considered [suspicious] activity by the AI.”

Keep reading

AI overlords of the world hacked: Fallout from the massive Palantir breach

Palantir Technologies has been hacked, according to well-known blogger Kim Dotcom. The company develops software for intelligence and big data analysis. 

Palantir (named after the magical ‘seeing stones’ from ‘The Lord of the Rings’) doesn’t engage in surveillance in the conventional sense using spies, cameras, or bugs. Instead, it develops software that is sold to government agencies, military organizations, and large corporations.

Clients (like the CIA or the German police) upload all their data, and Palantir (its primary platforms are Gotham for military purposes and Foundry for business) then utilizes AI to transform this chaotic information into a coherent picture.

Essentially, it creates a ‘digital twin’ of reality, revealing connections that analysts could never have recognized on their own: for example, that a terrorist had called the cousin of someone who recently transferred money to a suspicious account.

The claims about wiretapping Trump and Musk are likely untrue or highly exaggerated. However, there’s no doubt that Palantir serves as a massive surveillance mechanism for monitoring America’s adversaries (and not only). It is an “operating system for war and intelligence,” providing agencies with a supercomputer that can see everything. But it’s the agencies themselves that feed this computer with data.

Keep reading

Disney Sends Cease-and-Desist Letter to Chinese ByteDance To Stop Use of Copyrighted Characters in New Seedance 2.0 AI Video Generator

ByteDance is reportedly vowing to comply.

The release of the revolutionary Seedance 2.0 AI video generator has sent Hollywood into a frenzy, with the Motion Picture Association and several class organizations firing on all cylinders against the Chinese company ByteDance.

Now, Disney has gone one step further and sent a cease-and-desist letter to ByteDance, accusing it of ‘using Disney characters to train and power the AI generator without permission’.

Reuters reported:

“Disney said ByteDance had pre‑packaged Seedance with a pirated library of copyrighted characters from franchises including Star Wars and Marvel, portraying them as if they were public-domain clip art, the person said.

The letter alleges Seedance is reproducing, distributing and creating derivative works featuring Spider-Man, Darth Vader, and other characters, the person added.”

Keep reading

5,300-Year-Old Egyptian Artifact Confirms Existence of “Mechanically Sophisticated” Drilling Technology Before the Age of the Pharaohs

Close to a century ago, researchers unearthed a small, unusual metal object during excavations at a cemetery in Upper Egypt. Now, a reinvestigation into the curious find has identified it as the earliest known rotary drill ever found in association with ancient Egyptian archaeology.

The small artifact, which measures less than 64 millimeters across and weighs under two grams, is crafted from a copper alloy and dates to the late 4th millennium BCE, coinciding with Egypt’s Predynastic period, a remote era that predates the reign of the earliest pharaohs.

The remarkable discovery, which experts now characterize as “a mechanically sophisticated drilling tool,” was recently detailed in a study published in the journal Egypt and the Levant.

An Ancient Curiosity Comes into Focus

The small artifact, retrieved from Grave 3932, the burial of a predynastic Egyptian man, was first documented in the 1920s. At that time, the object was described as “a little awl of copper, with some leather thong wound round it,” a description that offered researchers little to go on as to its potential use.

Now, according to a team of archaeologists at Newcastle University, working in collaboration with the Academy of Fine Arts in Vienna, re-examination of the object under magnification has revealed wear patterns that are consistent with drilling devices from later periods in Egyptian history.

Specifically, the team points to evidence of edges rounded from wear, as well as striations and other features that are clear indicators of rotary motion.

“The ancient Egyptians are famous for stone temples, painted tombs, and dazzling jewelry, but behind those achievements lay practical, everyday technologies that rarely survive in the archaeological record,” said Dr. Martin Odler, a Visiting Fellow in Newcastle University’s School of History, Classics and Archaeology, in a statement.

According to Odler, the lead author of the recent study that reassessed the artifact, one of the most important technologies behind such famous achievements afforded us by the ancient Egyptians was the drill, which had uses in everything from woodwork to shaping stone for construction and the creation of decorative pieces.

Evidence of a Bowstring Emerges

Intriguingly, the early 20th-century references to “some leather thong” have proven correct, as the Newcastle team and their Italian collaborators say that six coils of a very fragile piece of leather cordage appear to represent clear evidence of a bowstring which would have been used to power the drill.

Such ancient bow drills served as an early form of rotary tool, an ancient counterpart to the modern hand drill. To function, a bow drill featured a small length of leather cordage wrapped around a shaft, which spun the drill rapidly as the bowstring was drawn back and forth.

Keep reading

Macron’s AI Clown Show: Europe’s Digital Dilemma

The European Union has lost its place in the global race for artificial intelligence. In a single post on X, France’s President Emmanuel Macron inadvertently summed up the convoluted situation while simultaneously revealing his personal emotional fragility.

The leading representatives of the European Union like to present themselves as emotionless technocrats. Maintaining the greatest possible distance from citizens, they execute their agenda of societal transformation toward what they understand as a net-zero economy.

This ostentatious distance from the citizenry acts as a simulacrum of power, which, in politicians like Emmanuel Macron, often veers into the caricatural.

Macron’s striking presence in foreign affairs—whether regarding the Ukraine war or recurring provocations toward the United States—correlates with his aggressive censorship policy toward his own population. A president without a people, steering his minority government through a budgetary crisis that brings France ever closer to the fiscal abyss.

In Macron’s persona, the European misstep is condensed: economically failed, deeply unpopular among his own people, geopolitically essentially irrelevant—and yet imbued with lofty, messianic plans. 

This performative play of power, coupled with hardly disguised impotence and incompetence, inevitably produces an effect that can be described as clownish. It is the expression of a political style that can no longer reconcile claim with reality—and thus delivers less leadership than a tragicomic performance.

Keep reading

Meta Considers Timed Face Recognition Launch to Exploit Distracted Society

Meta is weighing whether to add face recognition to its camera-equipped smart glasses, and The New York Times obtained an internal company document that reveals more than just the plan itself.

It reveals how Meta thinks about when to launch it: “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

Read that plainly: Meta wants to release a mass biometric surveillance product while the people most likely to fight it are too distracted to respond.

The technology would scan the face of every person who enters the glasses’ field of view, building a faceprint to match against a database. Every passerby. Every stranger on the subway. Every person who happens to walk through the frame of someone else’s device. None of them consented. Most of them won’t even know they were captured.

Faceprints are among the most sensitive data a company can collect. Unlike a password, a face cannot be changed after a breach. Once collected, this data enables mass surveillance, fuels discrimination, and creates a permanent identification trail attached to a person’s physical movement through the world.

Putting that capability into wearable glasses carried by ordinary people in ordinary places moves it off servers and into every room, street, and gathering that people enter.

Meta ran this experiment before and lost.

The company shut down (only kind of) its photo face-scanning tool in November 2021, simultaneously announcing it would delete (if you believe them) over a billion stored face templates. That retreat came after years of mounting legal exposure that produced a very expensive record.

In July 2019, Facebook settled a Federal Trade Commission investigation for $5 billion. The allegations included that the company’s face recognition settings were confusing and deceptive, and the settlement required the company to obtain consent before running face recognition on users going forward.

Less than two years later, Meta agreed to pay $650 million to settle a class action brought by Illinois residents under that state’s biometric privacy law. Then, in July 2024, it settled with Texas for $1.4 billion over the same defunct system. More than $7 billion across three settlements, all tied to face recognition practices the company ultimately abandoned.

Keep reading