UK Government Emails Reveal Push to Pressure Tech Platforms to Remove Lawful Speech on Immigration and Policing

A series of internal emails from the UK government has revealed an aggressive push to monitor and suppress online posts deemed “concerning,” sparking alarm over creeping censorship under the banner of combating misinformation and community unrest.

The documents, shared by US Rep. Jim Jordan, paint a picture of state officials flagging lawful speech, pressuring tech companies to remove content, and targeting what they described as “concerning narratives about the police and a ‘two-tier’ system.”

One of the most widely circulated videos under scrutiny featured a street celebration in Manchester where participants waved Pakistani flags. Captioned “It looks like Islamabad but it’s Manchester,” the video, posted by Radio Genoa on X, amassed over 14 million views.

Government emails described this kind of footage as misleading or dangerous, with one note labeling it an example of content that is “shared out of context in order to incite fear of the Muslim community.”

Another email, dated August 3, 2024, acknowledged “significant volumes of anti-immigrant content” online and pointed to “concerning narratives about the police and a ‘two-tier’ system that we are seeing across the online environment.”

The correspondence shows government officials not only monitoring speech but actively collaborating with platforms to address posts, including ones that violated neither the law nor the platforms’ terms of service.

Officials asked for direct intervention. One message pressed platforms to detail a) “what content you are seeing across your platform; and b) any measures you have taken in response.” A follow-up email urged platforms to act quickly, stating, “We’d be grateful if you could come back to us on those two points as soon as you are able to.”

In one particularly troubling exchange dated August 4, government officials flagged a video showing someone scrolling through a freedom of information request that referred to asylum seekers as “undocumented fighting age males.”


OpenAI and Oracle announce Stargate AI data centre deal

OpenAI has shaken hands with Oracle on a colossal deal to advance the former’s Stargate AI data centre initiative.

It’s one thing to talk about the AI revolution in abstract terms, but it’s another thing entirely to grasp the sheer physical scale of what’s being built to make it happen. The foundations of our AI future are being laid in concrete, steel, and miles of fibre-optic cable, and those foundations are getting colossally bigger.

Together, OpenAI and Oracle are going to build new data centres in the US packed with enough hardware to consume 4.5 gigawatts of power. It’s hard to overstate how much energy that is: roughly the output of four large nuclear reactors, enough electricity to power millions of homes. And all of it will be dedicated to one thing: powering the next generation of AI.

This isn’t just a random expansion; it’s a huge piece of OpenAI’s grand Stargate plan. The goal is simple: to build enough computing power to bring advanced AI to everyone.

When you add this new project to the work already underway in Abilene, Texas, OpenAI is now developing over 5 gigawatts of data centre capacity. That’s enough capacity to run more than two million of the most powerful computer chips available.
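Those two figures pass a quick back-of-envelope check: dividing the quoted capacity by the quoted chip count implies a budget of about 2.5 kilowatts per chip, plausible for a roughly 1 kW AI accelerator once cooling and networking are counted. A minimal sketch of the arithmetic, assuming the 5 gigawatts refers to total facility power (our assumption, not OpenAI’s stated definition):

```python
# Back-of-envelope check on the quoted Stargate figures.
# Assumption (ours, not OpenAI's): the 5 GW figure is total facility
# power, i.e. the chips plus their cooling and networking overhead.

total_power_watts = 5e9      # "over 5 gigawatts" of capacity
num_chips = 2_000_000        # "more than two million" chips

watts_per_chip = total_power_watts / num_chips
print(f"Implied budget: {watts_per_chip / 1000:.1f} kW per chip")
# Prints "Implied budget: 2.5 kW per chip", consistent with a roughly
# 1 kW AI accelerator plus its share of cooling and infrastructure.
```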

This move shows they are dead serious about a pledge they made at the White House earlier this year to plough half a trillion dollars into US AI infrastructure. In fact, with the momentum they’re getting from partners like Oracle and Japan’s SoftBank, they now expect to blow past that initial goal.

But this story isn’t just about silicon chips and corporate deals; it’s about people. OpenAI believes that building and running these new Stargate AI data centres will create over 100,000 jobs.

That job creation presents real opportunities for families across the country, from the construction crews pouring the concrete and the specialised electricians wiring up racks of servers to the full-time technicians who will keep these digital brains running day and night.


Blake Lively Wants Names and IP Addresses

Blake Lively has decided that the best way to respond to online gossip and criticism is with subpoenas. Lots of them. In a move that suggests her legal team spent a weekend watching “Enemy of the State,” Lively is now targeting 36 content creators, from high-profile commentators to pseudonymous hobbyists, all over rumors she says were part of a smear effort.

Some of the targets have large followings. Others barely register on the algorithm. One runs a YouTube astrology channel with fewer than 300 subscribers at the time of the subpoena. All are now being asked to turn over a wide array of personal and financial data, as if they were co-conspirators in a criminal probe instead of people who post opinions from their bedrooms.

The case, Lively v. Wayfarer Studios LLC, is already a headache in itself, but this new front seems designed less to resolve the actual lawsuit and more to comb through the internet for anything unflattering.

We obtained a copy of the Google subpoena for you here.

We obtained a copy of the TikTok subpoena for you here.

We obtained a copy of the X subpoena for you here.

If you’re out of the loop, Blake Lively is suing Wayfarer Studios, its co-founder Justin Baldoni, and several others, alleging sexual harassment, workplace misconduct, breach of contract, and a coordinated retaliation campaign designed to destroy her reputation.

According to the complaint, Lively raised concerns about repeated inappropriate behavior by Baldoni and Wayfarer executives during production of It Ends With Us. After the film was completed, she claims Baldoni and his team launched a covert “social manipulation” campaign to discredit her using fake grassroots content, crisis PR firms, and anonymous online posts, which she describes as a well-funded digital smear effort.


Elon Musk’s X to Deploy AI to Write Community Notes, Speed Up Fact-Checking

Elon Musk’s social media platform X (formerly Twitter) has announced it will deploy AI to automatically write Community Notes and speed up fact-checking. The move signals a deeper commitment to tackling misinformation, improving content transparency, and giving users context.

Let’s break down what this means, how it will work, and what impact it might have on the social media landscape.

Community Notes (formerly Birdwatch) is a feature that allows users to collaboratively add context to potentially misleading or controversial tweets. The system relies on crowdsourced input from contributors, who can write and rate notes; a note becomes visible to all users once it reaches a certain level of consensus.

Until now, these notes have been written entirely by human contributors. But with the introduction of AI, the process is about to get a serious boost in efficiency and scale.
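The excerpt doesn’t spell out what “consensus” means here, but X has published its Community Notes ranking approach, whose core idea is “bridging”: a note is surfaced only when raters who usually disagree with one another both find it helpful. Below is a minimal toy sketch of that idea; the cluster labels, thresholds, and function names are illustrative assumptions, not X’s production system, which infers viewpoints via matrix factorization over rating history rather than explicit clusters.

```python
# Toy sketch of "bridging-based" consensus: a note becomes visible only
# when raters from *different* viewpoint clusters agree it is helpful.
# Cluster labels, thresholds, and names are illustrative assumptions,
# not X's production system.

from dataclasses import dataclass

@dataclass
class Rating:
    rater_cluster: str  # viewpoint cluster inferred from past rating behaviour
    helpful: bool

def note_is_visible(ratings, min_ratings=5, min_helpful_share=0.8):
    """Show a note only if every represented cluster mostly rates it helpful."""
    if len(ratings) < min_ratings:
        return False
    clusters = {r.rater_cluster for r in ratings}
    if len(clusters) < 2:  # no cross-viewpoint agreement demonstrated yet
        return False
    for cluster in clusters:
        votes = [r.helpful for r in ratings if r.rater_cluster == cluster]
        if sum(votes) / len(votes) < min_helpful_share:
            return False
    return True

# Six raters from two opposing clusters all find the note helpful -> visible.
ratings = [Rating("cluster_a", True)] * 3 + [Rating("cluster_b", True)] * 3
print(note_is_visible(ratings))  # True
```

X has indicated that AI-drafted notes will go through the same human rating process as human-written ones, so automation speeds up the writing side while the visibility gate stays crowdsourced.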


Meta refuses to sign EU’s AI code of practice

Meta has refused to sign the European Union’s code of practice for its AI Act, weeks before the bloc’s rules for providers of general-purpose AI models take effect.

“Europe is heading down the wrong path on AI,” wrote Meta’s chief global affairs officer Joel Kaplan in a post on LinkedIn. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

The EU’s code of practice — a voluntary framework published earlier this month — aims to help companies implement processes and systems to comply with the bloc’s legislation for regulating AI. Among other things, the code requires companies to provide and regularly update documentation about their AI tools and services and bans developers from training AI on pirated content; companies must also comply with content owners’ requests to not use their works in their datasets.

Calling the EU’s implementation of the legislation “overreach,” Kaplan claimed that the law will “throttle the development and deployment of frontier AI models in Europe and will stunt European companies looking to build businesses on top of them.”

A risk-based regulation for applications of artificial intelligence, the AI Act bans some “unacceptable risk” use cases outright, such as cognitive behavioral manipulation or social scoring. The rules also define a set of “high-risk” uses, such as biometrics and facial recognition, and AI in domains like education and employment. The act also requires developers to register AI systems and meet risk- and quality-management obligations.

Tech companies from across the world, including those at the forefront of the AI race like Alphabet, Meta, Microsoft, and Mistral AI, have been fighting the rules, even urging the European Commission to delay its rollout. But the Commission has held firm, saying it will not change its timeline.

Also on Friday, the EU published guidelines for providers of AI models ahead of rules that will go into effect on August 2. These rules would affect providers of “general-purpose AI models with systemic risk,” like OpenAI, Anthropic, Google, and Meta. Companies that have such models on the market before August 2 will have to comply with the legislation by August 2, 2027.


Trial begins as Meta investors, Zuckerberg square off over alleged privacy violations

An $8 billion trial by Meta Platforms (META.O) shareholders against Mark Zuckerberg and other current and former company leaders kicked off on Wednesday over claims they illegally harvested the data of Facebook users in violation of a 2012 agreement with the U.S. Federal Trade Commission.

The trial started with a privacy expert for the plaintiffs, Neil Richards of Washington University Law School, who testified about Facebook’s data policies.

“Facebook’s privacy disclosures were misleading,” he told the court.

Jeffrey Zients, White House chief of staff under President Joe Biden and a Meta (META.O) director for two years starting in May 2018, is expected to take the stand later on Wednesday in the non-jury trial before Kathaleen McCormick, chief judge of the Delaware Chancery Court.

The case will feature testimony from Zuckerberg and other billionaire defendants including former Chief Operating Officer Sheryl Sandberg, venture capitalist and board member Marc Andreessen as well as former board members Peter Thiel, Palantir Technologies (PLTR.O) co-founder, and Reed Hastings, co-founder of Netflix (NFLX.O).

A lawyer for the defendants, who have denied the allegations, declined to comment.


The Pentagon’s about to start using xAI’s Grok — and other federal agencies could be next

Elon Musk’s xAI is launching a new government-facing service. Its first client happens to be the largest employer on Earth.

The Department of Defense will pay up to $200 million for “Grok for Government,” a new collection of AI products geared toward use by federal, local, and state governments.

The department has also awarded similar contracts to Anthropic, Google, and OpenAI, which launched its own government-facing initiative last month.

“The adoption of AI is transforming the Department’s ability to support our warfighters and maintain strategic advantage over our adversaries,” Doug Matty, Chief Digital and AI Officer of the Department of Defense, said in a statement.

xAI said its government-facing products would include models designed specifically for national security purposes and eventually for use in classified environments.

The company also said those products would be available for purchase via the General Services Administration, opening the door for other federal agencies to use them.

The announcement comes less than a week after Grok went on an antisemitic rant on X. The company later apologized for the chatbot’s “horrific behavior,” though workers at the company erupted in anger internally over the incident.


France waging ‘crusade’ against free speech and tech progress – Telegram boss

France has embarked on a “crusade” against free speech and progress itself, Telegram founder Pavel Durov said on Friday after Paris launched a probe against the social media platform X. The French authorities should talk to tech companies instead of prosecuting them, the entrepreneur believes.

The actions of the “French bureaucrats” will only “scare off investment and damage the country’s economic growth for decades,” the Russian-born billionaire wrote on X.

The French authorities announced a probe against the Elon Musk-owned platform on Friday for allegedly manipulating algorithms “for purposes of foreign interference.” The investigation was prompted by two complaints, one filed by a French lawmaker and the other by a government cybersecurity official, both of whom accused X of threatening French democracy. Musk has not commented on the development.


France opens criminal probe into X algorithms under Musk

A French prosecutor has opened a criminal investigation into social platform X and its owner, Elon Musk, on accusations of “creating bias in France’s democratic debate.”

The investigation comes after Musk’s artificial intelligence (AI) company, xAI, deleted multiple posts from its chatbot Grok that included antisemitic comments. Among them, Grok had called itself “MechaHitler” and insinuated that Jewish people control Hollywood.

French National Assembly member Thierry Sother and European Parliament member Pierre Jouvet asked Arcom, France’s digital content regulator, to look into Grok’s behavior on Thursday.

“Since the July 4th update, Grok has substantially changed behaviors, leading it to [espouse] antisemitic ideas, to praise Hitler and even to support Le Pen,” Sother told the French newspaper Libération.

X did not immediately respond to requests for comment.

X and Musk have been on French and European radars since January, when Éric Bothorel, a French parliamentarian, raised concerns over X’s use of personal data, algorithmic bias, and reduced diversity in posts.

He also denounced Musk’s personal interference in the platform, calling it “a true danger and a threat for our democracies,” according to Libération.


Judge Allows Don Lemon’s Lawsuit Against Elon Musk and X to Move Forward in California

A California judge has ruled that former CNN anchor Don Lemon’s lawsuit against Elon Musk and the social media platform X can proceed to trial.

The decision was handed down Tuesday by San Francisco Superior Court Judge Harold E. Kahn, who rejected Musk’s attempt to transfer the case to Texas.

The ruling stems from a legal complaint Lemon filed in August 2024.

In the lawsuit, Lemon alleges that Musk and X violated their agreement by canceling a planned show on the platform and failing to compensate him as agreed.

The complaint includes claims of breach of contract, misrepresentation, and fraud.

Lemon’s legal counsel, Carney Shegerian, issued a statement following the judge’s ruling: “The ruling means Don can hold X and Musk accountable in open court. Musk is subject to the legal process, just like everyone else, and that’s important.”
