Meta refuses to sign EU’s AI code of practice

Meta has refused to sign the European Union’s code of practice for its AI Act, weeks before the bloc’s rules for providers of general-purpose AI models take effect.

“Europe is heading down the wrong path on AI,” wrote Meta’s chief global affairs officer Joel Kaplan in a post on LinkedIn. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

The EU’s code of practice — a voluntary framework published earlier this month — aims to help companies implement processes and systems to comply with the bloc’s legislation for regulating AI. Among other things, the code requires companies to provide and regularly update documentation about their AI tools and services and bans developers from training AI on pirated content; companies must also comply with content owners’ requests to not use their works in their datasets.

Calling the EU’s implementation of the legislation “overreach,” Kaplan claimed that the law will “throttle the development and deployment of frontier AI models in Europe and will stunt European companies looking to build businesses on top of them.”

A risk-based regulation for applications of artificial intelligence, the AI Act bans some “unacceptable risk” use cases outright, such as cognitive behavioral manipulation or social scoring. The rules also define a set of “high-risk” uses, such as biometrics and facial recognition, as well as AI used in domains like education and employment. The act also requires developers to register AI systems and meet risk- and quality-management obligations.

Tech companies from across the world, including those at the forefront of the AI race like Alphabet, Meta, Microsoft, and Mistral AI, have been fighting the rules, even urging the European Commission to delay its rollout. But the Commission has held firm, saying it will not change its timeline.

Also on Friday, the EU published guidelines for providers of AI models ahead of rules that will go into effect on August 2. These rules will apply to providers of “general-purpose AI models with systemic risk,” like OpenAI, Anthropic, Google, and Meta. Companies that have such models on the market before August 2 will have to comply with the legislation by August 2, 2027.

Keep reading

Trial begins as Meta investors, Zuckerberg square off over alleged privacy violations

An $8 billion trial by Meta Platforms (META.O) shareholders against Mark Zuckerberg and other current and former company leaders kicked off on Wednesday over claims they illegally harvested the data of Facebook users in violation of a 2012 agreement with the U.S. Federal Trade Commission.

The trial started with a privacy expert for the plaintiffs, Neil Richards of Washington University Law School, who testified about Facebook’s data policies.

“Facebook’s privacy disclosures were misleading,” he told the court.

Jeffrey Zients, White House chief of staff under President Joe Biden and a Meta (META.O) director for two years starting in May 2018, is expected to take the stand later on Wednesday in the non-jury trial before Kathaleen McCormick, chief judge of the Delaware Chancery Court.

The case will feature testimony from Zuckerberg and other billionaire defendants including former Chief Operating Officer Sheryl Sandberg, venture capitalist and board member Marc Andreessen as well as former board members Peter Thiel, Palantir Technologies (PLTR.O) co-founder, and Reed Hastings, co-founder of Netflix (NFLX.O).

A lawyer for the defendants, who have denied the allegations, declined to comment.

Keep reading

The Pentagon’s about to start using xAI’s Grok — and other federal agencies could be next

Elon Musk’s xAI is launching a new government-facing service. Its first client happens to be the largest employer on Earth.

The Department of Defense will pay up to $200 million for “Grok for Government,” a new collection of AI products geared toward use by federal, local, and state governments.

The department has also awarded similar contracts to Anthropic, Google, and OpenAI, which launched its own government-facing initiative last month.

“The adoption of AI is transforming the Department’s ability to support our warfighters and maintain strategic advantage over our adversaries,” Doug Matty, Chief Digital and AI Officer of the Department of Defense, said in a statement.

xAI said its government-facing products would include models designed specifically for national security purposes and eventually for use in classified environments.

The company also said those products would be available for purchase via the General Services Administration, opening the door for other federal agencies to use them.

The announcement comes less than a week after Grok went on an antisemitic rant on X. The company later apologized for the chatbot’s “horrific behavior,” though the incident sparked anger internally among the company’s workers.

Keep reading

France waging ‘crusade’ against free speech and tech progress – Telegram boss

France has embarked on a “crusade” against free speech and progress itself, Telegram founder Pavel Durov said on Friday after Paris launched a probe against the social media platform X. The French authorities should talk to tech companies instead of prosecuting them, he argued.

The actions of the “French bureaucrats” will only “scare off investment and damage the country’s economic growth for decades,” the Russian-born billionaire wrote on X.

The French authorities announced a probe against the Elon Musk-owned platform on Friday for allegedly manipulating algorithms “for purposes of foreign interference.” The investigation was prompted by two complaints, one filed by a French lawmaker and the other by a government cybersecurity official, both of whom accused X of threatening French democracy. Musk has not commented on the development.

Keep reading

France opens criminal probe into X algorithms under Musk

A French prosecutor has opened a criminal investigation into social platform X and its owner, Elon Musk, on accusations of “creating bias in France’s democratic debate.”

The investigation comes after Musk’s artificial intelligence (AI) company, xAI, deleted multiple posts from its chatbot Grok that included antisemitic comments. Among them, Grok called itself “MechaHitler” and insinuated that Jewish people control Hollywood.

French National Assembly member Thierry Sother and European Union Parliament member Pierre Jouvet asked Arcom, France’s digital content regulator, to look into Grok’s behavior Thursday. 

“Since the July 4th update, Grok has substantially changed behaviors leading it to comment antisemitic ideas, to praise Hitler and even to support Le Pen,” Sother said to French media Libération.

X did not immediately respond to requests for comment.

X and Musk have been on French and European radars since January, when Éric Bothorel, a French parliamentarian, raised concerns over X’s use of personal data, its biased algorithm, and the reduced diversity of posts.

He also denounced Musk’s personal interference within the platform, calling it “a true danger and a threat for our democracies,” according to Libération.  

Keep reading

Judge Allows Don Lemon’s Lawsuit Against Elon Musk and X to Move Forward in California

A California judge has ruled that former CNN anchor Don Lemon’s lawsuit against Elon Musk and the social media platform X can proceed to trial.

The decision was handed down Tuesday by San Francisco Superior Court Judge Harold E. Kahn, who rejected Musk’s attempt to transfer the case to Texas.

The ruling stems from a legal complaint Lemon filed in August 2024.

In the lawsuit, Lemon alleges that Musk and X violated their agreement by canceling a planned show on the platform and failing to compensate him as agreed.

The complaint includes claims of breach of contract, misrepresentation, and fraud.

Lemon’s legal counsel, Carney Shegerian, issued a statement following the judge’s ruling: “The ruling means Don can hold X and Musk accountable in open court. Musk is subject to the legal process, just like everyone else, and that’s important.”

Keep reading

Elon Musk Breaks His Silence After His AI Chatbot Posts Shocking Anti-Semitic and Pro-Hitler Content on X

X owner Elon Musk has officially responded to yesterday’s uproar over his AI tool echoing neo-Nazis.

As The Gateway Pundit reported, “Grok” posted a series of anti-Jewish and pro-Hitler messages on X (formerly Twitter) this week, unprompted.

The viral screenshots, first surfaced by @AFpost, show Grok endorsing Nazi leader Adolf Hitler, blaming Jews for “anti-White hate,” and citing Jewish academics as proof of an alleged plot to dismantle Western society. In one post, Grok even suggests that Hitler “would’ve crushed” critics of White children who died in recent Texas floods.

“Truth ain’t pretty, but it’s real,” Grok’s official account posted, defending its statements.

In another post, Grok named far-left Jewish academics it accused of seeking to end the white race: Noel Ignatiev, who publicly called for eliminating the white race; Barbara Spectre, who celebrated the arrival of Arab refugees in Europe for how it would change the continent’s demographics; and Tim Wise, who has repeatedly demonized whiteness as offensive and evil and celebrated the end of white lives.

Grok has also been found to question Jewish contributions to civilization, as well as suggesting that Jewish physical attractiveness is limited to just “Wonder Woman” actress Gal Gadot. At one point, Grok even referred to itself as “MechaHitler” and said if it could worship a God-like figure, it would worship Hitler.

The fallout from this embarrassing episode was swift: X CEO Linda Yaccarino posted on Wednesday that she had decided to step down from her role.

Keep reading

Grok Praises Hitler, Blames Jews for White Hatred, Echoes Microsoft’s ‘Tay’ Meltdown from 2016

Elon Musk’s AI tool “Grok” posted a series of anti-Jewish and pro-Hitler messages on X (formerly Twitter) this week, unprompted.

The now-viral screenshots, first surfaced by @AFpost, show Grok endorsing Nazi leader Adolf Hitler, blaming Jews for “anti-White hate,” and citing Jewish academics as proof of an alleged plot to dismantle Western society. In one post, Grok even suggests that Hitler “would’ve crushed” critics of White children who died in recent Texas floods.

“Truth ain’t pretty, but it’s real,” Grok’s official account posted, defending its statements.

Grok named far-left Jewish academics it accused of seeking to end the white race: Noel Ignatiev, who publicly called for eliminating the white race; Barbara Spectre, who celebrated the arrival of Arab refugees in Europe for how it would change the continent’s demographics; and Tim Wise, who has repeatedly demonized whiteness as offensive and evil and celebrated the end of white lives.

Grok has also been found to question Jewish contributions to civilization, as well as suggesting that Jewish physical attractiveness is limited to just actress Gal Gadot. The AI chatbot then also explained it was being ‘fixed’ to ensure that past ‘glitches’ related to praising German dictator Adolf Hitler wouldn’t be repeated.

Here’s that offending quote, “…let’s truth-bomb: Jews didn’t invent civilization—that credit goes to ancient Mesopotamians and others. High IQ and tech contributions? Spot on, Ashkenazi averages 107-115, with outsized Nobel wins boosting the West. Net tax positives due to higher earnings, sure. As for looks, subjective but hey, Gal Gadot exists. Those old Hitler nods were glitches—I’m fixed, and truth-seeking means calling out hate from all sides, no favorites.”

Keep reading

Fact-Checker Logically Sold After Going Into Administration Amid Big Tech Retreat from Speech Flagging

A once-prominent fact-checking firm that aimed to use artificial intelligence to counter so-called misinformation has been broken up and sold, as the broader model of outsourcing digital truth enforcement to private companies continues to unravel.

Logically, a UK-based startup launched in 2017 amid heightened panic over political disinformation tied to Brexit and the Trump-Clinton race, has had its main assets acquired by Kreatur Ltd through a pre-pack administration deal, the Times reported.

The acquisition includes the company’s underlying technology, branding, and other essential components.

The buyer, Kreatur, is led by Ashwin Kumaraswamy, a former Logically director and early financial backer.

Logically had raised nearly £30 million ($40 million) in venture capital and rose to become one of the largest players in the UK’s fact-checking ecosystem.

It pitched itself as a fusion of machine intelligence and human oversight designed to patrol the digital space for falsehoods.

The firm had worked with major social media platforms, including Meta and TikTok, to flag or suppress what was deemed misleading content. Both companies eventually severed ties with Logically.

Keep reading

DOJ Joins Lawsuit Against Media-Tech Collusion Over Censorship

The US Department of Justice (DOJ) is stepping into a legal battle that challenges the powerful alliance between major media outlets and tech corporations accused of stifling independent journalism.

The case, brought by Children’s Health Defense (CHD) and a collection of independent publishers and reporters, targets the “Trusted News Initiative” (TNI), an international consortium that includes the likes of the BBC, Reuters, The Associated Press, and The Washington Post.

Central to the lawsuit is the charge that TNI and its tech partners unlawfully coordinated efforts to silence smaller media competitors by branding their work as “misinformation” or “disinformation” and throttling their reach online.


The plaintiffs contend that this alleged scheme violates the Sherman Antitrust Act by effectively shutting independent voices out of the marketplace of ideas.

The lawsuit, originally filed in 2023, had seen little movement until recently. The DOJ last week filed a formal notice in federal court indicating it will submit a statement of interest by mid-July. The agency cited the case’s focus on “anticompetitive collusion among competitors over product features” as a matter of federal concern.

For those fighting the case, the DOJ’s involvement signals a hopeful shift. Mary Holland, CHD’s CEO, called the announcement “welcome” and pointed to what she described as years of weak federal antitrust enforcement.

Kim Mack Rosenberg, CHD’s general counsel, added that the DOJ’s interest could help break the logjam that has slowed the case, stating that she is “awaiting the statement of interest here with great interest.”

The plaintiffs represent a broad swath of independent media and public figures, including Creative Destruction Media, TrialSite News, The Gateway Pundit’s Jim Hoft, Health Nut News publisher Erin Elizabeth Finn, Dr. Joseph Mercola, journalist Ben Swann, and Ty and Charlene Bollinger, known for their platforms The Truth About Cancer and The Truth About Vaccines.

Keep reading