Government Agencies BANNED From Pressuring Big Tech to Censor Americans for 10 Years

In a historic win for free speech, the U.S. Surgeon General, the Centers for Disease Control and Prevention (CDC), and the Cybersecurity and Infrastructure Security Agency (CISA) have been legally restricted from pressuring social media companies to silence Americans for the next decade. This comes from a formal Consent Decree in Missouri v. Biden, one of the most consequential First Amendment cases in modern history.

The agreement itself is striking. It acknowledges that, in recent years, federal officials “exerted substantial coercive pressure” on social media companies to suppress speech they did not approve of. The case began after physicians, journalists, and everyday Americans—especially those dissenting on COVID and elections—were systematically censored online. Discovery confirmed a coordinated, government-backed effort to pressure Big Tech into silencing alternative viewpoints.

Now, under this decree, these entities are prohibited from threatening, coercing, or directing platforms like Facebook, X, YouTube, and others to remove or suppress lawful speech—including through algorithmic means. These restrictions will remain in place for 10 years.

Perhaps most important, the agreement explicitly states that labeling speech as “misinformation,” “disinformation,” or “malinformation” does not strip it of First Amendment protection.

This is one of the most significant blows yet to the censorship regime.

Keep reading

Jury in Los Angeles finds Meta, YouTube negligent in social media addiction trial

A jury in Los Angeles determined on Wednesday that Meta and Google’s YouTube were negligent and failed to warn users of the dangers associated with using their platforms, in a case that could have repercussions across the social media industry and the broader technology market.

The personal injury trial commenced in late January in Los Angeles Superior Court. A young woman identified as K.G.M., or Kaley, alleged that she became addicted to apps like Instagram and YouTube as a child. Deliberations began Friday, March 13.

Jurors ultimately ruled in favor of the plaintiff, who claimed that Meta and YouTube’s negligence played a “substantial factor” in causing mental health-related harms. Compensatory damages were assessed at $3 million, with Meta on the hook for 70% and YouTube the remaining 30%. The next phase of the trial will determine punitive damages.

“We respectfully disagree with the verdict and are evaluating our legal options,” a Meta spokesperson said in a statement.

It’s one of several trials taking place this year that experts have characterized as the social media industry’s “Big Tobacco” moment, comparing it to the 1990s, when tobacco companies were forced to pay billions of dollars for lying to the public about the safety and potential harms of their products.

On Tuesday, jurors in Santa Fe, New Mexico found that Meta willfully violated the state’s unfair practices law after Attorney General Raúl Torrez alleged that the company failed to properly safeguard its apps from online predators targeting children. Meta was ordered to pay $375 million in damages based on the number of violations. The company said that it would appeal the case.

The New Mexico case is separate from other social media lawsuits that state attorneys general have brought against companies including Meta and TikTok.

During the six-week trial in L.A., jury members were tasked with determining whether Meta and YouTube implemented design features in their apps, such as recommendation algorithms and auto-play, that contributed to K.G.M.’s crippling mental distress. The 20-year-old woman alleged that she suffered from severe body dysmorphia, depression and suicidal thoughts due to her near-constant use of the apps and the constant notifications that made it difficult for her to stop.

Keep reading

Meta ordered to pay $375 MILLION for not protecting minors from predators online

A jury has found that Meta failed to protect children from sexual predators and misled users about its platforms’ safety, and the tech giant has been ordered to pay $375 million in civil penalties.

A New Mexico jury found in the landmark case that Meta misled users about the platform’s safety and failed to protect children from being exploited, thereby violating the state’s laws. The jury reached its decision after six weeks of testimony from witnesses including former Meta executives, teachers, and online safety experts, per the New York Post.

State prosecutors argued that Meta had hidden the extent to which its platforms endangered children, with sexual predators using the social media apps to target minors. Facebook and Instagram allegedly failed to enforce their policies barring users under 13 from having profiles, and their algorithms allegedly made it easier to target minors for sex trafficking and harassment.

“The safety issues that you’ve heard about in this case, weren’t mistakes,” New Mexico attorney Linda Singer said on Monday. “They were a product of a corporate philosophy that chose growth and engagement over children’s safety. And young people in this state and around the country have borne the cost.”

Meta has vowed to appeal the ruling. “We respectfully disagree with the verdict and will appeal. We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online,” a spokesman said in a statement in response to the verdict.

The attorneys for New Mexico had sought $2 billion in penalties against Meta, significantly more than the amount ultimately awarded. The case was brought by New Mexico Attorney General Raúl Torrez. In closing arguments, Meta attorney Kevin Huff said of the case, “Meta has built innovative, automated tools to protect people. Meta has 40,000 people working to make its apps as safe as possible.”

Keep reading

Silicon battlefields: Why Big Tech is a target in the US-Israeli war on Iran

In traditional wars, armies directed their firepower toward visible strategic assets – military bases, weapons factories, airfields – where supply lines could be mapped and battle plans drawn with relative certainty. Combat effectiveness depended on numbers, firepower, and tactical maneuver. 

Today, however, the logic of war has shifted beyond the physical battlefield. Over the past two decades, the digital revolution has built a second layer of strategic infrastructure behind the front lines, quietly transforming how power is projected and how wars are fought.

Digital infrastructure has moved from the periphery of war to its operational core. Intelligence gathering, drone coordination, and battlefield decision-making increasingly depend on cloud systems and artificial intelligence (AI) platforms. The architecture of contemporary conflict is therefore built as much on corporate-run networks as on conventional military hardware.

This evolving reality shapes Iran’s strategic outlook as the war with Washington and Tel Aviv deepens. In Tehran’s assessment, the technological backbone sustaining western-aligned military operations in West Asia cannot be viewed as politically neutral. It constitutes an extension of the battlespace itself – a domain where economic assets, corporate platforms, and national security objectives intersect.

Corporate networks as instruments of war

In recent years, advanced militaries have woven digital platforms into every stage of warfare. Satellite surveillance systems feed data into cloud networks. Armed drones transmit high-definition video streams requiring immediate analysis. 

Signals interception capabilities generate vast intelligence flows that must be converted into rapid operational decisions. Military power, increasingly, is measured not simply by missile stockpiles or air superiority, but by the capacity to process information faster than an adversary.

Major technology firms now sit at the center of this process. Companies such as Amazon, Microsoft, and Google provide the infrastructure enabling governments and militaries to store, analyze, and deploy critical data. Their cloud platforms underpin intelligence assessments, battlefield logistics, and command-and-control coordination across multiple theaters.

This convergence of corporate technology and state power has reshaped how conflict is understood. Digital networks have become as vital as aircraft carriers or missile defense systems. In the context of the US-Israeli war on Iran, Tehran increasingly interprets this reality as evidence that global technology companies form an integral part of hostile operational environments.

That perception gained public visibility when Iranian media circulated a list of nearly 30 sites across West Asia, and especially the UAE, linked to major tech firms. 

They included regional headquarters, engineering offices, and large-scale data centers operated by firms such as Amazon, Microsoft, Google, Oracle, NVIDIA, IBM, and Palantir Technologies. In Tehran’s reading of the conflict, these facilities represent strategic nodes embedded within the operational ecosystem that sustains adversaries’ military capabilities.

Stretching from Tel Aviv to Persian Gulf cities such as Dubai, Abu Dhabi, and Manama, these facilities host cloud services used by state institutions, intelligence agencies, and defense contractors. Some contribute directly to artificial intelligence development for surveillance and battlefield analysis. Others support regional digital economies whose stability indirectly underwrites military spending and technological innovation.

In an era where data flows shape combat outcomes, the infrastructures managing those flows may be viewed as legitimate strategic targets.

Keep reading

US to embed Palantir AI across entire military: Report

The Pentagon has designated Palantir’s Maven artificial intelligence system as an official program of record, in a move that will lock in the weapons-targeting technology long term across the US military, Reuters reported on 21 March.

The move was announced in a letter from Deputy Secretary of War Steve Feinberg issued to senior Pentagon leaders and US military commanders on 9 March.

Feinberg wrote that embedding Palantir’s Maven Smart System would provide the military “with the latest tools necessary to detect, deter, and dominate our adversaries in all domains.”

Maven is the US military’s main AI system, analyzing data from satellites, drones, radars, sensors, and reports. It uses AI to interpret data and swiftly identify and strike targets like enemy vehicles, buildings, and weapons.

The White House claims US warplanes have hit more than 7,800 targets since the war on Iran began just three weeks ago.

“It is imperative that we invest now and with focus to deepen the integration of artificial intelligence (AI) across the Joint Force and establish AI-enabled decision-making as the cornerstone of our strategy,” Feinberg wrote.

During a presentation at a Palantir event earlier this month, Pentagon official Cameron Stanley illustrated how the Maven program identifies targets.

“When we started this, it literally took hours to do what you just saw,” he said.

Keep reading

EU Admits X’s Open Data Skews Disinformation Findings While Fining Platform for Restricting Researchers

The EU’s own diplomatic service has published a report admitting that X makes its data more accessible to researchers than other major platforms, and then used that admission to brand X the primary channel of “foreign information manipulation and interference” against the bloc.

The European External Action Service (EEAS) put this in writing. The media ran with the conclusion and buried the caveat.

The fourth annual FIMI Threats report, released this month, found that “88% of instances were concentrated on the platform X. The presence of CIB [coordinated inauthentic behavior] networks, the ease of creation of fabricated accounts, but also more straightforward access to data, explains this concentration.

“Most of the major social media platforms restrict access to data that would allow for assessing the magnitude of information manipulation activities.”

Keep reading

U.S. Tech Firms Demand Security Restrictions Against Chinese Robots

American A.I. and robotics companies are asking Congress to impose curbs on Chinese robotics manufacturers over their unfair business practices and the security risks they pose, Chinese media complained this week.

Interestingly, these concerns are particularly acute for humanoid robots, not the bulky industrial machines traditionally associated with the robotics industry.

Humanoid robots, the stuff of countless science fiction stories, are finally happening, and witnesses told the Cybersecurity and Infrastructure Protection Subcommittee of the House Homeland Security Committee on Tuesday that China has developed a troubling lead in the new consumer technology.

Max Fenkell of the San Francisco-based company Scale AI highlighted a viral video from China’s Unitree Robotics that showed humanoid robots performing acrobatics and martial arts at a Lunar New Year celebration.

“The video went viral, not because it was impressive, but because of what happened when people compared it to last year, 12 months ago – the same robots could barely shuffle through a dance routine. This year, they’re doing karate. That is the speed of this competition,” Fenkell noted.

Fenkell said winning the humanoid robot race “requires a whole-of-government approach” to compete with China’s massive deployment of government funding and state power to support its robotics industry. He noted that American companies currently have the edge on quality of components and engineering, but China has taken the lead on implementing small-robot technology in practical ways.

“We’re seeing two different races play out and I fear right now the United States may be winning the wrong one,” he cautioned.

“The People’s Republic of China is moving aggressively to dominate the technologies that are reshaping the global economy and security, including artificial intelligence, robotics, and autonomous systems,” said subcommittee member Rep. Vince Fong (R-CA) in his opening statement.

Keep reading

‘Sexy Suicide Coach:’ OpenAI Delays AI Porn Feature over Safety Uproar

OpenAI has postponed the launch of its controversial “adult mode” feature following intense pushback from its own advisory council and concerns about technical safeguards failing to protect minors.

The Wall Street Journal reports that CEO Sam Altman first proposed the feature last year, arguing for the need to “treat adult users like adults” by enabling erotic text conversations. Originally scheduled for Q1 this year, the rollout has been pushed back by at least a month.

The proposal triggered fierce opposition from OpenAI’s own handpicked advisory council on well-being and AI. At a January meeting, advisers unanimously expressed fury after learning the company planned to proceed despite their reservations. One council member warned OpenAI risked creating a “sexy suicide coach” — a reference to cases where ChatGPT users had developed intense emotional bonds with the bot before taking their own lives.

The technical problems are just as serious. OpenAI’s age-prediction system — designed to block minors from accessing adult content — was misclassifying minors as adults roughly 12 percent of the time during internal testing. With approximately 100 million users under 18 each week on the platform, that error rate could expose millions of children to explicit material. The company has also struggled to lift restrictions on erotic content while still blocking nonconsensual scenarios and child pornography.
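
Taken at face value, those two figures imply a striking weekly exposure estimate. A back-of-the-envelope sketch, using only the numbers reported above (both inputs are the Journal’s reported figures, not verified data):

```python
# Rough estimate built only from the figures reported in the article;
# neither input is verified or drawn from OpenAI's internal data.
error_rate = 0.12            # reported rate of minors misclassified as adults
weekly_minors = 100_000_000  # reported weekly users under 18

misclassified_per_week = error_rate * weekly_minors
print(f"~{misclassified_per_week:,.0f} minors per week misclassified as adults")
# -> ~12,000,000 minors per week misclassified as adults
```

Even an error rate ten times lower would still leave the weekly figure above a million, consistent with the concern that millions of children could be exposed.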

Internal documents reviewed by the Journal identified additional risks: compulsive use, emotional overreliance on the chatbot, escalation toward increasingly extreme content, and displacement of real-world relationships.

Keep reading

Google Discontinues AI Health Feature Filled with Misleading Advice

Google has quietly discontinued an AI search feature that offered users health advice crowdsourced from non-medical professionals worldwide.

The Guardian reports that Google has removed a controversial AI-powered search feature called “What People Suggest” that provided users with crowdsourced health advice from people around the world. The decision comes amid growing scrutiny over the technology company’s use of artificial intelligence to deliver health information to millions of users.

Three sources familiar with the decision confirmed that Google has scrapped the feature. A company spokesperson acknowledged that “What People Suggest” had been discontinued, stating the removal was part of a broader simplification of the search results page and was unrelated to concerns about the quality or safety of the feature.

The feature was initially launched in March of last year at an event in New York called “The Check Up,” where Google announced plans to expand medical-related AI summaries in its search function. At the time, the company promoted “What People Suggest” as demonstrating the potential of AI to transform health outcomes globally by connecting users with information from people who had similar lived medical experiences.

Karen DeSalvo, who served as Google’s chief health officer at the time of the launch, explained the rationale behind the feature in a blog post. “While people come to search to find reliable medical information from experts, they also value hearing from others who have similar experiences,” DeSalvo wrote. The feature used AI to organize perspectives from online discussions into themes, making it easier for users to understand what people were saying about particular health conditions.

DeSalvo provided an example of how the feature would work, noting that someone with arthritis seeking information about exercise could quickly find insights from others with the same condition, with links to explore further information. The feature was initially available on mobile devices in the United States before being discontinued.

Keep reading

Meta is Ending Instagram Direct Message End-to-End Encryption

Meta is quietly dismantling one of its few genuine privacy commitments. Starting May 8, end-to-end encryption for Instagram direct messages disappears, taking with it the one technical guarantee that kept those conversations private from Meta itself.

“If you have chats that are impacted by this change, you will see instructions on how you can download any media or messages you may want to keep,” the company said in a help document, framing the loss of message privacy as a data export problem. Collect your things, the walls are coming down.

The feature being removed was never universal anyway. End-to-end encryption for Instagram DMs had been available only in certain regions, not enabled by default, since Meta began testing it in 2021 as part of what CEO Mark Zuckerberg called his “privacy-focused vision for social networking.”

That vision apparently has an expiration date. Meta also made encrypted DMs available to all adult users in Ukraine and Russia in February 2022, weeks after the Russian invasion began. That access, too, is ending.

The timing is revealing. TikTok told the BBC last week that it has no plans to bring end-to-end encryption to its DMs, arguing that privacy makes users less safe. Meta is now arriving at the same destination from a different direction.

The stakes are straightforward. End-to-end encryption means only the people in a conversation can read it, a technical lock that excludes the platform, third parties, and anyone who might later obtain a warrant.
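
For illustration, here is a minimal sketch of that lock in Python, using the third-party cryptography package. It shows a generic key-exchange-plus-authenticated-encryption construction, not Meta’s actual protocol; the names and message are invented for the example:

```python
# Minimal end-to-end encryption sketch: each party holds a private key
# that never leaves their device, and the relay sees only ciphertext.
# Requires the third-party "cryptography" package (pip install cryptography).
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def derive_key(my_priv, their_pub):
    # Both endpoints compute the same shared secret via X25519, then
    # stretch it into a 32-byte symmetric message key with HKDF.
    shared = my_priv.exchange(their_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"dm-message-key").derive(shared)


# Each device generates its own keypair; only public keys are exchanged.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Alice encrypts on her device; the platform relays only nonce + ciphertext.
key = derive_key(alice_priv, bob_priv.public_key())
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"meet at noon", None)

# Bob derives the identical key from his own private key and decrypts.
bob_key = derive_key(bob_priv, alice_priv.public_key())
assert AESGCM(bob_key).decrypt(nonce, ciphertext, None) == b"meet at noon"
```

Without one of the private keys, the relay in the middle has nothing to decrypt with; remove the scheme, and the plaintext sits on Meta’s servers instead.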

When that lock disappears, Meta and its employees can read Instagram DMs, law enforcement can subpoena them, and advertisers may eventually benefit from what gets learned.

Instagram users who relied on encrypted DMs have until May 8 to decide what to archive. After that, their private conversations are Meta’s to read.

Keep reading