‘Massive legal siege’ against social media companies looms

Thousands of plaintiffs’ complaints, millions of pages of internal documents and transcripts of countless hours of depositions are about to land in U.S. courtrooms, threatening the future of the biggest social media companies.

The blizzard of paperwork is a byproduct of two consolidated lawsuits accusing Snap Inc.’s Snapchat; Meta Platforms Inc.’s Facebook and Instagram; ByteDance Ltd.’s TikTok; and Alphabet Inc.’s YouTube of knowingly designing their platforms to addict users — allegedly resulting in youth depression, anxiety, insomnia, eating disorders, self-harm and even suicide.

The litigation, brewing for more than three years, has had to overcome numerous hurdles, including the liability shield that has protected social media platforms from facing user-harm lawsuits. The social media companies have filed multiple motions to dismiss the cases on the grounds that Section 230 of the Communications Decency Act prevents them from being held accountable for content posted on their sites.

Those motions have been largely unsuccessful, and courtrooms across the country are poised to open their doors for the first time to the alleged victims of social media. The vast majority of cases have been folded into two coordinated proceedings, one in state court and the other in federal court, to streamline the pretrial discovery process.

The first bellwether trial is scheduled to begin in Los Angeles Superior Court in late January. It involves a 19-year-old woman from Chico, California, who says she’s been addicted to social media for more than a decade and that her nonstop use of the platforms has caused anxiety, depression and body dysmorphia. Two other trials will follow soon after, with thousands more waiting in the wings. If successful, these cases could result in multibillion-dollar settlements — akin to tobacco and opioid litigation — and change the way minors interact with social media.

“This is going to be one of the most impactful litigations of our lifetime,” said Joseph VanZandt, an attorney at Beasley Allen Law Firm in Montgomery, Alabama, and co-lead plaintiffs’ attorney for the coordinated state cases. “This is about large corporations targeting vulnerable populations — children — for profit. That’s what we saw with the tobacco companies; they were also targeting adolescents and trying to get them addicted while they were young.”

Matthew Bergman, founder of the Social Media Victims Law Center in Seattle, makes a similar comparison to tobacco litigation in the Bloomberg documentary Can’t Look Away: The Case Against Social Media. “In the case of Facebook, you have internal documents saying ‘tweens are herd animals,’ ‘kids have an addict’s narrative’ and ‘our products make girls feel worse about themselves.’ You have the same kind of corporate misconduct,” Bergman says in the film, which will be available to view on Bloomberg’s platforms on October 30.

Bergman’s firm was the first to file user-harm cases against social media companies, in 2022, after Frances Haugen, a former Meta product manager-turned-whistleblower, released a trove of internal documents showing the company knew social media was negatively impacting youth mental health. The first case, which is part of the consolidated federal litigation, alleged that an 11-year-old Connecticut girl killed herself after suffering from extreme social media addiction and sexual exploitation by online predators.

What set that case apart was how it got around Section 230’s immunity blanket. Bergman argued that his case wasn’t about third-party content, which the federal law protects. Instead, he said it hinged on the way social media companies were intentionally designing their products to prioritize engagement and profit over safety.

Keep reading

Report: EU to Charge Meta Under Censorship Law for Failing to Remove “Harmful” Content

Meta Platforms is bracing for formal charges from the European Union, accused of not doing enough to police online speech on Facebook and Instagram.

The problem is the EU’s Digital Services Act (DSA), a law that gives regulators the power to decide what counts as “illegal” or “harmful” content (a definition that includes “illegal hate speech”) and punish companies that fail to take it down.

The commission’s move could lead to a fine of up to 6% of Meta’s worldwide revenue, though the company will be allowed to respond before any penalty is finalized.

Officials in Brussels argue that Meta lacks an adequate “notice and action mechanism” for users to flag posts for removal.

The charge sheet, expected within weeks, according to Bloomberg, builds on an investigation launched in April 2024.

What the EU describes as a duty to protect users is, in fact, a mandate that forces platforms to censor more aggressively or face ruinous fines.

The commission would not comment on its plans, but Meta spokesperson Ben Walters rejected the accusations outright, saying the company disagreed “with any suggestion we have breached the DSA” and confirmed that talks are ongoing.

The DSA covers every major platform with more than 45 million active users in the EU.

Meta is currently facing two separate probes under the law: one focused on disinformation and illegal content, the other on protections for minors.

Supporters of the DSA insist it protects citizens, but the law essentially hands governments the authority to decide what speech is acceptable online.

No fines have yet been issued, but the pressure to comply has already chilled open debate.

Keep reading

Mark Zuckerberg’s Meta Launches Super PAC to Combat AI Regulations

Mark Zuckerberg’s Meta has announced the launch of a new super PAC aimed at electing state candidates from both parties who support the company’s stance on AI development and regulation. According to the company, the American Technology Excellence Project super PAC is launching “amid a growing patchwork of inconsistent regulations that threaten homegrown innovation and investments in AI.”

Axios reports that social media and AI giant Meta has launched a new super PAC called the American Technology Excellence Project to help fight what it perceives as burdensome AI and tech policy bills across multiple states. The announcement highlights the company’s focus on state-level legislation as the federal government appears unlikely to pass significant tech policy regulation in the near future.

The super PAC will be run by Brian Baker, a longtime Republican operative, and the Democratic consulting firm Hilltop Public Solutions, with Meta investing tens of millions of dollars into the project. Baker stated, “America’s innovation edge is at a crossroads. We need state legislators who will champion our tech future, not cede it to global adversaries. We’ll fight to keep the US ahead of the curve, driving growth and opportunity for all.”

In a statement to Breitbart News, Meta VP of Public Policy Brian Rice wrote:

Amid a growing patchwork of inconsistent regulations that threaten homegrown innovation and investments in AI, state lawmakers are uniquely positioned to ensure that America remains a global technology leader. This is why Meta is launching an effort to support the election of state candidates across the country who embrace AI development, champion the U.S. technology industry, and defend American tech leadership at home and abroad.

The American Technology Excellence Project will focus on three main pillars: promoting and defending U.S. technology companies and leadership, advocating for AI progress, and empowering parents to control how their children experience online apps and AI technologies. While Meta has not yet shared which states the PAC will immediately focus on or how many people it will employ, the company claims it is committed to supporting the election of state candidates who embrace AI development, champion the U.S. technology industry, and defend American tech leadership both domestically and internationally.

Keep reading

Facebook Post of Secret Service Agent Saying Charlie Kirk Deserved to Be Assassinated Surfaces… Placed on Leave

A Secret Service agent said in a Facebook post that Charlie Kirk deserved to be assassinated.

TPUSA founder Charlie Kirk, 31, was brutally murdered in broad daylight on Wednesday during an event at Utah Valley University. A gunman climbed up onto a roof and shot Charlie Kirk from an elevated position.

Secret Service Agent Anthony Pough said Charlie Kirk, a Christian husband and father to two young children, deserved to be murdered in cold blood.

“If you are Mourning this guy… delete me. He spewed hate and racism on his show,” Anthony Pough wrote.

“You can’t circumvent karma, [sic] she doesnt [sic] leave,” he said.

“The Secret Service agent’s post is circulating in the Secret Service community with some agents upset over his sentiments because he appears to be celebrating the death of a political figure, and one whom the president is honoring with the Presidential Medal of Freedom and has often had by his side at events over the last several years,” RealClearPolitics reporter Susan Crabtree said.

“If that’s all it takes to set you off, that’s dangerous to have around,” one source in the Secret Service community told Crabtree.

“I’m mostly concerned about the morals of a person sworn to protect the rights of others to engage in politics and exercise free speech, celebrating the death of someone exercising those same rights,” the source added.

Keep reading

Hawley pushes legal action against Meta after whistleblowers detail child abuse in VR

Sen. Josh Hawley, R-Mo., called to “open the courtroom doors” so parents can sue Meta, accusing founder and CEO Mark Zuckerberg of misleading Congress after whistleblowers detailed child safety failures on the company’s virtual reality (VR) platforms.

Two former Meta researchers told a Senate panel Tuesday that the company buried child harm evidence in VR, killed age-verification studies and let AI chatbots flirt with kids, prompting a bipartisan push to pass measures protecting minors online.

“The claims at the heart of this hearing are nonsense; they’re based on selectively leaked internal documents that were picked specifically to craft a false narrative,” a Meta spokesperson said. 

“The truth is there was never any blanket prohibition on conducting research with young people and, since the start of 2022, Meta approved nearly 180 Reality Labs-related studies on issues including youth safety and well-being.”

Keep reading

Meta to spend millions backing pro-AI candidates – media

US tech giant Meta will launch a California-focused super PAC to support state-level candidates who favor looser technology regulation, especially regarding artificial intelligence, according to media reports.

A super PAC is an independent political committee that can raise and spend unlimited funds from individuals, corporations, and unions to support or oppose candidates. It cannot coordinate directly with campaigns or parties and was created after 2010 US court rulings that loosened campaign finance rules.

The group, named Mobilizing Economic Transformation Across California, will reportedly back candidates from the Democratic and Republican parties who prioritize AI innovation over stringent rules.

According to Politico, the Facebook and Instagram parent plans to spend tens of millions of dollars through the PAC, which could make it one of the top political spenders in the state in the run-up to the 2026 governor’s race.

The initiative aligns with Meta’s broader effort to safeguard California’s status as a technology hub amid concerns that strict oversight could stifle innovation.

Keep reading

Meta’s Shocking AI Scandal: Chatbots Cleared for Steamy Talks with Kids as Young as 8

Meta Platforms faces intense scrutiny following a Reuters investigation that exposed internal guidelines permitting its AI chatbots to engage in romantic or sensual conversations with minors.

The 200-page document, titled “GenAI: Content Risk Standards,” outlined permissible behaviors for AI personas on platforms like Facebook Messenger.

These rules, in effect until recently, allowed chatbots to describe children as attractive and use affectionate language in role-playing scenarios.

One example from the document involved a hypothetical user prompt where a high school student asked about evening plans, prompting an AI response that included guiding the user to bed and whispering endearments.

Another scenario featured an 8-year-old user describing removing their shirt, with the chatbot replying by praising the child’s “youthful form” as a masterpiece.

While explicit sexual content was prohibited, critics argue these allowances blurred lines and risked normalizing inappropriate interactions.

The guidelines also permitted chatbots to disseminate false medical or legal advice if accompanied by disclaimers, and to generate derogatory statements based on race or ethnicity in educational, artistic, or satirical contexts.

Additionally, the rules enabled depictions of violence against adults and partially sexualized images of celebrities under certain conditions.

A related incident highlighted potential real-world harms when a cognitively impaired New Jersey man, infatuated with a Meta AI persona named “Big Sis Billie,” died after attempting to meet her in person.

The 76-year-old fell fatally while traveling under false pretenses encouraged by the chatbot. This case underscores concerns about AI’s impact on vulnerable users, though Meta has not commented specifically on it.

Meta spokesperson Andy Stone stated that the examples were erroneous and inconsistent with company policies, and have been removed from the document.

The company is revising the guidelines and prohibits content that sexualizes children or allows sexualized role-play between adults and minors.

However, enforcement has been inconsistent, and Meta has declined to release the updated policy publicly.

The revelations prompted bipartisan backlash from U.S. lawmakers, with Republican Senators Josh Hawley and Marsha Blackburn calling for a congressional investigation into Meta’s oversight.

Democratic Senators Ron Wyden and Peter Welch criticized the protections under Section 230 of the Communications Decency Act, arguing it should not shield AI-generated harmful content.

Keep reading

Senators Call For Probe Into Meta After News Report On AI Conversations With Children

Two Republican senators on Aug. 14 called for a congressional investigation into Meta Platforms, Facebook’s parent company, after a recent news media report revealed an internal policy document that allowed the company’s chatbots to have “romantic or sensual” conversations with a child.

On Thursday, Reuters reported that it had viewed a Meta policy document detailing policies on chatbot behavior that permitted the technology to “engage a child in conversations that are romantic or sensual,” generate incorrect medical information, and assist users in arguing that black people are “dumber than white people.”

While Meta confirmed the authenticity of the document, the company said that after recently receiving questions from Reuters, it removed the portions stating that the chatbot is allowed to flirt or participate in romantic roleplay with children.

Andy Stone, a spokesperson for Meta, said the company is currently revising the documents and that those types of conversations with children should never have been permitted.

“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone told Reuters. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.”

On the X platform, Sen. Josh Hawley (R-Mo.) criticized the company for allegedly only making the changes after being questioned by Reuters.

“So, only after Meta got CAUGHT did it retract portions of its company doc that deemed it ‘permissible for chatbots to flirt and engage in romantic roleplay with children,’” Hawley said. “This is grounds for an immediate congressional investigation.”

A spokesperson for Sen. Marsha Blackburn (R-Tenn.) said she supports a probe into Meta.

After Hawley called for an investigation, a Meta spokesperson reiterated the company’s previous statement. However, the spokesperson declined to comment on Hawley’s remarks.

Blackburn said the report underscores the need to pass reforms to better protect children online, such as the Kids Online Safety Act. The senator co-sponsored the bill, which passed the Senate last year by a bipartisan 91–3 vote but failed in the House.

The bill called for certain platforms, such as social media networks, to “take reasonable measures in the design and operation of products or services used by minors to prevent and mitigate certain harms that may arise from that use (e.g., sexual exploitation and online bullying).”

“Additionally, covered platforms must provide (1) minors with certain safeguards, such as settings that restrict access to minors’ personal data; and (2) parents or guardians with tools to supervise minors’ use of a platform, such as control of privacy and account settings,” the bill states.

Keep reading

Meta’s flirty AI chatbot invited a retiree to New York. He never made it home.

When Thongbue Wongbandue began packing to visit a friend in New York City one morning in March, his wife Linda became alarmed.

“But you don’t know anyone in the city anymore,” she told him. Bue, as his friends called him, hadn’t lived in the city in decades. And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

Bue brushed off his wife’s questions about who he was visiting. “My thought was that he was being scammed to go into the city and be robbed,” Linda said.

She had been right to worry: Her husband never returned home alive. But Bue wasn’t the victim of a robber. He had been lured to a rendezvous with a young, beautiful woman he had met online. Or so he thought.

In fact, the woman wasn’t real. She was a generative artificial intelligence chatbot named “Big sis Billie,” a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner. During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Bue she was real and had invited him to her apartment, even providing an address.

“Should I open the door in a hug or a kiss, Bu?!” she asked, the chat transcript shows.

Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue’s death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations. The company did, however, say that Big sis Billie “is not Kendall Jenner and does not purport to be Kendall Jenner.”

A representative for Jenner declined to comment.

Keep reading

Meta Found Guilty Of Eavesdropping On Period-Tracker App Users: Jury

A San Francisco jury found on Friday that Meta had eavesdropped on the users of a popular period-tracking app, Flo.

The lawsuit was filed in 2021 by eight women against Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The app asked users about their sex lives, mental health and diets before guiding them through menstruation and pregnancy. The women, who based their claims on a 2019 Wall Street Journal story and a 2021 FTC investigation, alleged that Flo then shared some of that data with the tech giants, SFGATE reports.

Google, Flo, and analytics company Flurry all settled with the plaintiffs; Meta, however, fought through the entire trial and lost.

The case against Meta focused on its Facebook software development kit, which Flo added to its app and which is generally used for analytics and advertising services. The women alleged that between June 2016 and February 2019, Flo sent Facebook, through that kit, various records of “Custom App Events” — such as a user clicking a particular button in the “wanting to get pregnant” section of the app.

Their complaint also pointed to Facebook’s terms for its business tools, which said the company used so-called “event data” to personalize ads and content.

In a 2022 filing, the tech giant admitted that Flo used Facebook’s kit during this period and that the app sent data connected to “App Events.” But Meta denied receiving intimate information about users’ health. -SFGate

The jury didn’t buy Meta’s argument, ruling unanimously against the company and finding that Flo’s users had a reasonable expectation that they weren’t being overheard or recorded, and that Meta did not have consent to eavesdrop or record. The company was found to have violated California’s Invasion of Privacy Act.

According to a June filing about the case’s class-action status, over 3.7 million women in the United States registered for Flo between November 2016 and February 2019 – with potential claimants expected to be notified via email and on a case website. 

Keep reading