The Verdict Against Meta and Google That Could End the Anonymous Internet

A Los Angeles jury has found Meta and YouTube negligent in the design of their platforms and awarded $3 million to a plaintiff identified as K.G.M., a young woman who testified that years of near-constant social media use contributed to depression, anxiety, and body dysmorphia. The jury assigned 70% of the responsibility to Meta and 30% to YouTube. Punitive damages came to another $6 million.

The verdict is being reported as a landmark for child safety. It also represents a significant legal mechanism for dismantling anonymous internet access, built in plain sight, with bipartisan enthusiasm and a CEO’s willing assistance.

K.G.M.’s attorneys built their claim not around what users posted, which Section 230 of the Communications Decency Act largely shields platforms from liability for, but around how the platforms were designed.

Infinite scroll, algorithmically amplified notifications, engagement loops engineered to maximize time on site. The argument treats social media architecture the way product liability law treats a car without brakes. A defective product that the public needs to be protected from.

Keep reading

Jury in Los Angeles finds Meta, YouTube negligent in social media addiction trial

A jury in Los Angeles determined on Wednesday that Meta and Google’s YouTube were negligent and failed to warn users of the dangers associated with using their platforms, in a case that could have repercussions across social media and the broader technology industry.

The personal injury trial commenced in late January in Los Angeles Superior Court. A young woman identified as K.G.M., or Kaley, alleged that she became addicted to apps like Instagram and YouTube as a child. Deliberations began Friday, March 13.

Jurors ultimately ruled in favor of the plaintiff, who claimed that Meta and YouTube’s negligence played a “substantial factor” in causing mental health-related harms. Compensatory damages were assessed at $3 million, with Meta on the hook for 70% and YouTube the remaining 30%. The next phase of the trial will determine punitive damages.

“We respectfully disagree with the verdict and are evaluating our legal options,” a Meta spokesperson said in a statement.

It’s one of several trials taking place this year that experts have characterized as the social media industry’s “Big Tobacco” moment, comparing it to the 1990s, when tobacco companies were forced to pay billions of dollars for lying to the public about the safety and potential harms of their products.

On Tuesday, jurors in Santa Fe, New Mexico found that Meta willfully violated the state’s unfair practices law after Attorney General Raúl Torrez alleged that the company failed to properly safeguard its apps from online predators targeting children. Meta was ordered to pay $375 million in damages based on the number of violations. The company said that it would appeal the case.

The New Mexico case is separate from other social media lawsuits that state attorneys general have brought against companies including Meta and TikTok.

During the six-week trial in L.A., jurors were tasked with determining whether design features that Meta and YouTube built into their apps, such as recommendation algorithms and auto-play, contributed to K.G.M.’s crippling mental distress. The 20-year-old woman alleged that she suffered from severe body dysmorphia, depression and suicidal thoughts due to her near-constant use of the apps and the constant notifications that made it difficult for her to stop.

Keep reading

Meta ordered to pay $375 MILLION for not protecting minors from predators online

A jury has found that Meta failed to protect children from sexual predators as well as misled users, and the tech giant has been ordered to pay $375 million in civil penalties.

A New Mexico jury found in the landmark case that Meta misled users about its platforms’ safety and failed to protect children from exploitation, thereby violating the state’s laws. The jury reached its decision after six weeks of testimony from witnesses including former Meta executives, teachers, and online safety experts, per the New York Post.

State prosecutors argued that Meta had hidden the extent to which sexual predators used its platforms to target minors. Facebook and Instagram allegedly failed to enforce their policies barring users under 13 from having profiles, and their algorithms made it easier to target minors for sex trafficking and harassment.

“The safety issues that you’ve heard about in this case weren’t mistakes,” New Mexico attorney Linda Singer said on Monday. “They were a product of a corporate philosophy that chose growth and engagement over children’s safety. And young people in this state and around the country have borne the cost.”

Meta has vowed to appeal the ruling in the case. “We respectfully disagree with the verdict and will appeal. We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online,” a spokesman said in a statement in response to the verdict.

The attorneys for New Mexico had been seeking $2 billion in penalties against Meta, significantly more than the jury ultimately awarded. The case was brought by New Mexico Attorney General Raúl Torrez. In closing arguments, Meta attorney Kevin Huff said of the case, “Meta has built innovative, automated tools to protect people. Meta has 40,000 people working to make its apps as safe as possible.”

Keep reading

Meta is Ending Instagram Direct Message End-to-End Encryption

Meta is quietly dismantling one of its few genuine privacy commitments. Starting May 8, end-to-end encryption for Instagram direct messages disappears, taking with it the one technical guarantee that kept those conversations private from Meta itself.

“If you have chats that are impacted by this change, you will see instructions on how you can download any media or messages you may want to keep,” the company said in a help document, framing the loss of message privacy as a data export problem. Collect your things, the walls are coming down.

The feature being removed was never universal anyway. End-to-end encryption for Instagram DMs had been available only in certain regions, not enabled by default, since Meta began testing it in 2021 as part of what CEO Mark Zuckerberg called his “privacy-focused vision for social networking.”

That vision apparently has an expiration date. Meta also made encrypted DMs available to all adult users in Ukraine and Russia in February 2022, weeks after the Russian invasion began. That access, too, is ending.

The timing is revealing. TikTok told the BBC last week that it has no plans to bring end-to-end encryption to its DMs, arguing that privacy makes users less safe. Meta is now arriving at the same destination from a different direction.

The stakes are straightforward. End-to-end encryption means only the people in a conversation can read it, a technical lock that excludes the platform, third parties, and anyone who might later obtain a warrant.
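A deliberately toy sketch makes the property concrete. This is a one-time-pad XOR in Python, not real cryptography and nothing like the Signal-protocol-based system Meta actually deployed, but it shows the shape of the guarantee: the platform relays only ciphertext, so without the endpoints’ key it has nothing intelligible to read, store, or hand over.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy one-time-pad: XOR each plaintext byte with a key byte.
    # Only someone holding `key` can invert this.
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

# Alice and Bob share a secret key; the platform never sees it.
key = secrets.token_bytes(32)
message = b"meet at noon"

# The platform relays (and could store, or be subpoenaed for)
# only this ciphertext.
ciphertext = encrypt(key, message)

# Only a key holder recovers the plaintext.
assert decrypt(key, ciphertext) == message
```

Real end-to-end systems replace the shared pad with per-device key agreement and ratcheting keys, but the division of knowledge is the same: ciphertext for the carrier, plaintext only at the endpoints.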

When that lock disappears, Meta and its employees can read Instagram DMs, law enforcement can subpoena them, and advertisers may eventually benefit from what gets learned.

Instagram users who relied on encrypted DMs have until May 8 to decide what to archive. After that, their private conversations are Meta’s to read.

Keep reading

Spain JAILS Seven Citizens For Calling Migrants ‘SCUM’ On Facebook

Spain’s Supreme Court has upheld prison sentences for seven individuals over Facebook comments criticizing unaccompanied foreign minors in the border enclave of Melilla, marking a chilling escalation in the far-left government’s war on free speech amid skyrocketing migrant-related crime.

The ruling, which imposes terms ranging from eight months to one year and ten months, stems from posts that prosecutors deemed to promote hostility toward the group of mostly North African migrants.

Charges were initially dropped, but an appeal led to convictions under Spain’s hate crime laws.

This case exemplifies the inverted priorities under Prime Minister Pedro Sánchez’s Socialist-led government, which has faced mounting criticism for prioritizing mass migration over native safety and free expression.

Just months ago, Alex Soros heaped praise on Sánchez for granting amnesty to up to 500,000 illegal migrants via royal decree, bypassing parliament entirely. Soros called it “real leadership,” urging more nations to follow suit in flooding their borders.

Keep reading

Meta’s Zuckerberg denies at LA trial that Instagram targets kids

Meta Platforms chief executive Mark Zuckerberg on Feb 18 repeatedly said during a landmark trial over youth social media addiction that the Facebook and Instagram operator does not allow kids under 13 on its platforms, despite being confronted with evidence suggesting they were a key demographic.

Mr Mark Lanier, a lawyer for a woman suing Instagram and Google’s YouTube for harming her mental health when she was a child, pressed Mr Zuckerberg over his statement to Congress in 2024 that users under 13 are not allowed on the platform.

Mr Lanier confronted Mr Zuckerberg with internal Meta documents.

The case involves a California woman who started using Instagram and YouTube as a child. She alleges the companies sought to profit by hooking kids on their services despite knowing social media could harm their mental health.

She alleges the apps fuelled her depression and suicidal thoughts and is seeking to hold the companies liable.

Meta and Google have denied the allegations, and pointed to their work to add features that keep users safe.

“If we want to win big with teens, we must bring them in as tweens,” read one internal Instagram presentation from 2018.

Keep reading

META Granted Patent That Allows Facebook Users to Post After They Die

A patent granted to META in December gives an inside look at how Facebook users may still be able to post beyond the grave.

According to the patent, a large language model would be “simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased.”

The AI model would essentially replicate the deceased person’s digital presence by analyzing data such as their writing style, tone, and expressed beliefs, allowing it to continue posting content and even commenting on friends’ posts in their voice.

It gets stranger.

The patent also referenced technology that would allow users to have phone or even video calls with the deceased.

Per Business Insider:

In the patent, Meta lays out why it thinks people might need this.

If you’re no longer posting online — whether that’s because you need a break from social media or … you … die — your followers’ user experience will be affected. In short, they’ll miss you.

“The impact on the users is much more severe and permanent if that user is deceased and can never return to the social networking platform,” the document says.

To fill that void, Meta would essentially create a digital clone of your social media presence, training a model on “user-specific” data — including historical platform activity, such as comments, likes, or content — to understand how you would (or rather, did) behave.

The patent was filed by META’s CTO, Andrew Bosworth.

Despite being granted the patent, META has claimed it has no plans to implement the technology.

In recent years, the idea of using AI to bring someone back from the dead digitally has gained some momentum.

In 2025, the family of Christopher Pelkey, who was killed in a road rage incident in Arizona, used an AI-generated video of Pelkey as an impact statement in the courtroom.

Keep reading

Meta Considers Timed Face Recognition Launch to Exploit Distracted Society

Meta is weighing whether to add face recognition to its camera-equipped smart glasses, and The New York Times obtained an internal company document that reveals more than just the plan itself.

It reveals how Meta thinks about when to launch it: “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

Read that plainly: Meta wants to release a mass biometric surveillance product while the people most likely to fight it are too distracted to respond.

The technology would scan the face of every person who enters the glasses’ field of view, building a faceprint to match against a database. Every passerby. Every stranger on the subway. Every person who happens to walk through the frame of someone else’s device. None of them consented. Most of them won’t even know they were captured.

Faceprints are among the most sensitive data a company can collect. Unlike a password, a face cannot be changed after a breach. Once collected, this data enables mass surveillance, fuels discrimination, and creates a permanent identification trail attached to a person’s physical movement through the world.

Putting that capability into wearable glasses carried by ordinary people in ordinary places moves it off servers and into every room, street, and gathering that people enter.

Meta ran this experiment before and lost.

The company shut down (only kind of) its photo face-scanning tool in November 2021, simultaneously announcing it would delete (if you believe them) over a billion stored face templates. That retreat came after years of mounting legal exposure that produced a very expensive record.

In July 2019, Facebook settled a Federal Trade Commission investigation for $5 billion. The allegations included that the company’s face recognition settings were confusing and deceptive, and the settlement required the company to obtain consent before running face recognition on users going forward.

Less than two years later, Meta agreed to pay $650 million to settle a class action brought by Illinois residents under that state’s biometric privacy law. Then, in July 2024, it settled with Texas for $1.4 billion over the same defunct system. More than $7 billion across three settlements, all tied to face recognition practices the company ultimately abandoned.

Keep reading

Meta, TikTok, YouTube Face Trial Over Youth Addiction Claims

Three of the world’s biggest tech companies face a landmark trial in Los Angeles starting this week over claims that their platforms — Meta’s Instagram, ByteDance’s TikTok and Google’s YouTube — deliberately addict and harm children.

Jury selection starts this week in the Los Angeles County Superior Court. It’s the first time the companies will argue their case before a jury, and the outcome could have profound effects on their businesses and how they will handle children using their platforms.

The selection process is expected to take at least a few days, with 75 potential jurors questioned each day through at least Thursday. A fourth company named in the lawsuit, Snapchat parent company Snap Inc., settled the case last week for an undisclosed sum.

At the core of the case is a 19-year-old identified only by the initials “KGM,” whose lawsuit could determine how thousands of other, similar lawsuits against social media companies will play out.

She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.

KGM claims that her use of social media from an early age addicted her to the technology and exacerbated depression and suicidal thoughts. Importantly, the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits.

This argument, if successful, could sidestep the companies’ First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms.

Keep reading

A Man Bought Meta’s AI Glasses, And Ended Up Wandering The Desert Searching For Aliens To Abduct Him

At age 50, Daniel was “on top of the world.”

“I turned 50, and it was the best year of my life,” he told Futurism in an interview. “It was like I finally figured out so many things: my career, my marriage, my kids, everything.”

It was early 2023, and Daniel — who asked to be identified by only his first name to protect his family’s privacy — and his wife of over three decades were empty nesters, looking ahead to the next chapter of their lives. They were living in an affluent Midwestern suburb, where they’d raised their four children. Daniel was an experienced software architect who held a leadership role at a large financial services company, where he’d worked for more than 20 years. In 2022, he leveraged his family’s finances to realize a passion project: a rustic resort in rural Utah, his favorite place in the world.

“All the kids were out of the house, and it was like, ‘oh my gosh, we’re still young. We’ve got this resort. I’ve got a good job. The best years of our lives are in front of us,’” Daniel recounted, sounding melancholy. “It was a wonderful time.”

That all changed after Daniel purchased a pair of Ray-Ban Meta smart glasses, the AI-infused eyeglasses that Meta CEO Mark Zuckerberg has made central to his vision for the future of AI and computing. He says the purchase opened the door to a six-month delusional spiral that played out across Meta platforms through extensive interactions with the company’s AI, culminating in him making dangerous journeys into the desert to await alien visitors and believing he was tasked with ushering forth a “new dawn” for humanity.

And though his delusions have since faded, his journey into a Meta AI-powered reality left his life in shambles — deep in debt, reeling from job loss, isolated from his family, and struggling with depression and suicidal thoughts.

“I’ve lost everything,” Daniel, now 52, told Futurism, his voice dripping with fatigue. “Everything.”

In many ways, Daniel was Meta’s target customer. He was an experienced tech worker and AI enthusiast who had worked on machine learning projects in the past and had purchased the Meta glasses because he was intrigued by their AI features.

“I used Meta [AI] because they were integrated with these glasses,” said Daniel. “And I could wear glasses — which I wore all the time — and then I could speak to AI whenever I wanted to. I could talk to my ear.”

Today, however, as he continues to recover from his mental health breakdown, Daniel describes himself as a “shell” of who he “used to be.”

“My kids don’t talk to me because I got weird. They don’t know how to talk to me,” said the father of four. “I was a cook… I played the guitar. I love music. I love learning.”

But now, he says, he’s “just trying to survive day to day.”

According to Daniel and multiple family members, the 52-year-old had no history of mania or psychosis before encountering Meta AI. He’d struggled with alcoholism, but quit drinking in early 2023, months before he purchased the Meta smart glasses.

Keep reading