Meta, TikTok, YouTube Face Trial Over Youth Addiction Claims

Three of the world’s biggest tech companies face a landmark trial in Los Angeles starting this week over claims that their platforms — Meta’s Instagram, ByteDance’s TikTok and Google’s YouTube — deliberately addict and harm children.

Jury selection starts this week in the Los Angeles County Superior Court. It’s the first time the companies will argue their case before a jury, and the outcome could have profound effects on their businesses and how they will handle children using their platforms.

The selection process is expected to take at least a few days, with 75 potential jurors questioned each day through at least Thursday. A fourth company named in the lawsuit, Snapchat parent company Snap Inc., settled the case last week for an undisclosed sum.

At the core of the case is a 19-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out.

She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.

KGM claims that her use of social media from an early age addicted her to the technology and exacerbated depression and suicidal thoughts. Importantly, the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits.

This argument, if successful, could sidestep the companies’ First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms.

Keep reading

A Man Bought Meta’s AI Glasses, And Ended Up Wandering The Desert Searching For Aliens To Abduct Him

At age 50, Daniel was “on top of the world.”

“I turned 50, and it was the best year of my life,” he told Futurism in an interview. “It was like I finally figured out so many things: my career, my marriage, my kids, everything.”

It was early 2023, and Daniel — who asked to be identified by only his first name to protect his family’s privacy — and his wife of over three decades were empty nesters, looking ahead to the next chapter of their lives. They were living in an affluent Midwestern suburb, where they’d raised their four children. Daniel was an experienced software architect who held a leadership role at a large financial services company, where he’d worked for more than 20 years. In 2022, he leveraged his family’s finances to realize a passion project: a rustic resort in rural Utah, his favorite place in the world.

“All the kids were out of the house, and it was like, ‘oh my gosh, we’re still young. We’ve got this resort. I’ve got a good job. The best years of our lives are in front of us,’” Daniel recounted, sounding melancholy. “It was a wonderful time.”

That all changed after Daniel purchased a pair of Ray-Ban Meta smart glasses, the AI-embedded eyeglasses that Meta CEO Mark Zuckerberg has made central to his vision for the future of AI and computing. Daniel says the glasses opened the door to a six-month delusional spiral that played out across Meta platforms through extensive interactions with the company’s AI, culminating in dangerous journeys into the desert to await alien visitors and a belief that he was tasked with ushering in a “new dawn” for humanity.

And though his delusions have since faded, his journey into a Meta AI-powered reality left his life in shambles — deep in debt, reeling from job loss, isolated from his family, and struggling with depression and suicidal thoughts.

“I’ve lost everything,” Daniel, now 52, told Futurism, his voice dripping with fatigue. “Everything.”

In many ways, Daniel was Meta’s target customer. He was an experienced tech worker and AI enthusiast who had worked on machine learning projects in the past and had purchased the Meta glasses because he was intrigued by their AI features.

“I used Meta [AI] because they were integrated with these glasses,” said Daniel. “And I could wear glasses — which I wore all the time — and then I could speak to AI whenever I wanted to. I could talk to my ear.”

Today, however, as he continues to recover from his mental health breakdown, Daniel describes himself as a “shell” of who he “used to be.”

“My kids don’t talk to me because I got weird. They don’t know how to talk to me,” said the father of four. “I was a cook… I played the guitar. I love music. I love learning.”

But now, he says, he’s “just trying to survive day to day.”

According to Daniel and multiple family members, the 52-year-old had no history of mania or psychosis before encountering Meta AI. He’d struggled with alcoholism, but quit drinking in early 2023, months before he purchased the Meta smart glasses.

Keep reading

Meta Chose Revenue Over Policing Chinese Scam Ads, Documents Show

Meta knowingly tolerated large volumes of fraudulent advertising from China to protect billions of dollars in revenue, a Reuters investigation revealed this week. Internal documents show executives prioritized minimizing “revenue impact” over fully cracking down on scams, illegal gambling, pornography and other banned ads.

Although Meta platforms are blocked inside China, Chinese companies are allowed to advertise to users abroad, according to Reuters. That business grew rapidly, reaching more than $18 billion in revenue in 2024—about 11% of Meta’s global sales. Internal estimates showed roughly 19% of that revenue, more than $3 billion, came from prohibited or fraudulent ads.

Meta documents reviewed by Reuters describe China as the company’s top “Scam Exporting Nation,” responsible for roughly a quarter of scam ads worldwide. Victims ranged from U.S. and Canadian investors to consumers in Taiwan. An internal presentation warned, “We need to make significant investment to reduce growing harm.”

In 2024, Meta briefly did just that. A dedicated China-focused anti-fraud team cut problematic ads roughly in half, from 19% to 9% of China-related revenue. But after what one document described as an “Integrity Strategy pivot and follow-up from Zuck,” the team was asked to pause its work. Meta later disbanded the unit, lifted restrictions on new Chinese ad agencies, and shelved additional anti-scam measures.

Keep reading

Australia: Meta begins deactivating accounts ahead of under-16 social media age limit

Meta has begun removing social media accounts belonging to Australian children under 16 from its platforms: Instagram, Facebook and Threads.

The tech giant has started notifying users aged 13 to 15 that their accounts will be deactivated on December 4th. Starting December 10th, social media companies will face fines of up to A$49.5 million ($33 million USD) if they fail to take steps to prevent children under 16 from holding accounts.

Australia’s eSafety Commissioner will send major platforms notices on December 11th demanding statistics on exactly how many accounts were removed from their sites. Additionally, monthly notices are planned for 2026.

An estimated 150,000 Facebook accounts and 325,000 Instagram accounts will be terminated.

“The government recognizes that age assurance may require several days or weeks to complete fairly and accurately,” Communications Minister Anika Wells said.

“However, if eSafety identifies systemic breaches of the law, the platforms will face fines,” she added.

Google sent out a notice on Wednesday stating that anyone in Australia under 16 will be signed out of YouTube on December 10th and will lose features available only to account holders, such as playlists.

Google states it determines YouTube users’ ages “based on personal data contained in associated Google accounts and other signals.”

“We have consistently said this rushed legislation misunderstands our platform, the way young Australians use it and, most importantly, it does not fulfill its promise to make kids safer online,” a Google statement said.

Users over 16 whose account access was wrongly revoked have the option to verify their age through government-issued ID or a video selfie, per Meta.

Platforms such as X and Reddit contacted underage users, suggesting that they download their posted pictures and freeze their accounts until they come of age.

The Australian government claims the ban will protect children from the harms of social media. However, critics say this decision may isolate certain groups who depend on the platforms for connection and push children to other, potentially more harmful corners of the internet.

Keep reading

Meta Pushes Canada for App Store Age Verification ID Laws

Meta is working to convince the Canadian government to introduce new laws that would make age verification mandatory at the app store level. The company has been lobbying Ottawa for months and says it has received positive feedback from officials drafting online safety legislation.

To support its push, Meta paid Counsel Public Affairs to poll Canadians on what kinds of digital safety measures they want for teens.

The poll found that 83 percent of parents favor requiring app stores to confirm users’ ages before app downloads.

Meta highlighted those results, saying “the Counsel data clearly indicates that parents are seeking consistent, age-appropriate standards that better protect teens and support parents online. And the most effective way to understand this is by obtaining parental approval and verifying age on the app store.”

Rachel Curran, Meta Canada’s director of public policy, described the idea as “by far the most effective, privacy-protective, efficient way to determine a user’s age.”

That phrase may sound privacy-conscious, but in practice, the plan would consolidate control over personal data inside a small circle of corporations such as Meta, Apple, and Google, while forcing users to identify themselves to access basic online services.

Google has criticized Meta’s proposal, calling it an attempt to avoid direct responsibility. “Time and time again, all over the world, you’ve seen them push forward proposals that would have app stores change their practices and do something new without any change by Meta,” said Kareem Ghanem, Google’s senior director of government affairs.

Behind these corporate disputes lies a much bigger question: should anyone be required to verify their identity in order to use the internet?

Keep reading

Court Filings Allege Meta Downplayed Risks to Children and Misled the Public

Sex trafficking on Meta platforms was both difficult to report and widely tolerated, according to a court filing unsealed Friday. In a plaintiffs’ brief filed as part of a major lawsuit against four social media companies, Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified that when she joined Meta in 2020 she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.” 

“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar reportedly testified, adding that “by any measure across the industry, [it was] a very, very high strike threshold.” The plaintiffs claim that this testimony is corroborated by internal company documentation.

The brief, filed by plaintiffs in the Northern District of California, alleges that Meta was aware of serious harms on its platform and engaged in a broad pattern of deceit to downplay risks to young users. According to the brief, Meta was aware that millions of adult strangers were contacting minors on its sites; that its products exacerbated mental health issues in teens; and that content related to eating disorders, suicide, and child sexual abuse was frequently detected, yet rarely removed. According to the brief, the company failed to disclose these harms to the public or to Congress, and refused to implement safety fixes that could have protected young users.  

“Meta has designed social media products and platforms that it is aware are addictive to kids, and they’re aware that those addictions lead to a whole host of serious mental health issues,” says Previn Warren, the co-lead attorney for the plaintiffs in the case. “Like tobacco, this is a situation where there are dangerous products that were marketed to kids,” Warren adds. “They did it anyway, because more usage meant more profits for the company.” 

The following allegations against Meta come from the brief filed in an unprecedented multidistrict litigation. More than 1,800 plaintiffs—including children and parents, school districts, and state attorneys general—have joined together in a suit alleging that the parent companies behind Instagram, TikTok, Snapchat, and YouTube “relentlessly pursued a strategy of growth at all costs, recklessly ignoring the impact of their products on children’s mental and physical health,” according to their master complaint. The newly unsealed allegations about Meta are just one small part of the sprawling suit. 

Keep reading

Brussels Aims at WhatsApp in the Next Round of Speech Control

Meta’s WhatsApp platform is set to come under tighter European oversight as regulators prepare to bring its “channels” feature under the European Union’s far-reaching censorship law, the Digital Services Act (DSA), the same framework that already pressures Facebook and Instagram.

According to Bloomberg, people familiar with the matter say the European Commission has informed Meta that WhatsApp’s channels are being prepared for designation as a “Very Large Online Platform.”

That classification carries extensive responsibilities for content censorship. Although no public date has been announced, the Commission’s notice indicates that WhatsApp will soon face some of the most demanding digital rules in the world.

Channels, which allow public updates from news outlets, public figures, and organizations, function more like social media feeds than private chats.

WhatsApp reported earlier this year that these channels reached around 46.8 million users in Europe by late 2024, slightly above the DSA’s 45 million-user threshold for stricter oversight.

Once a service crosses that line, it must perform regular assessments of how illegal or “harmful” content circulates and develop strategies to limit its spread. Platforms are also required to publish user figures twice a year and risk fines of up to 6 percent of global revenue for failing to comply.

The DSA does not apply to private, encrypted communication, so WhatsApp’s core messaging service will remain unaffected.

Still, the EU’s decision to expand its regulatory reach into new areas of online conversation has caused concern that these rules could burden companies and discourage open dialogue in the name of safety.

The European Commission has remained cautious about providing details, saying only that it “cannot confirm the timeline for a potential future designation.”

For Meta, the move adds another chapter to its ongoing disputes with European regulators.

Keep reading

Senate Grills Meta and Google Over Biden Administration’s Role in COVID-Era Content Censorship

A Senate hearing this week examined government influence on online speech, as senior executives from Meta and Google faced questions about the Biden administration’s communications with their companies during the COVID-19 pandemic.

The session, titled “Part II of Shut Your App: How Uncle Sam Jawboned Big Tech Into Silencing Americans,” highlighted the growing concern in Washington over what lawmakers describe as government-driven pressure to suppress lawful expression.

Senator Ted Cruz (R-TX), who led the hearing, began by declaring that “the right to speak out is the foundation of a free society” and warning that “censorship around the world is growing.”

He accused the Biden administration of pushing technology companies to restrict Americans’ speech during the pandemic, and he faulted both the companies and Democrats for failing to resist that pressure.

“Today, we pick up where the story left off,” Cruz said, pointing to Meta and Google as examples of firms that “were pressured by the Biden administration to censor the American people.”

He pledged to introduce the Jawbone Act, which he said would “provide a robust right to redress when Americans are targeted by their own government.”

Markham Erickson, Google’s Vice President of Government Affairs and Public Policy, defended the company’s approach, emphasizing that its moderation decisions are guided by long-standing internal policies, not by government direction.

“While we are a company dedicated to the goal of making the world’s information universally accessible, that doesn’t mean that we don’t have certain rules,” Erickson said, citing restrictions on “terrorist content, child sexual abuse material, hate speech, and other harmful content.”

He acknowledged that officials in the Biden administration had contacted Google during the pandemic to urge the removal of certain COVID-19 content from YouTube.

But Erickson maintained that the company “develop[ed] and enforce[d] our policies independently” and “rejected suggestions that did not align with those policies.”

Keep reading

Prosecutors Drop Charges Against Tennessee Man Over Facebook Meme

Last month, Tennessee authorities arrested a man for posting a Facebook meme, a clear violation of his First Amendment rights, and held him on a $2 million bond. This week, prosecutors dropped the case, but that doesn’t negate the weeks he spent in jail on a bogus charge.

As Reason previously reported, police arrested 61-year-old Larry Bushart for posting a meme on Facebook. In a thread about the murder of Charlie Kirk, Bushart posted a meme with a picture of President Donald Trump and the quote “We have to get over it,” which Trump said after a January 2024 shooting at Perry High School in Perry, Iowa.

Sheriff Nick Weems of nearby Perry County said Bushart intentionally posted the meme to make people think he was referring to Perry County High School. “Investigators believe Bushart was fully aware of the fear his post would cause and intentionally sought to create hysteria within the community,” Weems told The Tennessean.

On September 21, deputies arrested Bushart at his house and booked him on a charge of Threats of Mass Violence on School Property and Activities, a felony that carries at least a year in prison. In body camera footage posted online by Liliana Segura of The Intercept, Bushart is incredulous when presented with the charge. “I don’t think I committed a crime,” he tells the officer, jokingly admitting that “I may have been an asshole.”

“That’s not illegal,” the officer replies as he leads Bushart into a cell.

Unfortunately, it was no laughing matter: A judge imposed a $2 million bond. Getting out on bail would require Bushart to come up with at least $210,000. According to the Perry County Circuit Court website, Bushart had a hearing scheduled for October 9, where he could file a motion for a reduced bond, but a court clerk told Reason that the hearing was “reset” for December 4. As a result, Bushart sat in jail for weeks.

Right away, it should have been clear how flimsy the case was. But the sheriff doubled down.

As Segura reported at The Intercept, Weems personally responded to people on Facebook suggesting Bushart was arrested because authorities misread a picture that briefly referenced a prior news event on the other side of the country. “We were very much aware of the meme being from an Iowa shooting,” Weems wrote. But it “created mass hysteria to parents and teachers…that led the normal person to conclude that he was talking about our Perry County High School.”

“Yet there were no public signs of this hysteria,” Segura notes. “Nor was there much evidence of an investigation—or any efforts to warn county schools.”

Keep reading

‘Massive legal siege’ against social media companies looms

Thousands of plaintiffs’ complaints, millions of pages of internal documents and transcripts of countless hours of depositions are about to land in U.S. courtrooms, threatening the future of the biggest social media companies.

The blizzard of paperwork is a byproduct of two consolidated lawsuits accusing Snap Inc.’s Snapchat; Meta Platforms Inc.’s Facebook and Instagram; ByteDance Ltd.’s TikTok; and Alphabet Inc.’s YouTube of knowingly designing their platforms to addict users — allegedly resulting in youth depression, anxiety, insomnia, eating disorders, self-harm and even suicide.

The litigation, brewing for more than three years, has had to overcome numerous hurdles, including the liability shield that has protected social media platforms from facing user-harm lawsuits. The social media companies have filed multiple motions to dismiss the cases on the grounds that Section 230 of the Communications Decency Act prevents them from being held accountable for content posted on their sites.

Those motions have been largely unsuccessful, and courtrooms across the country are poised to open their doors for the first time to the alleged victims of social media. The vast majority of cases have been folded into two multijurisdictional proceedings, one in state and the other in federal court, to streamline the pretrial discovery process.

The first bellwether trial is scheduled to begin in Los Angeles Superior Court in late January. It involves a 19-year-old woman from Chico, California, who says she’s been addicted to social media for more than a decade and that her nonstop use of the platforms has caused anxiety, depression and body dysmorphia. Two other trials will follow soon after, with thousands more waiting in the wings. If successful, these cases could result in multibillion-dollar settlements — akin to tobacco and opioid litigation — and change the way minors interact with social media.

“This is going to be one of the most impactful litigations of our lifetime,” said Joseph VanZandt, an attorney at Beasley Allen Law Firm in Montgomery, Alabama, and co-lead plaintiffs’ attorney for the coordinated state cases. “This is about large corporations targeting vulnerable populations — children — for profit. That’s what we saw with the tobacco companies; they were also targeting adolescents and trying to get them addicted while they were young.”

Matthew Bergman, founder of the Social Media Victims Law Center in Seattle, makes a similar comparison to tobacco litigation in the Bloomberg documentary Can’t Look Away: The Case Against Social Media. “In the case of Facebook, you have internal documents saying ‘tweens are herd animals,’ ‘kids have an addict’s narrative’ and ‘our products make girls feel worse about themselves.’ You have the same kind of corporate misconduct,” Bergman says in the film, which will be available to view on Bloomberg’s platforms on October 30.

Bergman’s firm was the first to file user-harm cases against social media companies, in 2022, after Frances Haugen, a former Meta product manager-turned-whistleblower, released a trove of internal documents showing the company knew social media was negatively impacting youth mental health. The first case, which is part of the consolidated federal litigation, alleged that an 11-year-old Connecticut girl killed herself after suffering from extreme social media addiction and sexual exploitation by online predators.

What set that case apart was how it got around Section 230’s immunity blanket. Bergman argued that his case wasn’t about third-party content, which the federal law protects. Instead, he said it hinged on the way social media companies were intentionally designing their products to prioritize engagement and profit over safety.

Keep reading