Meta Chose Revenue Over Policing Chinese Scam Ads, Documents Show

Meta knowingly tolerated large volumes of fraudulent advertising from China to protect billions of dollars in revenue, a Reuters investigation revealed this week. Internal documents show executives prioritized minimizing “revenue impact” over fully cracking down on scams, illegal gambling, pornography and other banned ads.

Although Meta platforms are blocked inside China, Chinese companies are allowed to advertise to users abroad, according to Reuters. That business grew rapidly, reaching more than $18 billion in revenue in 2024—about 11% of Meta’s global sales. Internal estimates showed roughly 19% of that revenue, more than $3 billion, came from prohibited or fraudulent ads.

Meta documents reviewed by Reuters describe China as the company’s top “Scam Exporting Nation,” responsible for roughly a quarter of scam ads worldwide. Victims ranged from U.S. and Canadian investors to consumers in Taiwan. An internal presentation warned, “We need to make significant investment to reduce growing harm.”

In 2024, Meta briefly did just that. A dedicated China-focused anti-fraud team cut problematic ads roughly in half, from 19% to 9% of China-related revenue. But after what one document described as an “Integrity Strategy pivot and follow-up from Zuck,” the team was asked to pause its work. Meta later disbanded the unit, lifted restrictions on new Chinese ad agencies, and shelved additional anti-scam measures.

Keep reading

Australia: Meta begins deactivating accounts ahead of 16-year-old minimum social media age limit

Meta has begun removing social media accounts belonging to Australian children under 16 years old from its Instagram, Facebook and Threads platforms.

The tech giant has started notifying users aged 13 to 15 years old that their accounts will be deactivated on December 4th. Starting December 10th, social media companies will face fines of up to A$49.5 million (about US$33 million) should they fail to take steps to prevent children under 16 years old from holding accounts.

Australia’s eSafety Commissioner will send major platforms notices on December 11th demanding figures on exactly how many accounts were removed from their sites. Additionally, monthly notices are planned for 2026.

It is estimated that 150,000 Facebook accounts and 325,000 Instagram accounts will be deactivated.

“The government recognizes that age assurance may require several days or weeks to complete fairly and accurately,” Communications Minister Anika Wells said.

“However, if eSafety identifies systemic breaches of the law, the platforms will face fines,” she added.

Google sent out a notice on Wednesday stating that anyone in Australia under 16 will be signed out of YouTube on December 10th and will lose features available only to account holders, such as playlists.

Google states it determines YouTube users’ ages “based on personal data contained in associated Google accounts and other signals.”

“We have consistently said this rushed legislation misunderstands our platform, the way young Australians use it and, most importantly, it does not fulfill its promise to make kids safer online,” a Google statement said.

Users over 16 years old whose account access is wrongly revoked have the option to verify their age with a government-issued ID or a video selfie, per Meta.

Platforms such as X and Reddit contacted underage users, suggesting that they download their posted pictures and freeze their accounts until they come of age.

The Australian government claims the ban will protect children from the harms of social media. However, critics say this decision may isolate certain groups who depend on the platforms for connection and push children to other, potentially more harmful corners of the internet.

Keep reading

Meta Pushes Canada for App Store Age Verification ID Laws

Meta is working to convince the Canadian government to introduce new laws that would make age verification mandatory at the app store level. The company has been lobbying Ottawa for months and says it has received positive feedback from officials drafting online safety legislation.

To support its push, Meta paid Counsel Public Affairs to poll Canadians on what kinds of digital safety measures they want for teens.

The poll found that 83 percent of parents favor requiring app stores to confirm users’ ages before app downloads.

Meta highlighted those results, saying “the Counsel data clearly indicates that parents are seeking consistent, age-appropriate standards that better protect teens and support parents online. And the most effective way to understand this is by obtaining parental approval and verifying age on the app store.”

Rachel Curran, Meta Canada’s director of public policy, described the idea as “by far the most effective, privacy-protective, efficient way to determine a user’s age.”

That phrase may sound privacy-conscious, but in practice, the plan would consolidate control over personal data inside a small circle of corporations such as Meta, Apple, and Google, while forcing users to identify themselves to access basic online services.

Google has criticized Meta’s proposal, calling it an attempt to avoid direct responsibility. “Time and time again, all over the world, you’ve seen them push forward proposals that would have app stores change their practices and do something new without any change by Meta,” said Kareem Ghanem, Google’s senior director of government affairs.

Behind these corporate disputes lies a much bigger question: should anyone be required to verify their identity in order to use the internet?

Keep reading

Court Filings Allege Meta Downplayed Risks to Children and Misled the Public

Sex trafficking on Meta platforms was both difficult to report and widely tolerated, according to a court filing unsealed Friday. In a plaintiffs’ brief filed as part of a major lawsuit against four social media companies, Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified that when she joined Meta in 2020 she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.” 

“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar reportedly testified, adding that “by any measure across the industry, [it was] a very, very high strike threshold.” The plaintiffs claim that this testimony is corroborated by internal company documentation.

The brief, filed by plaintiffs in the Northern District of California, alleges that Meta was aware of serious harms on its platform and engaged in a broad pattern of deceit to downplay risks to young users. According to the brief, Meta was aware that millions of adult strangers were contacting minors on its sites; that its products exacerbated mental health issues in teens; and that content related to eating disorders, suicide, and child sexual abuse was frequently detected, yet rarely removed. According to the brief, the company failed to disclose these harms to the public or to Congress, and refused to implement safety fixes that could have protected young users.  

“Meta has designed social media products and platforms that it is aware are addictive to kids, and they’re aware that those addictions lead to a whole host of serious mental health issues,” says Previn Warren, the co-lead attorney for the plaintiffs in the case. “Like tobacco, this is a situation where there are dangerous products that were marketed to kids,” Warren adds. “They did it anyway, because more usage meant more profits for the company.” 

The following allegations against Meta come from the brief filed in an unprecedented multidistrict litigation. More than 1,800 plaintiffs—including children and parents, school districts, and state attorneys general—have joined together in a suit alleging that the parent companies behind Instagram, TikTok, Snapchat, and YouTube “relentlessly pursued a strategy of growth at all costs, recklessly ignoring the impact of their products on children’s mental and physical health,” according to their master complaint. The newly unsealed allegations about Meta are just one small part of the sprawling suit. 

Keep reading

Brussels Aims at WhatsApp in the Next Round of Speech Control

Meta’s WhatsApp platform is set to come under tighter European oversight as regulators prepare to bring its “channels” feature under the European Union’s far-reaching censorship law, the Digital Services Act (DSA), the same framework that already pressures Facebook and Instagram.

According to Bloomberg, people familiar with the matter say the European Commission has informed Meta that WhatsApp’s channels are being prepared for designation as a “Very Large Online Platform.”

That classification carries extensive responsibilities for content censorship. Although no public date has been announced, the Commission’s notice indicates that WhatsApp will soon face some of the most demanding digital rules in the world.

Channels, which allow public updates from news outlets, public figures, and organizations, function more like social media feeds than private chats.

WhatsApp reported earlier this year that these channels reached around 46.8 million users in Europe by late 2024, slightly above the DSA’s 45 million-user threshold for stricter oversight.

Once a service crosses that line, it must perform regular assessments of how illegal or “harmful” content circulates and develop strategies to limit its spread. Platforms are also required to publish user figures twice a year and risk fines of up to 6 percent of global revenue for failing to comply.

The DSA does not apply to private, encrypted communication, so WhatsApp’s core messaging service will remain unaffected.

Still, the EU’s decision to expand its regulatory reach into new areas of online conversation has caused concern that these rules could burden companies and discourage open dialogue in the name of safety.

The European Commission has remained cautious about providing details, saying only that it “cannot confirm the timeline for a potential future designation.”

For Meta, the move adds another chapter to its ongoing disputes with European regulators.

Keep reading

Senate Grills Meta and Google Over Biden Administration’s Role in COVID-Era Content Censorship

A Senate hearing this week examined government influence on online speech, as senior executives from Meta and Google faced questions about the Biden administration’s communications with their companies during the COVID-19 pandemic.

The session, titled “Part II of Shut Your App: How Uncle Sam Jawboned Big Tech Into Silencing Americans,” highlighted the growing concern in Washington over what lawmakers describe as government-driven pressure to suppress lawful expression.

Senator Ted Cruz (R-TX), who led the hearing, began by declaring that “the right to speak out is the foundation of a free society” and warning that “censorship around the world is growing.”

He accused the Biden administration of pushing technology companies to restrict Americans’ speech during the pandemic, and he faulted both the companies and Democrats for failing to resist that pressure.

“Today, we pick up where the story left off,” Cruz said, pointing to Meta and Google as examples of firms that “were pressured by the Biden administration to censor the American people.”

He pledged to introduce the Jawbone Act, which he said would “provide a robust right to redress when Americans are targeted by their own government.”

Markham Erickson, Google’s Vice President of Government Affairs and Public Policy, defended the company’s approach, emphasizing that its moderation decisions are guided by long-standing internal policies, not by government direction.

“While we are a company dedicated to the goal of making the world’s information universally accessible, that doesn’t mean that we don’t have certain rules,” Erickson said, citing restrictions on “terrorist content, child sexual abuse material, hate speech, and other harmful content.”

He acknowledged that officials in the Biden administration had contacted Google during the pandemic to urge the removal of certain COVID-19 content from YouTube.

But Erickson maintained that the company “develop[ed] and enforce[d] our policies independently” and “rejected suggestions that did not align with those policies.”

Keep reading

Prosecutors Drop Charges Against Tennessee Man Over Facebook Meme

Last month, Tennessee authorities arrested a man for posting a Facebook meme, a clear violation of his First Amendment rights, and held him on a $2 million bond. This week, prosecutors dropped the case, but that doesn’t negate the weeks he spent in jail on a bogus charge.

As Reason previously reported, police arrested 61-year-old Larry Bushart for posting a meme on Facebook. In a thread about the murder of Charlie Kirk, Bushart posted a meme with a picture of President Donald Trump and the quote “We have to get over it,” which Trump said after a January 2024 shooting at Perry High School in Perry, Iowa.

Sheriff Nick Weems of nearby Perry County said Bushart intentionally posted the meme to make people think he was referring to Perry County High School. “Investigators believe Bushart was fully aware of the fear his post would cause and intentionally sought to create hysteria within the community,” Weems told The Tennessean.

On September 21, deputies arrested Bushart at his house and booked him on a charge of Threats of Mass Violence on School Property and Activities, a felony that carries at least a year in prison. In body camera footage posted online by Liliana Segura of The Intercept, Bushart is incredulous when presented with the charge. “I don’t think I committed a crime,” he tells the officer, jokingly admitting that “I may have been an asshole.”

“That’s not illegal,” the officer replies as he leads Bushart into a cell.

Unfortunately, it was no laughing matter: A judge imposed a $2 million bond. Getting out on bail would require Bushart to come up with at least $210,000. According to the Perry County Circuit Court website, Bushart had a hearing scheduled for October 9, where he could file a motion for a reduced bond, but a court clerk told Reason that the hearing was “reset” for December 4. As a result, Bushart sat in jail for weeks.

Right away, it should have been clear how flimsy the case was. But the sheriff doubled down.

As Segura reported at The Intercept, Weems personally responded to people on Facebook suggesting Bushart was arrested because authorities misread a picture that briefly referenced a prior news event on the other side of the country. “We were very much aware of the meme being from an Iowa shooting,” Weems wrote. But it “created mass hysteria to parents and teachers…that led the normal person to conclude that he was talking about our Perry County High School.”

“Yet there were no public signs of this hysteria,” Segura notes. “Nor was there much evidence of an investigation—or any efforts to warn county schools.”

Keep reading

‘Massive legal siege’ against social media companies looms

Thousands of plaintiffs’ complaints, millions of pages of internal documents and transcripts of countless hours of depositions are about to land in U.S. courtrooms, threatening the future of the biggest social media companies.

The blizzard of paperwork is a byproduct of two consolidated lawsuits accusing Snap Inc.’s Snapchat; Meta Platforms Inc.’s Facebook and Instagram; ByteDance Ltd.’s TikTok; and Alphabet Inc.’s YouTube of knowingly designing their platforms to addict users — allegedly resulting in youth depression, anxiety, insomnia, eating disorders, self-harm and even suicide.

The litigation, brewing for more than three years, has had to overcome numerous hurdles, including the liability shield that has protected social media platforms from facing user-harm lawsuits. The social media companies have filed multiple motions to dismiss the cases on the grounds that Section 230 of the Communications Decency Act prevents them from being held accountable for content posted on their sites.

Those motions have been largely unsuccessful, and courtrooms across the country are poised to open their doors for the first time to the alleged victims of social media. The vast majority of cases have been folded into two multijurisdictional proceedings, one in state and the other in federal court, to streamline the pretrial discovery process.

The first bellwether trial is scheduled to begin in Los Angeles Superior Court in late January. It involves a 19-year-old woman from Chico, California, who says she’s been addicted to social media for more than a decade and that her nonstop use of the platforms has caused anxiety, depression and body dysmorphia. Two other trials will follow soon after, with thousands more waiting in the wings. If successful, these cases could result in multibillion-dollar settlements — akin to tobacco and opioid litigation — and change the way minors interact with social media.

“This is going to be one of the most impactful litigations of our lifetime,” said Joseph VanZandt, an attorney at Beasley Allen Law Firm in Montgomery, Alabama, and co-lead plaintiffs’ attorney for the coordinated state cases. “This is about large corporations targeting vulnerable populations — children — for profit. That’s what we saw with the tobacco companies; they were also targeting adolescents and trying to get them addicted while they were young.”

Matthew Bergman, founder of the Social Media Victims Law Center in Seattle, makes a similar comparison to tobacco litigation in the Bloomberg documentary Can’t Look Away: The Case Against Social Media. “In the case of Facebook, you have internal documents saying ‘tweens are herd animals,’ ‘kids have an addict’s narrative’ and ‘our products make girls feel worse about themselves.’ You have the same kind of corporate misconduct,” Bergman says in the film, which will be available to view on Bloomberg’s platforms on October 30.

Bergman’s firm was the first to file user-harm cases against social media companies, in 2022, after Frances Haugen, a former Meta product manager-turned-whistleblower, released a trove of internal documents showing the company knew social media was negatively impacting youth mental health. The first case, which is part of the consolidated federal litigation, alleged that an 11-year-old Connecticut girl killed herself after suffering from extreme social media addiction and sexual exploitation by online predators.

What set that case apart was how it got around Section 230’s immunity blanket. Bergman argued that his case wasn’t about third-party content, which the federal law protects. Instead, he said it hinged on the way social media companies were intentionally designing their products to prioritize engagement and profit over safety.

Keep reading

Report: EU to Charge Meta Under Censorship Law for Failing to Remove “Harmful” Content

Meta Platforms is bracing for formal charges from the European Union, accused of not doing enough to police online speech on Facebook and Instagram.

The problem is the EU’s Digital Services Act (DSA), a law that gives regulators the power to decide what counts as “illegal” or “harmful” content (a definition that includes “illegal hate speech”) and punish companies that fail to take it down.

The commission’s move could lead to a fine of up to 6% of Meta’s worldwide revenue, though the company will be allowed to respond before any penalty is finalized.

Officials in Brussels argue that Meta lacks an adequate “notice and action mechanism” for users to flag posts for removal.

The charge sheet, expected within weeks, according to Bloomberg, builds on an investigation launched in April 2024.

What the EU describes as a duty to protect users is, in fact, a mandate that forces platforms to censor more aggressively or face ruinous fines.

The commission would not comment on its plans, but Meta spokesperson Ben Walters rejected the accusations outright, saying the company disagreed “with any suggestion we have breached the DSA” and confirmed that talks are ongoing.

The DSA covers every major platform with more than 45 million active users in the EU.

Meta is currently facing two separate probes under the law: one focused on disinformation and illegal content, the other on protections for minors.

Supporters of the DSA insist it protects citizens, but the law essentially hands governments the authority to decide what speech is acceptable online.

No fines have yet been issued, but the pressure to comply has already chilled open debate.

Keep reading

Mark Zuckerberg’s Meta Launches Super PAC to Combat AI Regulations

Mark Zuckerberg’s Meta has announced the launch of a new super PAC aimed at electing state candidates from both parties who support the company’s stance on AI development and regulation. According to the company, the American Technology Excellence Project super PAC is launching “amid a growing patchwork of inconsistent regulations that threaten homegrown innovation and investments in AI.”

Axios reports that social media and AI giant Meta has launched a new super PAC called the American Technology Excellence Project to help fight what it perceives as burdensome AI and tech policy bills across multiple states. The announcement highlights the company’s focus on state-level legislation as the federal government appears unlikely to pass significant tech policy regulation in the near future.

The super PAC will be run by Brian Baker, a longtime Republican operative, and the Democratic consulting firm Hilltop Public Solutions, with Meta investing tens of millions of dollars into the project. Baker stated, “America’s innovation edge is at a crossroads. We need state legislators who will champion our tech future, not cede it to global adversaries. We’ll fight to keep the US ahead of the curve, driving growth and opportunity for all.”

In a statement to Breitbart News, Meta VP of Public Policy Brian Rice wrote:

Amid a growing patchwork of inconsistent regulations that threaten homegrown innovation and investments in AI, state lawmakers are uniquely positioned to ensure that America remains a global technology leader. This is why Meta is launching an effort to support the election of state candidates across the country who embrace AI development, champion the U.S. technology industry, and defend American tech leadership at home and abroad.

The American Technology Excellence Project will focus on three main pillars: promoting and defending U.S. technology companies and leadership, advocating for AI progress, and empowering parents to control how their children experience online apps and AI technologies. While Meta has not yet shared which states the PAC will immediately focus on or how many people it will employ, the company claims it is committed to supporting the election of state candidates who embrace AI development, champion the U.S. technology industry, and defend American tech leadership both domestically and internationally.

Keep reading