Senate Grills Meta and Google Over Biden Administration’s Role in COVID-Era Content Censorship

A Senate hearing this week examined government influence on online speech, as senior executives from Meta and Google faced questions about the Biden administration’s communications with their companies during the COVID-19 pandemic.

The session, titled “Part II of Shut Your App: How Uncle Sam Jawboned Big Tech Into Silencing Americans,” highlighted the growing concern in Washington over what lawmakers describe as government-driven pressure to suppress lawful expression.

Senator Ted Cruz (R-TX), who led the hearing, began by declaring that “the right to speak out is the foundation of a free society” and warning that “censorship around the world is growing.”

He accused the Biden administration of pushing technology companies to restrict Americans’ speech during the pandemic, and he faulted both the companies and Democrats for failing to resist that pressure.

“Today, we pick up where the story left off,” Cruz said, pointing to Meta and Google as examples of firms that “were pressured by the Biden administration to censor the American people.”

He pledged to introduce the Jawbone Act, which he said would “provide a robust right to redress when Americans are targeted by their own government.”

Markham Erickson, Google’s Vice President of Government Affairs and Public Policy, defended the company’s approach, emphasizing that its moderation decisions are guided by long-standing internal policies, not by government direction.

“While we are a company dedicated to the goal of making the world’s information universally accessible, that doesn’t mean that we don’t have certain rules,” Erickson said, citing restrictions on “terrorist content, child sexual abuse material, hate speech, and other harmful content.”

He acknowledged that officials in the Biden administration had contacted Google during the pandemic to urge the removal of certain COVID-19 content from YouTube.

But Erickson maintained that the company “develop[ed] and enforce[d] our policies independently” and “rejected suggestions that did not align with those policies.”


Secret details of Israel’s mammoth deal with Google and Amazon revealed – media

Israel’s 2021 cloud services contract with West Jerusalem has forced US tech giants Google and Amazon to violate their own legal obligations, according to a joint investigation by several news media outlets, including The Guardian.

The Jewish state’s contracts with US tech platforms have been under close scrutiny following widespread accusations, including from the UN, that its military response to the October 7, 2023 Hamas attack that killed over 1,200 people constitutes a genocide.

Known as Project Nimbus, the $1.2 billion deal bars the firms from restricting the Israeli government’s access to the cloud services they provide, even when that use violates their terms of use, according to the reports, carried by The Guardian along with +972 Magazine and Local Call.

The deal also reportedly requires the two companies to secretly notify West Jerusalem using a so-called “winking mechanism” should any foreign state or court seek access to Israeli data stored in the cloud.


How NDAs keep AI data center details hidden from Americans

On a March afternoon in Mason County, Kentucky, Dr. Timothy Grosser and his son Andy sat across the table from three men who came with an offer: $10 million for the 250-acre farm where they’d lived and worked for nearly four decades.

That’s 35 times what Grosser bought his land for in 1988 and significantly more than what others in the area had sold their land for recently. But there was a catch — it wasn’t clear who was funding the offer. One of the men said he represented a “Fortune 100 company” that wanted the property for an industrial development, but he refused to say what kind, which company or even his own name.

Instead, he pulled out a non-disclosure agreement.

Grosser said the contract would prevent him from discussing the project’s details with any third parties in exchange for limited information about its purpose, timeline and size. It didn’t disclose the company’s name, which could be discussed only after the company publicly announced its participation in the project.

“We refused to sign it,” Grosser said. “I’m not selling my farm for any amount of money.”


Australia’s eSafety Chief Pressures Big Tech and AI Firms on Verification, Age Checks

Australia’s top online regulator, eSafety Commissioner Julie Inman Grant, is intensifying her push to reshape speech in the digital world.

Her office has formally warned major social platforms and several AI chatbot companies that they could soon be forced to comply with far-reaching new age verification and “online safety” requirements that many see as expanding government control over online communication.

The warnings are part of the government’s effort to enforce the Online Safety Amendment (Social Media Minimum Age) Act 2024, which bars Australians under 16 from holding social media accounts.

Letters sent to Meta, TikTok, Snapchat, X, and YouTube make it clear that each company is expected to fall under the scope of the new law.

The Commissioner’s preliminary assessment is that these services exist mainly for “online social interaction,” which brings them within the definition of social media platforms and subjects them to strict age verification and child protection obligations.

Not all of the companies accept that classification. Snapchat claims to be primarily a messaging platform similar to WhatsApp, while YouTube has opposed losing its original exemption.

At this stage, only services with a clear focus on messaging or education, such as WhatsApp, Messenger, YouTube Kids, and Google Classroom, remain excluded from the Commissioner’s oversight.


China, Russia use ‘asymmetric advantage,’ unleash sex warfare to seduce US tech execs, steal secrets: report

China and Russia have deployed attractive women to the United States to seduce unwitting Silicon Valley tech executives as part of a “sex warfare” operation aimed at stealing American technology secrets, according to a report.

Industry insiders told The Times of London that they have been approached by would-be honeypots — some of whom have even managed to ensnare their targets by marrying them and having children.

Chinese and Russian agents are also using social media, startup competitions and venture capital investments to infiltrate the heart of America’s tech industry, the report said.

“I’m getting an enormous number of very sophisticated LinkedIn requests from the same type of attractive young Chinese woman,” James Mulvenon, chief intelligence officer at risk-assessment firm Pamir Consulting, told The Times.

“It really seems to have ramped up recently.”

A former US counterintelligence official who now works for Silicon Valley startups told The Times that he recently investigated one case of a “beautiful” Russian woman who worked at a US-based aerospace company, where she met an American colleague whom she eventually married.

According to the former counterintelligence official, the woman in question attended a modelling academy when she was in her twenties. Afterward, she was enrolled in a “Russian soft-power school” before she fell off the radar for a decade — only to re-emerge in the US as an expert in cryptocurrency.


Researchers expose large-scale YouTube malware distribution network

Check Point researchers have uncovered, mapped, and helped disrupt a stealthy, large-scale malware distribution operation on YouTube that they dubbed the “YouTube Ghost Network.”

The network published more than 3,000 videos across compromised or fake channels, luring viewers with game cheats, cracked software, or pirated tools, but instead delivering malware or phishing pages. 

The YouTube Ghost Network

The YouTube Ghost Network is strikingly similar to the Stargazers Ghost Network, a previously uncovered network of fake or hijacked GitHub accounts that served as a Distribution-as-a-Service for malware and phishing links.

In the Stargazers Ghost Network, different accounts filled different roles. Some accounts directed targets to malicious downloads, others served malware, and others still starred, forked, and subscribed to malicious repositories, in an obvious attempt to make the other accounts appear legitimate to potential victims.

Similarly, the YouTube Ghost Network consists of video accounts, post accounts, and interact accounts.

Video accounts, which are either hijacked or created by the malware peddlers, upload videos that promise something appealing, e.g., a free or cracked version of Adobe Photoshop, or hacks for popular games like Roblox. The descriptions contain download links or direct viewers to password-protected archives on services like Dropbox, Google Drive, or MediaFire, and they often tell users to temporarily disable Windows Defender before installing the downloaded cracked software.

Post accounts publish community posts with the same links and passwords, and interact accounts flood comment sections with fake endorsements, creating a false sense of trust.
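
The lure pattern described here is concrete enough to sketch in code. The following Python snippet is a minimal, hypothetical heuristic, not Check Point’s actual detection logic: it flags a video description only when it combines the three traits reported above, a link to a consumer file host, a shared archive password, and an instruction to disable Windows Defender.

```python
import re

# Hypothetical heuristic, not Check Point's detection logic: flag a video
# description that combines the three lure traits described in the research.
FILE_HOST = re.compile(
    r"https?://\S*(?:dropbox\.com|drive\.google\.com|mediafire\.com)", re.I
)
PASSWORD_HINT = re.compile(r"\bpassword\b\s*[:=]?\s*\S+", re.I)
DISABLE_AV = re.compile(r"(?:disable|turn\s+off)\s+(?:windows\s+)?defender", re.I)

def looks_like_ghost_lure(description: str) -> bool:
    """Return True only when all three lure signals appear together."""
    return all(p.search(description) for p in (FILE_HOST, PASSWORD_HINT, DISABLE_AV))

# Example text built from the reported traits, not a real listing.
sample = (
    "FREE Photoshop 2025 full version! Download: "
    "https://www.mediafire.com/file/abc123 Password: 1234 "
    "(disable Windows Defender before installing)"
)
print(looks_like_ghost_lure(sample))  # True
```

Any one of these signals is common in legitimate descriptions; it is the co-occurrence of all three that mirrors the behavior attributed to the network’s video accounts.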


Judge Orders Tech CEOs to Testify in Case Using Algorithmic Design Rules as a New Avenue for Indirect Online Censorship Pressure

Three of the tech industry’s most recognizable leaders, Mark Zuckerberg of Meta, Evan Spiegel of Snap, and Adam Mosseri of Instagram, will be required to testify in court early next year.

The order came from Los Angeles Superior Court Judge Carolyn Kuhl, who ruled that their participation is essential in a lawsuit alleging that social media platforms were deliberately designed to harm young users’ mental health.

Attorneys for the companies had tried to prevent the CEOs from appearing, arguing that earlier depositions and testimony from other executives had already provided sufficient information.

Judge Kuhl disagreed, stating, “The testimony of a CEO is uniquely relevant, as that officer’s knowledge of harms, and failure to take available steps to avoid such harms could establish negligence or ratification of negligent conduct.”

She also noted that their testimony would be “unique,” since the claims center on design features built to “be addictive” and “drive compulsive” use among minors.

Meta argued that compelling both Zuckerberg and Mosseri to testify would disrupt their ability to manage the business and “set a precedent” for future cases. Snap’s lawyers said the decision to call Spiegel to the stand was an “abuse of discretion.”

Judge Kuhl rejected both arguments, saying that those in charge must directly answer questions about their companies’ conduct instead of delegating that responsibility.

After the ruling, Meta declined to comment.


‘Massive legal siege’ against social media companies looms

Thousands of plaintiffs’ complaints, millions of pages of internal documents and transcripts of countless hours of depositions are about to land in U.S. courtrooms, threatening the future of the biggest social media companies.

The blizzard of paperwork is a byproduct of two consolidated lawsuits accusing Snap Inc.’s Snapchat; Meta Platforms Inc.’s Facebook and Instagram; ByteDance Ltd.’s TikTok; and Alphabet Inc.’s YouTube of knowingly designing their platforms to addict users — allegedly resulting in youth depression, anxiety, insomnia, eating disorders, self-harm and even suicide.

The litigation, brewing for more than three years, has had to overcome numerous hurdles, including the liability shield that has protected social media platforms from facing user-harm lawsuits. The social media companies have filed multiple motions to dismiss the cases on the grounds that Section 230 of the Communications Decency Act prevents them from being held accountable for content posted on their sites.

Those motions have been largely unsuccessful, and courtrooms across the country are poised to open their doors for the first time to the alleged victims of social media. The vast majority of cases have been folded into two multijurisdictional proceedings, one in state and the other in federal court, to streamline the pretrial discovery process.

The first bellwether trial is scheduled to begin in Los Angeles Superior Court in late January. It involves a 19-year-old woman from Chico, California, who says she’s been addicted to social media for more than a decade and that her nonstop use of the platforms has caused anxiety, depression and body dysmorphia. Two other trials will follow soon after, with thousands more waiting in the wings. If successful, these cases could result in multibillion-dollar settlements — akin to tobacco and opioid litigation — and change the way minors interact with social media.

“This is going to be one of the most impactful litigations of our lifetime,” said Joseph VanZandt, an attorney at Beasley Allen Law Firm in Montgomery, Alabama, and co-lead plaintiffs’ attorney for the coordinated state cases. “This is about large corporations targeting vulnerable populations — children — for profit. That’s what we saw with the tobacco companies; they were also targeting adolescents and trying to get them addicted while they were young.”

Matthew Bergman, founder of the Social Media Victims Law Center in Seattle, makes a similar comparison to tobacco litigation in the Bloomberg documentary Can’t Look Away: The Case Against Social Media. “In the case of Facebook, you have internal documents saying ‘tweens are herd animals,’ ‘kids have an addict’s narrative’ and ‘our products make girls feel worse about themselves.’ You have the same kind of corporate misconduct,” Bergman says in the film, which will be available to view on Bloomberg’s platforms on October 30.

Bergman’s firm was the first to file user-harm cases against social media companies, in 2022, after Frances Haugen, a former Meta product manager-turned-whistleblower, released a trove of internal documents showing the company knew social media was negatively impacting youth mental health. The first case, which is part of the consolidated federal litigation, alleged that an 11-year-old Connecticut girl killed herself after suffering from extreme social media addiction and sexual exploitation by online predators.

What set that case apart was how it got around Section 230’s immunity blanket. Bergman argued that his case wasn’t about third-party content, which the federal law protects. Instead, he said it hinged on the way social media companies were intentionally designing their products to prioritize engagement and profit over safety.


Big Tech is paying millions to train teachers on AI, in a push to bring chatbots into classrooms

On a scorching hot Saturday in San Antonio, dozens of teachers traded a day off for a glimpse of the future. The topic of the day’s workshop: enhancing instruction with artificial intelligence.

After marveling as AI graded classwork instantly and turned lesson plans into podcasts or online storybooks, one high school English teacher raised a concern that was on the minds of many: “Are we going to be replaced with AI?”

That remains to be seen. But for the nation’s 4 million teachers to stay relevant and help students use the technology wisely, teachers unions have forged an unlikely partnership with the world’s largest technology companies. The two groups don’t always see eye to eye but say they share a common goal: training the future workforce of America.

Microsoft, OpenAI and Anthropic are providing millions of dollars for AI training to the American Federation of Teachers, the country’s second-largest teachers union. In exchange, the tech companies have an opportunity to make inroads into schools and win over students in the race for AI dominance.

AFT President Randi Weingarten said skepticism guided her negotiations, but the tech industry has something schools lack: deep pockets.

“There is no one else who is helping us with this. That’s why we felt we needed to work with the largest corporations in the world,” Weingarten said. “We went to them — they didn’t come to us.”

Weingarten first met with Microsoft President Brad Smith in 2023 to discuss a partnership. She later reached out to OpenAI to pursue an “agnostic” approach, meaning any company’s AI tools could be used in a training session.

Under the arrangement announced in July, Microsoft is contributing $12.5 million to AFT over five years. OpenAI is providing $8 million in funding and $2 million in technical resources, and Anthropic has offered $500,000.


Google Dismisses Privacy Fears Over App ID Policy

Google has responded to growing concern over its decision to require identity verification for all Android app developers by September 2026.

The company plans to block sideloaded apps from developers who decline to verify their identity, a move that many view as a threat to both developer privacy and the health of independent app distribution outside the Play Store.

The policy will require developers to submit government-issued identification to Google.

In a video explaining the policy, a Google employee brushed aside questions about anonymity, saying “it’s not clear when anonymity is absolutely required,” and tried to ease fears about data collection by stating “it’s not like Google’s gonna share that information with the public or anything like that.”

The company’s position was outlined across multiple formats, including a blog post, a support page, and a lengthy video featuring Google employees discussing the shift.

However, many concerns, particularly around privacy, decentralization, and the viability of third-party app stores, were left either unaddressed or downplayed.

Google’s messaging implies that the changes are aimed at improving app security.

The company claims identity checks will help weed out malicious actors, especially those distributing malware.

But there is growing unease about the level of control this policy gives Google over the Android ecosystem.

Under the proposed system, Google will have the unilateral authority to determine who is allowed to distribute apps and could effectively block any developer it suspects of wrongdoing, potentially even without a clear or transparent justification.

While the video suggests the focus is currently limited to those linked with malware, nothing in the policy guarantees that future enforcement will not extend further.

Developers flagged in error could lose their ability to share apps entirely, with no clear recourse.
