China, Russia use ‘asymmetric advantage,’ unleash sex warfare to seduce US tech execs, steal secrets: report

China and Russia have deployed attractive women to the United States to seduce unwitting Silicon Valley tech executives as part of a “sex warfare” operation aimed at stealing American technology secrets, according to a report.

Industry insiders told The Times of London that they have been approached by would-be honeypots — some of whom have even managed to ensnare their targets by marrying them and having children.

Chinese and Russian agents are also using social media, startup competitions and venture capital investments to infiltrate the heart of America’s tech industry, the report said.

“I’m getting an enormous number of very sophisticated LinkedIn requests from the same type of attractive young Chinese woman,” James Mulvenon, chief intelligence officer at risk-assessment firm Pamir Consulting, told The Times.

“It really seems to have ramped up recently.”

A former US counterintelligence official who now works for Silicon Valley startups told The Times that he recently investigated one case of a “beautiful” Russian woman who worked at a US-based aerospace company, where she met an American colleague whom she eventually married.

According to the former counterintelligence official, the woman in question attended a modelling academy when she was in her twenties. Afterward, she was enrolled in a “Russian soft-power school” before she fell off the radar for a decade — only to re-emerge in the US as an expert in cryptocurrency.

Researchers expose large-scale YouTube malware distribution network

Check Point researchers have uncovered, mapped and helped set back a stealthy, large-scale malware distribution operation on YouTube they dubbed the “YouTube Ghost Network.”

The network published more than 3,000 videos across compromised or fake channels, luring viewers with game cheats, cracked software, or pirated tools, but instead delivering malware or phishing pages. 

The YouTube Ghost Network

The YouTube Ghost Network is strikingly similar to the Stargazers Ghost Network, a previously uncovered network of fake or hijacked GitHub accounts that served as a malware and phishing link Distribution-as-a-Service.

In the Stargazers Ghost Network, different accounts filled different roles. Some accounts directed targets to malicious downloads, others served malware, and others still starred, forked, and subscribed to malicious repositories, in an obvious attempt to make the other accounts appear legitimate to potential victims.

Similarly, the YouTube Ghost Network consists of video accounts, post accounts, and interact accounts.

Video accounts, which are either hijacked or created by the malware peddlers, upload videos that promise something appealing, e.g., a free/cracked version of Adobe Photoshop, or game hacks for popular games like Roblox. The descriptions contain download links or direct viewers to password-protected archives on services like Dropbox, Google Drive or MediaFire, and they often tell users to temporarily disable Windows Defender before installing the downloaded cracked software.

Post accounts publish community posts with the same links and passwords, and interact accounts flood comment sections with fake endorsements, creating a false sense of trust.
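The lure pattern described above — file-host download links, archive passwords, and instructions to disable Windows Defender — is distinctive enough to flag with a simple heuristic. The sketch below is purely illustrative; the keyword list and scoring are hypothetical and are not Check Point's actual detection logic.

```python
import re

# Hypothetical indicators drawn from the reporting: file-host links,
# archive passwords, and instructions to disable antivirus protection.
FILE_HOSTS = re.compile(r"(dropbox\.com|drive\.google\.com|mediafire\.com)", re.I)
DISABLE_AV = re.compile(r"disable\s+(windows\s+defender|antivirus)", re.I)
PASSWORD = re.compile(r"password\s*[:=]", re.I)

def lure_score(description: str) -> int:
    """Count how many classic lure indicators a video description contains."""
    return sum(1 for pattern in (FILE_HOSTS, DISABLE_AV, PASSWORD)
               if pattern.search(description))

desc = ("Free Photoshop 2025 crack! Download from mediafire.com, "
        "archive password: 1234. Disable Windows Defender before installing.")
print(lure_score(desc))  # all three indicators present
```

A real pipeline would weight such signals alongside channel age, upload cadence, and comment patterns rather than rely on keywords alone.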

Judge Orders Tech CEOs to Testify in Case Using Algorithmic Design Rules as a New Avenue for Indirect Online Censorship Pressure

Three of the tech industry’s most recognizable leaders, Mark Zuckerberg of Meta, Evan Spiegel of Snap, and Adam Mosseri of Instagram, will be required to testify in court early next year.

The order came from Los Angeles Superior Court Judge Carolyn Kuhl, who ruled that their participation is essential in a lawsuit alleging that social media platforms were deliberately designed to harm young users’ mental health.

Attorneys for the companies had tried to prevent the CEOs from appearing, arguing that earlier depositions and testimony from other executives had already provided sufficient information.

Judge Kuhl disagreed, stating, “The testimony of a CEO is uniquely relevant, as that officer’s knowledge of harms, and failure to take available steps to avoid such harms could establish negligence or ratification of negligent conduct.”

She also noted that their testimony would be “unique,” since the claims center on design features built to “be addictive” and “drive compulsive” use among minors.

Meta argued that compelling both Zuckerberg and Mosseri to testify would disrupt their ability to manage the business and “set a precedent” for future cases. Snap’s lawyers said the decision to call Spiegel to the stand was an “abuse of discretion.”

Judge Kuhl rejected both arguments, saying that those in charge must directly answer questions about their companies’ conduct instead of delegating that responsibility.

After the ruling, Meta declined to comment.

‘Massive legal siege’ against social media companies looms

Thousands of plaintiffs’ complaints, millions of pages of internal documents and transcripts of countless hours of depositions are about to land in U.S. courtrooms, threatening the future of the biggest social media companies.

The blizzard of paperwork is a byproduct of two consolidated lawsuits accusing Snap Inc.’s Snapchat; Meta Platforms Inc.’s Facebook and Instagram; ByteDance Ltd.’s TikTok; and Alphabet Inc.’s YouTube of knowingly designing their platforms to addict users — allegedly resulting in youth depression, anxiety, insomnia, eating disorders, self-harm and even suicide.

The litigation, brewing for more than three years, has had to overcome numerous hurdles, including the liability shield that has protected social media platforms from facing user-harm lawsuits. The social media companies have filed multiple motions to dismiss the cases on the grounds that Section 230 of the Communications Decency Act prevents them from being held accountable for content posted on their sites.

Those motions have been largely unsuccessful, and courtrooms across the country are poised to open their doors for the first time to the alleged victims of social media. The vast majority of cases have been folded into two multijurisdictional proceedings, one in state and the other in federal court, to streamline the pretrial discovery process.

The first bellwether trial is scheduled to begin in Los Angeles Superior Court in late January. It involves a 19-year-old woman from Chico, California, who says she’s been addicted to social media for more than a decade and that her nonstop use of the platforms has caused anxiety, depression and body dysmorphia. Two other trials will follow soon after, with thousands more waiting in the wings. If successful, these cases could result in multibillion-dollar settlements — akin to tobacco and opioid litigation — and change the way minors interact with social media.

“This is going to be one of the most impactful litigations of our lifetime,” said Joseph VanZandt, an attorney at Beasley Allen Law Firm in Montgomery, Alabama, and co-lead plaintiffs’ attorney for the coordinated state cases. “This is about large corporations targeting vulnerable populations — children — for profit. That’s what we saw with the tobacco companies; they were also targeting adolescents and trying to get them addicted while they were young.”

Matthew Bergman, founder of the Social Media Victims Law Center in Seattle, makes a similar comparison to tobacco litigation in the Bloomberg documentary Can’t Look Away: The Case Against Social Media. “In the case of Facebook, you have internal documents saying ‘tweens are herd animals,’ ‘kids have an addict’s narrative’ and ‘our products make girls feel worse about themselves.’ You have the same kind of corporate misconduct,” Bergman says in the film, which will be available to view on Bloomberg’s platforms on October 30.

Bergman’s firm was the first to file user-harm cases against social media companies, in 2022, after Frances Haugen, a former Meta product manager-turned-whistleblower, released a trove of internal documents showing the company knew social media was negatively impacting youth mental health. The first case, which is part of the consolidated federal litigation, alleged that an 11-year-old Connecticut girl killed herself after suffering from extreme social media addiction and sexual exploitation by online predators.

What set that case apart was how it got around Section 230’s immunity blanket. Bergman argued that his case wasn’t about third-party content, which the federal law protects. Instead, he said it hinged on the way social media companies were intentionally designing their products to prioritize engagement and profit over safety.

Big Tech is paying millions to train teachers on AI, in a push to bring chatbots into classrooms

On a scorching hot Saturday in San Antonio, dozens of teachers traded a day off for a glimpse of the future. The topic of the day’s workshop: enhancing instruction with artificial intelligence.

After marveling as AI graded classwork instantly and turned lesson plans into podcasts or online storybooks, one high school English teacher raised a concern that was on the minds of many: “Are we going to be replaced with AI?”

That remains to be seen. But to help the nation’s 4 million teachers stay relevant and guide students in using the technology wisely, teachers unions have forged an unlikely partnership with the world’s largest technology companies. The two groups don’t always see eye to eye but say they share a common goal: training the future workforce of America.

Microsoft, OpenAI and Anthropic are providing millions of dollars for AI training to the American Federation of Teachers, the country’s second-largest teachers union. In exchange, the tech companies have an opportunity to make inroads into schools and win over students in the race for AI dominance.

AFT President Randi Weingarten said skepticism guided her negotiations, but the tech industry has something schools lack: deep pockets.

“There is no one else who is helping us with this. That’s why we felt we needed to work with the largest corporations in the world,” Weingarten said. “We went to them — they didn’t come to us.”

Weingarten first met with Microsoft President Brad Smith in 2023 to discuss a partnership. She later reached out to OpenAI, pursuing an “agnostic” approach under which any company’s AI tools could be used in a training session.

Under the arrangement announced in July, Microsoft is contributing $12.5 million to AFT over five years. OpenAI is providing $8 million in funding and $2 million in technical resources, and Anthropic has offered $500,000.

Google Dismisses Privacy Fears Over App ID Policy

Google has responded to growing concern over its decision to require identity verification for all Android app developers by September 2026.

The company plans to block sideloaded apps from developers who decline to verify their identity, a move that many view as a threat to both developer privacy and the health of independent app distribution outside the Play Store.

The policy will require developers to submit government-issued identification to Google.

In an explanatory video, a Google employee brushed aside questions about anonymity, saying “it’s not clear when anonymity is absolutely required,” and tried to ease fears about data collection by stating “it’s not like Google’s gonna share that information with the public or anything like that.”

The company’s position was outlined across multiple formats, including a blog post, a support page, and a lengthy video featuring Google employees discussing the shift.

However, many concerns, particularly around privacy, decentralization, and the viability of third-party app stores, were left either unaddressed or downplayed.

Google’s messaging implies that the changes are aimed at improving app security.

The company claims identity checks will help weed out malicious actors, especially those distributing malware.

But there is growing unease about the level of control this policy gives Google over the Android ecosystem.

Under the proposed system, Google will have the unilateral authority to determine who is allowed to distribute apps and could effectively block any developer it suspects of wrongdoing, potentially even without a clear or transparent justification.

While the video suggests the focus is currently limited to those linked with malware, nothing in the policy guarantees that future enforcement will not extend further.

Developers flagged in error could lose their ability to share apps entirely, with no clear recourse.

Witnesses Testify on CISA, GEC, and Tech Firms’ Coordinated Effort to Silence Dissent Online

A contentious Senate Commerce Committee hearing on Tuesday laid bare deep divisions over the role of the federal government in influencing what Americans are allowed to say online.

While Republican lawmakers and witnesses presented extensive evidence of federal agencies pressuring tech platforms to silence public dissent, Democrats largely sidestepped those concerns and instead zeroed in on a controversy involving late-night host Jimmy Kimmel, whom ABC owner Disney temporarily took off the air over comments related to the assassination of political activist and commentator Charlie Kirk.

Senator Eric Schmitt (R-MO) opened the hearing with a warning about what he described as a “vast censorship enterprise” operating under the Biden administration.

He called for the passage of two bills aimed at curbing such activity: the COLLUDE Act, which would revoke Section 230 protections from tech firms that censor speech at the request of government officials or affiliated organizations, and the Censorship Accountability Act, which would allow citizens to sue federal employees who violate First Amendment rights by coordinating with private companies to suppress lawful expression.

“Congress must act to dismantle this unconstitutional alliance between Big Tech and Big Government that has deprived Americans of their most fundamental right,” Schmitt stated.

The hearing, titled “Shut Your App: How Uncle Sam Jawboned Big Tech Into Silencing Americans,” featured testimony from several individuals who said they had been targeted as a result of government-backed efforts to influence speech online.

Sean Davis, CEO of The Federalist, testified that his outlet was hit by a coordinated censorship campaign involving both US agencies and foreign-linked organizations.

Tech Firms Unite in Open Letter Against EU Chat Scanning Law

With the vote approaching, the European Commission’s plan to scan private digital messages is moving toward final approval.

The regulation, called Chat Control 2.0, has gone through a year of resistance, warnings from experts, and objections from technology companies.

It is presented as a child safety measure, designed to inspect messages, photos, and videos across the EU before they are sent.

The privacy implications are immense.

Alice Weidel, co-leader of Germany’s AfD party, described the proposal as “an absolutely totalitarian project” and “a comprehensive general attack on citizens’ central rights and freedoms.”

She said the measure would install scanning software on personal devices, intercepting content before it reaches its recipient. The system would remove the protection offered by end-to-end encryption and treat every user as a potential suspect.

Weidel said the use of child safety language was “a cheap pretext” for real-time surveillance.

“Even the Stasi could only dream of such total surveillance,” she said, comparing the plan to intercepting and photographing every private letter for review by a government authority.

She warned that once the system exists, its function can expand to include other categories such as “politically offensive content” and “so-called hate speech.” The structure of the law allows the criteria to be adjusted through political decisions.

Technology companies have joined in opposition. Hundreds of privacy-oriented firms, including encrypted messengers, cloud storage services, and VPN providers, signed a joint letter urging EU ministers to reject the regulation.

Their message called for the protection of encryption and for an end to mandatory message scanning.

Signal has announced that it will leave the EU if forced to comply. The platform has stated that it cannot operate under a framework requiring message inspection.

The regulation creates an obligation to weaken the systems that enable private communication and turns encryption into a technical formality rather than a guarantee of privacy.

Supporters of the proposal say it will catch child abusers. Critics point out that criminal networks conduct their operations in offline settings or hidden spaces beyond the reach of such scanning.

“Criminals are already using offline or so-called dark rooms for their illegal businesses,” Weidel said.

California’s Vague ‘Hate Speech’ Bill Would Force Big Tech To Censor Mainstream Conservative Views

California lawmakers are once again leading the charge — not toward progress, but toward repression. Their latest move, Senate Bill 771 (SB-771), is being packaged as a bold stand against “hate” on social media. In reality, it’s a direct assault on the free expression and constitutionally protected speech of ministries, minority groups, and faith-based organizations.

The bill would force Big Tech to remove content that could be interpreted as “harassment” or “intimidation” based on race, gender identity, sexual orientation, and more — or face financially devastating lawsuits.

If Gov. Gavin Newsom signs this bill into law as expected, it will become one of the most dangerous speech-restricting laws in the country. Cloaked in the language of civil rights, SB-771 is built to punish dissent from progressive orthodoxy.

The target is anyone who dares to speak publicly about values or perspectives that conflict with the state’s ever-expanding list of protected identities. In practice, this means community groups sharing discussions on traditional family structures, cultural views on gender roles, or advocacy for certain social issues may find themselves silenced — not by law enforcement, but by tech giants eager to avoid legal risk.

The bill says:

California law prohibits all persons and entities, including corporations, from engaging in, aiding, abetting, or conspiring to commit acts of violence, intimidation, or coercion based on race, religion, gender, sexual orientation, immigration status, or other protected characteristics.

 3273.73. (a) A social media platform that violates Section 51.7, 51.9, 52, or 52.1 through its algorithms that relay content to users or aids, abets, acts in concert, or conspires in a violation of any of those sections, or is a joint tortfeasor in a violation of any of those sections, shall, in addition to any other remedy, be liable to a prevailing plaintiff for a civil penalty for each violation sufficient to deter future violations but not to exceed the following:

(1) For an intentional, knowing, or willful violation, a civil penalty of up to one million dollars.

(2) For a reckless violation, a civil penalty of up to five hundred thousand dollars.


This language may appear just, but its sweeping terms — “intimidation,” “coercion,” even “aiding” — are dangerously vague. In the hands of ideologically motivated actors, they can be weaponized to silence constitutionally protected discourse under the guise of enforcing civil rights.

That’s the chilling brilliance of SB-771: it outsources censorship to the private sector under threat of state-enforced financial ruin. The law doesn’t need to directly ban speech — it just makes the cost of hosting it too high for Big Tech to tolerate. This will especially impact small ministries, minority-led organizations, and faith-based nonprofits with limited legal or technical resources. For them, one flagged post — perhaps a cultural reference taken out of context — could mean being shadow-banned or deplatformed altogether.

X Urges EU to Reject “Chat Control 2.0” Surveillance Law Threatening End-to-End Encryption

X is urging European governments to reject a major surveillance proposal that the company warns would strip EU citizens of core privacy rights.

In a public statement ahead of a key Council vote scheduled for October 14, the platform called on member states to “vigorously oppose measures to normalize surveillance of its citizens,” condemning the proposed regulation as a direct threat to end-to-end encryption and private communication.

The draft legislation, widely referred to as “Chat Control 2.0,” would require providers of messaging and cloud services to scan users’ content, including messages, photos, and links, for signs of child sexual abuse material (CSAM).

Central to the proposal is “client-side scanning” (CSS), a method that inspects content directly on a user’s device before it is encrypted.
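The crux is ordering: under client-side scanning, content is inspected on the device before encryption is applied, so end-to-end encryption no longer shields it from the scanner. The minimal sketch below illustrates that ordering only; the hash-based blocklist and reporting channel are hypothetical stand-ins (real proposals involve perceptual hashing and machine-learning classifiers, which is part of the accuracy concern raised later).

```python
import hashlib

# Hypothetical blocklist of known-bad content hashes (illustration only).
BLOCKLIST = {hashlib.sha256(b"known-bad-sample").hexdigest()}
reports = []  # stand-in for the provider/authority reporting channel

def send_message(plaintext: bytes, encrypt) -> bytes:
    """Client-side scanning: the check runs on-device, BEFORE encryption."""
    if hashlib.sha256(plaintext).hexdigest() in BLOCKLIST:
        reports.append(plaintext)  # content escapes the end-to-end channel
    return encrypt(plaintext)      # encryption only happens after the scan

# Toy 'encryption' stand-in: even a perfectly encrypted channel cannot
# protect a message that was already scanned and reported in plaintext.
ciphertext = send_message(b"known-bad-sample", lambda p: p[::-1])
print(len(reports))  # the message was reported despite being encrypted
```

This is why critics argue the scheme neutralizes end-to-end encryption rather than coexisting with it: the confidentiality guarantee is undone at the endpoint, before the cryptography ever runs.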

X stated plainly that it cannot support any policy that would force the creation of “de facto backdoors for government snooping,” even as it reaffirmed its longstanding commitment to fighting child exploitation.

The company has invested heavily in detection and removal systems, but draws a clear line at measures that dismantle secure encryption for everyone.

Privacy experts, researchers, and technologists across Europe have echoed these warnings.

By mandating that scans occur before encryption is applied, the regulation would effectively neutralize end-to-end encryption, opening private conversations to potential access not only by providers but also by governments and malicious third parties.

The implications reach far beyond targeted investigations. Once CSS is implemented, any digital platform subject to the regulation would be forced to scrutinize every message and file sent by its users.

This approach could also override legal protections enshrined in the EU Charter of Fundamental Rights, specifically Articles 7 and 8, which safeguard privacy and the protection of personal data.

A coalition of scientists issued a public letter warning that detection tools of this kind are technically flawed and unreliable at scale.

High error rates could lead to false accusations against innocent users, while actual abuse material could evade detection.
