Google Sued For Allegedly Using Gemini AI Tool To Track Users’ Private Communications

Google LLC is accused in a civil lawsuit of using its artificial intelligence program Gemini to collect data on users’ private communications in Gmail as well as in Google’s instant messaging and videoconferencing programs.

Until around Oct. 10, the Gemini AI assistant required users to deliberately opt in. After that date, Google allegedly turned the feature on “secretly” and by default across all users’ Gmail, Chat, and Meet accounts, enabling the AI to track users’ private data on those platforms “without the users’ knowledge or consent,” according to the complaint filed Nov. 11 in federal court in San Jose.

The class action lawsuit was filed in the U.S. District Court for the Northern District of California, alleging that Google is violating the California Invasion of Privacy Act, a 1967 law that prohibits surreptitious wiretapping and recording of confidential communications without the consent of all parties involved.

Although Google provides a way to turn the feature off, users must dig through the privacy settings to deactivate something they never agreed to in the first place, the complaint said.

The AI feature is grouped under “Google Workspace smart features” in Google settings. Turning it on constitutes consent to the program using “Workspace content and activity” across Workspace and in other Google products.

When the feature is turned on, Gemini can “scan, read, and analyze every email (and email attachment), message, and conversation on those services,” according to the complaint.

Technology writer Ruben Circelli wrote in a PCMag article that Gemini was “downright creepy” in how deeply it dove into his personal history, analyzing 16 years’ worth of emails after he signed up for Google’s AI Pro plan.

In Circelli’s tests, Gemini named one of his character flaws and even knew who his first crush was in elementary school.

“This invasion of privacy wasn’t just disconcerting, though; it was unexpected,” Circelli wrote.

“Google didn’t explain what this integration would do before I signed up for its AI Pro plan, nor did it give me a way to opt out at the start.”

The Epoch Times reached out to Google for comment, but did not receive an immediate response.

“We do not use your Workspace data to train or improve the underlying generative AI and large language models that power Gemini, Search, and other systems outside of Workspace without permission,” the company has stated.


Google Softens Planned Android Sideloading Ban but Keeps Developer ID Verification

Google is slightly relaxing its controversial new Android policy on sideloading, but the shift does little to change its overall direction.

The company confirmed that it will still move ahead with mandatory developer identity verification for nearly all apps while introducing a limited “advanced flow” that lets “experienced users” continue installing software from outside the Play Store.

According to Google, the new system will feature multiple security warnings meant to deter casual users from downloading unverified apps.

“It will include clear warnings to ensure users fully understand the risks involved, but ultimately, it puts the choice in their hands,” the company said.

The process is still being developed, with a feedback period underway before the design is finalized in the coming months.

The adjustment follows backlash from developers and Android fans who criticized Google’s original plan to block apps created by unverified developers starting next year.

The community argued that the move would effectively close off Android’s long-standing openness by removing the ability to install software freely.

Despite the new language, Google’s latest policy maintains the same structure.

Developer ID verification will still be required for nearly all app distribution.

Only students and hobbyists will be allowed to share apps with a limited number of devices without providing identification, and businesses deploying internal software will remain exempt.

For everyone else, verification and a $25 registration fee will be mandatory, including for apps distributed outside Google Play. Previously, there was no charge for independent distribution.

The rollout schedule remains the same. Developers who distribute apps outside the Play Store began receiving early-access invitations on November 3, while Play Store developers will get theirs starting November 25.

The early-access period runs through March 2026, after which the verification program will open to all developers. The rules take effect in Brazil, Indonesia, Singapore, and Thailand in September 2026, and globally in 2027.

Google maintains that the new requirements are about security, not control.


Google Adds Age Check Tech as Texas, Utah, and Louisiana Enforce Digital ID Laws

Google is preparing for a new era of digital age checks as state-level rules in Texas, Utah, and Louisiana begin to reshape how app stores operate.

To get ahead of these requirements, the company has introduced the Play Signals API in beta, a system built to help developers adapt to laws that will soon mandate age-based controls.

Starting in early 2026, each of the three states will enforce its own version of the App Store Accountability Act.

Texas’s law takes effect first, followed by Utah and Louisiana a few months later. Each statute requires app marketplaces to confirm the age range of their users through “commercially reasonable” verification methods.

Developers will be responsible for interpreting those signals and tailoring their apps accordingly. In some regions, they will also have to inform Google Play if a product update could require new parental consent.

For testing purposes, the company is providing a FakeAgeSignalsManager so that developers can simulate data before the laws officially apply.
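The article names only the Play Signals API beta and the FakeAgeSignalsManager helper; the Kotlin sketch below invents everything else. The AgeRange values, the AgeSignalsManager interface, and requestAgeRange() are illustrative assumptions about how an app might consume an age-range signal and tailor its features, not Google’s documented API.

```kotlin
// Hypothetical sketch only: the source confirms a "Play Signals API" beta and a
// FakeAgeSignalsManager test helper; every other name here is an assumption.

enum class AgeRange { UNDER_13, TEEN_13_17, ADULT_18_PLUS, UNKNOWN }

// Assumed shape of the store-provided signal an app would consume.
interface AgeSignalsManager {
    fun requestAgeRange(): AgeRange
}

// Stand-in for the FakeAgeSignalsManager the article says Google provides so
// developers can simulate age data before the state laws take effect.
class FakeAgeSignalsManager(private val simulated: AgeRange) : AgeSignalsManager {
    override fun requestAgeRange(): AgeRange = simulated
}

// One way an app might "interpret those signals" and tailor its behavior.
fun gateFeatures(range: AgeRange) = when (range) {
    AgeRange.UNDER_13      -> println("Trigger parental-consent flow; restrict features")
    AgeRange.TEEN_13_17    -> println("Apply teen-safety defaults")
    AgeRange.ADULT_18_PLUS -> println("Enable the full experience")
    AgeRange.UNKNOWN       -> println("Fall back to the most restrictive defaults")
}

fun main() {
    // Simulate an under-13 user, as a developer might during the beta.
    val manager: AgeSignalsManager = FakeAgeSignalsManager(AgeRange.UNDER_13)
    gateFeatures(manager.requestAgeRange())
}
```

Defaulting to the most restrictive behavior when no signal is available is one plausible posture for the “commercially reasonable” compliance the statutes demand.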

Google’s rollout of its new Play Signals API is part of a broader shift toward a verified internet, one where digital access is increasingly tied to proof of identity.

The company’s beta API is being framed as a neutral compliance tool, but its function sets the stage for a more monitored web.

While the stated purpose is child safety and regulatory compliance, the architecture being built threatens to erode one of the internet’s core principles: pseudonymity.

The data points that determine whether someone is over 13 or over 18 can easily evolve into a persistent set of identifiers, linking activity across apps, accounts, and even devices. Once these signals are standardized, nothing prevents them from being combined with advertising, analytics, or behavioral tracking systems.

The result could be a world where age verification quietly becomes identity verification, and where “commercially reasonable” checks amount to permanent user profiling.
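To make the linkage concern concrete, here is a deliberately toy Kotlin sketch with wholly invented data types; it models no real Google system, only the mechanics of joining records on a shared identifier.

```kotlin
// Invented illustration of the linkage risk described above; none of these
// types correspond to a real Google system or API.

data class AgeSignal(val userId: String, val over18: Boolean) // hypothetical
data class AdEvent(val userId: String, val campaign: String)  // hypothetical
data class AppEvent(val userId: String, val app: String)      // hypothetical

fun main() {
    val ageSignals = listOf(AgeSignal("u-42", over18 = true))
    val adEvents = listOf(AdEvent("u-42", "campaign-x"))
    val appEvents = listOf(AppEvent("u-42", "news-reader"))

    // One shared key is all it takes to stitch an "age check" into a
    // cross-app behavioral profile.
    val profiles = ageSignals.associateBy { it.userId }.mapValues { (id, signal) ->
        Triple(
            signal.over18,
            adEvents.filter { it.userId == id }.map { it.campaign },
            appEvents.filter { it.userId == id }.map { it.app },
        )
    }
    println(profiles) // {u-42=(true, [campaign-x], [news-reader])}
}
```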


Senate Grills Meta and Google Over Biden Administration’s Role in COVID-Era Content Censorship

A Senate hearing this week examined government influence on online speech, as senior executives from Meta and Google faced questions about the Biden administration’s communications with their companies during the COVID-19 pandemic.

The session, titled “Part II of Shut Your App: How Uncle Sam Jawboned Big Tech Into Silencing Americans,” highlighted the growing concern in Washington over what lawmakers describe as government-driven pressure to suppress lawful expression.

Senator Ted Cruz (R-TX), who led the hearing, began by declaring that “the right to speak out is the foundation of a free society” and warning that “censorship around the world is growing.”

He accused the Biden administration of pushing technology companies to restrict Americans’ speech during the pandemic, and he faulted both the companies and Democrats for failing to resist that pressure.

“Today, we pick off where the story left off,” Cruz said, pointing to Meta and Google as examples of firms that “were pressured by the Biden administration to censor the American people.”

He pledged to introduce the Jawbone Act, which he said would “provide a robust right to redress when Americans are targeted by their own government.”

Markham Erickson, Google’s Vice President of Government Affairs and Public Policy, defended the company’s approach, emphasizing that its moderation decisions are guided by long-standing internal policies, not by government direction.

“While we are a company dedicated to the goal of making the world’s information universally accessible, that doesn’t mean that we don’t have certain rules,” Erickson said, citing restrictions on “terrorist content, child sexual abuse material, hate speech, and other harmful content.”

He acknowledged that officials in the Biden administration had contacted Google during the pandemic to urge the removal of certain COVID-19 content from YouTube.

But Erickson maintained that the company “develop[ed] and enforce[d] our policies independently” and “rejected suggestions that did not align with those policies.”


Secret details of Israel’s mammoth deal with Google and Amazon revealed – media

Israel has forced US tech giants Google and Amazon to violate their own legal obligations under a 2021 cloud services contract with West Jerusalem, according to a joint investigation by several news media outlets, including The Guardian.

The Jewish state’s contracts with US tech platforms have been under close scrutiny following widespread accusations, including from the UN, that its military response to the October 7, 2023 Hamas attack that killed over 1,200 people constitutes a genocide.

Known as Project Nimbus, the $1.2 billion deal reportedly bars the firms from restricting the Israeli government’s access to the cloud services they provide, even when that access violates their terms of use, according to the reports carried by The Guardian along with +972 Magazine and Local Call.

The deal also reportedly requires the two companies to secretly notify West Jerusalem using a so-called “winking mechanism” should any foreign state or court seek access to Israeli data stored in the cloud.


DUDE BREAKING: Robby Starbuck is suing Google over INSANE attacks against him by Google’s evil AI

Google’s AI has been targeting Robby Starbuck with completely fake attacks, which Starbuck says the AI worked overtime to make believable with links to fake articles and fake official records.

He’s warned them to cease and desist and now he’s suing them.

He explains it all below:

HUGE NEWS: I’m suing @Google today.

What you’re about to see is insane.

Since 2023, @GoogleAI (Bard, Gemini & Gemma) has been defaming me with fake criminal allegations including sexual assault, child rape, abuse, fraud, stalking, drug charges, and even saying I was in Epstein’s flight logs.

All 100% fake. All generated by Google’s AI. I have ZERO criminal record or allegations.

So why did Google do it? Google’s AI says that I was targeted because of my political views.

Even worse — Google execs KNEW for 2 YEARS that this was happening because I told them and my lawyers sent cease and desist letters multiple times.

This morning, my team @dhillonlaw filed my lawsuit against Google and now I’m going public with all the receipts — because this can’t ever happen to anyone else.

Google’s AI didn’t just lie — it built fake worlds to make its lies look real:

• Fake victims
• Fake therapy records
• Fake court records
• Fake police records
• Fake relationships
• Fake “news” stories

It even fabricated statements denouncing me from President Trump, @elonmusk and @JDVance over sexual assaults that Google completely invented.

One of the most dystopian things I’ve ever seen is how dedicated their AI was to doubling down on the lies. Google’s AI routinely cited fake sources by creating fake links to REAL media outlets and shows, complete with fake headlines so readers would trust the information. It would continue to do this even if you called the AI out for lying or sending fake links. In short, it was creating fake legacy media reports as a way to launder trust with users so they would believe elaborate lies that it told.


I Searched For Stories Of Trans Contagion. Google Lectured Me With False Propaganda

There is a plethora of stories of children being pressured into adopting transgender personalities, but don’t bother using Google to search for them. I tried, and the search engine spat out unsolicited propaganda and falsehoods about children and transgender identity.

I searched for testimonies from young people who adopted transgender identities “because it was cool” — like the story of Ash Eskridge, a young woman who became convinced she was a boy after watching social media influencers promote transgenderism.

But Google’s AI tool brushes off the idea that transgender identities are often a social contagion, despite evidence to the contrary. A 2018 study from Brown University — which was quickly repressed after outrage from transgender ideologues — found that 87 percent of the children in question “became gender dysphoric after friends did, after increasing their time online, or both,” as my colleague Joy Pullmann reported at the time. The study included examples of friend groups all adopting transgender identities together, and of kids being rewarded for transgender identities with social incentives like popularity and praise. Another study, by researchers at Virginia Tech in 2023, found girls who adopted transgender identities were “more likely to be accepted by peers.”

The search engine would have you believe that children are not just capable of arbitrarily choosing their sex but of doing so before they graduate from diapers. “Gender identity typically solidifies between ages 3 and 4,” the AI overview claims. The hard scientific proof that your toddler’s biology is actually backwards, it says, can be a “girl playing with trucks or a boy preferring dolls.”

One of the sources collated by the AI summary is a Mayo Clinic article that instructs parents to reinforce and show “admiration” for children who show an interest in stereotypically cross-sex behaviors. You can even “help ease a child’s depression and anxiety” by socially transitioning them, say the Mayo Clinic staff. All of this so-called medical advice points to reinforcing a child’s gender confusion, never challenging it.


Digital Clues Suggest Prior Knowledge in Charlie Kirk Shooting, James Li Reports

“When was the first time you heard the name Tyler Robinson or the Losi Center?” Li asks, referencing the alleged shooter and the rooftop location at Utah Valley University where Kirk was killed. “For me—it was in the days and weeks after Charlie Kirk’s tragic assassination on September 10th. But what if I told you somebody, maybe multiple somebodies, were searching those terms before September 10th?”

According to Li, Google Trends data showed a blip for “Tyler James Robinson” searches originating from Washington, D.C. at 11 a.m. the day before the shooting. “Why would anybody be searching the name Tyler James Robinson one day before he allegedly assassinated Charlie Kirk?” Li asks.

The significance, Li notes, is heightened by the fact that the name was unknown to the public until the media first reported the alleged shooter’s identity two days after the killing.

“The fact that the searches are localized to D.C.—a city synonymous with the federal government, intelligence, national media, and political organizations—strongly shifts the interpretation away from random noise or perpetrator checking his own name into a strong potential signal of pre-event institutional knowledge or activity,” he explains.

Li pushes deeper, reporting that similar Trends anomalies surfaced regarding the private jet with tail number “9888KG,” which took off from Provo, Utah, shortly after the assassination. Again, trend spikes occurred before major news coverage. “Once again, September 9th, one day before the shooting, there is a blip. Why? We don’t know,” Li says. “Of course they’re telling us there’s nothing to see here, don’t look any further.”

Li also highlights searches for “Losi Center,” the key venue in Utah, which spiked in the D.C. metro area on September 8th—two days prior to the shooting.

“Somebody in DC was doing a search a couple of days beforehand of the Losi Center,” he notes, suggesting an unusual interest in the geographic details of the crime scene.

Li adds a note of caution: “Let’s be skeptical for a second because we should always be skeptical. And I’m throwing myself into that category. You should be skeptical of what I’m saying. Do your own research.”

Turning to Baron Coleman, the podcast host credited for the research, Li verifies his credentials as a veteran Alabama radio personality before lauding his meticulous data work. “At the very least, he didn’t just pop out of nowhere. And I want to give props to his great research,” Li says.

Further oddities surface in Trends data for vintage rifle searches like “Mauser 98,” the weapon type connected to the case. “Mauser 98, a rifle that the feds tell us is powerful, vintage, and hard to trace,” Li notes, referencing Google Trends hits in D.C. two weeks prior to the shooting.

Li concludes that while coincidences sometimes happen, “once is happenstance, twice is coincidence, three times is enemy action. Well, this is now four different coincidences.” Acknowledging his own attempts to replicate the data, he adds, “After his video was released and it kind of went viral, the trends results had been since scrubbed.”


Google Dismisses Privacy Fears Over App ID Policy

Google has responded to growing concern over its decision to require identity verification for all Android app developers by September 2026.

The company plans to block sideloaded apps from developers who decline to verify their identity, a move that many view as a threat to both developer privacy and the health of independent app distribution outside the Play Store.

The policy will require developers to submit government-issued identification to Google.

During a video explanation, a Google employee brushed aside questions about anonymity, saying “it’s not clear when anonymity is absolutely required,” and tried to ease fears about data collection by stating “it’s not like Google’s gonna share that information with the public or anything like that.”

The company’s position was outlined across multiple formats, including a blog post, a support page, and a lengthy video featuring Google employees discussing the shift.

However, many concerns, particularly around privacy, decentralization, and the viability of third-party app stores, were left either unaddressed or downplayed.

Google’s messaging implies that the changes are aimed at improving app security.

The company claims identity checks will help weed out malicious actors, especially those distributing malware.

But there is growing unease about the level of control this policy gives Google over the Android ecosystem.

Under the proposed system, Google will have the unilateral authority to determine who is allowed to distribute apps and could effectively block any developer it suspects of wrongdoing, potentially even without a clear or transparent justification.

While the video suggests the focus is currently limited to those linked with malware, nothing in the policy guarantees that future enforcement will not extend further.

Developers flagged in error could lose their ability to share apps entirely, with no clear recourse.


Google’s ex-CEO Eric Schmidt shares dire warning of homicidal AI models

Talk about a killer app.

Artificial intelligence models are vulnerable to hackers and could even be trained to off humans if they fall into the wrong hands, ex-Google CEO Eric Schmidt warned.

The dire warning came Wednesday at a London conference in response to a question about whether AI could become more dangerous than nuclear weapons.

“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So, in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Schmidt said at the Sifted Summit tech conference, according to CNBC.

“All of the major companies make it impossible for those models to answer that question,” he continued, apparently referring to the possibility of a user asking an AI how to kill someone.

“Good decision. Everyone does this. They do it well, and they do it for the right reasons,” Schmidt added. “There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”

The predictions might not be so far-fetched.

In 2023, a jailbroken version of OpenAI’s ChatGPT called DAN – an acronym for “Do Anything Now” – surfaced online, CNBC noted; the prompt-based jailbreak coaxed the chatbot into ignoring its built-in guardrails.
