‘This Message Seems Dangerous’: Google Censors Emails, Private Groups

“It is intolerable to us that an erroneous thought should exist anywhere in the world, however secret and powerless it may be.” — O’Brien, Officer of the Inner Party (1984, by George Orwell, Berkley/Penguin p. 225)

“We’re letting you know that we’ve permanently removed [your] content…An external report flagged the content for illegal or policy violations. As a result, our legal content and policy standards team removed the content for the following reason: unwanted content.” — Google Groups email sent to me. June 27, 2024

On the morning of June 27, the day of the presidential debate between President Donald Trump and President Joe Biden, I noticed an announcement on a Substack post that Robert F. Kennedy Jr. was going to join the debate, even though CNN had excluded him on technicalities. Powered by Elon Musk’s X, the Real Debate would be broadcast at the same time, with Kennedy giving his answers after Biden and Trump.

Despite CNN’s claim that he didn’t qualify for the presidential debate and the Democrat Party’s continual barriers to Kennedy’s name appearing on state ballots, he is running for president of the U.S. and has significant popular support.

To every normal American, it’s obvious there is a benefit to hearing from all viable candidates running for president, regardless of one’s political leanings. In that spirit, I sent out a few texts and a notice in a Google Group, with a link to the Real Debate website.

Some commentary on the debate went back and forth in the group. Thirty minutes after my first post, I received the following email from Google Groups stating that they had “permanently removed” my content because “an external report flagged the content for illegal content or policy violations.”

My post was removed “for the following reason: unwanted content,” and I was informed, “You may have the option to pursue your claims in court.”


Google News Ranks AI-Generated Spam as Top Results Above Actual Stories

Despite recent algorithm changes aimed at improving search quality, Google News continues to face challenges with AI-generated spam content appearing in its top results, frequently outranking original reporting.

Wired reports that while artificial intelligence becomes more commonplace across the internet, Google News finds itself grappling with a persistent issue: the proliferation of AI-generated spam content in its search results. This problem persists even after the search giant implemented significant algorithm updates earlier this year to combat unhelpful and unoriginal content.

The issue came to light when a journalist, while searching for information on Adobe’s AI policies, stumbled upon a plagiarized article at the top of Google News results. The article, published on a website called Syrus #Blog, was a near-exact copy of a piece originally published by Wired, with only minor alterations in phrasing. Further investigation revealed that this was not an isolated incident, as similar plagiarized content was found in multiple languages, affecting articles from various reputable news sources.

Elizabeth Tucker, a director of product management at Google, had announced in April that the company had completed a major rollout of changes to its search engine ranking system. These updates were intended to reduce low-quality, unoriginal content in search results by 45 percent. However, the persistence of AI-generated spam in Google News suggests that the problem is far from solved.

Lily Ray, senior director of search engine optimization at Amsive, describes this as a “rampant problem” on Google. She notes that many clients have reported instances of their articles being rewritten and republished using AI tools. This practice not only undermines the work of original content creators but also poses a significant challenge to Google’s efforts to provide high-quality, reliable information to its users.

The creators of Syrus #Blog, when contacted, claimed that their content creation process involves AI tools that “analyze and synthesize information from various sources while always respecting intellectual property.” They pointed to a single hyperlink at the bottom of their articles as sufficient attribution. However, SEO experts and content creators argue that this practice falls far short of ethical standards and violates Google’s spam policies.


Wife of Google Whistleblower Killed In Car Crash

Dr. Robert Epstein, on X @DrREpstein, suggests his wife’s 2019 fatal car crash was NOT an accident.

Epstein said that after he briefed a group of state AGs about Google’s power to rig elections, one of them approached him and said, “I think you’re going to die in an accident in a few months.”

His wife died a few months later in a car crash that Epstein believes may be connected to his testimony about Google’s search engine bias in US elections.

“They are picking our presidents, our senators, our members of congress, our attorneys general,” Epstein said on The Alex Jones Show in 2023.

Bottom line: If we don’t stop them in the 2024 election, Google alone will be able to shift between 6.4 and 25.5 million votes to one candidate.


Google Introduces App Store Censorship Rules, Bans AI Generating Various Types of “Restricted Content,” Including “Hate Speech”

Developers of apps for Android will have to adhere to a new set of rules if they wish to publish on the Google Play Store.

The “guidance” is seen by critics as yet another wave of sweeping censorship tied to AI, as Google continues to crack down on what it considers to be hate speech, profanity, bullying, harassment, and other content listed as “restricted.”

One of the types of content developers are now banned from generating refers to sensitive events – and Google’s description is another example of what is likely a deliberately vague definition, so it can be left open to arbitrary interpretation.

Namely, this is content that, in Google’s words, “capitalize[s] on or [is] insensitive toward a sensitive event with significant social, cultural, or political impact.”

In its support pages, Google tells developers that the intent behind the new policies is to make sure AI-generated content is “safe for all users.” The giant also wants developers to let users flag what they see as offensive, and to incorporate that “feedback” for the sake of “responsible innovation.”

According to the rules, developers are instructed to utilize user reports “to inform content filtering and moderation in their apps.”


Google accidentally published internal Search documentation to GitHub

Google apparently accidentally posted a big stash of internal technical documents to GitHub, partially detailing how the search engine ranks webpages. For most of us, the question of search rankings is just “are my web results good or bad,” but the SEO community is both thrilled to get a peek behind the curtain and up in arms since the docs apparently contradict some of what Google has told them in the past. Most of the commentary on the leak is from SEO experts Rand Fishkin and Mike King.

Google confirmed the authenticity of the documents to The Verge, saying, “We would caution against making inaccurate assumptions about Search based on out-of-context, outdated, or incomplete information. We’ve shared extensive information about how Search works and the types of factors that our systems weigh, while also working to protect the integrity of our results from manipulation.”

The fun thing about accidentally publishing to the GoogleAPI GitHub is that, while these are sensitive internal documents, Google technically released them under an Apache 2.0 license. That means anyone who stumbled across the documents was granted a “perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license” to them, so they are now freely available online.


News Publishers Try To Sic the Government on Google AI

Google’s pivot to artificial intelligence has news publishers freaking out—and running to the government.

“Agency intervention is necessary to stop the existential threat Google poses to original content creators,” the News/Media Alliance—a major news industry trade group—wrote in a letter to the Department of Justice (DOJ) and the Federal Trade Commission (FTC). It asked the agencies to use antitrust authority “to stop Google’s latest expansion of AI Overviews,” a search engine innovation that Google has been rolling out recently.

Google’s plain old top-of-page links to news outlets or other informational sites are disappearing in many searches. Now much of this prime search-results real estate is taken up by what Google is calling AI Overviews.

Overviews offer up short, AI-generated summaries paired with brief bits of text from linked websites. (If you haven’t used Google in a while, try it now and see for yourself.)

The results have been far from perfect (would you like some glue with that pizza?) and leave a lot of room for skepticism and interpretation. (This past weekend, Overviews fed me seemingly contradictory advice about baby fevers within a two-paragraph span.) But that’s also often true of what you would find from an old-school list of links and snippets. And Google has been inching away from link-prominent results for a while now, featuring brief bits of (non-AI-generated) content in response to many search queries and in the form of Q&A text throughout link pages. So the recent appearance of AI-generated text answers wasn’t even immediately noticeable to me.

But newspaper and magazine publishers sure are noticing.


Google Tightens Influence on UK Elections with New “Moderation” Tactics

Google has found itself yet another election to “support.”

After the company made announcements to this effect related to the EU (European Parliament) June ballot, voters in the UK can now also look forward to – or dread, as the case may be – the tech giant’s role in their upcoming general election.

A blog post by Google UK Director of Government Affairs and Public Policy Katie O’Donovan announced even more “moderation” and a flurry of other measures, most of which have become tried-and-tested instruments of Google’s censorship over the past few years.

They fall into three categories: pushing (“surfacing”) content and sources of information that Google deems authoritative and high quality, along with YouTube information panels; investing in what it calls Trust & Safety operations; and “equipping campaigns with the best-in-class security tools and training.”

Another common point is combating “misinformation” – together with what the blog post refers to as “the wider ecosystem.” That concerns Google News Initiative and PA Media, a private news agency, and their Election Check 24, which is supposed to safeguard the UK election from “mis- and dis-information.”

Searches related to voting are “rigged” to return results manipulated to boost what Google considers authoritative sources – notably, the UK government’s site.

As for AI, the promise is that users of Google platforms will receive “help navigating” that type of content.

This includes the obligation for advertisers to reveal that ads “include synthetic content that inauthentically depicts real or realistic-looking people or events” (this definition can easily be stretched to cover parody, memes, and similar).

“Disclosure” here, however, is still distinct from Google’s outright ban on manipulated media that it decides “misleads people.” Such content is labeled, and banned if deemed capable of posing “a serious risk of egregious harm.”


Google’s Woke AI Is Hilariously But Frighteningly Broken

Google’s hastily rolled out AI Overview feature is disastrously broken, returning search results claiming that people should spread glue on pizza, eat rocks, and that it’s safe for pregnant women to smoke cigarettes.

The Verge reports that Google is scrambling to manually disable the AI Overview feature for certain searches after users found it giving out some truly bizarre advice, along with information that is simply made-up nonsense.

Apparently cockroaches are so named because they live in penis holes.


Google’s AI-First Ambitions Sideline Publishers, Boost Its Ability To Filter and Control Information

The internet’s most frequented page is on the verge of a transformation unlike any in its 25-year history.

Last week, at Google I/O 2024, as Liz Reid, Google’s head of Search, gushed on stage about their AI-powered future, one couldn’t help but feel a pang of irony. “Google will do the Googling for you,” she proclaimed, envisioning a future where Google’s AI sifts through the web’s content and spits out neatly packaged summaries, removing the need to visit any websites.

How convenient – for Google, that is.

An ideologically driven monopoly further inserting itself between people and content, filtering out what it thinks you should be allowed to see (and what you shouldn’t) at a level never seen before. What could possibly go wrong?

At the event, the tech behemoth unveiled its latest shiny toys – an AI agent named Astra, a potentially reincarnated Google Glass, and something called Gems. Amidst the fanfare, though, there was a glaring omission: any mention of the voices who populate the web with the very work that makes Google’s empire possible.

But the origins of Google’s powerful monopoly and control over much of the internet’s content came a couple of decades ago when publishers and website creators made a deal with a devil whose motto was, at the time, “Don’t be evil.”


Google has funded EcoHealth’s virus research for at least 14 years

The 2018 paper, titled ‘Serologic and behavioural risk survey of workers with wildlife contact in China’, reported on a study conducted in Guangdong Province, China, which aimed to identify risky populations, occupations and behaviours that contribute to the transmission of zoonotic pathogens with pandemic potential.

It was authored by researchers from Yale University, Metabiota, EcoHealth Alliance, the Guangdong Provincial Centre for Disease Control and Prevention and the University of Washington Centre for One Health Research – one of them being Peter Daszak.

But, as Natural News wrote, check out the conflict of interest statement: “Metabiota Inc. is a commercial company that received funding from Google/Skoll.”

The Skoll Foundation was created in 1999 by Jeffrey Skoll, who made his fortune as eBay’s first full-time hire and president.

It turns out that Google.org, the charity arm of Google, has been funding studies carried out by EcoHealth Alliance researchers, including Peter Daszak, for at least 14 years. A 2010 study on bat flaviviruses lists both Daszak and EcoHealth vice president Jonathan Epstein as authors – and like the 2018 study mentioned above, this 2010 study thanks Google for funding it.

Yet another paper on henipavirus spillover that was published in 2014 shows the same authors and funding from Google, demonstrating a lengthy relationship between these entities.

Natalie Winters – who first wrote about Google funding research conducted by Daszak’s EcoHealth Alliance in 2021 – posted a Twitter thread about it earlier this month to remind us. Her thread also lists another paper published in 2015, tying Google to Daszak and EcoHealth.
