YouTube Tightens Stranglehold On Firearms Content — Blocks All Gun-related Sponsors

When Google and YouTube first announced that they would be demonetizing a host of channels back in 2017 (including firearms-related content), they said it was because advertisers were “complaining” about their ads being featured in videos that were contrary to their messaging. In other words, the excuse was that ads embedded on firearms channels might give their customers the “wrong impression” about those companies and their products, and Google didn’t want to anger its advertising partners.

It’s hard to say how accurate this claim was. The exposure of ESG and Big Tech collusion with government agencies to censor conservative platforms supports the idea that there was probably an organized corporate push to suppress the political opposition on YouTube as much as there was an effort to shut them down on social media.

The majority of conservative content creators understood that this was not about advertisers; it was about narratives. The exploding popularity of gun channels runs contrary to the media assertion that American society is moving increasingly to the left. And even though gun channels mostly focus on firearms and instruction, they also promote conservative and constitutional values, which represent a thorn in the side of the establishment.

Keep reading

Google Plans New Content-Scanning Censorship Tech

Earlier in the year, Google filed an application to patent new methods, systems, and media for what the giant calls “identifying videos containing objectionable content” that are uploaded to a social site or video service.

For example, YouTube – though the filing doesn’t explicitly name this platform.

The patent application, published this month, is somewhat different from the other automated “methods and systems” Google and other giants, notably Microsoft, already use to power their censorship apparatus; with this one, the focus is more on how AI can be added to the mix.

More and more countries are introducing censorship laws that make the speed at which content is removed or accounts are blocked a major requirement for social media companies. Google could have this in mind, given that the patent’s stated purpose is to detect objectionable content quickly, “for potential removal.”

No surprise here, but what should be the key question – namely, what is considered “objectionable content” – is answered less with a definition and more with a list that can be further expanded and variously interpreted. The list includes such items as violence, pornography, objectionable language, animal abuse, and then the cherry on top – “and/or any other type of objectionable content.”

The filing details how Google’s new system works, and, equally unsurprisingly, we learn that AI here means machine learning (ML) and neural networks. This technology is supposed to mimic the human brain but comes down to a series of equations, differentiated from ordinary algorithms by “learning” what an image (or, in this case, a video) is, pixel by pixel.
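To make that general approach concrete, here is a minimal sketch of how frame-level video classification of this kind is typically built: sample frames from a clip, score each frame with a small neural network, and flag the clip when the averaged score for any category crosses a threshold. This is a generic illustration only – the model, category labels, and threshold are hypothetical placeholders, not anything taken from Google’s filing.

```python
# Generic sketch of frame-level video classification for content flagging.
# Illustrative only: the model, labels, and threshold are hypothetical and
# are NOT taken from Google's patent filing.
import torch
import torch.nn as nn

LABELS = ["violence", "pornography", "objectionable_language", "animal_abuse"]

class FrameClassifier(nn.Module):
    """Tiny CNN that scores a single RGB frame against each label."""
    def __init__(self, num_labels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_labels)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (num_frames, 3, H, W) -> per-frame, per-label probabilities
        x = self.features(frames).flatten(1)
        return torch.sigmoid(self.head(x))

def flag_video(frames: torch.Tensor, model: nn.Module, threshold: float = 0.8) -> dict:
    """Average per-frame scores over the clip and report labels above threshold."""
    with torch.no_grad():
        scores = model(frames).mean(dim=0)  # mean over sampled frames
    return {label: float(s) for label, s in zip(LABELS, scores) if s >= threshold}

if __name__ == "__main__":
    model = FrameClassifier(num_labels=len(LABELS)).eval()
    clip = torch.rand(8, 3, 224, 224)  # 8 sampled frames of random demo data
    print(flag_video(clip, model))     # untrained model: most likely an empty dict
```

Real systems would use far larger pretrained models and video-aware architectures, but the sample-score-threshold loop is the core idea the filing’s machine-learning language points at.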

Keep reading

‘This Message Seems Dangerous’: Google Censors Emails, Private Groups

“It is intolerable to us that an erroneous thought should exist anywhere in the world, however secret and powerless it may be.” — O’Brien, Officer of the Inner Party (1984, by George Orwell, Berkley/Penguin p. 225)

“We’re letting you know that we’ve permanently removed [your] content…An external report flagged the content for illegal or policy violations. As a result, our legal content and policy standards team removed the content for the following reason: unwanted content.” — Google Groups email sent to me. June 27, 2024

On the morning of June 27, the day of the presidential debate between President Donald Trump and President Joe Biden, I noticed an announcement on a Substack post that Robert F. Kennedy Jr. was going to join the debate, even though CNN had excluded him based on technicalities. Powered by Elon Musk’s X, the Real Debate would be broadcast at the same time, with Kennedy giving his answers after Biden and Trump.

Despite CNN’s claim that he didn’t qualify for the presidential debate and the Democrat Party’s continual barriers to Kennedy’s name appearing on state ballots, he is running for president of the U.S. and has significant popular support.

To every normal American, it’s obvious there is a benefit to hearing from all viable candidates running for president, regardless of one’s political leanings. In that spirit, I sent out a few texts and a notice in a Google Group, with a link to the Real Debate website.

Some commentary on the debate went back and forth in the group. Thirty minutes after my first post, I received the following email from Google Groups stating that they had “permanently removed” my content because “an external report flagged the content for illegal content or policy violations.”

My post was removed “for the following reason: unwanted content,” and I was informed, “You may have the option to pursue your claims in court.”

Keep reading

Google News Ranks AI-Generated Spam as Top Results Above Actual Stories

Despite recent algorithm changes aimed at improving search quality, Google News continues to face challenges with AI-generated spam content appearing in its top results, frequently outranking original reporting.

Wired reports that as artificial intelligence becomes more commonplace across the internet, Google News finds itself grappling with a persistent issue: the proliferation of AI-generated spam content in its search results. This problem persists even after the search giant implemented significant algorithm updates earlier this year to combat unhelpful and unoriginal content.

The issue came to light when a journalist, while searching for information on Adobe’s AI policies, stumbled upon a plagiarized article at the top of Google News results. The article, published on a website called Syrus #Blog, was a near-exact copy of a piece originally published by Wired, with only minor alterations in phrasing. Further investigation revealed that this was not an isolated incident, as similar plagiarized content was found in multiple languages, affecting articles from various reputable news sources.

Elizabeth Tucker, a director of product management at Google, had announced in April that the company had completed a major rollout of changes to its search engine ranking system. These updates were intended to reduce low-quality, unoriginal content in search results by 45 percent. However, the persistence of AI-generated spam in Google News suggests that the problem is far from solved.

Lily Ray, senior director of search engine optimization at Amsive, describes this as a “rampant problem” on Google. She notes that many clients have reported instances of their articles being rewritten and republished using AI tools. This practice not only undermines the work of original content creators but also poses a significant challenge to Google’s efforts to provide high-quality, reliable information to its users.

The creators of Syrus #Blog, when contacted, claimed that their content creation process involves AI tools that “analyze and synthesize information from various sources while always respecting intellectual property.” They pointed to a single hyperlink at the bottom of their articles as sufficient attribution. However, SEO experts and content creators argue that this practice falls far short of ethical standards and violates Google’s spam policies.

Keep reading

Wife of Google Whistleblower Killed In Car Crash

Dr. Robert Epstein, on X @DrREpstein, suggests his wife’s 2019 fatal car crash was NOT an accident.

Epstein said that after he briefed a group of state AGs about Google’s power to rig elections, one of them approached him and said, “I think you’re going to die in an accident in a few months.”

His wife died a few months later in a car crash that Epstein claims might be connected to his testimony about Google’s search engine bias in US elections.

“They are picking our presidents, our senators, our members of congress, our attorneys general,” Epstein said on The Alex Jones Show in 2023.

Bottom line: If we don’t stop them in the 2024 election, Google alone will be able to shift between 6.4 and 25.5 million votes to one candidate.

Keep reading

Google Introduces App Store Censorship Rules, Bans AI Generating Various Types of “Restricted Content,” Including “Hate Speech”

Developers of apps for Android will have to adhere to a new set of rules if they wish to publish on the Google Play Store.

The “guidance” is seen by critics as yet another wave of sweeping censorship tied to AI, as Google continues to crack down on what it considers to be hate speech, profanity, bullying, harassment, and other content listed as “restricted.”

One of the types of content developers are now banned from generating concerns sensitive events – and Google’s description is another example of what is likely a deliberately vague definition, so it can be left open to arbitrary interpretation.

Namely, the ban covers content that would “capitalize on or [be] insensitive toward a sensitive event with significant social, cultural, or political impact.”

In its support pages, Google is telling developers that the intent behind the new policies is to make sure AI-generated content is “safe for all users.” And, the giant wants to make sure developers allow users to flag what they see as offensive, and incorporate that “feedback” for the sake of “responsible innovation.”

According to the rules, developers are instructed to utilize user reports “to inform content filtering and moderation in their apps.”
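As a rough illustration of what that feedback loop could look like inside an app, the sketch below records user flags and uses them to filter previously reported AI-generated output. It is a hypothetical example, not code from any Google SDK or policy document; the class and method names are placeholders.

```python
# Hypothetical sketch of a user-report feedback loop for AI-generated content.
# Not Google code: names and structure are placeholders for illustration.
from dataclasses import dataclass, field

@dataclass
class ReportLog:
    """Collects user flags and blocks output that users have already reported."""
    flagged: set = field(default_factory=set)

    def report(self, content_id: str) -> None:
        # Called when a user taps "flag as offensive" in the app.
        self.flagged.add(content_id)

    def allow(self, content_id: str) -> bool:
        # Moderation/filter step before re-serving AI-generated content.
        return content_id not in self.flagged

if __name__ == "__main__":
    log = ReportLog()
    log.report("gen-42")        # a user flags one AI response
    print(log.allow("gen-42"))  # False: filtered from now on
    print(log.allow("gen-43"))  # True: never reported
```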

Keep reading

Google accidentally published internal Search documentation to GitHub

Google apparently accidentally posted a big stash of internal technical documents to GitHub, partially detailing how the search engine ranks webpages. For most of us, the question of search rankings is just “are my web results good or bad,” but the SEO community is both thrilled to get a peek behind the curtain and up in arms since the docs apparently contradict some of what Google has told them in the past. Most of the commentary on the leak is from SEO experts Rand Fishkin and Mike King.

Google confirmed the authenticity of the documents to The Verge, saying, “We would caution against making inaccurate assumptions about Search based on out-of-context, outdated, or incomplete information. We’ve shared extensive information about how Search works and the types of factors that our systems weigh, while also working to protect the integrity of our results from manipulation.”

The fun thing about accidentally publishing to the GoogleAPI GitHub is that, while these are sensitive internal documents, Google technically released them under an Apache 2.0 license. That means anyone who stumbled across the documents was granted a “perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license” to them, so the documents are now freely available online.

Keep reading

News Publishers Try To Sic the Government on Google AI

Google’s pivot to artificial intelligence has news publishers freaking out—and running to the government.

“Agency intervention is necessary to stop the existential threat Google poses to original content creators,” the News/Media Alliance—a major news industry trade group—wrote in a letter to the Department of Justice (DOJ) and the Federal Trade Commission (FTC). It asked the agencies to use antitrust authority “to stop Google’s latest expansion of AI Overviews,” a search engine innovation that Google has been rolling out recently.

Google’s plain old top-of-page links to news outlets or other informational sites are disappearing in many searches. Now much of this prime search-results real estate is taken up by what Google is calling AI Overviews.

Overviews offer up short, AI-generated summaries paired with brief bits of text from linked websites. (If you haven’t used Google in a while, try it now and see for yourself.)

The results have been far from perfect (would you like some glue with that pizza?) and leave a lot of room for skepticism and interpretation. (This past weekend, Overviews fed me seemingly contradictory advice about baby fevers within a two-paragraph span.) But that’s also often true of what you would find from an old-school list of links and snippets. And Google has been inching away from link-prominent results for a while now, featuring brief bits of (non-AI-generated) content in response to many search queries and in the form of Q&A text throughout link pages. So the recent appearance of AI-generated text answers wasn’t even immediately noticeable to me.

But newspaper and magazine publishers sure are noticing.

Keep reading

Google Tightens Influence on UK Elections with New “Moderation” Tactics

Google has found itself yet another election to “support.”

After the company made announcements to this effect related to the EU (European Parliament) June ballot, voters in the UK can now also look forward to – or dread, as the case may be – the tech giant’s role in their upcoming general election.

A blog post by Google UK Director of Government Affairs and Public Policy Katie O’Donovan announced even more “moderation” and a flurry of other measures, most of which have become tried-and-tested instruments of Google’s censorship in recent years.

The measures are divided into three categories – pushing (“surfacing”) content and sources of information that Google deems authoritative and high quality (along with YouTube information panels), investing in what it calls Trust & Safety operations, and “equipping campaigns with the best-in-class security tools and training.”

Another common point is combating “misinformation” together with what the blog post refers to as “the wider ecosystem.” In practice, that means the Google News Initiative and PA Media, a private news agency, and their Election Check 24, which is supposed to safeguard the UK election from “mis- and dis-information.”

Searches related to voting are “rigged” to return results manipulated to boost what Google considers authoritative sources – notably, the UK government’s site.

As for AI, the promise is that users of Google platforms will receive “help navigating” that type of content.

This includes the obligation for advertisers to reveal that ads “include synthetic content that inauthentically depicts real or realistic-looking people or events” (this definition can easily be stretched to cover parody, memes, and similar).

“Disclosure” here, however, is still differentiated from Google’s outright ban on manipulated media that it decides “misleads people.” Such content is labeled, and banned if Google considers that it could pose “a serious risk of egregious harm.”

Keep reading

Google’s Woke AI Is Hilariously But Frighteningly Broken

Google’s hastily rolled out AI Overview feature is disastrously broken, returning results claiming that people should spread glue on pizza and eat rocks, and that it’s safe for pregnant women to smoke cigarettes.

The Verge reports that Google is scrambling to manually disable the AI Overview feature for certain searches after users found it giving out some truly bizarre advice and information that is just made-up nonsense.

Apparently cockroaches are so named because they live in penis holes.

Keep reading