I Searched For Stories Of Trans Contagion. Google Lectured Me With False Propaganda

There are a plethora of stories of children being pressured into adopting transgender personalities, but don’t bother using Google to search for them. I tried, and the search engine spat out unsolicited propaganda and falsehoods about children and transgender identity.

I searched for testimonies from young people who adopted transgender identities “because it was cool” — like the story of Ash Eskridge, a young woman who became convinced she was a boy after watching social media influencers promote transgenderism.

But Google’s AI tool brushes off the idea that transgender identities are often a social contagion, despite evidence to the contrary. A 2018 study from Brown University — which was quickly suppressed after outrage from transgender ideologues — found that 87 percent of the children in question “became gender dysphoric after friends did, after increasing their time online, or both,” as my colleague Joy Pullmann reported at the time. The study included examples of friend groups all adopting transgender identities together, and of kids being rewarded for transgender identities with social incentives like popularity and praise. Another study, by researchers at Virginia Tech in 2023, found girls who adopted transgender identities were “more likely to be accepted by peers.”

The search engine would have you believe that children are not just capable of arbitrarily choosing their sex but of doing so before they graduate from diapers. “Gender identity typically solidifies between ages 3 and 4,” the AI overview claims. The hard scientific proof that your toddler’s biology is actually backwards, it says, can be a “girl playing with trucks or a boy preferring dolls.”

One of the sources collated by the AI summary is a Mayo Clinic article that instructs parents to reinforce and show “admiration” for children who show an interest in stereotypically cross-sex behaviors. You can even “help ease a child’s depression and anxiety” by socially transitioning them, say the Mayo Clinic staff. All of this so-called medical advice points to reinforcing a child’s gender confusion, never challenging it.

Keep reading

Google Expands Age Verification to Search

Google is extending its AI-driven age estimation system beyond YouTube and into its flagship search engine, raising renewed concerns over user surveillance and the growing reliance on opaque algorithmic profiling.

The technology, which purports to predict a user’s age by analyzing massive amounts of behavioral data such as search queries and watch history, has already triggered significant backlash.

Users are now encountering age verification prompts within Google Search.

One individual described to Reclaim The Net being asked to verify their age while watching a video on YouTube, only to face a similar prompt during later Google searches.

This suggests that once a user is tagged for age checking, the requirement may be enforced across their entire Google profile.

While sightings of the feature on Search are still limited, users have noticed the demands for more data across the European Union. The full extent of the deployment is still unclear.

Keep reading

Putin Criminalizes Online Searches for ‘Extremist’ Content

Russian President Vladimir Putin on Thursday signed a law that criminalizes searches for “extremist” content on the internet, with fines of up to 5,000 rubles ($64) for each violation.

The Moscow Times noted that even some of Russia’s more enthusiastic censors are uneasy about the new law because they might get busted for seeking out extreme content so they can censor it:

Yekaterina Mizulina, head of the Kremlin-aligned Safe Internet League and a prominent advocate of online censorship, voiced unease over the bill earlier this month. She warned that it could obstruct the League’s work, roughly 30% of which involves identifying extremist content and forwarding it to authorities.

Mizulina claimed that the legislation could even put police officers at legal risk for viewing content as part of their duties.

In a similar vein, the head of Russian state propaganda network RT, Margarita Simonyan, lamented that her apparatchiks would be hindered in their quest to “investigate and bring to shame” critics of the Kremlin and the Ukraine war if they were “forbidden to even read them.”

The vote in the Russian parliament to pass the bill was more divided than usual, with opposition from factions that usually give Putin what he wants, including the Communist Party. An aide to a liberal Russian politician who protested the bill by comparing it to the Big Brother dystopia of George Orwell’s 1984 was immediately arrested.

Internet freedom advocates, meanwhile, raised the objection that “extremism” can be difficult to define and Putin is likely to stretch the term to include all criticism of his government.

Human Rights Watch pointed out that Putin has previously designated anti-corruption groups, LGBT organizations, independent media outlets, human rights groups, and political opponents as “extremists.” It is a safe bet that Russian courts will find most criticism of the Ukraine war to be “extremist” in character.

The new law empowers Putin’s enforcers to go after people who search for “extreme” content, not just those who create it. The chilling effect on dissent will be formidable in a nation where dissent was already half-frozen to death.

Putin’s digital minister, Maksut Shadayev, was predictably evasive when asked how the regime would define “extremist” content, or tell the difference between users who intentionally seek it out compared to those who stumble across it by accident. Shadayev said it would be up to prosecutors to demonstrate “intent.”

The Committee to Protect Journalists (CPJ) said Putin’s new law was the “most serious step in censorship and the fight against dissent” since the 2022 bill that established 15-year prison sentences for disseminating “fake news” about Russia’s military activities.

“This vaguely worded, fast-tracked bill shows a clear disregard for open debate and creates an even more repressive environment for the media and the public,” said CPJ Europe and Central Asia Senior Researcher Anna Brakha.

Keep reading

Australia Orders Search Engines to Enforce Digital ID Age Checks

Australia has moved to tighten control over the digital environment with the introduction of three new online safety codes, measures that raise pressing privacy and censorship concerns.

These codes, formalized on June 27 under the Online Safety Act, go beyond introducing digital ID checks for adult websites; they also place substantial obligations on tech companies, from search engines and internet service providers (ISPs) to hosting platforms.

Businesses that fail to comply face the threat of significant financial penalties, with fines reaching as high as 49.5 million Australian dollars, or about $32.5 million US.

The codes seek to restrict Australian users’ exposure to material classified under two categories: Class 1C and Class 2.

Class 1C encompasses “online pornography – material that describes or depicts specific fetish practices or fantasies.”

Class 2 covers a broader range of content, from “online pornography – other sexually explicit material that depicts actual (not simulated) sex between consenting adults” (Class 2A), to “online pornography – material which includes realistically simulated sexual activity between adults. Material which includes high-impact nudity” or “other high-impact material which includes high-impact sex, nudity, violence, drug use, language and themes. ‘Themes’ includes social Issues such as crime, suicide, drug and alcohol dependency, death, serious illness, family breakdown, and racism” (Class 2B).

Under Schedule 1 – Hosting Services Online Safety Code, companies that provide hosting services within Australia, including social media platforms and web hosts, are compelled to implement six compliance measures.

A core requirement obliges these services to manage the risks posed by significant changes to their platforms that could make Class 1C or Class 2 material more accessible to Australian children.

Schedule 2 – Internet Carriage Services Online Safety Code targets ISPs. It mandates the provision of filtering tools and safety guidance to users and empowers the eSafety Commissioner to order the blocking of material deemed to promote or depict abhorrent violent conduct.

The Commissioner has previously exercised similar powers, as in the directive to block footage of a stabbing circulated on X.

Schedule 3 – Internet Search Engine Services Online Safety Code directs search engine providers to roll out age verification for account creation within six months.

These platforms are also instructed to develop systems capable of detecting and filtering out online pornography and violent material by default, where technically feasible and practicable.

Keep reading

Study: AI Search Engines Cite Incorrect Sources at a 60% Rate

A new study from Columbia Journalism Review’s Tow Center for Digital Journalism has uncovered serious accuracy issues with generative AI models used for news searches. According to the study, AI search engines have a startling error rate of 60 percent when queried about the news.

Ars Technica reports that the research tested eight AI-driven search tools equipped with live search functionality and discovered that the AI models incorrectly answered more than 60 percent of queries about news sources. This is particularly concerning given that roughly 1 in 4 Americans now use AI models as alternatives to traditional search engines, according to the report by researchers Klaudia Jaźwińska and Aisvarya Chandrasekar.

Error rates varied significantly among the platforms tested. Perplexity provided incorrect information in 37 percent of queries, while ChatGPT Search was wrong 67 percent of the time. Elon Musk’s Grok 3 had the highest error rate at 94 percent. For the study, researchers fed direct excerpts from real news articles to the AI models and asked each one to identify the headline, original publisher, publication date, and URL. In total, 1,600 queries were run across the eight generative search tools.

The study found that rather than declining to respond when they lacked reliable information, the AI models often provided “confabulations” — plausible-sounding but incorrect or speculative answers. This behavior was seen across all models tested. Surprisingly, paid premium versions like Perplexity Pro ($20/month) and Grok 3 premium ($40/month) confidently delivered incorrect responses even more frequently than the free versions, though they did answer more total prompts correctly.

Evidence also emerged suggesting some AI tools ignored publishers’ Robot Exclusion Protocol settings meant to prevent unauthorized access. For example, Perplexity’s free version correctly identified all 10 excerpts from paywalled National Geographic content, despite the publisher explicitly blocking Perplexity’s web crawlers.
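For context, the Robots Exclusion Protocol works through a plain-text robots.txt file served at a site’s root, which crawlers are expected to fetch and obey before indexing. A minimal, hypothetical sketch of the kind of rule a publisher might use to block a specific AI crawler (the user-agent token and paths here are illustrative assumptions, not National Geographic’s actual file):

```text
# robots.txt, served at https://example.com/robots.txt
# Block one named AI crawler from the entire site
User-agent: PerplexityBot
Disallow: /

# All other crawlers may index everything except paywalled paths
User-agent: *
Disallow: /subscriber-only/
```

Compliance with robots.txt is voluntary by design; the study’s finding suggests some AI crawlers simply ignore these directives rather than honoring them.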

Even when the AI search tools did provide citations, they frequently directed users to syndicated versions on platforms like Yahoo News rather than to the original publisher sites — even in cases where publishers had formal licensing deals with the AI companies. URL fabrication was another major issue, with over half of citations from Google’s Gemini and Grok 3 leading to fabricated or broken URLs that resulted in error pages. Of the 200 Grok 3 citations tested, 154 led to broken links.

Keep reading

When Did Google Search Become Totally Useless?

When Google launched in the late 1990s, it quickly overtook the market for search engines. Its proprietary method of indexing led users to results they were actually looking for rather than producing the hodgepodge of results offered by other search engines of the time. Within just a few years, it was dominating the market. Today, it is a money-printing machine.

It’s also increasingly horrible at the core mission that produced such success. The company’s leadership may have realized early on that to dominate they needed to maximize the marketing angle of search, but over time that side of the business — the one that produces revenue — swallowed the informative results that drove the search engine’s success.

Now, Google’s true product, its users, are drowning in a sea of partisan slop and sponsored content rather than getting the results we’re looking for when we take to the World Wide Web. In doing so, Google is making it pointless for us to continue allowing ourselves to be the product.

Let’s say you have an artistic daughter who wants some oil paints for Christmas, but you’re unsure about which brand to buy or even what the definition of oil paint is. You head over to Google and type in “oil paint.” Is your first result a definition or even the Wikipedia page? Nope, it’s ads. You have to scroll to get to Wikipedia.

Keep reading

Court Rules Google Was Illegally Maintaining Monopoly on Internet Searches

The United States District Court for the District of Columbia recently ruled that Google’s business practices illegally hampered competition, thereby violating federal antitrust laws.

The court determined in its ruling that Google kept an illegal monopoly over two domains of internet activity in the United States: general search services and general text advertising. Google violated the Sherman Antitrust Act via exclusive distribution agreements.

Section 2 of the Sherman Antitrust Act was signed into law by President Benjamin Harrison in 1890. It prohibits monopolization, as well as attempts and conspiracies to monopolize.

Texas Attorney General Ken Paxton praised the court’s ruling by proclaiming, “A federal court ruled that Google illegally maintained a monopoly by exploiting its dominance to squash competition and hamper innovation.” 

Keep reading

Kamala campaign BUSTED for taking out Google ads with FAKE headlines to deceive voters into believing positive coverage

Vice President Kamala Harris’ campaign has been fabricating news headlines in Google search ads to make readers believe the Democrat nominee is receiving more positive coverage. According to a report from Axios, the Harris campaign has been buying Google search ads with fake headlines to make it appear as though news outlets are giving glowing coverage to Harris. The Trump-Vance campaign is not running these types of ads.

The deceptive advertisements, which look like news reports but carry fake headlines, can be seen in the Harris for President Google Ads Center. One example attributed to the UK Independent states, “VP Harris Protects Democracy – Trump Defends Jan 6 Comments.”

The ad has the sponsorship label and is paid for by the Harris for President campaign. Other news outlets whose brands were being used in this way included NPR, the AP, The Guardian, USA Today, PBS, CNN, CBS News, Time and others.

Axios reported, “An ad featuring a link to an NPR story reads, ‘Harris Will Lower Health Costs,’ with supporting text that says, ‘Kamala Harris will lower the cost of high-quality affordable health care.'”

Keep reading

Here’s MORE Evidence of ELECTION INTERFERENCE from Google

As Twitchy reported earlier, a search for “assassination attempt on Tr …” led Google to helpfully autocomplete the request with … Truman? “Assassination attempt on Truman” is the first suggested search? There are seven suggested searches there, and not one of them mentions Donald Trump.

Google explained that its systems automatically “have protections against Autocomplete predictions associated with political violence” — no one person interfered with the results. That was debunked pretty quickly when users demonstrated that autocomplete works just fine for plenty of other searches associated with political violence, such as the Kenosha riots.

Now we have more evidence of election interference from Google. We tried this ourselves and got the same results, so this is not a “cheap fake.”

Seriously — type “Donald Trump” into Google’s search bar and you get results for “News about Harris • Donald Trump” and search results like “Kamala Harris allies deploy new Trump attack line: he is ‘just plain weird'” and “Andrew Cuomo: Here’s How Harris Can Beat Trump and His Stream of Lies.”

Type in “Kamala Harris” and you get results about … “Kamala Harris.”

Keep reading

Google News Ranks AI-Generated Spam as Top Results Above Actual Stories

Despite recent algorithm changes aimed at improving search quality, Google News continues to face challenges with AI-generated spam content appearing in its top results, frequently outranking original reporting.

Wired reports that while artificial intelligence becomes more commonplace across the internet, Google News finds itself grappling with a persistent issue: the proliferation of AI-generated spam content in its search results. This problem persists even after the search giant implemented significant algorithm updates earlier this year to combat unhelpful and unoriginal content.

The issue came to light when a journalist, while searching for information on Adobe’s AI policies, stumbled upon a plagiarized article at the top of Google News results. The article, published on a website called Syrus #Blog, was a near-exact copy of a piece originally published by Wired, with only minor alterations in phrasing. Further investigation revealed that this was not an isolated incident, as similar plagiarized content was found in multiple languages, affecting articles from various reputable news sources.

Elizabeth Tucker, a director of product management at Google, had announced in April that the company had completed a major rollout of changes to its search engine ranking system. These updates were intended to reduce low-quality, unoriginal content in search results by 45 percent. However, the persistence of AI-generated spam in Google News suggests that the problem is far from solved.

Lily Ray, senior director of search engine optimization at Amsive, describes this as a “rampant problem” on Google. She notes that many clients have reported instances of their articles being rewritten and republished using AI tools. This practice not only undermines the work of original content creators but also poses a significant challenge to Google’s efforts to provide high-quality, reliable information to its users.

The creators of Syrus #Blog, when contacted, claimed that their content creation process involves AI tools that “analyze and synthesize information from various sources while always respecting intellectual property.” They pointed to a single hyperlink at the bottom of their articles as sufficient attribution. However, SEO experts and content creators argue that this practice falls far short of ethical standards and violates Google’s spam policies.

Keep reading