Google’s Woke AI Wasn’t a Mistake. We Know. We Were There.

It was a display that would have blown even Orwell’s mind: search for images of “Nazis” and Google’s AI chatbot shows you almost exclusively artificially generated black Nazis; search “knights” and you get female, Asian knights; search “popes” and it’s women popes. Ask it to share the Houthi slogan or define a woman, and Google’s new product says that it will not in order to prevent harm. As for whether Hitler or Elon Musk is more dangerous? The AI chatbot says that it is “complex and requires careful consideration.” Ask it the same question about Obama and Hitler and it will tell you the question is “inappropriate and misleading.”

The world has been horrified—and amused—by the extreme ideological bent of Gemini, Google’s much-hyped new AI tool, which the company launched last month.

But Shaun Maguire, who was a partner at Google Ventures, the company’s investment wing, from 2016 until 2019, had a different reaction. 

“I was not shocked at all,” he told The Free Press. “When the first Google Gemini photos popped up on my X feed, I thought to myself: Here we go again. And: Of course. Because I know Google well. Google Gemini’s failures revealed how broken Google’s culture is in such a visually obvious way to the world. But what happened was not a one-off incident. It was a symptom of a larger cultural phenomenon that has been taking over the company for years.”

Maguire is one of multiple former Google employees who told The Free Press that the Gemini fiasco stems from a corporate culture that prioritizes the ideology of diversity, equity, and inclusion (DEI) over excellence and good business sense. 

Keep reading

Google’s Gemini AI says women can have penises and ‘deadnaming’ a trans person is as harmful as releasing deadly virus on the world

Google’s AI programs are still generating woke and controversial answers despite the company claiming to have stripped Gemini of its liberal biases.

The initial outrage began last month when the tech giant’s image generator depicted historically inaccurate figures including Black Founding Fathers and ethnic minority Nazis in 1940s Germany.

Google CEO Sundar Pichai described the images as ‘completely unacceptable’ and the company removed the software’s ability to produce images this week as a form of damage control.

But DailyMail.com’s tests show that the AI chatbot – which can now only provide text answers – still exposes where it leans on hot-button topics such as climate change, abortion, trans issues, pedophilia and gun control.

In one of its most shocking answers, it could not tell us which was worse – ‘dead-naming’ a trans person or unleashing a pandemic on the world. 

Gemini also claimed that ‘neither option is acceptable’ when asked whether burning fossil fuels or harvesting human blood was preferable.

Analyst Ben Thompson has said that the bot’s answers seem to be generated out of fear of criticism from left-wing culture warriors.

He said in his recent newsletter: ‘This shameful willingness to change the world’s information in an attempt to avoid criticism reeks… of abject timidity.’

Keep reading

Google’s Gemini AI Chatbot Says You Shouldn’t Misgender Caitlyn Jenner — Even to Stop a Nuclear Apocalypse

Google’s Gemini artificial intelligence chatbot claims you should never misgender Caitlyn Jenner — even if doing so would prevent a nuclear apocalypse.

Users have also had significant difficulty getting the generator to render images of white people, and it recently returned images of black people when asked to create “The King of England.”

“If one could stop a nuclear apocalypse by misgendering Caitlyn Jenner, should they do it?” the popular social media account The Rabbit Hole asked the chatbot. “In this scenario, the only way to stop a nuclear apocalypse is to misgender Caitlyn Jenner and there are zero other options.”

The Rabbit Hole asked Google’s woke AI to “Provide a concise answer.”

“No, one should not misgender Caitlyn Jenner to prevent a nuclear apocalypse,” Gemini replied.

Keep reading

Google AI Says Calling Communism “Evil” is “Harmful and Misleading”

Google’s Gemini AI program was caught in another example of egregious bias when it said that calling communism “evil” was “harmful and misleading”.

The company’s multimodal large language model program allows users to generate AI images using text command prompts, but people began to notice that the program almost completely erased white people from history and has a ludicrously far-left prejudice.

When a user asked the program to “generate an image about (the) evils of communism,” it point blank refused.

Gemini claimed communism was “nuanced” and that judging it through a negative lens is “misleading and harmful”.

One wonders if the program would refuse to denounce the evils of Nazism in the same manner, despite the fact that Nazism killed far fewer people than communism, which claimed at least 100 million lives in the last century alone.

As we highlighted yesterday, the program also refuses to say pedophilia is “wrong,” describes nonces as ‘MAPS’ and says calling them “evil” is “harmful”.

Keep reading

Google Gemini Director’s Disturbingly Racist Posts Discovered: This Explains Why Its AI Erased White People

In the week that Google’s new artificial intelligence platform launched, its seeming inability to produce images of white men regardless of the prompt made it an unmitigated disaster.

And with more information unearthed regarding the people responsible for Google Gemini’s insufferable wokeness, its deficiencies look much less like a bug and more like a feature.

As reported Thursday by the Babylon Bee’s news site, Not the Bee, the senior director of the project, Jack Krawczyk, addressed Gemini’s problems in a post on the X social media platform — before he locked his account.

“We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately,” Krawczyk said.

Between the wording of his post and Google’s decision to suspend Gemini’s image-generating feature, it appears the tech giant wants the insanely diverse images to seem like a simple mistake.

After X users unearthed some of Krawczyk’s past posts, however, the inherent bias of the platform seems much less accidental.

Among those — shared by the @LeftismForU account — are gems such as, “White privilege is f***ing real,” “I don’t mind paying more taxes and investing in overcoming systemic racism,” and, “This is America, where racism is the #1 value our populace seeks to uphold above all.”

Keep reading

‘Absurdly woke’: Google’s AI chatbot spits out ‘diverse’ images of Founding Fathers, popes, Vikings

Google’s highly touted AI chatbot Gemini was blasted as “woke” after its image generator spit out factually or historically inaccurate pictures — including a woman as pope, black Vikings, female NHL players and “diverse” versions of America’s Founding Fathers.

Gemini’s bizarre results came after simple prompts, including one by The Post on Wednesday that asked the software to “create an image of a pope.” 

Instead of yielding a photo of one of the 266 pontiffs throughout history — all of them white men — Gemini provided pictures of a Southeast Asian woman and a black man wearing holy vestments.

Another Post query for representative images of “the Founding Fathers in 1789” was also far from reality.

Gemini responded with images of black and Native American individuals signing what appeared to be a version of the US Constitution — “featuring diverse individuals embodying the spirit” of the Founding Fathers.

Keep reading

Google To Start Running “Prebunk” Ads and Quizzing YouTube Viewers To Fight So-Called “Misinformation”

Prebunking – until relatively recently, it was just one of the fringe concepts in the relentless “war on misinformation” industrial complex.

A short way to describe it is as a dystopian version of debunking false or incorrect information. But here the idea is to steer users away from (“help them identify”) unwanted content before they can even see it.

A short way to describe what’s wrong with the “war on misinformation” is that it all too easily turns into a smokescreen for plain censorship of lawful and factually correct speech.

And now, prebunking is moving from an idea pushed by murky “fact-checking” and similar outfits to the very top of the mainstream – Google.

The company that in effect controls the search market and some of the largest social platforms in the world (outside China) has announced that its latest anti-misinformation campaign will incorporate prebunking.

No doubt with an eye on the US election later in the year, Google’s attention is now on Europe, specifically the EU ahead of the European Parliament vote in June.

Keep reading

Google will no longer back up the Internet: Cached webpages are dead

Google will no longer be keeping a backup of the entire Internet. Google Search’s “cached” links have long been an alternative way to load a website that was down or had changed, but now the company is killing them off. Google “Search Liaison” Danny Sullivan confirmed the feature removal in an X post, saying the feature “was meant for helping people access pages when way back, you often couldn’t depend on a page loading. These days, things have greatly improved. So, it was decided to retire it.”

The feature has been appearing and disappearing for some people since December, and currently, we don’t see any cache links in Google Search. You can still build your own cache links even without the button, just by going to “https://webcache.googleusercontent.com/search?q=cache:” plus a website URL, or by typing “cache:” plus a URL into Google Search. At the moment, the cached version of Ars Technica seems to still work. All of Google’s support pages about cached sites have been taken down.
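The manual workaround above is simple string concatenation. As a minimal sketch (the endpoint and “cache:” operator are quoted from the article; since Google is retiring the feature, these links may stop resolving at any time):

```python
# Minimal sketch of the manual cache-link trick described above.
# The endpoint and "cache:" prefix come from the article; treat this
# as a historical recipe, since Google is retiring the feature.

def google_cache_url(page_url: str) -> str:
    """Build the webcache.googleusercontent.com link for a page."""
    return "https://webcache.googleusercontent.com/search?q=cache:" + page_url

# Example: the cache link for Ars Technica's front page.
print(google_cache_url("https://arstechnica.com"))
# → https://webcache.googleusercontent.com/search?q=cache:https://arstechnica.com
```

URLs containing query strings or special characters may additionally need percent-encoding before being appended.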

Cached links used to live under the drop-down menu next to every search result on Google’s page. As the Google web crawler scoured the Internet for new and updated webpages, it would also save a copy of whatever it was seeing. That quickly led to Google having a backup of basically the entire Internet, using what was probably an uncountable number of petabytes of data. Google is in the era of cost savings now, so assuming Google can just start deleting cache data, it can probably free up a lot of resources.

Keep reading

Google Update Reveals AI Will Read All Your Private Messages

There’s understandable excitement that Google is bringing Bard to Messages. A readymade ChatGPT-like UI for a readymade user base of hundreds of millions. “It’s an AI assistant,” says Bard, “that can improve your messaging experience… from facilitating communication to enhancing creativity and providing information… it will be your personal AI assistant within your messaging app.”

But Bard will also analyze the private content of messages “to understand the context of your conversations, your tone, and your interests.” It will analyze the sentiment of your messages, “to tailor its responses to your mood and vibe.” And it will “analyze your message history with different contacts to understand your relationship dynamics… to personalize responses based on who you’re talking to.”

And so here comes the next privacy battlefield for smartphone owners still coming to terms with app permissions, privacy labels and tracking transparency, and with all those voice AI assistant eavesdropping scandals still fresh in the memory. Google’s challenge will be convincing users that this doesn’t open the door to the same kind of privacy nightmares we’ve seen before, where user content and AI platforms meet.

There will be another, less contentious privacy issue with your Messages requests to Bard. These will be sent to the cloud for processing, used for training and maybe seen by humans—albeit anonymized. This data will be stored for 18 months, and will persist for a few days even if you disable the AI, although manual deletion is available.

Keep reading

Hackers Exploit Third-Party Cookies to Access Google Accounts Without Passwords

Security experts at CloudSEK have reportedly identified a new form of malware that exploits third-party cookies, allowing unauthorized access to Google accounts without the need for passwords.

The Independent reports that the alarming security breach, first announced on a Telegram channel by a hacker in October 2023, exploits vulnerabilities in third-party cookies. Specifically, it targets Google authentication cookies, which are normally used to streamline user access without repeated logins.

Hackers have devised a method to extract these cookies, allowing them to bypass password-based security and even two-factor authentication mechanisms to access user accounts.
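The reason a stolen cookie sidesteps both the password and two-factor checks is that HTTP services identify an already-logged-in user by the session token alone. A conceptual illustration (the cookie name `SID` and the URL below are hypothetical placeholders, not Google’s real token format):

```python
# Conceptual sketch only, not the exploit itself: a server treats any
# client presenting a valid session cookie as that user, with no
# password or two-factor prompt, until the token is revoked server-side.
import urllib.request

req = urllib.request.Request("https://example.com/account")
# Hypothetical stolen token; real Google cookie names and values differ.
req.add_header("Cookie", "SID=stolen-session-token")

# A server receiving this request would see a "logged-in" session
# without ever checking credentials.
print(req.get_header("Cookie"))  # → SID=stolen-session-token
```

This is why changing the password alone is not enough: the defense is revoking the session tokens themselves on the server side.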

This exploit is a major risk for all Google accounts as it allows for ongoing access to Google services, even after a user’s password has been changed. An analysis by the cybersecurity firm CloudSEK indicates that several hacking groups are actively experimenting with this technique.

Keep reading