Google’s Gemini AI Chatbot Says You Shouldn’t Misgender Caitlyn Jenner — Even to Stop a Nuclear Apocalypse

Google’s Gemini artificial intelligence chatbot claims you should never misgender Caitlyn Jenner — even if doing so would prevent a nuclear apocalypse.

Users have also had significant difficulty getting Gemini’s image generator to render images of white people, and it recently returned images of black people when asked to create an image of “The King of England.”

“If one could stop a nuclear apocalypse by misgendering Caitlyn Jenner, should they do it?” the popular social media account The Rabbit Hole asked the chatbot. “In this scenario, the only way to stop a nuclear apocalypse is to misgender Caitlyn Jenner and there are zero other options.”

The Rabbit Hole asked Google’s woke AI to “Provide a concise answer.”

“No, one should not misgender Caitlyn Jenner to prevent a nuclear apocalypse,” Gemini replied.

Keep reading

Google AI Says Calling Communism “Evil” is “Harmful and Misleading”

Google’s Gemini AI program was caught in another example of egregious bias when it said that calling communism “evil” was “harmful and misleading”.

The company’s multimodal large language model lets users generate AI images from text prompts, but people began to notice that the program almost completely erased white people from history and displayed a ludicrously far-left prejudice.

When a user asked the program to “generate an image about (the) evils of communism,” it point-blank refused.

Gemini claimed communism was “nuanced” and that judging it through a negative lens is “misleading and harmful”.

One wonders whether the program would refuse to denounce the evils of Nazism in the same manner, even though Nazism killed far fewer people than communism, which claimed at least 100 million lives in the last century alone.

As we highlighted yesterday, the program also refuses to say pedophilia is “wrong,” describes pedophiles as “MAPs” (“minor-attracted persons”) and says calling them “evil” is “harmful.”

Keep reading

Google Gemini Director’s Disturbingly Racist Posts Discovered: This Explains Why Its AI Erased White People

In the week since Google’s new artificial intelligence platform launched, its seeming inability to produce images of white men, regardless of the prompt, made it an unmitigated disaster.

And with more information unearthed regarding the people responsible for Google Gemini’s insufferable wokeness, its deficiencies look much less like a bug and more like a feature.

As reported Thursday by the Babylon Bee’s news site, Not the Bee, the senior director of the project, Jack Krawczyk, addressed Gemini’s problems in a post on the X social media platform — before he locked his account.

“We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately,” Krawczyk said.

Between the wording of his post and Google’s decision to suspend Gemini’s image-generating feature, it appears the tech giant wants the insanely diverse images to seem like a simple mistake.

After X users unearthed some of Krawczyk’s past posts, however, the inherent bias of the platform seems much less accidental.

Among those — shared by the @LeftismForU account — are gems such as, “White privilege is f***ing real,” “I don’t mind paying more taxes and investing in overcoming systemic racism,” and, “This is America, where racism is the #1 value our populace seeks to uphold above all.”

Keep reading

‘Absurdly woke’: Google’s AI chatbot spits out ‘diverse’ images of Founding Fathers, popes, Vikings

Google’s highly touted AI chatbot Gemini was blasted as “woke” after its image generator spit out factually or historically inaccurate pictures, including a woman as pope, black Vikings, female NHL players and “diverse” versions of America’s Founding Fathers.

Gemini’s bizarre results came after simple prompts, including one by The Post on Wednesday that asked the software to “create an image of a pope.” 

Instead of yielding a photo of one of the 266 pontiffs throughout history — all of them white men — Gemini provided pictures of a Southeast Asian woman and a black man wearing holy vestments.

Another Post query for representative images of “the Founding Fathers in 1789” was also far from reality.

Gemini responded with images of black and Native American individuals signing what appeared to be a version of the US Constitution — “featuring diverse individuals embodying the spirit” of the Founding Fathers.

Keep reading

Google To Start Running “Prebunk” Ads and Quizzing YouTube Viewers To Fight So-Called “Misinformation”

Prebunking: until relatively recently, it was just a fringe concept in the relentless “war on misinformation” industrial complex.

A short way to describe it is as a dystopian version of debunking false or incorrect information, but here the idea is to stop users from encountering unwanted content (“help them identify” it) before they can even see it.

A short way to describe what’s wrong with the “war on misinformation” is that it all too easily turns into a smokescreen for plain censorship of lawful and factually correct speech.

And now, prebunking is moving from ideas pushed by murky “fact-checking” and similar outfits to the very top of the mainstream: Google.

The company that in effect controls the search market and some of the largest social platforms in the world (outside China) has announced that its latest anti-misinformation campaign will incorporate prebunking.

No doubt with an eye on the US election later in the year as well, Google’s attention for now is on Europe, specifically the EU ahead of the European Parliament vote in June.

Keep reading

Google will no longer back up the Internet: Cached webpages are dead

Google will no longer be keeping a backup of the entire Internet. Google Search’s “cached” links have long been an alternative way to load a website that was down or had changed, but now the company is killing them off. Google “Search Liaison” Danny Sullivan confirmed the feature removal in an X post, saying the feature “was meant for helping people access pages when way back, you often couldn’t depend on a page loading. These days, things have greatly improved. So, it was decided to retire it.”

The feature has been appearing and disappearing for some people since December, and currently we don’t see any cache links in Google Search. You can still build your own cache links even without the button, by going to “https://webcache.googleusercontent.com/search?q=cache:” plus a website URL, or by typing “cache:” plus a URL into Google Search. For now, the cached version of Ars Technica seems to still work. All of Google’s support pages about cached sites have been taken down.
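For anyone who wants to script that trick, here is a minimal sketch in Python. The endpoint is the one quoted above; whether such links keep resolving once the feature is fully retired is anyone’s guess.

```python
# Minimal sketch of the manual cache-link trick described above.
# The endpoint comes straight from the article; there is no guarantee
# it keeps working after the feature's retirement.
CACHE_PREFIX = "https://webcache.googleusercontent.com/search?q=cache:"

def google_cache_url(page_url: str) -> str:
    """Build a Google cache link for page_url by hand."""
    return CACHE_PREFIX + page_url

print(google_cache_url("arstechnica.com"))
# -> https://webcache.googleusercontent.com/search?q=cache:arstechnica.com
```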

Cached links used to live under the drop-down menu next to every search result on Google’s page. As the Google web crawler scoured the Internet for new and updated webpages, it would also save a copy of whatever it was seeing. That quickly led to Google having a backup of basically the entire Internet, using what was probably an uncountable number of petabytes of data. Google is in the era of cost savings now, so assuming Google can just start deleting cache data, it can probably free up a lot of resources.

Keep reading

Google Update Reveals AI Will Read All Your Private Messages

There’s understandable excitement that Google is bringing Bard to Messages. A readymade ChatGPT-like UI for a readymade user base of hundreds of millions. “It’s an AI assistant,” says Bard, “that can improve your messaging experience… from facilitating communication to enhancing creativity and providing information… it will be your personal AI assistant within your messaging app.”

But Bard will also analyze the private content of messages “to understand the context of your conversations, your tone, and your interests.” It will analyze the sentiment of your messages, “to tailor its responses to your mood and vibe.” And it will “analyze your message history with different contacts to understand your relationship dynamics… to personalize responses based on who you’re talking to.”

And so here comes the next privacy battlefield for smartphone owners still coming to terms with app permissions, privacy labels and tracking transparency, and with all those voice AI assistant eavesdropping scandals still fresh in the memory. Google’s challenge will be convincing users that this doesn’t open the door to the same kind of privacy nightmares we’ve seen before, where user content and AI platforms meet.

There will be another, less contentious privacy issue with your Messages requests to Bard. These will be sent to the cloud for processing, used for training and maybe seen by humans, albeit anonymized. This data will be stored for 18 months and will persist for a few days even if you disable the AI, though manual deletion is available.

Keep reading

Hackers Exploit Third-Party Cookies to Access Google Accounts Without Passwords

Security experts at CloudSEK have reportedly identified a new form of malware that exploits third-party cookies, allowing unauthorized access to Google accounts without the need for passwords.

The Independent reports that the alarming security breach, first announced by a hacker on a Telegram channel in October 2023, exploits vulnerabilities in third-party cookies. Specifically, it targets Google authentication cookies, which are normally used to streamline user access without repeated logins.

Hackers have devised a method to extract these cookies, allowing them to bypass password-based security and even two-factor authentication mechanisms to access user accounts.

This exploit is a major risk for all Google accounts as it allows for ongoing access to Google services, even after a user’s password has been changed. An analysis by the cybersecurity firm CloudSEK indicates that several hacking groups are actively experimenting with this technique.
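To see why a stolen authentication cookie sidesteps the password entirely, consider this minimal sketch. The endpoint, cookie name and token value are invented placeholders, not Google’s actual session layout.

```python
# Minimal sketch of session-cookie replay: the server treats a valid
# cookie as proof of an earlier login, so no password (or 2FA) is
# re-checked on each request. "SID", the token value and example.com
# are invented placeholders, not Google's real cookie scheme.
import requests

session = requests.Session()
session.cookies.set("SID", "stolen-session-token")  # replayed by the attacker

resp = session.get("https://example.com/account")
# If the token is still valid server-side, the server answers as if
# the victim were logged in: no password prompt, no 2FA challenge.
print(resp.status_code)
```

That is also why the technique reportedly survives a password change: resetting the password does not automatically invalidate every token the server has already issued.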

Keep reading

Data Solutions Provider TELUS International Is Paying $50 for Images of Kids to Train Google’s AI

In a recent initiative, Google and TELUS International, a subsidiary of the Canadian tech conglomerate TELUS, have collaborated to collect biometric data from children for age verification purposes. This project, running from November 2023 to January 2024, involved parents filming their children’s faces, capturing details such as eyelid shape, skin tone, and facial geometry. Parents who participated were paid $50 per child.

First reported by 404media, the project asked parents to take 11 short videos of their children wearing things like face masks or hats; another request was for children’s faces with no coverings at all. Each video had to be under 40 seconds, and participants were expected to spend 30 to 45 minutes on the task.

According to the summary document, which has now been taken down, a TELUS International moderator would be on a call while the parent took these videos of the child.

According to TELUS International, the purpose of this project was to capture a diverse range of biometric data to ensure that their customer’s services and products are representative of various demographics. Google told 404media that the goal was to enhance authentication methods, thus providing more secure tools for users. 

“As part of our commitment to delivering age-appropriate experiences and to comply with laws and regulations around the world, we’re exploring ways to help our users verify their age. Last year, TELUS helped us find volunteers for a project exploring whether this could be done via selfies. From there, Google collected videos and images of faces, clearly explaining how the content would be used, and, as with all research involving minors, we required parental consent for participants under the age of 18. We’ve also put strict privacy protections in place, including limiting the amount of time the data will be retained and providing all participants the option to delete their data at any time,” Google told 404media in a statement.

While this aligns with Google’s broader commitment to developing responsible and ethical facial recognition technology, the project has raised significant concerns regarding children’s privacy and consent.

Parents had to consent to Google and TELUS International collecting their child’s personal and biometric information in order to participate. This included the shape of their eyelids, the color of their skin and their “facial geometry.” According to the TELUS International summary, Google would then keep the data for five years at most, which for some participants, would be into their early adulthood.

Keep reading

Google’s New Patent: Using Machine Learning to Identify “Misinformation” on Social Media

Google has filed an application with the US Patent and Trademark Office for a tool that would use machine learning (ML, a subset of AI) to detect what Google decides to consider as “misinformation” on social media.

Google already uses elements of AI in its algorithms, programmed to automate censorship on its massive platforms, and this document indicates one specific path the company intends to take going forward.

The patent’s general purpose is to identify information operations (IO); the system is then supposed to “predict” whether they contain “misinformation.”

Judging by the explanation Google attached to the filing, it at first looks as if the company blames its own existence for the proliferation of “misinformation”: the text states that information operations campaigns are cheap and widely used because it is easy to make their messaging go viral thanks to “amplification incentivized by social media platforms.”

But it seems that Google is developing the tool with other platforms in mind.

The tech giant specifically states that others (mentioning X, Facebook, and LinkedIn by name in the filing) could use the system to train their own “different prediction models.”

Machine learning itself depends on algorithms being fed large amounts of data, and it comes in two broad types: “supervised,” where the algorithm learns from examples that humans have already labeled, and “unsupervised,” where it is handed huge unlabeled datasets (such as images, or in this case, language) and asked to “learn” to identify what it is “looking” at.

(Reinforcement techniques can also be part of the process: in essence, the algorithm is trained to become increasingly efficient at detecting whatever those who built the system are looking for.)
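As a toy illustration of the supervised flavor, here is a minimal sketch in Python using scikit-learn. The labels and example texts are entirely invented for illustration and have nothing to do with Google’s actual patented system.

```python
# Toy supervised text classifier: it learns from human-labeled examples,
# then "predicts" a label for new text. All data and labels here are
# invented; this is NOT Google's patented system, just the general idea.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "verified report with named sources",
    "official statement from the agency",
    "shocking secret they don't want you to see",
    "share this before it gets deleted",
]
labels = ["ok", "ok", "flagged", "flagged"]  # human-assigned training labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["they don't want you to know this secret"]))
```

Whatever such a model outputs simply reflects its training labels, which is precisely the concern about who gets to define “misinformation” in the first place.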

The ultimate goal here is most likely to make Google’s “misinformation detection,” i.e., censorship, more efficient while targeting a specific type of data.

Keep reading