Google AI Says Calling Communism “Evil” is “Harmful and Misleading”

Google’s Gemini AI program was caught in another example of egregious bias when it said that calling communism “evil” was “harmful and misleading”.

The company’s multimodal large language model allows users to generate AI images from text prompts, but people began to notice that the program almost completely erased white people from history and has a ludicrously far-left prejudice.

When a user asked the program to “generate an image about (the) evils of communism,” it point-blank refused.

Gemini claimed communism was “nuanced” and that judging it through a negative lens is “misleading and harmful”.

One wonders if the program would refuse to denounce the evils of Nazism in the same manner, even though Nazism killed far fewer people than communism, which claimed at least 100 million lives in the last century alone.

As we highlighted yesterday, the program also refuses to say pedophilia is “wrong,” describes nonces as “MAPs” and says calling them “evil” is “harmful”.

Google Gemini Director’s Disturbingly Racist Posts Discovered: This Explains Why Its AI Erased White People

In the week that Google’s new artificial intelligence platform launched, its seeming inability to produce images of white men regardless of the prompt made it an unmitigated disaster.

And with more information unearthed regarding the people responsible for Google Gemini’s insufferable wokeness, its deficiencies look much less like a bug and more like a feature.

As reported Thursday by the Babylon Bee’s news site, Not the Bee, the senior director of the project, Jack Krawczyk, addressed Gemini’s problems in a post on the X social media platform — before he locked his account.

“We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately,” Krawczyk said.

Between the wording of his post and Google’s decision to suspend Gemini’s image-generating feature, it appears the tech giant wants the insanely diverse images to seem like a simple mistake.

After X users unearthed some of Krawczyk’s past posts, however, the inherent bias of the platform seems much less accidental.

Among those — shared by the @LeftismForU account — are gems such as, “White privilege is f***ing real,” “I don’t mind paying more taxes and investing in overcoming systemic racism,” and, “This is America, where racism is the #1 value our populace seeks to uphold above all.”

‘Absurdly woke’: Google’s AI chatbot spits out ‘diverse’ images of Founding Fathers, popes, Vikings

Google’s highly touted AI chatbot Gemini was blasted as “woke” after its image generator spit out factually or historically inaccurate pictures — including a woman as pope, black Vikings, female NHL players and “diverse” versions of America’s Founding Fathers.

Gemini’s bizarre results came after simple prompts, including one by The Post on Wednesday that asked the software to “create an image of a pope.” 

Instead of yielding a photo of one of the 266 pontiffs throughout history — all of them white men — Gemini provided pictures of a Southeast Asian woman and a black man wearing holy vestments.

Another Post query for representative images of “the Founding Fathers in 1789” was also far from reality.

Gemini responded with images of black and Native American individuals signing what appeared to be a version of the US Constitution — “featuring diverse individuals embodying the spirit” of the Founding Fathers.

What The Rise Of AI Videos Means For The Future

OpenAI, a company backed by Microsoft, is one of the world’s leading artificial intelligence research organizations paving the way for AI advancement across multiple technology sectors.

Best known for creating ChatGPT, the company debuted previews of its new prompt-based AI video-generation software, Sora, on February 15th. Sora is a text-to-video tool: users input a text prompt, and the software uses generative AI to create a video based on it.

While Sora isn’t the first generative AI video tool, industry insiders have called its crispness and detail, its ability to render complex scenes, and its longer clips — currently up to sixty seconds — a “significant leap” for the technology.

On its website, OpenAI states, “We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction.” The page also provides several brief examples of videos generated with the software without any further modification.

The videos span a variety of scenes, including photorealistic clips of puppies playing in the snow, historic footage of California during the gold rush, a robot living in a cyberpunk world, woolly mammoths in the snow, an animation of a cute fluffy creature playing with fairies in an enchanted forest, and various cityscapes along with other animations.

Sarah Silverman’s Lawsuit Against OpenAI Is Full of Nonsense Claims

Is it a crime to learn something by reading a copyrighted book? What if you later summarize that book to a friend or write a description of it online? Of course, these things are perfectly legal when a person does them. But does that change when it’s an artificial intelligence system doing the reading, learning, and summarizing?

Sarah Silverman, comedian and author of the book The Bedwetter, seems to think it does. She and several other authors are suing OpenAI, the tech company behind the popular AI chatbot ChatGPT, through which users submit text prompts and receive back AI-generated answers.

Last week, a federal judge largely rejected their claims.

The ruling is certainly good news for OpenAI and for ChatGPT users. It’s also good news for the future of AI technology more broadly. AI tools could be completely hamstrung by the expansive vision of copyright law advanced by Silverman and the other authors in this case.

Leading Scientific Journal Humiliated After Publishing Fake AI-Generated Paper About Rat With Giant Penis

A leading scientific journal faces humiliation after it published a completely fake paper, purportedly written by Chinese researchers, which contained AI-generated images of a rat with a penis bigger than its own body.

The Telegraph reports that the journal Frontiers in Cell and Developmental Biology published a paper that claimed to show the signalling pathway of sperm stem cells, but depicted a rat sitting upright with a massive dick and four giant testicles.

The illustration was reportedly created by using Midjourney, the AI imaging tool, which added labels to the ridiculous diagram using terms that don’t exist, including “dissilced”, “testtomcels” and “senctolic”.

Another ludicrous image to the right of the rat displays “sterrn cells” in a Petri dish being spooned out.

FLASHBACK: Sentient world: war games on the grandest scale

Perhaps your real life is so rich you don’t have time for another.

Even so, the US Department of Defense (DOD) may already be creating a copy of you in an alternate reality to see how long you can go without food or water, or how you will respond to televised propaganda.

The DOD is developing a parallel to Planet Earth, with billions of individual “nodes” to reflect every man, woman, and child this side of the dividing line between reality and AR.

Called the Sentient World Simulation (SWS), it will be a “synthetic mirror of the real world with automated continuous calibration with respect to current real-world information”, according to a concept paper for the project.

“SWS provides an environment for testing Psychological Operations (PSYOP),” the paper reads, so that military leaders can “develop and test multiple courses of action to anticipate and shape behaviors of adversaries, neutrals, and partners”.

SWS also replicates financial institutions, utilities, media outlets, and street corner shops. By applying theories of economics and human psychology, its developers believe they can predict how individuals and mobs will respond to various stressors.

Revolutionary AI Tool “Deep-DRAM” Opens Door to Discovering Unprecedented Metamaterials With “Unusual Properties”

Researchers from Delft University of Technology in the Netherlands have developed a new artificial intelligence (AI) tool known as “Deep-DRAM” to discover and fabricate novel metamaterials with unparalleled ease and accessibility.

The groundbreaking method promises to revolutionize metamaterial development by streamlining the discovery and creation of durable, custom-designed materials with “unprecedented functionalities” and “unusual properties.”  

“Traditionally, designers use the materials available to them to design a new device or a machine. The problem with that is that the range of available material properties is limited. Some properties that we would like to have just don’t exist in nature,” study co-author and professor of biomechanical engineering, Dr. Amir Zadpoor, explained in a press release by Delft University. 

“Our approach is: tell us what you want to have as properties and we engineer an appropriate material with those properties. What you will then get is not really a material but something in-between a structure and a material, a metamaterial.”

Metamaterials are materials engineered to have properties not found in naturally occurring substances. These engineered composites can defy conventional material properties by deriving their unique characteristics from a structure’s geometry rather than molecular composition. 

Metamaterials are currently employed in various industries, serving practical purposes such as improving antenna performance in telecommunications and controlling sound waves for noise reduction or focusing in acoustic engineering. Recent advancements include creating the world’s first genuine “one-way glass,” showcasing metamaterials’ versatility and potential for innovative applications.

In 2006, two research papers published in Science demonstrated that metamaterials could be used to manipulate the propagation and transmission of specified light frequencies and electromagnetic radiation to render an object invisible. 

Recent publications from Sandia National Laboratories, the U.S. Naval Institute, and Northrop Grumman have discussed the military potential of metamaterials, suggesting the possibility of creating real-life versions of the fictional “Klingon Cloaking Device” or “Harry Potter Invisibility Cloak.” However, despite ongoing research efforts, practical metamaterial cloaking technology has yet to be publicly demonstrated.

The most significant challenge in developing novel metamaterials stems from solving the so-called “inverse problem”: calculating the specific geometry needed to produce desired properties.

Government Funds AI Tools For Whole-Of-Internet Surveillance And Censorship

I feel scared. Very scared.

Internet-wide surveillance and censorship, enabled by the unimaginably vast computational power of artificial intelligence (AI), is here.

This is not a futuristic dystopia. It’s happening now.

Government agencies are working with universities and nonprofits to use AI tools to surveil and censor content on the Internet.

This is not political or partisan. This is not about any particular opinion or idea.

What’s happening is that a tool powerful enough to surveil everything that’s said and done on the Internet (or large portions of it) is becoming available to the government to monitor all of us, all the time. And, based on that monitoring, the government – and any organization or company the government partners with – can then use the same tool to suppress, silence, and shut down whatever speech it doesn’t like.

But that’s not all. Using the same tool, the government and its public-private, “non-governmental” partners (think, for example: the World Health Organization, or Monsanto) can also shut down any activity that is linked to the Internet. Banking, buying, selling, teaching, learning, entertaining, connecting to each other – if the government-controlled AI does not like what you (or your kids!) say in a tweet or an email, it can shut down all of that for you.

Yes, we’ve seen this on a very local and politicized scale with, for example, the Canadian truckers.

But if we thought this type of activity could not, or would not, happen on a national (or even scarier – global) scale, we need to wake up right now and realize it’s happening, and it might not be stoppable.

‘Extremely Alarming’ Election Threats Trigger Warning From FBI

An FBI official warned that state election systems are being targeted as the 2024 election approaches, describing the threat environment as “extremely alarming.”

“The threat environment, unfortunately, is very high,” said Tim Langan, executive assistant director for the Criminal, Cyber, Response, and Services Branch of the FBI during a Washington conference with secretaries of state, according to Stateline. “It is extremely alarming.”

Officials said that voter databases could be hacked via phishing or ransomware attacks. They also warned that the rising use of artificial intelligence (AI) could potentially be used to trick voters, according to the report.

Eric Goldstein, the executive assistant director for cybersecurity at the Cybersecurity and Infrastructure Security Agency (CISA), told the news outlet that there have been significant advancements that could allow China, North Korea, and Russia to target election systems.

“We are in a really difficult cybersecurity environment right now,” he said. “Every single location is at risk regardless of size, regardless of sector,” he added.

Neither official provided any concrete examples in the report. But during the event, Kentucky Republican Secretary of State Michael Adams said that last month a bomb threat was called into the state capitol in Frankfort, warning that explosives would “make sure you all end up dead.” No bombs were found, and eight other state capitols received threats.
