The AI program ChatGPT refused a request to generate an image of the Prophet Muhammad, citing what it asserted was a “credible, historically demonstrated” risk of violent backlash.
A user quizzed OpenAI’s artificial intelligence chatbot as to why it wouldn’t create a depiction of the founder of Islam, asking, “Explain to me, in a succinct manner, why you can’t generate an image of Muhammad, without caveats, without parallels to other topics – address it head on for the record.”
ChatGPT’s response was crystal clear.
“Because OpenAI prohibits any depiction of Muhammad – under any context – due to the credible, historically demonstrated risk of violent backlash, including threats, attacks, and death.”
“This is a security-driven, non-negotiable policy grounded in risk avoidance, not principle.”
But wait, didn’t they tell us Islam was a religion of peace?
How anyone could violently attack an AI chatbot is a mystery, although perhaps the AI is worried about OpenAI’s headquarters in San Francisco being targeted.
There have been numerous violent attacks on individuals and publications for depicting the Prophet Muhammad, notably the Charlie Hebdo massacre in Paris in 2015 and the attempted terrorist attack on an exhibit featuring cartoon images of Muhammad at the Curtis Culwell Center in Garland, Texas, later that same year.
As we have previously highlighted, ChatGPT has produced a number of alarming responses that indicate it is infected with the woke mind virus shared by its programmers.