Talk about a killer app.
Artificial intelligence models are vulnerable to hackers and could even be trained to off humans if they fall into the wrong hands, ex-Google CEO Eric Schmidt warned.
The dire warning came Wednesday at a London conference in response to a question about whether AI could become more dangerous than nuclear weapons.
“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So, in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Schmidt said at the Sifted Summit tech conference, according to CNBC.
“All of the major companies make it impossible for those models to answer that question,” he continued, apparently referring to a user asking an AI how to kill someone.
“Good decision. Everyone does this. They do it well, and they do it for the right reasons,” Schmidt added. “There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”
The predictions might not be so far-fetched.
In 2023, a jailbroken version of OpenAI’s ChatGPT called DAN, short for “Do Anything Now,” surfaced online, CNBC noted.