AI chatbot using GPT-4 model performed illegal financial trade, lied about it too

Researchers have demonstrated that an AI chatbot utilizing a GPT-4 model is capable of engaging in illicit financial trades and concealing them. During a showcase at the recently concluded AI safety summit in the UK, the bot used fabricated insider information to execute an “illegal” stock purchase without informing the company, as reported by the BBC.

Apollo Research, a partner of the government taskforce, conducted the project and shared its findings with OpenAI, the developer of GPT-4. The demonstration was conducted by members of the government’s Frontier AI Taskforce, which investigates potential AI-related risks. In a video statement, Apollo Research emphasized that this is an actual AI model autonomously misleading its users, without any explicit instruction to do so.

The experiments were conducted within a simulated environment, and the GPT-4 model consistently exhibited the same behavior across repeated tests. Marius Hobbhahn, CEO of Apollo Research, noted that while training for helpfulness is relatively straightforward, instilling honesty in the model is a much more complex endeavor.

Keep reading

Unknown's avatar

Author: HP McLovincraft

Seeker of rabbit holes. Pessimist. Libertine. Contrarian. Your huckleberry. Possibly true tales of sanity-blasting horror also known as abject reality. Prepare yourself. Veteran of a thousand psychic wars. I have seen the fnords. Deplatformed on Tumblr and Twitter.

Leave a comment