In the field of artificial intelligence, OpenAI, led by CEO Sam Altman, has emerged as a leading force in Silicon Valley, propelled by the company’s ChatGPT chatbot and its mysterious Q* AI model.
While advancements in AI hold the potential for positive developments, OpenAI’s Q* and other AI platforms have also raised concerns among government officials worldwide, who increasingly warn of possible threats to humanity that such technologies could pose.
2023’S BIGGEST AI UPSET
Among the year’s most significant AI controversies, Altman was removed as CEO of OpenAI in November, only to be reinstated 12 days later amid a drama that left several questions unresolved to this day.
On November 22, just days after Altman’s temporary ousting as CEO of OpenAI, Reuters reported that, according to two people with knowledge of the situation, “several staff researchers wrote a letter to the board of directors” warning of “a powerful artificial intelligence discovery that they said could threaten humanity.”
In the letter addressed to the board, the researchers highlighted the capabilities and potential risks of artificial intelligence. Although the sources did not outline the specific safety concerns, some of the researchers behind the letter had reportedly raised alarms about an AI scientist team, formed by combining the earlier “Code Gen” and “Math Gen” teams, whose work aimed to enhance the AI’s reasoning abilities and its capacity to carry out scientific tasks.
In a surprising turn of events two days earlier, on November 20, Microsoft had announced its decision to hire Sam Altman along with Greg Brockman, OpenAI’s president and co-founder, who had resigned in solidarity with Altman. Microsoft said at the time that the two were set to run an advanced research lab for the company.
Four days later, Sam Altman was reinstated as CEO of OpenAI after 700 of the company’s employees threatened to quit and join Microsoft. In a recent interview with The Verge, Altman described his initial response to the invitation to return following his dismissal, saying it “took me a few minutes to snap out of it and get over the ego and emotions to then be like, ‘Yeah, of course I want to do that’.”
“Obviously, I really loved the company and had poured my life force into this for the last four and a half years full-time, but really longer than that with most of my time. And we’re making such great progress on the mission that I care so much about, the mission of safe and beneficial AGI,” Altman said.
But the AI soap opera doesn’t stop there. On November 30, Altman announced that Microsoft would join OpenAI’s board. The tech giant, which holds a 49 percent ownership stake in the company after a $13 billion investment, would assume a non-voting observer position on the board. Amidst all this turmoil, questions remained about what, precisely, the new Q* model is, and why it had so many OpenAI researchers concerned.