For weeks the big international story has been the war in Gaza, with occasional mention of the one in Ukraine. Little seemed more important. Until OpenAI fired its CEO, Sam Altman.
In the days between November 17 and November 21, Altman was first sacked, then offered a job heading AI at Microsoft, and finally brought back to OpenAI as CEO. The board that fired him resigned, except for one director.
At the heart of it is the debate about how to manage the undeniable advantages that AI can offer against its equally undeniable dangers.
That we can search for combinations of molecules to develop a super-poison is an example of the dangers. That we can ask the benign technology of ChatGPT, or any chatbot built on a large language model, the technology underpinning generative AI services, to create a new strain of malware or to suggest ways to carry out other nefarious activities, is scary, and a reminder that we are good at being nasty to each other.
The former board of OpenAI made a naive attempt to balance these conflicting elements.
The human instinct is to innovate: we invented fire to cook meat, which we hunted by using tools. The tools were used to kill one another and the fire to burn down houses. We innovated ourselves into fire-starting toolmakers.
What the OpenAI board did, in another unfortunate trait of the only species to have developed speech and, you know, the internet, was fail to communicate. What had Altman done that, according to the board, made him not “consistently candid in his communications”? An OpenAI executive admitted it wasn’t anything illegal or nefarious. Was he bad at expressing himself? Was it that he got the world hyped on the next great thing, AI that can generate things, hence generative AI, and became the face of this nascent movement?
If the former board’s intention in firing Altman was to curb the consequences of commercialisation without guardrails, it failed.
It sounds like crying wolf.
Apart from nuking their ability to pursue their stated fiduciary duty to “humanity, not OpenAI investors”, the former directors displayed that all too human trait of self-sabotage. OpenAI is now untethered from their oversight. They have instead done humanity a huge disservice. That’s what happens when you put us flawed Homo sapiens in charge.
* Shapshak is editor-in-chief of Stuff.co.za and executive director of Scrolla.Africa