Founded in December 2015, OpenAI has transformed the role of technology in everyday life with the creation of ChatGPT, an artificial intelligence chatbot that uses a large language model to answer nearly any question a user feeds it. Whether it’s solving calculus problems, writing emails, debugging code, composing poetry or explaining scientific concepts, ChatGPT can do it all. From its founding, OpenAI established itself as a nonprofit with the mission to “ensure that artificial general intelligence benefits all of humanity.” In only a few short years, the organization has created a powerful model with the potential to transform how humans learn and how countless industries operate. Considering the vast power of AI, however, some speculate that today’s tech reality has begun to resemble the opening of a science fiction novel in which artificial intelligence reigns supreme over human beings.
Though such claims may initially sound like an exaggeration, the reality is that intelligent models such as ChatGPT present tangible dangers. Some of these threats, such as the prospect of AI models developing into self-aware, sentient beings with agendas of their own, remain fantastical and far off in the future. Others, such as deepfakes (realistic-looking images, video or audio fabricated or manipulated by artificial intelligence), have already proliferated online, with examples ranging from comedic memes to the illegal manipulation of photos and videos into child sexual abuse material.
To counter these concerns, OpenAI formed a superalignment team last summer tasked with ensuring that its AI systems will retain the intended values and goals of their programmers rather than behaving independently. Co-led by Ilya Sutskever, co-founder and chief scientist of OpenAI, and Jan Leike, the superalignment team hopes that its “AI systems can take over more and more of [their] alignment work and ultimately conceive, implement, study and develop better alignment techniques than [they] have now,” essentially allowing the models to help regulate themselves.
As OpenAI grew and looked to attract investors, a rift began to form between the CEO, Sam Altman, and other board members. While members such as Sutskever wanted to slow the development of the AI model to ensure its safety, Altman, backed by the company’s largest investor, Microsoft, pushed for faster expansion to beat competitors and attract more investment. Beyond these competing priorities, the board also claimed that Altman had not been “consistently candid in his communications” with them. On November 17, this internal conflict came to a head as the board of OpenAI staged a coup, removing Altman from his position as CEO.
This controversy within OpenAI raises the larger, industry-wide question of how to ensure AI safety and keep these intelligent machines under human control. While OpenAI has publicly supported AI regulation, behind the scenes the company has also lobbied EU officials for looser rules. In the coming years, artificial intelligence models, including OpenAI’s, have the potential to fundamentally revolutionize how humans learn and how every industry operates. The only question remaining is how to keep these machines within the bounds of human control, preempting the emergence of a sci-fi reality in which machines usurp human authority.