Elon Musk’s OpenAI Lawsuit Sparks Questions About GenAI And AGI



Elon Musk sued OpenAI Friday, alleging that the company deviated from its original mission, turning from a non-profit focused on benefiting humanity into a for-profit benefiting Microsoft and others. While only time will tell how the lawsuit plays out, at its crux are two artificial intelligence concepts – generative AI and artificial general intelligence. What’s the difference, and why has the relationship between the two sparked so much controversy?

OpenAI’s agreement with Microsoft does not extend to AGI, and OpenAI’s governing body retains the ability to determine when AGI is reached. Meanwhile, the November ouster (and return) of OpenAI CEO Sam Altman was reportedly triggered by beliefs that OpenAI had attained AGI, and some speculate that this is at the core of Musk’s lawsuit as well. One reason for the confusion is that neither genAI nor AGI is well defined, and we are rapidly reaching the point where humanity will require more precise definitions of these terms.

What Is AGI?

During the OpenAI CEO firing drama, I wrote an explainer on AGI, which includes the definition of AGI given by OpenAI’s chief scientist, Ilya Sutskever, at TEDAI 2023.

  • He described a key tenet of AGI as being potentially smarter than humans in anything and everything, with all of human knowledge to back it up.
  • He also described AGI as having the ability to teach itself, thereby creating new, even potentially smarter AGIs.

What Is GenAI And Where Did It Come From?

GenAI took the world by storm with the November 2022 release of ChatGPT, because of its ability to generate coherent content: answering questions, composing poetry, creating pictures, and writing code, to name a few examples.

While genAI has many forms depending on what type of content it generates, the current AGI drama is largely focused on the large language models (the type of genAI that generates language and associated content like code). This technology started with a type of AI called natural language processing. The AI reads all the text available to it and learns how to complete sentences. For example, if I start with the words “I am very hungry because”, the AI will learn that “I have not eaten today” is a far more likely completion than “I have not played golf today”. New text is generated via this completion mechanism. This approach has also demonstrated potential for generating code and even for speeding up drug discovery.
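The completion mechanism described above can be sketched in a few lines of code. This is a toy illustration only, with invented probabilities standing in for what a real language model learns from its training data; it is not how OpenAI's models are actually implemented.

```python
# Toy sketch of the completion mechanism: a language model assigns a
# probability to each candidate continuation of a prompt and generates
# text by favoring the most likely ones. Probabilities here are invented
# for illustration, mimicking what a model might learn from reading text.
completion_probs = {
    "I have not eaten today": 0.72,
    "no one gave me food": 0.21,
    "I have not played golf today": 0.002,
}

def most_likely_completion(candidates):
    """Return the candidate continuation with the highest learned probability."""
    return max(candidates, key=candidates.get)

print(most_likely_completion(completion_probs))  # "I have not eaten today"
```

In a real model the candidates are not whole sentences but individual tokens, chosen one at a time; chaining those token-level choices together is what produces fluent new text.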

A second technology, reinforcement learning from human feedback, is then added, with humans teaching the AI the difference between good completions and bad completions. For example, the completions “I have not eaten today”, “no one gave me food”, or alternative completions that say the same thing offensively, are all theoretically reasonable. However, a human may instruct the AI that the first is most likely to be acceptable to a varied human audience (OpenAI’s customers). Over time, the AI learns not just to complete sentences but which completions will be better received by human standards.
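The human-feedback step can be sketched the same way: human rankings of candidate completions train a reward score that steers which completion the model prefers. The completions and scores below are invented for illustration and greatly simplify the actual training process, which adjusts the model itself rather than looking up scores.

```python
# Illustrative sketch of reinforcement learning from human feedback (RLHF):
# humans rank candidate completions, and those rankings become a learned
# reward that biases the model toward acceptable phrasings. Scores invented.
human_feedback = {
    "I have not eaten today": 1.0,      # polite, widely acceptable
    "no one gave me food": 0.6,         # reasonable, but blunt
    "<offensive phrasing>": 0.0,        # same meaning, unacceptable tone
}

def pick_preferred(candidates, reward):
    """Choose the completion the learned reward scores highest."""
    return max(candidates, key=lambda c: reward.get(c, 0.0))

print(pick_preferred(list(human_feedback), human_feedback))
```

The design point is that all three completions are plausible continuations; the feedback step doesn't change what the model *can* say, only which option it is steered toward.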

The combination of these technologies has led to the powerful language AIs that we see today which seem uncannily human-like in their text responses.

How Close Is GenAI To AGI?

While AGI is still considered by most to be a goal, genAI has most certainly arrived. The key question is whether the evolution of genAI can lead to AGI. If we examine the two key tenets of AGI above, have they been reached?

  • The first, an AI that has all of human knowledge to back it up, is partway there. GenAI today is capable of consuming most public data into a single model (the AI version of a brain). However, it is better at language-oriented tasks than at mathematics and logical reasoning. A lively debate exists over whether genAI (and large language models in particular) is merely a “stochastic parrot”, able to convincingly mimic human language without understanding it. The AGI “scare” related to OpenAI’s CEO drama was due to a specific improvement (Project Q*) which engineers felt was a leap forward in AI doing the mathematical reasoning considered critical to AGI.
  • The second, the ability to create new AGIs, is also on its way. In November 2023, OpenAI introduced GPTs, a form of agent that can be created to serve as a personal assistant for any purpose. It is also now possible to create code with ChatGPT and automatically run that code to take some action in the real world. Imagine a GPT that can not just read your email and suggest responses, but will actually respond for you, possibly carrying on multiple email exchanges without your direct involvement. For many people, this is closer to AGI.

Where We Go From Here

The definitions of genAI, AGI, and their associated capabilities depend on who you ask. The only thing we can say for sure is that genAI is rapidly evolving, and every day brings a new capability that can be seen as a step toward AGI. The Musk/OpenAI lawsuit could force more details about AI technology development into the public domain, and it may even create a legally binding opinion on what AGI is. If nothing else, it will keep this critical topic in the public eye, which I believe will help us think harder about this technology and its impact on humanity.
