The wild claim at the center of Elon Musk's OpenAI lawsuit


Elon Musk started the week by posting on X about his difficulty setting up a new laptop running Windows. He ended it by filing a lawsuit accusing OpenAI of recklessly developing human-level AI and handing it over to Microsoft.

Musk's lawsuit names OpenAI and two of its executives, CEO Sam Altman and President Greg Brockman, both of whom worked with the rocket and car entrepreneur to establish the company in 2015. A large part of the case revolves around a bold and questionable claim: that OpenAI has developed so-called artificial general intelligence, or AGI, a term generally used to refer to machines that can match or outperform humans across a wide range of tasks.

The case claims that Altman and Brockman violated the original "founding agreement" made with Musk, which it says promised that OpenAI would develop AGI openly and "for the benefit of humanity." Musk's lawsuit alleges that the company's for-profit arm, spun out of OpenAI in 2019, created AGI without proper transparency and licensed it to Microsoft, which has invested billions in the company. It demands that OpenAI be forced to release its technology openly and be prevented from using it to financially benefit Microsoft, Altman, or Brockman.

"On information and belief, GPT-4 is an AGI algorithm," the lawsuit says, referring to the large language model behind OpenAI's ChatGPT. It cites studies finding that the system can earn passing grades on the Uniform Bar Exam and other standardized tests as evidence that it has surpassed some fundamental human abilities. "GPT-4 is not just capable of reasoning. It is better at reasoning than average humans," the lawsuit claims.

Although GPT-4 was heralded as a major breakthrough when it launched in March 2023, most AI experts do not see it as proof that AGI has been achieved. “GPT-4 is general, but it's clearly not AGI in the way people usually use the term,” says Oren Etzioni, professor emeritus at the University of Washington and an expert on AI.

"It would be seen as an absurd claim," says Christopher Manning, a Stanford University professor who specializes in AI and language, of the AGI assertions in Musk's lawsuit. Manning says there are differing views within the AI community about what constitutes AGI. Some experts set the bar lower, arguing that GPT-4's ability to perform a wide range of tasks justifies calling it AGI, while others prefer to reserve the term for algorithms that can beat most or all humans at just about anything. "Under this definition, I think we clearly do not have AGI and are actually still a long way from it," he says.

Limited success

GPT-4 won notice, and new customers, for OpenAI because it could answer a wide variety of questions, whereas older AI programs were typically dedicated to specific tasks like playing chess or tagging images. Musk's lawsuit points to a claim by Microsoft researchers, in a March 2023 paper, that "given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." Despite its impressive capabilities, GPT-4 still makes mistakes and has significant limitations in its ability to correctly parse complex queries.

"I think most of us researchers at the grassroots level think that large language models [like GPT-4] are a very significant tool for allowing humans to do more, but they are limited in ways that make them far from stand-alone intelligence," says Michael Jordan, a professor at UC Berkeley and an influential figure in the field of machine learning.

Jordan says he prefers to avoid the term AGI altogether because it is too vague. "I never found Elon Musk to have anything to say about AI that was very calibrated or based on research reality," he adds.