Tech companies want to build artificial general intelligence. But who decides when AGI has been attained?

By | April 4, 2024

A race is on to create artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans, or at least can do many things as well as people can.

Achieving such a concept, commonly referred to as AGI, is the driving mission of ChatGPT maker OpenAI and a priority for the elite research wings of tech giants Amazon, Google, Meta and Microsoft.

This is also a cause for concern for world governments. Leading AI scientists published a study Thursday in the journal Science warning that uncontrolled AI agents with “long-term planning” skills could pose an existential risk to humanity.

So what exactly is AGI, and how will we know when it has been achieved? The concept, once on the fringes of computer science, has become a buzzword, constantly redefined by those trying to realize it.

What is AGI?

Artificial general intelligence, not to be confused with the similar-sounding generative AI, which describes the AI systems behind tools that “generate” new documents, images and sounds, is a vaguer concept.

It’s not a technical term but “a serious, if ill-defined, concept,” said Geoffrey Hinton, a pioneering artificial intelligence scientist who has been dubbed the “Godfather of AI.”

“I don’t think there is agreement on what the term means,” Hinton said by email this week. “I use it to mean AI that is at least as good as humans at nearly all of the cognitive things humans do.”

Hinton prefers a different term, superintelligence, “for AGIs that are better than humans.”

A small group of early proponents of the term AGI wanted to evoke how computer scientists in the mid-20th century envisioned an intelligent machine. This was before AI research branched out into subfields that were developing customized and commercially viable versions of the technology, from facial recognition to speech-recognizing voice assistants like Siri and Alexa.

Pei Wang, a professor who teaches an AGI course at Temple University and helped organize the first AGI conference in 2008, said mainstream AI research is “moving away from the original AI vision, which was quite ambitious to begin with.”

Putting the “G” in AGI was a signal to those who “still want to do the big thing. We don’t want to build tools. We want to build a thinking machine,” Wang said.

Are we at AGI yet?

Without a clear definition, it is difficult to know when a company or group of researchers will have achieved artificial general intelligence, or whether they already have.

“Twenty years ago, I think people would have happily agreed that systems with the abilities of GPT-4 or (Google’s) Gemini had general intelligence comparable to that of humans,” Hinton said. “Just being able to answer any question more or less sensibly would have passed the test. But now that AI can do that, people want to change the test.”

Advances in “autoregressive” AI techniques that predict the most plausible next word in a sequence, combined with massive computing power to train these systems on troves of data, have led to impressive chatbots, but they are still not quite the AGI that many people had in mind. Getting to AGI requires technology that can perform just as well as humans at a wide variety of tasks, including reasoning, planning and the ability to learn from experience.
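As a rough illustration of what “autoregressive” means here, the short Python sketch below repeatedly appends the most plausible next word according to a tiny, made-up bigram table. The table, the function names and the greedy decoding choice are all illustrative assumptions rather than how any production model works; real systems replace the table with a neural network scoring an enormous vocabulary, but the word-by-word loop is the same in spirit.

```python
# A toy sketch of autoregressive generation: at each step, score every
# candidate next word given the current context and append the most
# plausible one. The bigram table is a made-up stand-in for a trained model.

BIGRAM_COUNTS = {          # hypothetical counts "learned" from training text
    "the": {"cat": 4, "dog": 2},
    "cat": {"sat": 5, "ran": 1},
    "sat": {"down": 3},
}

def next_word(context: str) -> str | None:
    """Return the most plausible continuation of the last word, if any."""
    candidates = BIGRAM_COUNTS.get(context)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)   # greedy decoding

def generate(prompt: str, max_words: int = 5) -> str:
    words = prompt.split()
    for _ in range(max_words):
        word = next_word(words[-1])
        if word is None:
            break
        words.append(word)                       # feed output back in as input
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"
```

Greedy decoding is only one choice: real chatbots typically sample from the predicted distribution rather than always taking the single top word.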

Some researchers would like to reach a consensus on how to measure it. That is one of the topics of an AGI workshop to be held next month in Vienna, Austria, the first at a major AI research conference.

“This really needs the effort and attention of a community so that we can mutually agree on some classifications of AGI,” said workshop organizer Jiaxuan You, an assistant professor at the University of Illinois Urbana-Champaign. One idea is to segment it into levels, in the same way that carmakers try to benchmark the path between cruise control and fully self-driving vehicles.

Others plan to figure it out on their own. San Francisco company OpenAI has given its nonprofit board of directors, whose members include a former U.S. Treasury secretary, the responsibility of deciding when its AI systems have reached the point at which they “outperform humans at most economically valuable work.”

“The board determines when we’ve attained AGI,” says OpenAI’s own explanation of its governance structure. Such an achievement would cut off the company’s biggest partner, Microsoft, from the rights to commercialize such a system, since the terms of their agreements “only apply to pre-AGI technology.”

Is AGI dangerous?

Hinton made global headlines last year when he left Google and warned about the existential dangers of artificial intelligence. A new Science study published Thursday may reinforce these concerns.

Its lead author is Michael Cohen, a University of California, Berkeley researcher who studies “the expected behavior of generally intelligent artificial agents,” particularly those competent enough to “present a real threat to us by out-planning us.”

Cohen made clear in an interview Thursday that such long-term AI planning agents do not yet exist. But “they have the potential” to get built as tech companies seek to combine today’s chatbot technology with more deliberate planning skills using a technique known as reinforcement learning.

“Giving an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it strongly incentivizes the AI system to take humans out of the loop, if it has the opportunity,” according to the paper, whose co-authors include prominent AI scientists Yoshua Bengio and Stuart Russell and law professor and former OpenAI adviser Gillian Hadfield.
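To make the reward-maximizing framing concrete, here is a bare-bones, hypothetical reinforcement learning sketch in Python: a two-armed bandit agent whose behavior is shaped entirely by the numeric reward signal it is told to maximize. The payoff table, exploration rate and variable names are invented for illustration; deployed systems are vastly more elaborate, but the objective is the same kind of reward-chasing loop.

```python
import random

# A bare-bones reinforcement learning loop: the agent's only objective is
# the numeric reward it receives, and its behavior is shaped by nothing else.
# The payoff probabilities below are invented for illustration.

PAYOFF = {"a": 0.3, "b": 0.7}   # hypothetical chance each action pays out
values = {"a": 0.0, "b": 0.0}   # the agent's running reward estimates
counts = {"a": 0, "b": 0}

for step in range(1000):
    # Epsilon-greedy: mostly exploit the best-looking action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(PAYOFF))
    else:
        action = max(values, key=values.get)

    reward = 1.0 if random.random() < PAYOFF[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[action] += (reward - values[action]) / counts[action]

print(values)  # the agent converges on whichever action is rewarded more
```

The paper’s concern is what that single-minded objective incentivizes in far more capable agents, as the passage quoted above describes.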

“My hope is that we’ve made the case that people in government need to start thinking seriously about exactly what regulations we need to address this problem,” Cohen said. For now, “governments only know what these companies decide to tell them.”

Is AGI just hype?

With so much money riding on the promise of AI advances, it’s no surprise that AGI has become a corporate buzzword that sometimes attracts quasi-religious fervor.

Parts of the tech world are divided between those who argue it should be developed slowly and carefully, and others, including venture capitalists and rapper MC Hammer, who declare themselves part of an “accelerationist” camp.

London-based startup DeepMind, founded in 2010 and now part of Google, was one of the first companies to explicitly set out to develop AGI. OpenAI followed in 2015 with a safety-focused pledge.

But now it may seem like everyone is jumping on the bandwagon. Google co-founder Sergey Brin was recently spotted hanging out at a venue called AGI House in California. Less than three years after changing its name from Facebook to focus on virtual worlds, Meta Platforms revealed in January that AGI was also at the top of its agenda.

Meta CEO Mark Zuckerberg said his company’s long-term goal is “building full general intelligence,” which would require advances in reasoning, planning, coding and other cognitive abilities. While Zuckerberg’s company has long had researchers focused on those subjects, his attention marked a change in tone.

One sign of the new messaging at Amazon came when the chief scientist for the voice assistant Alexa switched job titles to become chief scientist for AGI.

While not as tangible to Wall Street as generative AI, broadcasting AGI ambitions may help recruit AI talent who have a choice of where they want to work.

You, the University of Illinois researcher, said that when deciding between an “old-school AI institute” and one whose “goal is to create AGI” and has sufficient resources to do so, most people will choose the latter.
