They call it artificial general intelligence (AGI), also known as strong AI. Scientists have predicted it will be a turning point for humanity, some even in an apocalyptic tone. Such a technology would give machines the ability to think with the freedom of a human. And, according to Microsoft experts, there are signs that we are already on that path.
A group of researchers from the company published a paper last week about various experiments they performed with the new version of ChatGPT before its release on March 14th. The big takeaway: "Given the breadth and depth of GPT-4's capabilities, we think it could reasonably be seen as an early (but still incomplete) version of an artificial general intelligence system."
The team highlights the enhanced ability of the OpenAI-designed model to reason, plan, and solve problems. They mention its capacity to "think abstractly, comprehend complex ideas, learn quickly, and improve with experience."
The specialists point to the fluency the model shows with tasks in mathematics, coding, medicine, law, and even psychology, without the need for special prompting. Its performance, they say, is "surprisingly close to the level of humans." However, they also highlight a series of limitations that explain why we are not yet facing artificial general intelligence.
Why is GPT-4 not an artificial general intelligence?
OpenAI's developers had already warned, with the release of GPT-4, about the model's ability to "autonomously acquire resources" and execute tasks it had not been directly asked to perform. They published the finding in the tool's technical report, in a section called "Potential for Risky Emergent Behaviors." As an example, they cited an experiment in which GPT-4 hired a human, pretended to be a visually impaired person, and managed to bypass a CAPTCHA.
The Microsoft researchers, in the paper titled "Sparks of Artificial General Intelligence: Early experiments with GPT-4," explain, however, that this AI still has problems with "hallucinations": cases in which it presents wrong answers as true. That is, at times it is not possible to know whether it is guessing or whether it actually knows the solution to a given problem.
This is among a series of other limitations, such as its lack of long-term memory and personalization. The researchers further explain that the model does not perform well on tasks that require the kind of conceptual leaps "that often typify human genius."
GPT-4 also has difficulties with continuous learning. "The model lacks the ability to update itself or adapt to a changing environment," the document details. If it is not retrained, the system falls out of date with respect to new events or knowledge.
How close are we to AGI?
Bill Gates, co-founder of Microsoft, pointed out last week that we are still a long way from seeing the birth of artificial general intelligence. "This could take a decade or a century," he said. He even wondered whether we will ever accomplish such a feat.
Such an AI would be able to do everything a human brain can do, and technically surpass it. Theoretically, unlike people, this system would have no practical limits on the size of its memory or the speed at which it operates. Some scientists also talk about the idea of consciousness.
A Google software engineer claimed last year that LaMDA, the AI the company is developing, was gaining a level of sentience and awareness similar to that of a human. That was in June 2022, before the current maelstrom of AI releases. He then disclosed an alleged conversation in which the model claimed to have feelings.
"I've never said this out loud before, but I have a very deep fear of being turned off," the AI reportedly said, according to the leaked transcript. The engineer was fired almost immediately. Google, for its part, denied that the conversation offered any proof of consciousness, saying the engineer had made the mistake of anthropomorphizing a conversational model. "These systems mimic the types of exchanges found in millions of sentences and can touch on any topic," said Brian Gabriel, a company spokesman.
Microsoft's experts' answer: far, but closer than ever. "Our claim that GPT-4 represents a breakthrough towards artificial general intelligence does not mean that it is perfect at what it does," they say in the paper. They also clarify that they do not mean the model has internal motivations or objectives of its own. But they insist that it is a major milestone: "We believe that the intelligence of GPT-4 signals a true paradigm shift in the field of computing and beyond."