Although artificial intelligence has become one of the most important technologies today, it still has a long way to go. Chatbots like OpenAI's ChatGPT are susceptible to generating false information due to so-called hallucinations.
Hallucinations are responses produced by the system that are written coherently but contain erroneous or biased data.
Pablo Wahnon, a technology journalist and editor at Forbes, conducted a test of the ChatGPT AI based on his in-depth knowledge of Albert Einstein, the great scientist of the 20th century.
Wahnon focused on an episode from Einstein's life: his trip to Argentina in 1925.
Einstein is one of the most famous figures in history, and countless texts have been written about him online, not to mention in books. But was ChatGPT ready for this challenge?
ChatGPT's Artificial Intelligence hallucinations about Albert Einstein
The editor asked the AI about the funniest anecdote involving Georg Friedrich Nicolai and Albert Einstein. Nicolai was a German physician and physiologist who, along with Einstein and other intellectuals, opposed his country's participation in World War I by signing a manifesto for peace.
Because of his position, he had to flee Germany and ended up in South America, where he died in Chile at the age of 90.
While Wahnon was waiting for ChatGPT's response, he was met with one of its famous hallucinations. The AI spoke about the friendship between Einstein and Nicolai, but gave false information about the relationship, confusing Einstein with his son Eduard.
"Apparently, Nicolai tried to convince Einstein to have brain surgery to treat his depression. However, Einstein rejected the idea and decided to follow a more conservative treatment," was part of the chatbot's response.
The journalist let ChatGPT know that Eduard, not Albert, was the subject of that anecdote.
"That is correct, I apologize for the mistake in my previous answer," the AI replied. However, it then produced more false information: according to the chatbot, Nicolai treated Eduard in Switzerland in 1930. In reality, as Wahnon explains, the doctor was already in South America at that time.
After apologizing again, it issued yet another piece of incorrect information.
The answer the Forbes journalist was looking for was the famous manifesto against the war, which can easily be found through search engines. But without access to them, ChatGPT spread false information.
The full text of the questions and answers can be found here.
Beware of blindly trusting AI like ChatGPT
Wahnon came to a conclusion: "The fact that the answers are so precise, with examples and comparisons, makes them be taken as true. Many of them would even be accepted as true by an audience with a lot of knowledge of both science and Einstein."
"Generative Artificial Intelligence is capable of confusing experts, even more so if they don't use Google," the journalist stressed.
Hence, technologies such as Google Bard, which has direct access to the search engine and offers references for its information, can be much more accurate than ChatGPT.