Sam Altman has finally been confronted about ChatGPT's biggest problem, and he dodged the debate with rhetoric. Anyone who has used OpenAI's Artificial Intelligence (AI) with due care by now will have encountered a common problem in its responses: "hallucinations."
These are responses generated by the platform that sound absolutely convincing, well articulated, and even justified with supposed data, arguments, and sources. But on closer investigation, it turns out the answer is simply an invention of the AI.
ChatGPT in its most recent version, along with more robust variations such as Bing Chat, can even go so far as to include links to the supposed sources on which the generated responses were based.
But when the platform is confronted about the inaccuracy of its information, the chatbot will most likely end up shutting the user out and cutting off the dialogue entirely, which has gradually become a problem as the platform has grown in popularity.
Now the CEO of OpenAI has been directly confronted about this phenomenon, and we finally have an answer, although not a very convincing one. On the contrary, it reminds us of exactly how his AI sidesteps the facts.
Sam Altman justifies ChatGPT’s “hallucinations”: he prefers to see them as a distinctive feature rather than a bug
The occasion was the Dreamforce 2023 conference series, where a number of figures from the Information Technology (IT) industry met to discuss current issues.
The event started strong with a talk by Marc Benioff, CEO of Salesforce and host of the event, who pointed out directly that the term "hallucinations," used to describe an AI's false but convincing responses, seemed more like a euphemism to avoid calling them "lies."
Later, Benioff had a face-to-face conversation with Sam Altman, CEO of OpenAI, the company responsible for ChatGPT, where this phenomenon of inaccuracies occurs most frequently.
The curious thing is that Altman ended up addressing this point of controversy with an approach that seeks to reframe the problem:
“Much of the value of these systems is strongly related to the fact that they blow your mind. They are more of a feature of how they work than a bug or failure. If you want to look something up in a database, there are already more useful tools for that.”
Altman's response is curious, particularly given that he tries to sell ChatGPT as a cutting-edge tool that would supposedly stand in for those very databases that do hold accurate information.