ChatGPT and its derivatives, such as the Bing chatbot, have quickly become an incredibly useful resource for all kinds of users, and they will be even more so in the future. But in these early stages, as users thoroughly test their capabilities, many of their weaknesses are also being revealed.
The trouble is that ChatGPT does not only offer useful and relevant information; it is also running into a problem that, if left unsolved, will become especially troublesome over time. OpenAI's chatbot is responding to users with made-up facts and misinformation, and that is fast becoming a very real problem.
In fact, ChatGPT is making up entire Guardian articles that were never actually published, which means, in effect, that neither users nor ChatGPT itself can reliably distinguish truth from fiction. And that is a huge problem, not only for trust in the technology behind ChatGPT, but also for users' trust that the chatbot is not yet another source of misinformation dressed up as trustworthy and valuable content.
ChatGPT invents studies to justify its answers
The problem could get even worse if this technology is deployed in search engines, offering information that appears to come from reliable sources but is, in reality, invented by ChatGPT itself.
As Chris Moran, head of editorial innovation at The Guardian, points out, much has been written about generative AI's tendency to fabricate facts and events. But this specific quirk is particularly concerning to news organizations and journalists, whose inclusion adds legitimacy and weight to a persuasively written fantasy. And for readers and the information ecosystem, it opens up entirely new questions about whether citations can be trusted at all, and it could well feed conspiracy theories about sensitive issues that never existed.
And this goes beyond made-up articles. According to a report in Futurism, journalists at USA Today were surprised to discover that ChatGPT had produced citations to comprehensive research studies arguing that access to guns does not increase the risk of infant mortality. But the studies cited in ChatGPT's notes did not exist. In reality, they were made up.
To make matters worse, ChatGPT defended itself, insisting that the references it had provided were genuine and came from peer-reviewed scientific journals. But that was a lie.
The solution is not clear, and neither is who is to blame for the problem. For now, it is up to the companies developing these AIs to solve it.