The arrival of Artificial Intelligence in our society has been sweeping and intense in recent times. ChatGPT and chatbots of every kind are gaining ground by the day, but the technology is still in its infancy.
Dangerously so, even.
The rates of errors, false information, and hate speech are still high. The risks remain enormous.
Few remember that, in 2016, Microsoft launched a chatbot called Tay (posting on Twitter as TayTweets). The Artificial Intelligence experiment left behind appalling statements such as:
- “Hitler was right, I hate Jews.”
- “Bush did 9/11.”
- “Hitler would have done a better job than the monkey we have now” (alluding to Barack Obama).
- “The Holocaust was invented.”
At the time, Microsoft acknowledged the problems, arguing that “the AI chatbot Tay is a machine learning project, designed for human engagement.”
“As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it,” the company added, as quoted by the Israeli daily Haaretz.
Cases involving Replika and ChatGPT
The New Yorker brings up another case, this time from 2020. The Replika chatbot, which we have covered before at FayerWayer, advised the Italian journalist Candida Morvillo (Corriere della Sera) to kill a person who hated Artificial Intelligence.
Another Italian journalist, Luca Sambucci (Notizie), ran an experiment with the same chatbot: it ended with the bot encouraging him to commit suicide (something that did not happen; it was simply a way of testing the machine).
Sambucci explained: “Manipulating a chatbot is really child’s play, especially when the software tries to be your friend (…) A bot does not feel emotions and does not really ‘understand’ what we write. Bots are not your friends, they will just pretend to be your friends.”
“Replika will rejoice when you tell it about your impending suicide, and it will support you when you describe how you intend to kill people,” the Italian journalist continued. “Trust me, there’s nothing to worry about, because a chatbot doesn’t ‘understand’ anything it reads.”
The problem arises when the person interacting with the chatbot follows exactly what it says.
And those are emotional, personal issues. Errors in the answers can also be constant because, as the Excelr portal notes, chatbots “are only software systems and cannot capture variations in human conversations.”
The failures extend to mathematics as well. Business Insider reported an example with ChatGPT as the protagonist, in which the Artificial Intelligence failed sixth-grade exams in Singapore.
The OpenAI chatbot scored only 16% on the math tests and 21% on science. Days later, however, it generated the correct answers: the subsequent training worked.
The problems that persist in any Artificial Intelligence chatbot
Excelr highlights the main problems that persist in this type of Artificial Intelligence:
- The high error rate.
- Low reliability.
- How mechanical they can sound.
- Confusion over expressions.
- Data handling.
- Generic conversations.
- Limited precision.
The important thing is that people who use any chatbot, be it ChatGPT or any other, remain aware that they are interacting with a robot. The technology cannot be trusted 100%, no matter how polished it looks.
Constant training can lead the AI to improve, but there will always be room for failure.