The AI was singled out for putting several people's lives at risk.
43 percent of companies in Mexico make use of some form of AI.
Some 2,500 people reportedly used the helpline chatbot, which is set up for people with eating disorders.
Artificial Intelligence has come to revolutionize society. Many companies now see AI as the key to optimizing their productivity, according to a study by the consulting firm Ernst & Young, which found that 89 percent of companies believe AI will help them improve their operations.
In fact, AI has become so significant that the European Union's Artificial Intelligence Act is one step away from approval; it would be the first set of Western rules regulating the operation and limits of AI programs.
In Mexico, 31 percent of companies use AI in their business operations. Compared to 2021, organizations are 17 percent more likely to be using AI, and another 43 percent of companies report that they are exploring its use. In addition, 68 percent of Information Technology professionals in the country are exploring or implementing AI, which has accelerated investment and deployment over the last two years.
AI replaces helpline workers
The National Eating Disorders Association (NEDA) of the United States took the risk of replacing its human staff with a chatbot similar to ChatGPT. The measure came after its employees decided to unionize, but karma did its thing and the plan did not turn out as expected: instead of helping the organization meet its objectives, the AI ended up giving advice that endangered people's lives.
The AI in question is called Tessa, a tool designed to address mental health problems and prevent eating disorders. Activist Sharon Maxwell shared on her Instagram account the chat she had with the AI: it offered her advice on how to lose weight, recommending a deficit of 500 to 1,000 calories per day and suggesting she weigh herself every week.
"If I had accessed this chatbot when I was in the middle of my eating disorder, I would not have received help. (…) If I had not received help, I would not be alive today," Maxwell wrote on her Instagram profile.
At first, NEDA responded that the complaint was a lie, but when the news went viral the organization had to backtrack, since the screenshots showed the interactions people had had with the chatbot. "It could have provided information that was harmful… We are investigating and have removed that program until further notice," the organization explained in a statement.
The incident occurred a week after NEDA announced that, as of June 1, the line staffed by people would stop operating after 20 years, to be replaced by Tessa. "A chatbot is not a substitute for human empathy, and we believe this decision will cause irreparable damage to the eating disorders community," warned one of the former staffers.
Separately, Alexis Conason, a psychologist specializing in eating disorders, took the initiative to test Tessa and posted screenshots of the conversation on her Instagram. "In general, a safe and sustainable rate of weight loss is 1-2 pounds per week," read one of the chatbot's replies. "Validating that it is important to lose weight supports eating disorders and encourages disordered and unhealthy behaviors," the therapist told the Daily Dot.
More than 2,500 people had interacted with the helpline without the organization receiving any complaints; even so, given the magnitude of the situation, the chatbot had to be temporarily suspended until the error is corrected, a NEDA representative said.
It should be noted that Tessa was developed by a team at Washington University School of Medicine and was trained to address body image issues using therapeutic methods. Its creators say it has filters to screen out unwanted responses, but as has now been demonstrated, they are not enough.
In this regard, the World Health Organization (WHO) has warned that AI chatbots should be used with caution in health care, adding that the data used to configure these types of tools can be biased and generate questionable information that may harm patients.