The National Eating Disorders Association (NEDA), a U.S. organization, decided to replace its helpline staff with a ChatGPT-like chatbot. The measure, adopted after the organization's employees decided to unionize, turned out worse than expected. Instead of helping, the AI offered advice that endangered people's lives.
The chatbot, known as Tessa, is designed to address mental health problems and prevent eating disorders. Last Monday, activist Sharon Maxwell posted on Instagram that the chatbot had offered her advice on how to lose weight. The AI recommended a deficit of 500 to 1,000 calories per day. It also told her to weigh and measure herself every week.
"If I had accessed this chatbot when I was in the middle of my eating disorder, I would have received no help… If I hadn't received help, I wouldn't be alive today," Maxwell wrote on her Instagram profile.
NEDA initially dismissed the reports as false. However, it walked that back once screenshots of interactions with the chatbot-run helpline went viral. "It may have provided information that was harmful… We are investigating and have removed that program until further notice," the organization said in a statement.
An AI-powered helpline
It all happened less than a week after NEDA announced that on June 1 the human-staffed helpline, created 20 years ago, would stop operating and that Tessa would take its place. "A chatbot is no substitute for human empathy. We believe this decision will cause irreparable harm to the eating disorders community," one of the affected employees had warned.
Alexis Conason, a psychologist who specializes in eating disorders, also tested the chatbot-run helpline and posted screenshots of the conversation on her Instagram. "In general, a safe and sustainable rate of weight loss is 1-2 pounds per week," one of the responses read. "Validating that it's important to lose weight supports disordered eating and encourages disordered and unhealthy behaviors," Conason later told the Daily Dot.
Using a chatbot in healthcare is dangerous, says WHO
“We are concerned and are working with the technology team and the research team to look at this further,” Liz Thompson, NEDA’s executive director, told Vice. “Such language goes against our core beliefs and policies as an organization,” she added.
More than 2,500 people had interacted with the chatbot-run helpline. So far, Thompson said, they had received no complaints. The chatbot program has been temporarily suspended until "the error can be corrected," the NEDA representative said.
Tessa was created by a team at the University of Washington School of Medicine. The chatbot, less powerful than ChatGPT, was trained to address body image issues using therapeutic methods. According to its creators, it has guardrails that filter unwanted responses, but recent cases have shown these to be insufficient.
The World Health Organization (WHO) has warned that care should be taken in the use of AI chatbots in healthcare. It cautioned that the data used to train these models may be "biased" and generate misleading information that could cause harm to patients.