According to Statista, 42 percent of Hispanic adults over 65 living in the United States reported needing mental health care in 2021.
The market for women's mental health supplements is projected to reach US$161 billion by 2027.
According to Statista, about 3.59 percent of the world’s population had depression in 2019.
KoKo is a non-profit mental health service that connects people seeking support with volunteer counselors through platforms such as Telegram and Discord, where the counselors later respond to them. This time, however, it was an AI (Artificial Intelligence) chatbot that responded to the company's users.
A chatbot experiment was conducted in which AI helped respond to users. This differed from KoKo's usual messaging protocol: previously, a person seeking mental health advice would chat with a bot, the bot would forward their message to a volunteer counselor, and the counselor would write the reply. During the experiment, GPT-3 co-created the responses instead, as illustrated in the sketch below.
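To make the difference between the two flows concrete, here is a minimal Python sketch. All of the names below (Message, gpt3_draft_reply, experimental_flow, and so on) are hypothetical illustrations of the routing described above, not KoKo's actual code or API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    """A message from someone seeking support."""
    user_id: str
    text: str

def gpt3_draft_reply(message: Message) -> str:
    """Stand-in for a GPT-3 call; a real system would query a
    language-model API here instead of returning canned text."""
    return f"I'm sorry to hear that {message.text!r} sounds hard."

def original_flow(message: Message, counselor_reply: Callable[[Message], str]) -> str:
    """Pre-experiment routing: the bot only forwards the message,
    and a human volunteer writes the entire reply."""
    return counselor_reply(message)

def experimental_flow(message: Message, counselor_review: Callable[[str], str]) -> str:
    """Experimental routing: GPT-3 drafts the reply, and a human
    counselor reviews (and may edit) it before it is sent."""
    draft = gpt3_draft_reply(message)
    return counselor_review(draft)

if __name__ == "__main__":
    msg = Message(user_id="u123", text="feeling overwhelmed lately")
    # The counselor lightly edits the machine draft before sending it on.
    print(experimental_flow(msg, lambda draft: draft + " You're not alone."))
```

The key point the sketch captures is that in both flows a human sits at the end of the pipeline; what changed in the experiment is who authors the first draft.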
Robert Morris, co-founder of KoKo, said the experiment allowed them to help about 4,000 people. His initial tweets seemed to imply that users had not given informed consent, for example: “Once people knew the messages were co-created by a machine, it didn’t work. Simulated empathy feels weird, empty.”
We provided mental health support to about 4,000 people — using GPT-3. Here’s what happened 👇
—Rob Morris (@RobertRMorris) January 6, 2023
He also added that although messages composed by AI were rated significantly higher than those written by humans, users were uncomfortable with the absence of genuine compassion and empathy in a machine's words.
“It is also possible that genuine empathy is something that we humans can value as uniquely ours. It may be the one thing we do that AI can never replace,” Morris tweeted.
Twitter users responded to the thread by criticizing the experiment as unethical. They argued that it inherently breaks the social contract between a therapist and a client, violating trust and making those seeking help feel “dehumanized.”
Why can’t you tech people be normal?
—Katelyn Burns (@transcribe) January 7, 2023
Morris later clarified that his initial tweet referred to himself and his team, not to the users. He also argued that the feature was opt-in and that users were aware of it during the few days it was live.
However, it is not yet clear what information users were given before participating, or whether they were fully informed of the potential harms and benefits. The absence of such consent would make the experiment illegal in a medical context, but online mental health services remain in a legal gray area because they operate outside formal medical settings. The experiment did not receive Institutional Review Board (IRB) approval, meaning it was conducted without formal ethical oversight.
Some important clarification on my recent tweet thread:
We were not pairing people up to chat with GPT-3, without their knowledge. (in retrospect, I could have worded my first tweet to better reflect this).
—Rob Morris (@RobertRMorris) January 7, 2023