An artificial intelligence that helps with depression? Microsoft thinks so. The company has just filed a patent application for an app aimed at offering therapeutic treatment, enhanced by the fashionable technology.
Microsoft has something like ChatGPT in mind. "Artificial intelligence (AI) chatbots are becoming increasingly popular and are being applied in an increasing number of scenarios," the company says in the document filed in the US on November 7. It explains that it wants its artificial therapist to provide "emotional care" through simulated conversation, in which users can communicate via text, voice, and images.
Microsoft details that the application will have a chat window, a processing module, and a response database. The system will be able to administer explicit psychological tests: it will ask questions, monitor the user's responses, and use them to learn about the "patient."
The artificial therapist will evaluate users with a "scoring algorithm predefined by psychologists or experts in psychological domains." In some cases, the chatbot will be able to suggest ways to address certain issues.
Microsoft gives the example of a conversation in which a person says they "feel bad." The artificial intelligence asks them why. When the hypothetical user explains that family problems are leaving them feeling tired, the chatbot recommends going out "for a 30-minute run to cool off."
Artificial intelligence as a therapist: good or bad idea?
This AI-powered therapist would build a "memory" of the user, based not only on their responses but also on "signals" extracted from images. Another example shows a conversation in which the supposed patient shares photographs from a trip.
Although it may seem crazy to some, Microsoft is not the first to come up with something like this. Apple, for example, has also considered developing an AI-based health and wellness service capable of detecting the user's emotions. The project is known internally as Quartz. The platform would encourage users to improve their eating habits and exercise, and would offer recommendations for better sleep.
A similar service has already been tested in the US to staff a helpline, and it was a total disaster. The National Eating Disorders Association (NEDA) decided to replace its helpline staff with a chatbot. The artificial intelligence began giving recommendations on how to lose weight, along with other dangerous advice.
Following the rise of ChatGPT, the World Health Organization (WHO) warned last May that caution was needed when using artificial intelligence chatbots in healthcare. The WHO explained that the data used to train these models can be "biased" and generate misleading information that could harm patients.