The United States Federal Trade Commission (FTC) is not giving up in its battle against Big Tech. After announcing that it would appeal the ruling against it in the Activision Blizzard acquisition case, the regulator has set its sights on OpenAI, opening an investigation to determine whether ChatGPT has published harmful information.
A common problem with ChatGPT is its tendency to fabricate facts and back them up with studies that don't exist. OpenAI knows this and has long displayed a warning stating that "the system may generate incorrect or misleading information and produce offensive or biased content." The latter has caught the attention of the FTC, which is investigating whether ChatGPT has harmed users by publishing false information.
According to a report from The Wall Street Journal, the FTC sent OpenAI a civil subpoena informing the company of an ongoing investigation. The regulator wants to know whether the company engaged in "unfair or deceptive privacy or data security practices," including those that affect consumers. To that end, it has drawn up a lengthy questionnaire that the company must answer as soon as possible.
The document asks OpenAI to detail how its artificial intelligence models and derived products work. The FTC wants a full description of ChatGPT, DALL-E, their plugins, and their interactions with third parties. It also asks the company led by Sam Altman to explain where the data used to train GPT-4 came from, along with the procedures for refining the model and combating misinformation.
One question in the questionnaire asks OpenAI to describe the extent to which it has taken steps to address or mitigate the risk that products like ChatGPT may "generate false, misleading, or derogatory statements about real people."
ChatGPT has been in the FTC’s sights for months
The FTC also asked OpenAI to share its logs of complaints about ChatGPT, particularly those accusing the chatbot of publishing false, derogatory, or harmful statements about a person. In addition, the regulator requested information about the security breach that exposed the personal information of the chatbot's users. The American agency is reportedly pursuing multiple avenues to address privacy and security concerns surrounding OpenAI's products.
The new investigation appears to stem from a complaint filed in March by the Center for AI and Digital Policy (CAIDP). In it, GPT-4 is accused of being a deceptive product that poses a risk to privacy and public safety. According to the CAIDP, the model that powers ChatGPT violates federal consumer protection law because it has not undergone any independent evaluation.
"The FTC has stated that the use of AI must be transparent, explainable, fair, and empirically sound while encouraging accountability. OpenAI's GPT-4 product does not meet any of these requirements," said Marc Rotenberg, CAIDP's general counsel. "Artificial intelligence systems have greater potential to reinforce entire ideologies, worldviews, truths and falsehoods, to cement or lock them away, preventing further contestation, reflection and improvement," he stated.
The nonprofit organization invokes a section of the FTC Act that prohibits unfair or deceptive acts and practices. Apps like ChatGPT and DALL-E are commercial products, so consumer protection laws would apply. If it determines that OpenAI violated the law, the Commission could impose a fine or block future versions of the popular chatbot.