OpenAI could be forced to pause the release of future versions of ChatGPT and other AI models. A complaint from the nonprofit Center for AI and Digital Policy (CAIDP) asks the Federal Trade Commission (FTC) to open an investigation into the creators of the popular chatbot. In the document, the group accuses GPT-4 of being a deceptive product that poses a risk to privacy and public safety.
The CAIDP notes that OpenAI did not carry out any independent evaluation before the rollout of GPT-4, the latest model powering ChatGPT. In the group's view, the AI violates federal consumer protection law, which is why it has asked the FTC to open an investigation and suspend the release of further commercial products until OpenAI meets the guidelines set by the federal agency.
The Federal Trade Commission has stated that the use of AI must be “transparent, explainable, fair, and empirically sound while encouraging accountability.” The complaint argues that GPT-4 meets none of these requirements, that it is time for the FTC to act, and that there should be independent monitoring and evaluation of commercial AI products offered in the United States.
Marc Rotenberg, president and general counsel of the CAIDP, said that the FTC has a responsibility to investigate and prohibit unfair and deceptive business practices. “We specifically ask the FTC to determine whether the company has complied with the guidance that the federal agency has issued,” Rotenberg said. The group describes DALL-E, GPT-4, ChatGPT, and other AI-based services offered by OpenAI, as well as their plugins, as commercial products.
The FTC could stop the development of ChatGPT
The CAIDP has mounted a campaign against OpenAI's products because it considers them a risk to people's safety. In its complaint, the group says the company launched GPT-4 on the market while aware of the dangers its use poses for disinformation campaigns, arms proliferation, and cybersecurity operations.
The complaint warns that AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and falsehoods, cementing or locking them in and preventing further contestation, reflection, and improvement.
Merve Hickok, CAIDP’s director of research, said that while the group recognizes the opportunities and supports development, the absence of safeguards to limit bias and deception in these models puts both consumers and companies at risk. In the complaint filed with the FTC, the group cites a Europol report detailing how ChatGPT could facilitate crime.
The complaint invokes Section 5 of the FTC Act, which prohibits unfair and deceptive acts and practices and empowers the Commission to enforce the law's prohibitions. Following an investigation, if there is reason to believe the law has been violated, the Commission may initiate an enforcement action through an administrative or judicial process.
The CAIDP is not just seeking an independent assessment of all LLMs prior to commercial deployment. It also asks the FTC to write a set of rules for generative AI products such as ChatGPT or Midjourney. If the federal agency acts on the complaint, companies will have to submit their products for evaluation and guarantee that they meet those standards.