OpenAI, Sam Altman’s company, announced that its artificial intelligence ChatGPT will be used for content moderation; the company even claims it “will be better than human beings.”
A company statement, signed by Lilian Weng, Vik Goel and Andrea Vallone, explains that the GPT-4 version of ChatGPT will power a content moderation system that interprets the rules and nuances of platform policies.
“We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and alleviate the mental load of a large number of human moderators,” OpenAI states.
“Anyone with access to the OpenAI API can implement this approach to create their own AI-assisted moderation system,” the company adds.
Content moderation on social networks and the contribution of OpenAI and ChatGPT
In recent years, networks such as Facebook, Twitter and Instagram have been criticized for a lack of content moderation that allows insults and other inappropriate or toxic material to circulate.
Although moderation systems have improved, they now face problems of a different order: the exhaustion of human moderators and the excesses of automated decision-making.
ChatGPT, or more specifically GPT-4, aims to work more directly and automatically, while retaining the sensitivity in decision-making that a human moderator can bring.
“Unlike constitutional AI,” OpenAI says in its statement, “which is primarily based on the model’s own internalized judgment of what is safe and what is not, our approach makes platform-specific content policy iteration much faster and less labor intensive.”
“We encourage Trust and Safety practitioners to try this process for content moderation, since anyone with access to the OpenAI API can implement the same experiments today,” Sam Altman’s company emphasizes.
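The approach the statement describes — sending a platform’s policy and a piece of content to GPT-4 and asking it to return a moderation label — can be sketched with the OpenAI API. This is a minimal illustration, not OpenAI’s published implementation: the policy text, label names, and helper functions below are assumptions made for the example.

```python
# Sketch of GPT-4-assisted moderation via the OpenAI API.
# The policy text, labels, and function names are illustrative assumptions.

EXAMPLE_POLICY = (
    "Content is 'violating' if it contains insults, harassment, "
    "or other toxic material; otherwise it is 'allowed'."
)

def build_moderation_messages(policy: str, content: str) -> list[dict]:
    """Build the chat messages asking GPT-4 to apply a platform policy."""
    return [
        {
            "role": "system",
            "content": (
                "You are a content moderator. Apply this policy:\n"
                f"{policy}\n"
                "Reply with exactly one label: 'violating' or 'allowed'."
            ),
        },
        {"role": "user", "content": content},
    ]

def moderate(content: str, policy: str = EXAMPLE_POLICY) -> str:
    """Ask GPT-4 for a moderation decision (requires an OPENAI_API_KEY)."""
    # Imported here so the prompt-building helper works without the SDK.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=build_moderation_messages(policy, content),
        temperature=0,  # deterministic labels for consistent moderation
    )
    return response.choices[0].message.content.strip()
```

In practice, a platform would iterate on the policy text and spot-check GPT-4’s labels against human moderators’ decisions — the faster policy-iteration loop the statement describes.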