One of the most common drawbacks of chatbots and similar artificial intelligence models is how difficult it is for humans to detect whether, for example, an article was created by an AI or written by a person. OpenAI may have a solution. The company, co-founded by Elon Musk, has launched a tool that can detect whether a text was generated by ChatGPT or other similar AIs. It admits, however, that the tool “is not totally reliable”.
The new OpenAI tool was trained on text from 34 language models that work similarly to ChatGPT. Its mechanics are simple: the user pastes the text into a publicly available classifier on the company’s website, clicks the Submit button, and waits for the AI to estimate whether the writing was generated by artificial intelligence or created by a human.
The new OpenAI text classifier, however, is not completely reliable. The company says the tool correctly identifies only 26% of AI-written text, and 9% of the time it produces false positives; that is, it incorrectly labels human-written text as AI-written.
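To see what those two rates mean in practice, here is a minimal sketch of the precision such a detector would achieve. The 26% true-positive and 9% false-positive rates come from OpenAI’s figures above; the 50/50 mix of AI and human text is our own assumption for illustration.

```python
# Illustrative only: what the reported rates imply for precision.
# Assumes a hypothetical evaluation set that is half AI-written and
# half human-written; that 50/50 mix is an assumption, not OpenAI data.

true_positive_rate = 0.26   # AI text correctly flagged as AI-written
false_positive_rate = 0.09  # human text wrongly flagged as AI-written
ai_share = 0.5              # assumed fraction of AI-written samples

flagged_ai = true_positive_rate * ai_share
flagged_human = false_positive_rate * (1 - ai_share)
precision = flagged_ai / (flagged_ai + flagged_human)

print(f"Precision of an 'AI-written' flag: {precision:.1%}")  # ≈ 74.3%
```

In other words, even with these modest detection rates, a text the tool does flag as AI-written would be AI-written roughly three times out of four under this assumed mix.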
The platform, however, offers graded results so the user can gauge how precise the identification is. For example, if the AI cannot fully tell whether the content was written by a model like ChatGPT or by a human, it displays the result as unclear or possibly AI-generated. If, on the other hand, it believes the text was not generated by artificial intelligence, it labels it as “very unlikely” to be AI-written.
The new tool to detect whether a text has been written by ChatGPT or another AI is very limited
The tool also has some limitations. It can only reliably assess whether a text was written by ChatGPT or a similar artificial intelligence if the text is at least 1,000 characters long and written in English. In addition, if someone edits a text generated by an AI, the classifier will have difficulty verifying whether that content was really produced by an artificial intelligence.
It also cannot make out very predictable text, such as a list of the countries of the world in alphabetical order, where the result would be exactly the same whether written by an AI or by a human.
In any case, the new tool can be useful for debunking false claims that a text was written by a human when it was actually generated by artificial intelligence. This applies, above all, in the classroom, and it is certainly a response to the growing trend of students using ChatGPT and similar models for homework and other assignments. The new classifier, however, may also have “an impact on journalists, disinformation researchers, and other groups,” says OpenAI.