One of the most common drawbacks of chatbots and similar artificial intelligence models is how difficult it is for humans to detect whether, for example, an article was generated by an AI or written by a person. OpenAI may have a solution. The company co-founded by Elon Musk has launched a tool that tries to detect whether a text was generated by ChatGPT or a similar AI, although OpenAI itself admits that it is “not fully reliable”.
The new OpenAI tool, in particular, covers 34 language models that work in a similar way to ChatGPT. Its mechanics are simple: the user pastes the text into a publicly available classifier on the company’s website, clicks the Submit button, and waits for the tool to estimate whether the writing was generated by an artificial intelligence or written by a human.
The new OpenAI text classifier, however, is far from infallible. The company states that the tool “correctly identifies 26% of AI-written text.” It also produces false positives 9% of the time; that is, it incorrectly labels human-written text as AI-written.
The platform, however, offers graded results so the user can gauge how confident the classifier is about a given text. For example, if the AI cannot clearly tell whether the content was written by a model like ChatGPT or by a human, it displays the result as “unclear” or “possibly.” If, on the other hand, it is confident the text was generated by artificial intelligence, it labels it as “likely” AI-generated.
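The graded labels described above amount to bucketing a probability score into a handful of verdicts. A minimal sketch of that idea in Python follows; the function name and the threshold values are assumptions for illustration only, since OpenAI has not published the exact cutoffs here:

```python
# Illustrative sketch: bucket a classifier's "probability the text is
# AI-generated" score into display labels like those the OpenAI tool shows.
# The thresholds below are invented for illustration, not OpenAI's real ones.

def label_ai_probability(p: float) -> str:
    """Map a probability in [0.0, 1.0] to a human-readable verdict."""
    if p < 0.1:
        return "very unlikely"
    if p < 0.45:
        return "unlikely"
    if p < 0.65:
        return "unclear if it is"
    if p < 0.9:
        return "possibly"
    return "likely"

# A borderline score lands in the "unclear" bucket.
print(label_ai_probability(0.5))  # prints "unclear if it is"
```

The point of the graded output is that a single yes/no answer would overstate the classifier's 26% detection rate; coarse confidence buckets communicate uncertainty to the user instead.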
The new tool to detect whether a text was written by ChatGPT or another AI is very limited
The tool also has notable limitations. It can only reliably assess text that is written in English and is at least 1,000 characters long (roughly 150 to 250 words). In addition, if someone edits a text generated by an AI, the classifier will struggle to verify whether that content was really written by an artificial intelligence.
It also cannot reliably classify very predictable text, such as a list of the world’s countries in alphabetical order, where the output would be identical whether an AI or a human wrote it.
In any case, the new tool can be useful for countering false claims that a text was written by a human when it was actually generated by artificial intelligence. This applies above all in the classroom, and it is clearly a response to the growing trend of students using ChatGPT and similar models for homework and other assignments. The new classifier, however, may also have “an impact on journalists, disinformation researchers, and other groups,” says OpenAI.