ChatGPT named as co-author of a scientific article

ChatGPT, the artificial intelligence model developed by OpenAI, has been named, alongside 11 researchers from healthcare startup Ansible Health, as co-author of a scientific article analyzing the ability of artificial intelligence to pass the United States Medical Licensing Exam (USMLE). The move has not convinced many specialists and users, and has even been rejected by Springer Nature, one of the largest academic publishers.

In fact, OpenAI’s artificial intelligence has already appeared on more than one occasion as co-author of articles published in the journal Nature; in one case, an error in the piece was attributed to a human. Media outlets like CNET have also used ChatGPT to write articles, many of them riddled with errors, which has called into question its use as a tool for automating work.

Springer Nature, we reiterate, does not agree that ChatGPT should be named as the author of an article. The publisher has no objection, however, to the model helping to write research. Jack Po, CEO of Ansible Health, nonetheless told Futurism that “adding ChatGPT as a co-author was definitely an intentional move.” It is also something they weighed for a while, the researcher notes. “The reason we listed it as an author was because we believe it actually contributed intellectually to the content of the article and not just as a subject for evaluation.”

Po, however, clarifies that ChatGPT was not responsible for the bulk of the “scientific rigor and intellectual contributions.” He says it contributed in the same way an “average author” would, and he expects ChatGPT and similar models to be used in all kinds of work, including knowledge work.

Listing ChatGPT as the author of a scientific article has not convinced many

The decision by the CEO of Ansible Health to include ChatGPT as an author has not sat well with many users and experts, some of whom have gone so far as to call it a “deeply stupid” measure, mainly because a language model of this kind “cannot have the moral responsibilities of an author.” Others note that “if a person creates or contributes results for an article,” they could be named as a co-author, but that this does not extend “to models or algorithms.”

ChatGPT also has a drawback that has been evident in the dozens of articles CNET has published in recent weeks: it is not always accurate, and it makes frequent calculation errors because it does not correctly process the underlying information.