ChatGPT is not revolutionary, says Meta’s AI boss

ChatGPT has been on everyone’s lips for several weeks and is, along with DALL-E 2, one of OpenAI’s great viral phenomena. The artificial intelligence chatbot has become a recurring topic of conversation on social networks and has made endless headlines since its launch. However, a renowned AI expert has tried to take some of the shine off the matter, asserting that the platform is neither innovative nor revolutionary.

The expert in question is Yann LeCun, a French pioneer in fields such as machine learning, deep learning and computational neuroscience. Now Meta’s chief artificial intelligence scientist, he was categorical in his opinion of ChatGPT and its impact on users.

“In terms of the underlying techniques, ChatGPT is not particularly innovative. It is nothing revolutionary, although that is the way it is perceived in the public. It’s just that, you know, it’s well put together, it’s very well done,” he said in a recent meeting via Zoom, according to ZDNet.

At first glance, it might seem that LeCun is trying to belittle the work of OpenAI. However, his remarks are meant to provide context for those whose first interaction with a language model of this kind came through ChatGPT. That is, to make clear that the technology powering the platform was not created from scratch by Sam Altman’s firm.

“OpenAI is not particularly advanced compared to other labs, not at all. It’s not just Google and Meta, but there are half a dozen startups that basically have very similar technology. I don’t want to say it’s not rocket science, but it’s really shared, there’s no secret behind it, so to speak.”

Yann LeCun, on OpenAI and ChatGPT.

ChatGPT and the art of taking advantage of existing technologies

Yann LeCun explained that OpenAI has drawn on technologies that have been developed and refined over many years. That is not necessarily a bad thing, although the extensive media coverage of ChatGPT and of the other products from the company behind it has generated some discomfort in the sector.

The expert indicated that GPT-3, the language model on which the artificial intelligence chatbot is based, was built from Transformer architectures developed by Google. And the Mountain View company’s technology was, in turn, based on the work of Canadian computer scientist Yoshua Bengio, who two decades earlier had created the first neural network language model.
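To make that lineage a little more concrete, here is a minimal, purely illustrative sketch of a Transformer-style causal language model of the kind the article alludes to. It uses PyTorch; the class name, hyperparameters and toy data are placeholders of our own, not OpenAI’s or Google’s actual code.

```python
# Illustrative sketch only: a tiny Transformer-based causal language model.
# All names and sizes are arbitrary placeholders, not any lab's real implementation.
import torch
import torch.nn as nn

class TinyCausalLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, n_heads=4, n_layers=2, max_len=256):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer token ids
        seq_len = tokens.size(1)
        positions = torch.arange(seq_len, device=tokens.device)
        x = self.token_emb(tokens) + self.pos_emb(positions)
        # Causal mask: each position may only attend to earlier positions.
        mask = torch.triu(torch.ones(seq_len, seq_len, device=tokens.device,
                                     dtype=torch.bool), diagonal=1)
        h = self.encoder(x, mask=mask)
        return self.lm_head(h)  # logits over the next token at every position

model = TinyCausalLM()
logits = model(torch.randint(0, 1000, (2, 16)))  # shape: (2, 16, 1000)
```

Models like GPT-3 follow this same next-token prediction scheme, only scaled up to billions of parameters and trained on vast amounts of text.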

Furthermore, LeCun claimed that the technique used by OpenAI to train ChatGPT, known in the industry as “reinforcement learning from human feedback” (RLHF), was also originally implemented by Google.
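For a rough idea of what that training technique involves, the sketch below shows a heavily simplified RLHF-style loop: a policy proposes outputs, a reward model stands in for scores derived from human preference ratings, and the policy is nudged toward higher-scoring outputs. A plain REINFORCE update replaces the PPO variant used in practice, and every component here is a toy placeholder of our own.

```python
# Highly simplified, illustrative RLHF-style loop. The "reward model" is a stand-in
# for scores learned from human preference ratings; nothing here mirrors real systems.
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32
policy = nn.Sequential(nn.Embedding(vocab_size, d_model), nn.Linear(d_model, vocab_size))
reward_model = nn.Sequential(nn.Embedding(vocab_size, d_model), nn.Linear(d_model, 1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(100):
    prompts = torch.randint(0, vocab_size, (8,))        # toy one-token "prompts"
    logits = policy(prompts)                             # policy scores possible responses
    dist = torch.distributions.Categorical(logits=logits)
    responses = dist.sample()                            # sampled one-token "responses"
    with torch.no_grad():
        rewards = reward_model(responses).squeeze(-1)    # proxy for human preference scores
    # REINFORCE: raise the log-probability of responses in proportion to their reward.
    loss = -(dist.log_prob(responses) * (rewards - rewards.mean())).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the exercise is the same as in ChatGPT’s training: steer a pretrained language model toward outputs that humans rate as more helpful.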

The explanation from Meta’s chief AI scientist is fair. As noted above, it puts into context how the startup co-founded by Elon Musk has arrived at some of its most successful products. But it does not detract from the leap in exposure that OpenAI has achieved by letting the general public interact with services like DALL-E 2 and ChatGPT.

In fact, the latter is what really worries companies like Google and Meta. It is not that they are less advanced than OpenAI in artificial intelligence, but that they have not yet shipped tools that put it to mass use in everyday situations. At least not like ChatGPT, which interacts with people directly rather than as a feature tucked away under the hood of another product.

Jealousy among the competition?

OPT, Meta’s language model.

Asked why Google and Meta have nothing competing with ChatGPT today, LeCun gave an answer similar to the one recently voiced from Mountain View. “Both companies have a lot to lose by releasing systems capable of inventing things,” he said, in clear reference to how the OpenAI chatbot can “lie” when answering questions to which it does not know the true answer.

For now, the startup led by Sam Altman is enjoying its moment of greatest exposure and recognition. Yesterday, the company confirmed that it will receive a new multibillion-dollar investment from Microsoft. The final figure has not been disclosed, but it is estimated at $10 billion as part of a multi-year deal. This will not only allow OpenAI to expand the infrastructure used to train ChatGPT and other systems, but also pave the way for their integration into Redmond’s products. In addition, Azure will become the official cloud services provider for the San Francisco-based firm.
