The creator of ChatGPT has a surprising position on how to regulate AI

Mira Murati, who leads the team developing artificial intelligence models such as ChatGPT, the popular chatbot, and DALL-E, believes that AI should be regulated to prevent its misuse and ensure it "is aligned with human values". The OpenAI chief technology officer stated in an interview with Time magazine that, although these technologies are still relatively new, it is not too early for politicians to get involved, given the great impact they are going to have.

Murati, who has not hesitated to highlight the benefits of ChatGPT and its "immense potential" in education, has also confessed to having doubts about how users will employ this AI. "How do you get the model to do what you want it to do, and how do you ensure it's aligned with human intent and ultimately in service to humanity?" she asks. She adds that there are many questions about social impact, as well as other "ethical and philosophical" issues, that the OpenAI team must consider.

Murati points out, however, that although it is important that her company and others developing AIs similar to ChatGPT ask themselves these questions, the participation of other parties is also necessary, including governments involved in drafting regulations. "It is important that we bring in different voices, such as philosophers, social scientists, artists and people from the humanities", she says, moments before warning that "AI can be misused" or used by "bad actors". "We need a lot more information in this system and a lot more information that goes beyond technologies, definitely regulators, governments and everyone else", Murati stresses.

ChatGPT creator says it’s “not too early” to regulate AI

The OpenAI CTO also believes it is not too soon to start regulating AI, given the impact it could have in the future. In fact, many companies are actively working on models similar to ChatGPT with the aim of launching them publicly and integrating them into some of their services. Google, for example, is already testing an AI capable of answering questions, based on LaMDA, its natural language model.

Meanwhile, administrations and political bodies such as the European Union are working on measures that would make it possible to ban, for example, artificial intelligence systems designed to surveil citizens, or to regulate AIs that enable some form of identity theft.