China begins to regulate deepfakes

The growing popularity of AI-powered text and image generation tools poses a direct threat to the interests of any totalitarian government. For such a government, controlling the information its people can access is vital, and the task keeps getting harder: first the Internet, then social networks, and now the artificial intelligence behind deepfakes.

That is why the Cyberspace Administration of China approved in December a new set of rules regulating any technology that uses machine learning, virtual reality or any other kind of algorithm capable of artificially generating images, audio, video or virtual scenes. The measures come into effect this month.

For the Chinese government, the Internet has always been dangerous

This is no surprise. The Chinese Communist Party has strictly supervised what can be published and viewed online ever since the Internet's expansion. The Chinese internet is not the internet you and I browse every day, and its use is not anonymous: every citizen must register a telephone number linked to their identity document. In addition, all social networks are supervised and moderated by the Party with the help of artificial intelligence algorithms that suppress news, opinions or ways of life that conflict with its vision.

The new regulations prohibit creating or disseminating “fake news” generated with these tools, as well as information deemed “harmful to the economy or national security.” This deliberate ambiguity gives the authorities wide latitude to censor almost any kind of content. The rules also force the companies behind these technologies to “prominently” label images, videos and text as “synthetically” generated, material known as deepfakes when it can be mistaken for the real thing. Whatever the content or purpose of the generated material, citizens must know at all times that it was created by a computer.

China is watching what happens in the West and taking precautions before it becomes a serious problem for the government. Since 2014, various individuals and organizations have experimented with generating realistic deepfakes using artificial intelligence. The first use, as so often on the Internet, was the creation of pornographic material featuring the faces of famous women from film, music or politics without their consent. Similar technology was soon adopted for less reprehensible purposes, in the form of filters inside Snapchat or TikTok.

The performance of the algorithms and the ease of use of these tools now make it possible to create realistic audio, images and even video capable of manipulating a politician's speech so that he appears to say things he never said.

The most significant advances are being made in the United States, but Chinese companies are already developing their own algorithms rooted in Chinese culture, since tools like DALL-E 2 and ChatGPT are clearly shaped by Anglo-Saxon culture and industry, which has little to do with Eastern traditions or with the ethical and social vision of the Chinese Communist Party. It is not only a matter of controlling AI, but of developing one's own, aligned with one's own values, in order to compete globally with other nations.

What can happen in the United States

In the United States the matter is more complicated, because the government does not have absolute control over the Internet. The challenge is to regulate the potentially harmful uses of these technologies without violating legitimate forms of expression protected by the First Amendment and the Universal Declaration of Human Rights. Citizens have the right to truthful information, but also to create satirical content and to freely express their opinions and creative ideas.

Elon Musk transformed into Baby Yoda thanks to AI

But deepfake content presents serious problems: manipulation of the public, material that violates citizens' right to honor, and infringement of copyright and intellectual property.

Virginia, Texas and California have already proposed measures. In Virginia, the law penalizes anyone who distributes non-consensual pornographic material created with deepfake technology, while Texas prohibits distributing deepfakes generated with the intent to defame or damage the reputation of political candidates. However, there is no clear consensus on how to legislate in a way that protects all citizens without restricting the rights enshrined in the Constitution.

According to Aaron Moss, director of litigation at the law firm Greenberg Glusker, celebrities have had some success suing advertisers for unauthorized use of their images under so-called right-of-publicity laws. He cited the $5 million settlement between Woody Allen and American Apparel in 2009 over the director's unauthorized appearance on a billboard for the edgy clothing brand.

Big AI companies like OpenAI could stamp a watermark on every generated image or video, but a visible mark would clearly undermine the material's artistic or commercial value: protecting against misuse of these tools also limits their best possible uses.
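One workaround often discussed is an invisible, machine-readable marker rather than a visible stamp. As a minimal sketch (not how OpenAI or any real vendor does it), a “synthetic” tag can be hidden in the least significant bits of raw pixel bytes, where a change of at most one brightness level is imperceptible to the eye but trivially recoverable by software:

```python
# Minimal sketch: hide an invisible "synthetic" marker in the least
# significant bits of raw image bytes. Real watermarking schemes are far
# more robust to cropping and re-encoding; this only illustrates the idea.

MARKER = b"SYNTHETIC"

def embed_marker(pixels: bytearray, marker: bytes = MARKER) -> bytearray:
    """Hide each bit of `marker` in the LSB of consecutive pixel bytes."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in marker for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("image too small for marker")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the last bit
    return out

def extract_marker(pixels: bytes, length: int = len(MARKER)) -> bytes:
    """Read the hidden marker back out of the LSBs."""
    result = bytearray()
    for byte_index in range(length):
        value = 0
        for i in range(8):
            value |= (pixels[byte_index * 8 + i] & 1) << i
        result.append(value)
    return bytes(result)

fake_image = bytearray(range(256)) * 4   # stand-in for raw pixel data
tagged = embed_marker(fake_image)
print(extract_marker(tagged))            # b'SYNTHETIC'
```

Each pixel byte changes by at most one unit, so the image looks identical; the trade-off is that such naive marks do not survive compression or editing, which is why production systems use statistical watermarks instead.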

Moreover, this approach does nothing for generated text, which is already giving university professors headaches: did the student write that essay, or did ChatGPT? The tool is still in its infancy, but it already writes better than many people and presents itself as an apparently rational entity, when it is really just a statistical engine predicting the next word.
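The phrase “predicting the next word” can be made concrete with a toy bigram model: count which word follows which in a corpus, then always pick the most frequent successor. This is a deliberately crude sketch; large language models do the same thing in spirit, but over vastly richer context with learned probabilities rather than raw counts.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: a bigram model built from raw co-occurrence
# counts. The corpus and function names here are illustrative only.
corpus = "the cat sat on the mat and the cat slept on the rug".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed successor of `word`."""
    counts = successors[word]
    if not counts:
        raise KeyError(f"never seen {word!r} in the corpus")
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" twice, more than any other word
```

Chaining such predictions produces fluent-looking text with no understanding behind it, which is exactly the point being made above.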

Weapons against deepfakes in Europe

Within the European framework, every citizen benefits from the right to the protection of personal data. The first paragraph of Article 5 of the General Data Protection Regulation (GDPR) stipulates that any online content relating to a person must be processed “fairly and transparently in relation to the data subject” and must be “accurate and, where necessary, kept up to date.” If it is not, the article states that “every reasonable step must be taken” to ensure that personal data that are inaccurate with respect to the purposes for which they are processed are erased or rectified without delay.

The approval in April of the Digital Services Act (DSA), which subjects large technology companies to greater liability for illegal content disseminated on their platforms and services, gives the European Union even more resources to combat disinformation and deepfakes. The large social networks will have to remove false content promptly to tackle misinformation and violations of citizens' honor; otherwise, they face hefty penalties of up to 6% of their global revenue.

Ursula von der Leyen, President of the European Commission, welcomed the agreement: “What is illegal offline will now also be illegal online in Europe.” Credit: European Parliament

As early as 2020, Twitter began labeling and removing artificially produced videos designed to manipulate information or impersonate a person without their consent. However, such material always spreads faster than it can be suppressed. A manipulated video of a politician's speech, impossible to tell apart from a real one, can put the company running the social network, the authorities and the courts in check.

When AI outstrips the human eye and human understanding, it will be difficult to control what is truthful and what is not, and above all to do so in time, before the damage is done.

The most feasible solution seems to be to give these algorithms a taste of their own medicine: other algorithms, trained not to deceive but to detect deception. It can be hard for a human to tell whether an image was made by a machine or by a fellow human, but for a machine the check is relatively easy, since everything is a mathematical process. There are already programs that detect text generated by ChatGPT, and others that determine whether a voice is human or synthetically generated.
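To give a flavor of what such detectors look at, here is one toy signal, “burstiness”: human prose tends to mix short and long sentences, while generated text is often more uniform. This is an illustrative heuristic only, not how any named detector actually works; real tools combine many signals, including model perplexity.

```python
import re
from statistics import pstdev

# Toy detection signal: variation in sentence length ("burstiness").
# A higher value means the text mixes short and long sentences, which
# is weakly associated with human writing. Illustrative only.

def burstiness(text: str) -> float:
    """Population standard deviation of sentence lengths, in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return pstdev(lengths)

human = ("No. I refused. The committee deliberated for three long weeks "
         "before finally reversing its earlier decision.")
uniform = ("The model writes a sentence. The model writes a sentence. "
           "The model writes a sentence.")

print(burstiness(human) > burstiness(uniform))  # True
```

A single heuristic like this is easy to fool, which is why the text-versus-text arms race described above keeps escalating: each new generator is trained past the signals the last detector relied on.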

However, if today we need to create one AI to police the AI we programmed yesterday, where will the human being of tomorrow end up, no longer able to control what they themselves created?
