This was his strategy.

Much has been said about the development of Artificial Intelligence, highlighting machines' ability to reproduce, and even improve on, the work of a person. This story highlights the opposite: how a human beat a machine… playing Go.

The feat belongs to Kellin Pelrine, an American player, as reported by Ars Technica, citing the Financial Times. "The victory (…) highlighted a weakness in the best Go software that is shared by most Artificial Intelligence systems used today, including the ChatGPT chatbot," the outlet notes.

Go is a strategy game of Chinese origin, created more than 2,500 years ago. Players take turns placing black and white stones on the intersections of the board; whoever controls more than 50% of the board's area wins.
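That win condition can be sketched in code. The snippet below is a toy area count, not a full Go scorer (real scoring also handles captures and dead stones): stones count for their color, and an empty region counts for a color only if it borders that color alone.

```python
from collections import deque

def area_score(board):
    """Toy area scoring for a Go position (a simplification: no capture
    or life-and-death logic). `board` is a list of strings with 'B',
    'W', or '.' at each intersection."""
    rows, cols = len(board), len(board[0])
    score = {"B": 0, "W": 0}
    seen = set()  # empty intersections already assigned to a region
    for r in range(rows):
        for c in range(cols):
            cell = board[r][c]
            if cell in score:
                score[cell] += 1  # a stone counts for its own color
            elif (r, c) not in seen:
                # Flood-fill the empty region and note bordering colors.
                region, borders, queue = [], set(), deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols:
                            n = board[ny][nx]
                            if n == "." and (ny, nx) not in seen:
                                seen.add((ny, nx))
                                queue.append((ny, nx))
                            elif n in score:
                                borders.add(n)
                # A region bordered by one color is that color's territory.
                if len(borders) == 1:
                    score[borders.pop()] += len(region)
    return score

# A hypothetical finished 5x5 position: Black walls off the left side.
board = [
    "..B.W",
    "..B.W",
    "..BW.",
    "..B.W",
    "..B.W",
]
print(area_score(board))  # → {'B': 15, 'W': 6}
```

Here Black controls 15 of 25 intersections (60%), so Black wins under the more-than-half rule; the four remaining empty points touch both colors and count for neither.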

Humanity already has a negative precedent in games against machines. It came in 2016, against AlphaGo, a system created by DeepMind, a company owned by Google. The AI defeated the world Go champion, Lee Sedol, four games to one.

Sedol retired three years later, saying AI "was an entity that you can't beat."

How did Kellin Pelrine beat Artificial Intelligence in Go?

Pelrine played against an Artificial Intelligence similar to AlphaGo, created by the firm FAR AI. His strategy, he would later reveal, consisted of "slowly linking together a large loop of stones to surround one of his opponent's groups, while distracting the AI with moves to other corners of the board."

"The Go bot didn't notice its vulnerability, even when the encirclement was almost complete. As a human, it would be pretty easy to spot," Pelrine noted.

But the celebration may be premature: with further training, the machine will most likely learn to avoid the tactic, making it much harder to defeat in the future.

Adam Gleave, chief executive of FAR AI, later explained why.

"A likely reason is that the tactic exploited by Pelrine is rarely used, which means the AI systems had not been trained on enough similar games to realize they were vulnerable," Gleave said, as quoted by Ars Technica.

Will something similar happen again in the future?