Go: Bravo for this victory of human intelligence!


The Go matches between Google's AlphaGo artificial intelligence and world champion Lee Sedol have caused a great deal of ink to flow. "Triumph of the machine over man", one reads everywhere. Yet Lee Sedol's defeat should be interpreted as a victory for humankind: it was advances in computer science research that made it possible, and it was software written by humans that won.

After IBM's Deep Blue lost to Kasparov in 1996 and then beat him in 1997, and after IBM's Watson won at Jeopardy! in 2011, Go was one of the few games in which humans still dominated machines.

Lee Sedol, star of Go

In March 2016, a five-game match was played between South Korean Go star Lee Sedol and AlphaGo, Google DeepMind's software. AlphaGo won the first three games and lost the fourth.

Final score: AlphaGo 4, Lee Sedol 1. AlphaGo was awarded the title of Go grandmaster and entered the world Go ranking in fourth place, ahead of Lee Sedol.

The event is especially symbolic for the world of computer science research, which expected this frontier to fall one day. Some thought the Go champions would resist longer: the game, with its enormous number of possible positions, poses difficulties for algorithms that win mainly through their ability to consider countless alternatives. That reckoning did not account for the enormous progress of computers, with their ever more numerous and faster processors and their ever larger memories, nor, above all, for the considerable advances in artificial intelligence research.

The word Go, Japanese ideogram (kanji).

Lee Sedol was beaten by a battery of highly sophisticated techniques, including deep learning, Monte Carlo tree search, and massive data analysis (big data).

Deep learning

Deep learning is a technique for training neural networks that have many hidden layers (that is, computational models whose design is, very schematically, inspired by the functioning of biological neurons, with the different layers corresponding to different levels of abstraction of the data). These techniques were first used for pattern recognition. Yann LeCun (holder of this year's Computer Science and Digital Sciences chair at the Collège de France) used them, for example, for the recognition of handwritten characters. More recent developments have led to applications in image and speech classification.
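
To make this concrete, here is a minimal sketch in Python with NumPy of a network with two hidden layers trained by gradient descent on a toy classification task. It is only an illustration of the layered idea, not the authors' code, and the layer sizes, learning rate and task are arbitrary assumptions.

```python
import numpy as np

# Toy multilayer network: two hidden layers learn intermediate
# representations of the input before the final classification layer.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy data: classify 2-D points by the quadrant they fall in (4 classes).
X = rng.normal(size=(1000, 2))
y = (X[:, 0] > 0).astype(int) * 2 + (X[:, 1] > 0).astype(int)

sizes = [2, 32, 32, 4]                       # input, two hidden layers, output
W = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

lr = 0.5
for epoch in range(500):
    # Forward pass: each hidden layer computes a more abstract representation.
    h1 = relu(X @ W[0] + b[0])
    h2 = relu(h1 @ W[1] + b[1])
    p = softmax(h2 @ W[2] + b[2])

    # Backward pass: the cross-entropy gradient is propagated layer by layer.
    g2 = p.copy()
    g2[np.arange(len(y)), y] -= 1.0
    g2 /= len(y)
    g1 = (g2 @ W[2].T) * (h2 > 0)
    g0 = (g1 @ W[1].T) * (h1 > 0)

    grads = [(X.T @ g0, g0.sum(0)), (h1.T @ g1, g1.sum(0)), (h2.T @ g2, g2.sum(0))]
    for (Wi, bi), (dWi, dbi) in zip(zip(W, b), grads):
        Wi -= lr * dWi
        bi -= lr * dbi

print(f"training accuracy: {(np.argmax(p, axis=1) == y).mean():.2f}")
```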


Deep neural networks combine simplicity and generality. They are able to build their own representations of the characteristics of a problem and thereby achieve much better success rates than other methods. They rely on learning processes that, for the largest networks, take a long time. To speed up learning, designers of deep networks use powerful graphics cards, such as Nvidia's, which perform very fast multiplications of large matrices. Even so, the training time of a large network can be counted in days, or even weeks.
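
As a small illustration of the role of graphics cards, the sketch below times one large matrix multiplication on the CPU and, when an Nvidia GPU is available, on the GPU. It uses the PyTorch library in Python, which is an assumption of this example and not the Torch/Lua setup mentioned later for AlphaGo; the matrix size is arbitrary.

```python
import time
import torch  # assumes the PyTorch library is installed

# Training a deep network largely boils down to multiplying large matrices
# (activations by weight matrices); GPUs do this far faster than CPUs.
n = 4096
a = torch.randn(n, n)
b = torch.randn(n, n)

start = time.time()
c = a @ b                           # matrix multiplication on the CPU
cpu_time = time.time() - start

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()        # make sure the copies are finished
    start = time.time()
    c_gpu = a_gpu @ b_gpu           # the same multiplication on the GPU
    torch.cuda.synchronize()        # wait for the asynchronous GPU kernel
    print(f"CPU: {cpu_time:.3f} s, GPU: {time.time() - start:.3f} s")
else:
    print(f"CPU: {cpu_time:.3f} s (no GPU available)")
```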

 


The networks used for AlphaGo, for example, consist of 13 layers and 128 to 256 feature planes. For specialists: they are "convolutional", with 3×3 filters, and are implemented with the Torch framework, which is based on the Lua language. For everyone else: they are very complex. AlphaGo uses deep learning in several phases. It starts by learning to predict the moves of excellent players from tens of thousands of games, reaching a prediction rate of 57%. It then plays millions of games against different versions of itself to improve this first network. This lets it generate new data, which it uses to train a second network that evaluates Go positions. A remaining difficulty is to combine these two networks with a more classical "Monte Carlo search" technique to guide the computer's play.
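
For readers who want to see roughly what such a network looks like, here is a minimal sketch in Python with the PyTorch library (not DeepMind's Torch/Lua implementation) of a 13-layer convolutional "policy" network with 3×3 filters that outputs one probability per intersection of the 19×19 board. The number of input feature planes (48), the number of filters (128) and the random placeholder data are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Stack of 3x3 convolutions over board feature planes, one probability per point."""
    def __init__(self, in_planes=48, filters=128, layers=13):
        super().__init__()
        blocks = [nn.Conv2d(in_planes, filters, kernel_size=3, padding=1), nn.ReLU()]
        for _ in range(layers - 2):
            blocks += [nn.Conv2d(filters, filters, kernel_size=3, padding=1), nn.ReLU()]
        blocks += [nn.Conv2d(filters, 1, kernel_size=1)]   # one score per intersection
        self.net = nn.Sequential(*blocks)

    def forward(self, planes):
        scores = self.net(planes)                          # (batch, 1, 19, 19)
        return torch.softmax(scores.flatten(1), dim=1)     # (batch, 361) probabilities

# Supervised phase (sketch): push the network towards the moves played by strong humans.
net = PolicyNet()
boards = torch.randn(8, 48, 19, 19)         # placeholder feature planes
human_moves = torch.randint(0, 361, (8,))   # placeholder "expert" moves
loss = nn.functional.nll_loss(torch.log(net(boards) + 1e-9), human_moves)
loss.backward()                             # gradients for an optimizer step
print(loss.item())
```

The self-play phase would then pit versions of this first network against each other to produce the new data used to train the second, position-evaluation network.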

Monte Carlo search

The principle of Monte Carlo search is to gather statistics on possible moves from randomly played games. In fact, the games are not completely random: moves are chosen with probabilities that depend on their shape, that is, on the local context of the move. All the states encountered during these random games are memorized, together with statistics on the moves played in those states. When a previously visited state is reached again, this makes it possible to choose the moves with the best statistics. AlphaGo combines deep learning with Monte Carlo search in two ways. First, it uses the first network, which predicts moves, to try those moves first in the random games. Second, it uses the second network, which evaluates positions, to correct the statistics coming from the random games.
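
The sketch below, in Python, illustrates this combination on a generic two-player game rather than on Go itself. The game interface (`legal_moves`, `play`, `is_over`, `winner`, `player_to_move`) and the tiny Nim-like game used to exercise it are assumptions of this example, and `policy_prior` and `value_estimate` are trivial placeholders standing in for the two networks.

```python
import math
import random

def policy_prior(state, move):
    return 1.0                       # stand-in for the move-prediction network

def value_estimate(state):
    return 0.5                       # stand-in for the position-evaluation network

class Node:
    def __init__(self):
        self.visits = 0
        self.wins = 0.0              # results seen from the player to move in this state
        self.children = {}           # move -> Node

def select_move(node, state):
    # Choose the move with the best statistics, biased by the prior.
    def score(move, child):
        if child.visits == 0:
            return float("inf")
        exploitation = 1.0 - child.wins / child.visits   # child stats are the opponent's view
        exploration = policy_prior(state, move) * math.sqrt(node.visits) / (1 + child.visits)
        return exploitation + exploration
    return max(node.children, key=lambda m: score(m, node.children[m]))

def random_playout(state):
    # Finish the game with random moves; report the result for the player to move.
    player = state.player_to_move
    while not state.is_over():
        state = state.play(random.choice(state.legal_moves()))
    return 1.0 if state.winner() == player else 0.0

def simulate(node, state):
    # One playout: descend the memorized tree, expand a leaf, back up the result.
    if state.is_over():
        return 1.0 if state.winner() == state.player_to_move else 0.0
    if not node.children:            # first visit: memorize the state and its moves
        for move in state.legal_moves():
            node.children[move] = Node()
        # Correct the random-playout result with the evaluation network's estimate.
        result = 0.5 * random_playout(state) + 0.5 * value_estimate(state)
    else:
        move = select_move(node, state)
        result = 1.0 - simulate(node.children[move], state.play(move))
    node.visits += 1
    node.wins += result
    return result

def best_move(root_state, playouts=2000):
    root = Node()
    for _ in range(playouts):
        simulate(root, root_state)
    return max(root.children, key=lambda m: root.children[m].visits)

class NimState:
    """Toy game: players alternately take 1-3 stones; whoever takes the last stone wins."""
    def __init__(self, stones=15, player_to_move=0):
        self.stones, self.player_to_move = stones, player_to_move
    def legal_moves(self):
        return list(range(1, min(3, self.stones) + 1))
    def play(self, move):
        return NimState(self.stones - move, 1 - self.player_to_move)
    def is_over(self):
        return self.stones == 0
    def winner(self):
        return 1 - self.player_to_move   # the player who just took the last stone

print(best_move(NimState(15)))       # with enough playouts this tends to pick 3 (leaving 12)
```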

Massive data analysis

AlphaGo uses the latest techniques in the management and analysis of massive data. A first large volume of data consists of the very many games by excellent players available on the Internet; these data are used to bootstrap learning: AlphaGo starts by learning to imitate human play. Another huge volume of data is generated by the games AlphaGo plays against itself in order to keep improving and eventually reach a superhuman level.

Bravo!

Lee Sedol's defeat should be interpreted as a victory for humankind: it was advances in computer science research that made it possible, and it was software written by humans that won.

The techniques used in AlphaGo are very general and can be applied to many problems, in particular the optimization problems encountered, for example, in logistics or in the alignment of genomic sequences. Deep learning is already used to recognize sounds and images; AlphaGo has shown that it can be used for many other problems.

Let us marvel at AlphaGo's performance. It took the results of brilliant researchers, the talents of engineers and of excellent Go players to design AlphaGo's software, and very powerful hardware. All this to defeat a single human being.

Let us also marvel at the performance of the champion, Lee Sedol! He caused enormous difficulties for the Google team and even won the fourth game. And he was, after all, alone against Google's financial resources and against all of AlphaGo's processors.

Humans perform extremely complex tasks, such as understanding an image, on a daily basis. Take one particularly emblematic task: translation. Although machine translation software keeps improving, it is still far from matching the best human translators, let alone a Baudelaire translating Edgar Allan Poe's Extraordinary Stories. Many challenges remain for artificial intelligence.

Serge Abiteboul, Director of Research at Inria, member of the Academy of Sciences, affiliated professor, ENS Cachan - Université Paris-Saclay

Tristan Cazenave, Professor, LAMSADE, Université Paris-Dauphine - PSL

The original text of this article was published on The Conversation.

This article is published in collaboration with Binary Blog.
