Artificial intelligence of the 3rd type

Intelligence can be defined according to two broad categories of faculties. The first intelligence, which we share with most animal species, is the one that connects us to the outside world and enables us to perceive, learn, recognize, estimate and decide. It is central to our ability to adapt and survive in an ever-changing world, and its automation has so far been the main driver of Artificial Intelligence (AI). Thanks to constant advances in microelectronics, computer science, signal processing, statistical analysis and, more recently, deep learning operating on vast amounts of data, remarkable results have been achieved in the automation of perception and decision-making tasks. A striking example of this first type of AI is the autonomous car, whose virtuosity in respecting the rules of the road and in paying attention to others will not be questioned.

The second intelligence lies elsewhere and is specific to each individual. It encompasses the faculties of the mind, those that allow us to imagine, elaborate, invent and hope. The only model we have for trying to reproduce the properties of this creative intelligence in a machine is our brain, whose architecture is markedly different from that of the classical computer. Here, information and processes are interwoven into a single web of synaptic connections, numbering in the trillions.

AI of the second type will not be able to do without this massive parallelism, which can, however, and fortunately, be broken down in the manner of cortical modules. Once the mysteries of mental information and cortical organization have been completely solved, and once microelectronics is able to handle a large number of connections (say a few hundred million per module, no more), nothing will prevent us from designing artificial cortices with more modules than our brains contain. This AI of the third type, which some call the technological singularity, will be the result of an explosive alliance between neuroscience, electronics, intensive computing, massive data and the principle of diversity.

Claude Berrou, Professor at Télécom Bretagne
Birth of Artificial Intelligence
"Question : So you seem to be saying that AI programs will be almost identical to humans. Won't there be any différence ?Réponse : The differences between RN programs and humans are likely to be greater than the differences between most people. It is unthinkable that the " corps " containing an AI program would not affect it profoundly. That is why, unless his body is a surprisingly faithful replica of the human body, (and why would it be-il ?) he would probably have extremely different views of what is important, what is interesting, etc. [...] I think an AI program, even if we could understand it, would seem rather strange to us. That's why the moment we're dealing with an AI program, and not just a " bizarre " program, will give us a lot of trouble. »Douglas Hofstadter, Gödel Escher Bach, 1985
It's time to take Artificial Intelligence seriously.
60 years of artificial intelligence. It was in August 1956, at the Dartmouth conference, that the expression "Artificial Intelligence" first appeared publicly. It had been used by the pioneers John McCarthy, Marvin L. Minsky, Nathaniel Rochester and Claude E. Shannon the previous summer to propose this seminar. It referred to "the possibility of producing programs that would behave or think intelligently". The ambition at the time, and the original challenge of Artificial Intelligence, was "to seek to produce, on a computer, a set of outputs that would be considered intelligent if produced by a human being". AI can thus manifest itself through faithful simulations of human cognitive processes, or through programs leading to intelligent outcomes. The field has been crossed by many dualities, between the innate and the acquired, between the symbols of expert systems and the sub-symbols of formal neural networks, between competence and performance, which have punctuated its history. A technology of knowledge (a new engineering science), but also a general science of information processing (by man or machine), or a theory of man and cognitive processes: the discipline has had each of these ambitions in turn, neither incompatible nor independent. Closely linked to a set of other disciplines within the Cognitive Sciences, it has had its moments of glory over 60 years, but also its moments of doubt and disaffection. These are the "AI winters" that Yann LeCun, today head of the AI teams within Facebook, and whose work over the last 30 years has led to today's spectacular advances, never hesitates to recall in order to temper some of the current enthusiasm among the general public. But many researchers, like John Giannandrea, Google's vice-president of engineering and head of machine learning, believe that "things are taking an incredible turn" and that we are witnessing a genuine AI spring.
What is intelligence?

Am I dealing with a human being? The question of whether the dialogues we engage in through a machine are conducted by human beings facing us, by algorithmic robots, or by a combination of both is becoming increasingly important with the deployment of services associated with instant messaging, such as Facebook's Messenger, and micro-blogging platforms such as Twitter: bots. The stakes are high, because the aim is nothing less than to capture users who spend more and more time on these tools, and to offer them, through intelligent dialogue, a new way of accessing information and consuming. It amounts to a new kind of browsing architecture for the age of AI, with the aim of simplifying interaction with online services. In February 2016, the news site Quartz thus deployed a bot with which the reader converses, rather than consulting a long list of articles, in order to be offered summaries of the essential news that interests them. Following Telegram and Kik, Facebook will open its bot store on Messenger in April 2016. Most of the 900 million monthly Messenger users will be able to find their first conversational bots there, provided the bots are not too intrusive and actually do them a service. The user experience has to be flawless for adoption to happen. A bot that booked airline tickets without taking boarding and check-in times at stopovers into account would be more cumbersome than useful, and would do a disservice to the brand distributing it. These bots, developed to assist humans in tasks that are specific to them, must eventually acquire knowledge at the level of a strong, general AI (see above), and a personality capable of adapting to each user, taking into account their habits, culture, beliefs... This is why, for some time to come, bots will probably still be made up of mixed teams of AI and humans. Will we always be able to distinguish the bot's part from the human's? http://botpoet.com/ invites you to test yourself on a particular case: guessing whether a poem was composed by a human or by an AI.
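To make the idea of a conversational bot concrete, here is a minimal, hedged sketch of the kind of rule-based dialogue loop on which simple first-generation bots can be built; the intents, replies and hand-over rule are invented for the illustration and are not tied to the Messenger, Telegram or Kik APIs.

```python
# Minimal rule-based conversational bot: illustrative only, not a real
# Messenger/Telegram integration. Intents and replies are hypothetical.
import re

RULES = [
    (re.compile(r"\b(hello|hi|bonjour)\b", re.I),
     "Hello! Ask me for a news summary or a flight."),
    (re.compile(r"\bnews\b", re.I),
     "Here are today's three essential stories: ..."),
    (re.compile(r"\bflight\b.*\bto (\w+)", re.I),
     "Searching flights to {0} (check-in and boarding times included)."),
]

def reply(message: str) -> str:
    """Return the first matching canned reply, or hand over to a human."""
    for pattern, template in RULES:
        m = pattern.search(message)
        if m:
            return template.format(*m.groups())
    # No rule matched: this is where a mixed AI & human team takes over.
    return "Let me pass you on to a human colleague."

if __name__ == "__main__":
    for msg in ["Hi!", "Any news today?", "Book a flight to Rome", "What is the meaning of life?"]:
        print(f"> {msg}\n{reply(msg)}")
```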
60 years of links between games and AI. As early as 1956, AI took an interest in games as a source of challenges. Arthur Samuel developed a first draughts (checkers) program trained by reinforcement learning, which in 1962 beat a good American amateur in a single game, becoming the first machine in history to beat a human. Backgammon followed in 1979, using the same techniques, the program playing against itself during its training. It thus reached levels reputed not to be teachable by humans, while humans in turn learned a great deal by observing the capabilities of these programs. This is how it became apparent that artificial intelligences could well serve to strengthen human intelligence. Then came chess, a game that Claude Shannon had already identified in 1950 as a good challenge for mechanized thinking, which kept researchers and players busy until 1997, when IBM's Deep Blue computer beat world champion Garry Kasparov. Finally, in 2016, the game of Go fell in its turn, thanks to deep learning techniques and a system that had played thousands of games against itself. But there is still much to learn from games. So far, the players in these games have had perfect knowledge of all the elements of play; the next step is games with incomplete knowledge, such as poker. Other types of games, such as robot football or autonomous racing cars, also present many more general AI challenges. The ultimate game remains what Alan Turing called, in 1950, the imitation game. Originally, a woman and a man are questioned by a person (male or female) who cannot see them and who has to guess which of the two is the woman. The hidden man must therefore imitate "female behaviour". Turing asked whether a program could take the place of the hidden man and fool the interrogator. Today, the question is rather what proportion of humans would lose at this game.
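The recurring pattern in this history, a program improving by playing against itself, can be illustrated with a small, hedged sketch: a tabular, Monte-Carlo style value update learned through self-play on a one-pile Nim game (a toy stand-in, of course, not Samuel's checkers learner nor AlphaGo).

```python
# Toy self-play learning on one-pile Nim: two players alternately remove 1-3
# sticks; whoever takes the last stick wins. Action values are learned with a
# simple Monte-Carlo style update from self-play. Illustrative sketch only.
import random
from collections import defaultdict

N, ACTIONS = 12, (1, 2, 3)
Q = defaultdict(float)            # Q[(sticks_left, action)] -> estimated value
ALPHA, EPS = 0.5, 0.1             # learning rate, exploration rate

def choose(state, greedy=False):
    legal = [a for a in ACTIONS if a <= state]
    if not greedy and random.random() < EPS:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(state, a)])

for episode in range(20000):
    state, trajectory = N, []      # both players share the same value table
    while state > 0:
        action = choose(state)
        trajectory.append((state, action))
        state -= action
    outcome = 1.0                  # +1 for the player who took the last stick
    for s, a in reversed(trajectory):
        Q[(s, a)] += ALPHA * (outcome - Q[(s, a)])
        outcome = -outcome         # alternate sign: the opponent's perspective

# The learned greedy policy takes (s % 4) sticks whenever s % 4 != 0.
print({s: choose(s, greedy=True) for s in range(1, N + 1)})
```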
A model of the architecture of the human mind. The General Problem Solver, created in 1959 by Simon, Shaw and Newell, was a first attempt, in the age of nascent AI, at an artificial system intended to solve any problem by confronting the goals pursued with the means available to achieve them (means-ends analysis). This system, which had a great influence on the evolution of Artificial Intelligence and the Cognitive Sciences, solved simple problems very well, but was limited as soon as the combinatorics of the problem increased. This work, and Newell & Simon's book Human Problem Solving (1972), are founding works of the cognitivist paradigm.
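As an illustration of "confronting the goals pursued with the means available", here is a hedged, minimal sketch of means-ends analysis in the spirit of GPS; the facts, operators and simplified subgoal handling are invented for the example, and GPS itself was a far more general symbolic system.

```python
# Minimal means-ends analysis sketch (in the spirit of GPS, not a reconstruction).
# A state is a set of facts; an operator is chosen because it reduces the
# "difference" between the current state and the goal. Facts and operators
# below are invented examples.
from typing import FrozenSet, List, NamedTuple, Optional

class Operator(NamedTuple):
    name: str
    preconditions: FrozenSet[str]
    adds: FrozenSet[str]
    removes: FrozenSet[str]

OPERATORS = [
    Operator("take-bus", frozenset({"have-ticket"}), frozenset({"at-office"}), frozenset({"at-home"})),
    Operator("buy-ticket", frozenset({"have-money"}), frozenset({"have-ticket"}), frozenset()),
]

def solve(state: FrozenSet[str], goal: FrozenSet[str], depth: int = 5) -> Optional[List[str]]:
    """Recursively pick an operator that reduces the goal/state difference."""
    if goal <= state:
        return []                               # no difference left: done
    if depth == 0:
        return None
    for op in OPERATORS:
        if not (op.adds & (goal - state)):      # operator must address the difference
            continue
        plan_pre = solve(state, op.preconditions, depth - 1)   # achieve preconditions first
        if plan_pre is None:
            continue
        # Simplification: assume the precondition plan adds exactly those facts.
        mid = (state | op.preconditions | op.adds) - op.removes
        plan_rest = solve(mid, goal, depth - 1)
        if plan_rest is not None:
            return plan_pre + [op.name] + plan_rest
    return None

print(solve(frozenset({"at-home", "have-money"}), frozenset({"at-office"})))
# -> ['buy-ticket', 'take-bus']
```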
Assessing intelligence

What directions for artificial intelligence?
Everyday life cradled by Artificial Intelligence

The advent of machine learning

At Télécom ParisTech, Albert Bifet is conducting research on real-time machine learning. This involves mining data streams (data stream mining) that arrive at high velocity, for example telecommunications data streams. No data is stored, and the algorithms must adapt to changes in real time. The current challenge is to enable this type of machine learning in the context of the Internet of Things, in particular to enable connected objects to make better decisions. A balance is sought between having more data available and having better algorithms to make these decisions under real-time constraints. In some cases, the data streams studied arrive in a distributed fashion. Four types of algorithms are considered: classification, regression, clustering, and frequent pattern mining. Albert Bifet is developing the next generation of models and algorithms to mine the data produced by these objects: algorithms that never stop learning, parallel and distributed algorithms running on multiple machines, and finally the application of deep learning to these data streams. This research benefits from the existence of the Machine Learning for Big Data chair at Télécom ParisTech, led by Stephan Clémençon, whose work has already been covered in the Big Data watch report. Created in September 2013 with the support of the Fondation Télécom and financed by four partner companies (Safran, PSA Peugeot Citroën, Criteo and BNP Paribas), the chair produces methodological research that addresses the challenge of statistical analysis of massive data.
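As a hedged illustration of "algorithms that never stop learning", the following sketch trains a classifier incrementally over a synthetic data stream using scikit-learn's partial_fit, a simple stand-in for dedicated stream-mining frameworks; the stream, its drift point and the batch sizes are invented for the example.

```python
# Online learning over a data stream: a minimal sketch of "never stop learning".
# scikit-learn's partial_fit stands in for dedicated stream-mining frameworks;
# the synthetic, drifting stream below is invented for the example.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier()
classes = np.array([0, 1])

def stream(n_batches=10, batch_size=200, drift_at=5):
    """Two-class stream whose decision boundary flips halfway through (concept drift)."""
    for t in range(n_batches):
        X = rng.normal(size=(batch_size, 2))
        w = np.array([1.0, -1.0]) if t < drift_at else np.array([-1.0, 1.0])
        yield X, (X @ w > 0).astype(int)

for t, (X, y) in enumerate(stream()):
    if t > 0:  # "test-then-train": evaluate on incoming data before learning from it
        print(f"batch {t}: accuracy on incoming data = {clf.score(X, y):.2f}")
    clf.partial_fit(X, y, classes=classes)    # adapt continuously, store no data
```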
At Télécom Bretagne, Mai Nguyen is developing adaptive robots for personal assistance services. One form of learning studied is active learning driven by artificial curiosity. This is how a child between 0 and 6 years old, who does not yet go to school, nonetheless learns and develops. The child plays, autonomously, and decides among all the possible games. Psychologists say that children intuitively choose the task that will allow them to learn the most. Every child wants to make progress, in their own way, and chooses what to learn for their own benefit. To this end, children have different learning strategies at their disposal, either guided socially or on their own. It is on the basis of these observations that the researcher designs robots capable of learning from their physical environment.
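A hedged sketch of the artificial-curiosity idea described above: an agent that, among several possible tasks, keeps choosing the one on which its recent learning progress (the drop in its prediction error) is largest; the toy tasks and the progress measure are invented for the illustration and do not reproduce Mai Nguyen's actual algorithms.

```python
# Curiosity-driven task selection: practice the task with the highest recent
# learning progress. Toy illustration only; tasks and measures are invented.
import random

class Task:
    """A toy task whose error decreases with practice, down to a floor."""
    def __init__(self, name, start_error, learnable):
        self.name, self.error, self.learnable = name, start_error, learnable
        self.history = [start_error]

    def practice(self):
        if self.learnable:
            self.error = max(0.05, self.error * 0.9)   # practice reduces error
        self.error += random.uniform(-0.01, 0.01)       # observation noise
        self.history.append(self.error)

    def learning_progress(self, window=5):
        h = self.history[-window:]
        return max(0.0, h[0] - h[-1])                   # recent error reduction

tasks = [Task("reach-object", 1.0, learnable=True),
         Task("stack-cubes", 1.0, learnable=True),
         Task("predict-noise", 1.0, learnable=False)]   # nothing to learn here

for step in range(100):
    # Mostly exploit the most "interesting" task, sometimes explore at random.
    task = random.choice(tasks) if random.random() < 0.2 else \
           max(tasks, key=lambda t: t.learning_progress())
    task.practice()

for t in tasks:
    print(f"{t.name:14s} practiced {len(t.history) - 1:3d} times, final error {t.error:.2f}")
```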
Towards neuro-inspired computing
At Télécom Bretagne, Claude Berrou and his team are working within the European Neucod (Neural Coding) project to model the human brain using an approach so far unprecedented in this field: information theory. They start from the observation that the neocortex, this "propagation medium which allows biological processes to pass from some islands of knowledge to others", has a structure very close to that of modern decoders. Their work has made it possible to develop codes for representing and storing information that account for its robustness and durability. It all starts from an analogy between Shannon's information theory and the human brain. This analogy has allowed the development of robust associative memories with great learning diversity, thanks to the work of Vincent Gripon. These memories take into account a world that is real, rich, complex and, above all, analog. The perception phase corresponds to source coding (removing redundancy from the environment), and the memorization phase to channel coding (adding redundancy to make the signal more robust).
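To make the closing analogy concrete, here is a hedged toy illustration (unrelated to the actual Neucod codes or Gripon's associative memories): source coding removes redundancy from a percept, while channel coding adds controlled redundancy so that a stored message survives noise.

```python
# Toy illustration of the source/channel coding analogy (not the Neucod codes).
import random
import zlib

# --- Source coding (perception): remove redundancy ---
percept = b"the cat sat on the mat " * 20              # highly redundant input
compressed = zlib.compress(percept)
print(f"source coding: {len(percept)} bytes -> {len(compressed)} bytes")

# --- Channel coding (memorization): add controlled redundancy ---
def to_bits(data: bytes) -> list:
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def repeat3(bits: list) -> list:
    return [b for b in bits for _ in range(3)]           # 3x repetition code

def majority(coded: list) -> list:
    # Majority vote over each triple corrects isolated bit flips.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

memory = b"remember me"
sent = repeat3(to_bits(memory))
noisy = [b ^ int(random.random() < 0.01) for b in sent]  # flip ~1% of stored bits
print("with the repetition code, recovered intact:", majority(noisy) == to_bits(memory))
```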