
Artificial intelligence: what promises? What challenges? Part 1

Next March, a major Japanese insurer will replace 34 of its employees with an artificial intelligence. This seems to confirm the report published in 2015 by the Nomura Research Institute, which predicted that nearly half of all jobs in Japan could be held by robots by 2035. Already in the United States, last May, a teaching assistant at the Georgia Institute of Technology (Georgia Tech) in Atlanta was replaced by an artificial intelligence going by the gentle name of Jill Watson. (1) ... Artificial intelligence is everywhere, as the Fondation Télécom shows in the studies it carries out jointly with the Institut Mines-Télécom, and in the latest opus of its Cahiers de veille, devoted to a major cross-cutting theme: artificial intelligence.
The booklet ends with a chapter asking a big question: "Artificial intelligences: what will we do with them?" and concludes: "It is therefore high time to reflect together on our practices with AI, whether algorithmic or robotic, well beyond the circle of scientists, and to question ourselves." Is it with this reflection in mind that the Secretary of State for Digital Affairs and Innovation is launching "France IA", the national strategy for artificial intelligence, on Friday 20 January? Above all, the Government wishes to mobilize all members of the AI community and federate the many emerging initiatives in France, in order to define a concerted national strategy and highlight France's potential in these innovative technologies that are essential for the future.
Artificial intelligence technologies represent major potential for research, for the development of new products and services, and for innovative industrial sectors, but they also raise many ethical, social and societal issues. The Fondation Télécom's Cahiers de veille are the result of studies carried out jointly by teacher-researchers from the Institut Mines-Télécom and industrial experts. Each Cahier deals with a specific subject and is entrusted to researchers from the Institute, who bring together recognized experts. Both complete and concise, it offers a technological state of the art and an analysis of the market as well as of economic, sociological, legal and ethical aspects, focusing on the most crucial points. It concludes with perspectives that are all possible avenues for joint work between the Fondation Télécom's partners and the Institute's teams. Here is the first part of Cahier n°8 on artificial intelligences, written by Aymeric Poulain-Maubant, independent expert.

Artificial intelligence of the third type
Intelligence can be defined according to two broad categories of faculties. The first intelligence, which we share with most animal species, is that which connects us to the outside world and enables us to perceive, learn, recognize, estimate and decide. It is central to our ability to adapt and survive in an ever-changing world, and its automation has so far been the main driver of Artificial Intelligence (AI).
Thanks to constant advances in microelectronics, computer science, signal processing, statistical analysis and, more recently, deep learning operating on vast data, remarkable results have been achieved in the automation of perception and decision-making tasks. A striking example of this first type of AI is the autonomous car, whose virtuosity in respecting the rules of the road and in paying attention to others is not in question.
The second intelligence is elsewhere and is specific to each individual. It encompasses the faculties of the mind, those that allow us to imagine, elaborate, invent and hope. The only model we have for trying to reproduce in a machine the properties of this creative intelligence is our brain, whose architecture is markedly different from that of the classical computer. Here, information and processes are interwoven into a single web of synaptic connections, numbering in the trillions.
AI of the second type will not be able to do without this massive parallelism, which can however, fortunately, be broken down in the manner of cortical modules. Once the mysteries of mental information and cortical organization have been completely solved, and once microelectronics can handle a large number of connections (say a few hundred million per module, no more), nothing will prevent us from designing artificial cortices with more modules than our brains contain. This AI of the third type, which some call the technological singularity, will be the result of an explosive alliance between neuroscience, electronics, intensive computing, massive data and the principle of diversity.
Claude Berrou, Professor at Télécom Bretagne
From the birth of the term "Artificial Intelligence" in 1956 in the United States to the 2015 call by scientists for the pursuit of AI that is as beneficial as possible to society, AI research has been full of promises, challenges, controversies and major societal issues.
In 27 pages, this Cahier de veille defines what intelligence is (rational, naturalist, systemic, emotional, kinesthetic...) and revisits the history of AI, presenting the two main paradigms used to tackle its challenges.
It then examines the directions for artificial intelligence, citing three topics to be addressed in the short term: the impact on the economy, ethical and legal issues, and the robustness of artifacts (safety, controllability, etc.). For AI is now very present in our daily lives under the impetus of the major Internet groups and emerging start-ups, especially through bots. And these new forms of intelligence are enjoying a revival thanks to machine learning and the bio-inspired computing that the author explains.

The birth of Artificial Intelligence

 "Question: So you seem to be saying that AI programs will be almost identical to humans. Won't there be any difference?
Answer: The differences between AI programs and humans are likely to be greater than the differences between most people. It is unthinkable that the "body" containing an AI program would not affect it profoundly. That is why, unless its body were a surprisingly faithful replica of the human body (and why would it be?), it would probably have extremely different views of what is important, what is interesting, etc. [...] I think an AI program, even if we could understand it, would seem rather strange to us. That is why the moment when we are dealing with an AI program, and not just a "bizarre" program, will give us a lot of trouble."
Douglas Hofstadter, Gödel Escher Bach, 1985
January 2015: on the initiative of the British artificial intelligence (AI) specialist Stuart Russell, a dozen researchers sign an open letter calling on their colleagues to go beyond the simple historical objective of AI performance. "Progress in artificial intelligence is such that today we must focus not only on better-performing AI, but also on the pursuit of AI that is as beneficial as possible for society. [...] We recommend a broad research effort to ensure that AIs are increasingly robust and beneficial, and that these systems actually do what we want them to do. [...] This research is necessarily interdisciplinary, as it involves both society and artificial intelligence. It extends from economics to law and philosophy, from information security to formal methods and, of course, to the various branches of AI itself."
The 37th signatory, Elon Musk, had voiced concern the previous year that a superior artificial intelligence, not benevolent towards humanity, might emerge within a few years, and that it might already be too late to stop the process. Musk had become convinced of this threat when he read the philosopher Nick Bostrom's book "Superintelligence". A number of researchers had met at the end of the NIPS 2014 conference to reflect on the impact these positions could have on their research, and had agreed to meet again at the end of 2015.
It was around the same time that Musk and a few other entrepreneurs created OpenAI, endowed with a US$1 billion fund to promote AI with a human face. Within two months, the letter's signatories represented nearly 300 research groups, including leading AI researchers at Google, Facebook, Microsoft and other companies, as well as some of the world's top computer scientists, physicists and philosophers.
By June 2016 there were more than 8,600 of them. They not only stress the importance of their findings and the opportunity to be seized, but also accompany them with a list of concrete avenues of research to be implemented immediately. 2016 thus opened with a new approach and new objectives for the development of artificial intelligence.


It's time to take Artificial Intelligence seriously.

"No longer just a curiosity for researchers, artificial intelligence now has a measurable impact on our lives." It is with these words that the Wall Street Journal announced to its readers at the end of August 2014 that artificial intelligence was no longer a mere subject of foresight. Developments in AI have long been underestimated for lack of clear definitions, fuelled by a widespread confusion between machine learning, deep learning, neural networks, predictive analysis, and massive data mining and analysis. Cinema, literature and the media have often misled the discussion on AI by preferring fantasy stories, from HAL 9000 in 2001: A Space Odyssey to Terminator and its procession of fears.
But while researchers are becoming widely aware of the need to discuss the impact of AI on society, the general public is discovering in the press and on social networks spectacular advances that tell a new story, that of a technology already among them. Three major breakthroughs explain how, in just a few years, AI research has been given a boost. Together they have provided an accessible and inexpensive innovation platform for developers, who use these algorithms as basic commodities to drive major transitions in many industrial sectors:
- access to parallel computing resources at very low cost,
- easy access to massive data that can be used as training material,
- new algorithms, taking advantage of the two previous breakthroughs.
An AI challenge if ever there was one, the first victory of a Go program (AlphaGo, from Google DeepMind) in October 2015 over a professional player had already caused a sensation. The way the same program, having learned from subsequent training games with the losing human, then beat one of the world's best players and reached 4th place in the world Go ranking, made an even deeper impression, especially since it seemed to have been creative in doing so. With autonomous cars already on the road and software translating more and more texts in real time, are we creating intelligences that go beyond man? But what intelligence(s) are we talking about?

60 years of artificial intelligence. It was in August 1956, at the Dartmouth conference, that the expression "Artificial Intelligence" first appeared publicly. It had been used by the pioneers John McCarthy, Marvin L. Minsky, Nathaniel Rochester and Claude E. Shannon the previous summer to propose this seminar. It characterizes "the possibility of producing programs that would behave or think intelligently". Its ambitions at the time, and the original challenge of artificial intelligence, were "to seek to produce, on a computer, a set of outputs that would be considered intelligent if produced by a human being". AI can reveal itself through accurate simulations of human cognitive processes, or through programs leading to intelligent results. It has been crossed by many dualities (between innate and acquired, between the symbols of expert systems and the sub-symbols of formal neural networks, between competence and performance) which have punctuated its history. A technology of knowledge (a new engineering science), but also a general science of information processing (by man or machine), or again a theory of man and of cognitive processes: this discipline has had each of these ambitions in turn, none of them incompatible or independent. Closely linked to a set of other disciplines within the cognitive sciences, it has had its moments of glory over 60 years, but also its moments of doubt and disaffection. These are the AI winters that Yann LeCun, today head of the AI teams at Facebook, and whose work over the last 30 years has led to today's spectacular advances, never hesitates to recall in order to moderate some of the current enthusiasm among the general public. But many researchers, like John Giannandrea, Google's vice-president of engineering and head of machine learning, believe that "things are taking an incredible turn" and that we are witnessing a genuine AI spring.

What is intelligence?

In his Introduction to Cognitive Sciences (1992), Michel Imbert cautiously prefers "to describe what we all agree to recognize as intelligent". His definition has the advantage of identifying the observable results that characterize an intelligent process, rather than explaining the mechanisms that make it possible, many of which are still not understood.
Indeed, our definition of intelligence is constantly evolving. We still wonder today about the way we learn, about the place of emotion in reasoning, about the role of dreams in the reinforcement of learning. In order to understand what we can expect from artificial intelligences, and better avoid the many myths, we must first agree on the different forms of intelligence, rather than define a single notion of intelligence.
Many forms of intelligence
Measuring and comparing the intelligence of human beings is a delicate exercise. Intelligence develops within an environment and a culture, it manifests itself differently according to gender, age, experiences, knowledge. It refers in turn to the ability to understand, the ability to reason and decide, the ability to adapt, skill and ability, the sum of knowledge and the sum of skills, not forgetting the ability to pass exams.
Intelligence is actually multiple and multiform. Literature often lists more than a dozen of them. Let's explore some of them, starting with those that were reproduced very early by artifacts.
First among them, rational (logical) intelligence is the one that has been measured the most since the famous Binet test, the precursor of the intelligence quotient. It brings together the skills of calculation, analysis, logic, and reasoning by deduction or induction, perfect for solving mathematical problems, playing games, and making decisions. This mathematical intelligence was naturally the first to be implemented in programs, particularly through expert systems. To it we can add naturalist intelligence, which consists in knowing how to classify objects and define categories. Modelling a situation, manipulating this model and testing hypotheses and their limits is a set of processes that was also mechanised early on; in humans it is called systemic intelligence. Coupled with organizational intelligence, which makes it possible to assemble disparate information, and with strategic intelligence, which covers the optimization of resources, means, time and space and makes it possible to take decisions, all these forms of intelligence make up a first group that allows us, for example, to plan a series of actions to achieve a goal.
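The expert systems mentioned here encoded rational intelligence as explicit symbolic rules. As a minimal, purely illustrative sketch of the principle (the facts and rules below are invented for the example, not drawn from any historical system), a forward-chaining inference engine fits in a few lines of Python:

```python
# Minimal forward-chaining inference engine (illustrative sketch only).
# Each rule is a (premises, conclusion) pair over symbolic facts.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_penguin_candidate"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all known, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_feathers", "lays_eggs", "cannot_fly"}, rules))
```

Real expert systems of the 1970s and 1980s added conflict-resolution strategies, uncertainty factors and thousands of rules, but the deductive core is this same loop.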
Creative intelligence is most often ignored by tests. Yet the ability to be creative is one of the criteria most frequently cited today to determine the degree of intelligence of an artifact.
Literary intelligence, the intelligence of words and meaning, allows us to develop reasoning translated into speech, to follow and carry out conversations, to translate and manipulate abstract concepts.
Emotional intelligence, which also has its own quotient, makes it possible to observe emotions, in oneself or in others, to interpret them, and to channel them so as to discharge them from the individual's consciousness.
Whether for natural or artificial intelligence, the role and importance of emotions have long been ignored or misunderstood. We have known since António R. Damásio that emotion and reasoning are strongly linked. Emotions provide information about the state of the body, which is the interface between the seat of intelligence and the environment that motivates it and in which it is exercised. Kinesthetic intelligence is also linked to the body: it is the intelligence that allows the coordination of movements, their strength, their precision.
Together with emotional intelligence and spatial intelligence (the sense of orientation), it allows us to understand our place in the world, what we can and cannot do in it, and how to do it. To this is added situational intelligence, which consists of knowing how to adapt and survive in an unfamiliar, even hostile, environment. This third group provides the ability to perceive and act appropriately in the world.
The characters of intelligence are the product of evolution and of interaction with the world. They are also activated by interactions with one's fellow beings, particularly for certain forms of learning, such as learning by imitation; intelligence is indeed often defined as the ability to learn. Social intelligence reveals humans who are at ease in contact with others, while collective intelligence characterizes those who put their ego aside for a higher common goal. The duality of the individual versus the collective has nourished a strong current of cognitive science, the paradigm of the anthill: making minimal cognitive agents cooperate in order to obtain a global behaviour qualified as intelligent, which no isolated agent could have produced alone.
Less common intelligences also exist, and their mastery gives their beneficiaries an edge. Thus multisensory intelligence uses all the senses simultaneously to perceive the world in a different way. We find this ability implemented, for example, in connected objects, in particular in sensor fusion.
Finally, temporal intelligence offers a keen sense of the time axis. Since the concepts of self, past and future are linked within primary consciousness, and the absence of higher-order consciousness prevents planning for the future (using long-term memory), the study of people who possess this temporal intelligence would teach us more about another concept that raises questions: consciousness. For consciousness, and especially self-consciousness, is a necessary condition for going beyond the early stages of the intellectual development of a living being. It is not limited to human beings, and many animals pass the mirror test, including insects. Researchers are thus becoming aware that animal cognition has long been underestimated (because of animals' supposed lack of language) and should be a source of inspiration for our artifacts. As should the idea that the seat of intelligence does not lie only in the (human) brain, and that cognitive processes also take place in the neurons of the skin or, in so-called embodied intelligence, in the very morphology of living beings.
The intelligence of artifacts
The initial project of strong AI, which was to reconstruct the way humans think and then go beyond it, has given way over the years to the more modest project of weak AI: a specialized simulation of human behaviours considered intelligent, using engineering methods, without concern for similarity of mechanism. Two main paradigms have been used to address AI challenges. The classical approach, based on the manipulation of symbols and rules (cognitivism), quickly gave interesting results for problems in the first group of intelligences, that of mathematics, but came up against the complexity of machine translation, despite the arrival of semantic networks.
The neuro-inspired approach (connectionism), with its learning capacities, tackled problems of visual or auditory perception, though it was limited by the power of the machines of the time. Hybrid approaches, running on parallel machines designed for the purpose, allowed advances in the 1990s with the emergence of non-programmed intelligent behaviours. Today, the multitude of existing forms of intelligence, human or not, being a given, AIs are heading not so much towards a super-intelligence as towards a new form of intelligence, specific to artifacts, defined neither by comparison nor by extension.
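A minimal sketch can make the contrast between the two paradigms concrete: where cognitivism writes the rules by hand, connectionism adjusts the weights of formal neurons from examples. Below, a single perceptron, the simplest neuro-inspired artifact, learns the logical OR function from toy data; this is a didactic sketch, not a model of any modern system:

```python
# A single perceptron trained on the OR function: a toy connectionist sketch.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]   # weights, adjusted by learning rather than programmed
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    """Fire (output 1) if the weighted sum of inputs exceeds the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):                  # a few passes over the examples
    for x, target in data:
        error = target - predict(x)      # perceptron learning rule
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])     # learned outputs for OR: [0, 1, 1, 1]
```

Stacking many such neurons in layers, and replacing the threshold rule with gradient-based updates, leads to the deep networks behind today's advances.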

Am I dealing with a human being? The question of whether the dialogues we engage in through a machine are conducted by human beings, by algorithmic robots, or by a combination of both is becoming increasingly important with the deployment of services associated with instant messaging, such as Facebook's Messenger, and micro-blogging platforms such as Twitter: the bots. The stakes are high, because it is a matter of capturing users who spend more and more time on these tools, and of offering them, through intelligent dialogues, a new way to access information and consumption. It is a kind of new architecture for the web browser in the age of AI, with the aim of simplifying interaction with online services. In February 2016, the news site Quartz thus deployed a bot with which the reader converses, rather than consulting a long list of articles, in order to be offered summaries of the essential information of interest. Following Telegram and Kik, Facebook opened its bot store on Messenger in April 2016. Most of Messenger's 900 million monthly users will be able to find their first conversational bots there, provided the bots are not too intrusive and actually do them a service. The user experience has to be excellent for adoption to happen: a bot that booked airline tickets without taking into account boarding and check-in times during stopovers would be more cumbersome than useful, and would do a disservice to the brand distributing it. These bots, developed to assist humans in tasks that are specific to them, must eventually acquire knowledge at the level of a strong, general AI (see above), and a personality capable of adapting to each user, taking into account their habits, culture, beliefs... This is why, for some time to come, bots will probably still be made up of mixed teams of AIs and humans. Will we always be able to distinguish the bot's part from the human's?
One test invites you to try for yourself in a particular case: finding out whether a human or an AI composed a poem.
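Commercial bots do not publish their internals, but the basic loop they share (guess the user's intent, then trigger the corresponding service) can be sketched with a deliberately naive keyword matcher; every intent and reply below is invented for the illustration:

```python
# Skeleton of an intent-matching conversational bot (all intents invented).
INTENTS = {
    "weather":  {"weather", "rain", "sunny", "forecast"},
    "news":     {"news", "headlines", "articles"},
    "greeting": {"hello", "hi", "morning"},
}

REPLIES = {
    "weather":  "Here is today's forecast for your city.",
    "news":     "Here are the three stories that matter to you today.",
    "greeting": "Hello! What can I do for you?",
}

def answer(message):
    """Pick the intent sharing the most keywords with the message."""
    words = {w.strip("?!.,") for w in message.lower().split()}
    best = max(INTENTS, key=lambda i: len(words & INTENTS[i]))
    if not words & INTENTS[best]:
        return "Sorry, I did not understand."  # hand over to a human operator
    return REPLIES[best]

print(answer("any news today?"))
```

The fallback branch is precisely where today's "mixed teams of AIs and humans" come in: when no intent matches, a human operator takes over the conversation.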

60 years of links between games and AI. As early as 1956, AI took an interest in games as a source of challenges. Arthur Samuel developed a first draughts-playing program trained by reinforcement learning, which in 1962 beat a good American amateur in a single game, becoming the first machine in history to beat a human. The game of backgammon followed in 1979, with the same techniques making the program play against itself during its training. In this way it reached levels reputed not to be teachable by humans, while humans learned a great deal by observing the capabilities of these programs. This is how it became clear that artificial intelligences could well serve to strengthen human intelligences. Then came the game of chess, which Claude Shannon had said as early as 1950 was a good challenge for mechanized thinking, and which kept researchers and players busy until 1997, when IBM's Deep Blue computer beat world champion Garry Kasparov. Finally, in 2016, the game of Go fell in its turn, thanks to deep learning techniques and a system that had played thousands of games against itself. But there is still much to learn from games. So far, players have had perfect knowledge of all the elements of the game; games with incomplete information, such as poker, become the next step to take. Other types of games, such as robot football or autonomous racing cars, also present many more general AI challenges. The ultimate game remains what Alan Turing in 1950 called the imitation game. Originally, a woman and a man are questioned by a person who cannot see them and who has to guess which is the woman; the hidden man must therefore imitate "female behaviour". Turing wondered whether a program could take the place of the hidden man and fool the interrogator. Today, the question is rather what proportion of humans would lose at this game.
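The self-play principle that runs from Samuel's draughts program to AlphaGo can be shown on a deliberately tiny game. The sketch below is not any historical program: it trains a tabular learner by playing a miniature Nim variant against itself (each side takes 1 to 3 matches; whoever takes the last match loses), rewarding the moves of the eventual winner and penalizing those of the loser:

```python
import random

# Self-play learning on misere Nim: take 1-3 matches, taking the last one loses.
Q = {}  # Q[(matches_left, action)] -> estimated value for the player to move

def choose(n, eps):
    """Epsilon-greedy move: mostly the best-known action, sometimes a random one."""
    actions = [a for a in (1, 2, 3) if a <= n]
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((n, a), 0.0))

random.seed(0)
for game in range(20000):
    n, history = 15, []
    while n > 0:
        a = choose(n, eps=0.2)
        history.append((n, a))
        n -= a
    # The player who made the last move took the last match and lost.
    reward = -1.0
    for state_action in reversed(history):   # alternate loser/winner rewards
        old = Q.get(state_action, 0.0)
        Q[state_action] = old + 0.1 * (reward - old)
        reward = -reward

print(choose(15, eps=0.0))   # greedy move from the starting position
```

With these settings, the greedy move from 15 typically converges to taking 2 matches, leaving the opponent on 13, a losing position since 13 % 4 == 1; the program discovers this regularity purely by playing against itself, as the backgammon and Go programs did at a vastly larger scale.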


A model of the architecture of the human mind. The General Problem Solver, created in 1959 by Simon, Shaw and Newell, was a first attempt, in the era of nascent AI, at an artificial system proposed to solve any problem by confronting the objectives pursued with the means of achieving them. This system, which had a great influence on the evolution of artificial intelligence and the cognitive sciences, solved simple problems very well, but was limited as soon as the combinatorics of the problem increased. This work, and Newell & Simon's book Human Problem Solving (1972), are the foundations of the cognitivist paradigm.
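GPS proper relied on means-ends analysis; the simpler underlying idea, searching for a sequence of operators that turns the current state into the goal state, can be sketched as follows (the monkey-and-bananas style operators are invented for the example). The combinatorial limitation the text mentions is visible here too: breadth-first search blows up as soon as the number of facts and operators grows.

```python
# Toy planner in the spirit of GPS (operators invented for the example).
from collections import deque

# Each operator: name, preconditions, facts it adds, facts it removes.
OPERATORS = [
    ("walk-to-box",  {"monkey-on-floor"}, {"monkey-at-box"}, set()),
    ("climb-box",    {"monkey-at-box"},   {"monkey-on-box"}, {"monkey-on-floor"}),
    ("grab-bananas", {"monkey-on-box"},   {"has-bananas"},   set()),
]

def plan(state, goal):
    """Breadth-first search for a sequence of operators reaching the goal."""
    queue = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, pre, add, rem in OPERATORS:
            if pre <= state:
                nxt = frozenset((state - rem) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None  # no operator sequence reaches the goal

print(plan({"monkey-on-floor"}, {"has-bananas"}))
```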

Assessing intelligence

As a corollary to the definition of intelligence, the ability to measure and quantify it, whether natural or artificial, remains an open question. Developed in 1905 to detect pupils in difficulty, the Simon-Binet test is a metric scale of intelligence at the origin of the concept of mental age. It should not be confused with the intelligence quotient, a psychometric test giving a standardized quantitative indication of intellectual performance: a person's rank relative to the population. It has the drawback of not taking into account all the different forms of intelligence. Moreover, being applied only to human beings, and according to principles of measurement that are specific to them, it has also contributed to neglecting the study of the forms of intelligence exercised by animals. Artificial intelligence itself is still measured according to human criteria, and thus evaluated by comparison with humans.
In his 1950 article "Computing Machinery and Intelligence", Alan Turing posed the question of whether machines can think, both of these terms remaining to be defined. To find out, he proposed that if, during a conversation, a machine manages to pass for a human being in the eyes of an interlocutor, then it is genuinely intelligent. There are in fact several Turing tests, with increasingly refined protocols. Their shortcoming is that they are limited to communication experiences, and to human judgment alone, whereas intelligent behaviour may not be associated with language, or even be human at all.
Better assessment of intelligence
Current artificial intelligences have passed the Turing test. But researchers agree that it is not so much the machine that passes the test as the humans who fail it, because they have become so familiar with the behavioural characteristics of machines that they can no longer identify them. Moreover, many of the experiments that pass the test do not stand up to closer scrutiny, and above all reveal systems cunningly designed to earn the label. The research community is now entering a post-Turing era, enriching the test so that the machine can be evaluated on a wider range of criteria and tasks.
Morten Tyldum's "The Imitation Game", with Benedict Cumberbatch (January 2015)

What directions for artificial intelligence?

After having explored a large number of problems and approaches since its creation, AI research has focused for the last 20 years on the construction of intelligent agents: systems that perceive and act in specific environments. In this framework, intelligence is defined according to statistical dimensions and the economic notion of rationality: making logical deductions, planning correctly and making good decisions. This approach, based on probabilistic representations and statistical learning, has resulted in strong cross-fertilization between disciplines such as artificial intelligence, machine learning, statistics and neuroscience. The creation of common conceptual frameworks and theories, the availability of large masses of data to learn from, and the computing power of machines have allowed the sudden emergence of remarkable AI successes. All the old problems have simultaneously undergone major advances: speech recognition, machine translation, man-machine dialogue, classification of image contents, robot operation, autonomous vehicles... So what do researchers propose for what comes next?
Three topics are to be covered in the short term: the impact of AI on the economy, ethical and legal issues, and the robustness of artifacts. For the first, the aim is to maximize AI's beneficial effects on the economy while minimizing its deleterious ones. What will be the effects on the labour market, and on the very notion of work? How will sectors such as banking, insurance and marketing be modified by extremely detailed knowledge of customer behaviour? What policies should be implemented to mitigate the negative effects, and what new metrics should be used to take these decisions? Ethical and legal questions arise, especially for autonomous vehicles: what decisions can they take to minimize accidents, and who bears responsibility for them? What role do computer scientists play in the construction of algorithms and their consequences, especially in the monitoring or management of private life data?
Finally, for society to widely accept intelligent artifacts, they must be verified (they do what they are supposed to do), valid (they do not behave in a way that has unintended consequences), safe (they are not hackable), and controllable (they can be corrected after deployment).
In the long term, the goal is to develop systems that can learn from experience (and especially from few examples, which is not the case today with deep learning) in a way similar to humans, to the point of surpassing human performance in most cognitive tasks, which will have a truly major impact on human society.
The nature of AIs is now clear: they are new forms of intelligence, created by humans and creating themselves, accompanying humanity in its daily life in a benevolent and beneficial way.

Daily life cradled by Artificial Intelligence

"The business model for the next 10,000 startups is easy to predict: take X, add AI. Everything we have electrified, we are going to cognitize," one could read in Wired at the end of 2014. And indeed 2015 was the year when artificial intelligence really entered our daily lives, under the impetus of the major Internet groups and of startups building on the conveniences they opened up.
It can even be said that the digital revolution now underway is liberating and increasing our cognitive capacities, just as the industrial revolution helped to liberate and increase the muscular capacities of humans, by bringing them the power of steam and then electricity. And as with these previous revolutions, this will not happen without a reorganization of work, or even of the notion of work itself.
While the original AIs were developed to emulate rational intelligence, today's AIs are designed above all to assist our emotional and social intelligence. Their purpose is to facilitate and augment all our communication processes, whether between humans (real-time translation tools, face and emotion recognition tools) or between humans and machines. "Take X, add AI" can be understood in the literal sense of the term: take any object, and augment it with minimal intelligent capacities so that its use becomes as natural as possible. These intelligent capacities also serve for exchanges between connected objects: for example, when the thermostat, having learned from the behaviour of the house's occupants what temperature to set in each room according to the time of day and the people present, becomes the heart of the home automation platform.
Gradually, generalist conversational agents, such as Apple's Siri, are becoming familiar, although natural language understanding remains a vast subject of research, far from being resolved. Familiar only to a certain extent: while a physical object usually conveys its use through its form, or a software program through its icon- and menu-based interface, interaction with a conversational agent still requires learning on the part of the human, who does not always know what to expect from this virtual companion, what it will or will not be able to respond to, and to what extent its incapacities can be held against it.
The home is one of the stakes in the deployment of these AIs, a scenario already familiar to us through cinema: the virtual butler, or artificial presence (like Mark Zuckerberg's). At the Google I/O developer conference in May 2016, the new generation of Google Assistant was unveiled: a bot that uses the context of ongoing conversations to provide relevant assistance on the fly, and which will also enrich Google Home with an "ambient experience that goes beyond devices".
In order to provide such services, these assistants have to listen constantly. But "making our daily life easier must be done while respecting our privacy," recalls Rand Hindi, CEO of Snips and one of the Innovators Under 35 in France in 2014, whose company is developing a pervasive AI that learns from our habits and interacts with technology on our behalf. The challenge is to encrypt the personal data collected and to learn from this encrypted data.

The advent of machine learning

To implement these scenarios, which were still futuristic five years ago, machines must learn from an unpredictable environment, on data of all types, arriving massively, whose raw meaning has been previously encrypted. Learning algorithms and big data, two of the three pillars of the new Artificial Intelligence, are thus strongly linked today.
However, while both natural and artificial intelligence are increasingly associated with learning and adaptive abilities, this has not always been the case. For a long time, problems were solved through rule manipulation (if it's raining, take an umbrella) and decision trees in which each branch is a rule option. But such systems proved ineffective for pattern recognition and speech recognition problems, where the data to be understood is complex, variable, varied and noisy. It was necessary to imagine systems capable of training on examples, of making features emerge from them, of generalizing to examples not yet encountered, and of improving continuously with experience: this is machine learning, which today encompasses several dozen algorithms that can be classified in different ways, or types of learning.
Types of learning
The most common approach is supervised learning: a learning system is presented with a (large) set of inputs associated with labels, e.g. objects with their category, and the system adjusts its internal parameters (its "weights") until it can classify examples not yet presented, thus demonstrating an ability to generalize. Neural networks are such systems. They build an internal model of the world presented to them, based on feature extractors and simple classifiers.
This learning requires external intervention to ensure that the learning rules are applied. A teacher provides desired targets for each input pattern and the learning system changes accordingly.
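As an illustration (a minimal Python sketch added here, not taken from the Cahier), the supervised-learning loop described above can be reduced to a perceptron: a teacher provides the desired target for each input, and the system adjusts its weights in the direction that reduces the error.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Supervised learning sketch: adjust internal weights from labelled examples."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:          # label is the teacher's desired target (0 or 1)
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            # move the weights so that the next judgement is closer to the target
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn the logical AND function from labelled input/output pairs
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

After training, the system classifies all four inputs correctly, including combinations of the adjusted weights it was never told about explicitly — a toy version of the generalization described above.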
Conversely, unsupervised learning takes place without a teacher guiding the system. The latter has to forge its own categories from the space of input patterns, usually on a statistical basis. In between, semi-supervised learning works on data of which some is labelled and some is not, which can combine the strengths of each approach taken independently. Co-learning is a subset of this class, based on classifiers working on different and ideally independent features and communicating their results. Hetero-associative learning learns to associate output forms with input patterns. Self-associative learning makes it possible to reconstruct an incomplete pattern presented as input, one that has merely been evoked.
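Unsupervised learning can likewise be sketched (an illustrative Python example, not from the Cahier) with k-means clustering, where the system forges its own categories on a purely statistical basis, with no teacher providing labels:

```python
import random

def kmeans(points, k, iterations=10):
    """Unsupervised learning sketch: group 1-D points into k clusters."""
    random.seed(1)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to the nearest centroid
            i = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[i].append(p)
        # move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups around 1 and 10; the system discovers them on its own
data = [0.9, 1.0, 1.1, 9.8, 10.0, 10.2]
centroids = kmeans(data, k=2)
```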
Reinforcement learning is a special type of supervised learning. Very little information is given to the system, and in particular no desired output. A judgement is made on the system's performance, and it is up to the system to change so that the judgements it receives are increasingly positive. In this way it resembles animal training techniques. Combined with deep learning, it allowed AlphaGo to win against the European go champion at the end of 2015.
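A hedged sketch of this idea (illustrative only, not AlphaGo's actual algorithm): a multi-armed bandit receives only a noisy reward — a judgement — after each action, and gradually favours the actions judged most positively.

```python
import random

def bandit_learning(true_rewards, steps=2000, epsilon=0.1):
    """Reinforcement learning sketch: no desired output, only rewards."""
    random.seed(0)
    estimates = [0.0] * len(true_rewards)
    counts = [0] * len(true_rewards)
    for _ in range(steps):
        # explore occasionally, otherwise exploit the best-known action
        if random.random() < epsilon:
            a = random.randrange(len(true_rewards))
        else:
            a = max(range(len(true_rewards)), key=lambda i: estimates[i])
        reward = true_rewards[a] + random.gauss(0, 0.1)  # noisy judgement
        counts[a] += 1
        # incremental average: move the estimate toward the observed reward
        estimates[a] += (reward - estimates[a]) / counts[a]
    return estimates

# Three possible actions; the second one is actually the best
estimates = bandit_learning([0.2, 0.8, 0.5])
best = max(range(3), key=lambda i: estimates[i])
```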
The successes of deep learning
Pattern recognition, especially image recognition, has long been handled by combining feature extractors and learned classifiers. These are often multi-layered neural networks, whose main components model biological neurons in a very simplified way; the topology of these networks distinguishes input neurons, output neurons and one or more intermediate layers of neurons, linked by their synapses and modifying their synaptic weights. Deep learning implements this idea with a very large number of intermediate layers and an even larger number of neurons. It is this gigantism, made possible by current computing capabilities, that is at the heart of the vast majority of the decisive advances of recent months.
The great advantage of these deep architectures is "their ability to learn to represent the world in a hierarchical way", as explained by Yann LeCun, whose work on convolutional networks, "a particular form of multilayer neural network whose connection architecture is inspired by that of the visual cortex of mammals", enabled the first cheque recognition systems in the 1990s. All current speech recognition systems implement this type of learning.
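The convolutional idea can be illustrated with a hand-rolled Python sketch (added here for intuition, not LeCun's actual networks): a small filter slid over an image acts as a feature extractor, here detecting vertical edges.

```python
def convolve2d(image, kernel):
    """Slide a small filter over the image; each output value measures how
    strongly the local patch matches the filter (a feature extractor)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A vertical-edge detector applied to an image with a dark/bright boundary
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [[-1, 1],
               [-1, 1]]
features = convolve2d(image, edge_kernel)  # strong response at the boundary
```

In a real convolutional network, many such filters are learned rather than hand-written, and their outputs feed deeper layers that combine edges into increasingly abstract, hierarchical features.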
However, the success of deep learning should not lead one to believe that all of AI is reduced to it today. To reach general AI, far from current AIs that know how to play go or chess very well but fail at other tasks, systems must be allowed to remember, predict and plan. New neural network architectures, recurrent memory networks, are thus being designed, able for example to automatically caption images.
Living beings understand because they know how the world works, because they live in it. This capacity for common-sense understanding, as Yann LeCun points out, is the fundamental question that needs to be resolved now. Predictive learning could be one avenue. It is a very particular type of learning: the learning system is shown a sequence of events (images in a video, words in a sentence, user behaviour) and is asked to predict the elements that follow.
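A minimal sketch of predictive learning (an illustration added here, far simpler than the neural models the article refers to): a system observes a stream of user-behaviour events and learns to predict the element that follows each one.

```python
from collections import Counter, defaultdict

def train_predictor(sequence):
    """Predictive learning sketch: count which event follows which."""
    following = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, event):
    # predict the most frequently observed successor
    return model[event].most_common(1)[0][0]

# Learn from user behaviour: a daily routine observed as a stream of events
routine = ["wake", "coffee", "work", "lunch", "work", "home",
           "wake", "coffee", "work", "lunch", "work", "home"]
model = train_predictor(routine)
```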

At Télécom ParisTech, Albert Bifet is conducting research as part of the Machine Learning project on real-time data. This involves mining data streams (datastream mining) arriving at high velocity, for example telecommunications data streams. No data is stored, and the system must adapt to changes in real time. The current stakes are to enable this type of machine learning in the context of the Internet of Things, in particular to enable connected objects to make better decisions. A balance is sought between having more data available and having better algorithms to make these decisions under real-time constraints. In some cases, the data streams studied arrive in a distributed way. Four types of algorithms are retained: classification, regression, clustering, and frequent pattern mining. Albert Bifet is developing the next generation of models and algorithms to mine data from these objects: algorithms that never stop learning, algorithms parallelized and distributed across multiple machines, and finally the application of deep learning to these data streams. This research benefits from the existence of the Machine Learning for Big Data chair at Télécom ParisTech, led by Stephan Clémençon, whose work has already been covered in the Big Data watch book. Created in September 2013 with the support of the Fondation Télécom, and financed by four partner companies (Safran, PSA Peugeot Citroën, Criteo and BNP Paribas), the chair produces methodological research addressing the challenge of statistical analysis of massive data.
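One classic stream-mining idea, sketched here for illustration (this is the standard Misra-Gries algorithm, not necessarily the team's own methods): finding frequent items in a stream with bounded memory, without storing the data itself — exactly the constraint described above.

```python
def heavy_hitters(stream, k):
    """Misra-Gries sketch: approximate frequent items in one pass using
    at most k-1 counters, however long the stream is."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # decrement every counter; drop those that reach zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# A telecom-style event stream where one value dominates
stream = ["a", "b", "a", "c", "a", "d", "a", "b", "a"]
frequent = heavy_hitters(stream, k=2)
```

Any item occurring more than n/k times in a stream of n events is guaranteed to survive in the counters, which is why this family of algorithms suits high-velocity streams that cannot be replayed.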

Forms of learning
Some learning mechanisms, such as those related to robot motricity, cannot rely solely on big data, as most popular AI algorithms do, because the adaptation of a robot to its environment depends on robots in all their varieties and states, and on environments in all their variations and variability. It is therefore necessary to focus on forms of learning that place the learning system in its environment. Where the types of learning highlight the processes involved, the forms of learning focus on the context, behavioural and environmental, in which they act. They are multiple and depend on the cognitive development of the being (or of the artefact) observed.
Imprinting and habituation are primary and irreversible forms of learning. The first is acquired abruptly, as the realization of innate behaviours. The second makes it possible not to treat as new every event that occurs; it makes experience and reasoning possible. Trial-and-error learning is based on value systems and on rewards and punishments; it allows the acquisition of automatisms. Learning by action, highly dependent on the environment, combines the acquisition of know-how through training, the active search for information, and manipulation (of objects, models...). It can take thought itself as its object (the scaffolding of hypotheses and verifications) and thus enable creative power.
Learning by observation/imitation describes situations where the learner copies the behaviour of a model without being instructed or corrected by it. It is a common type of learning in the first periods of intellectual development, and later in phenomena of mimicry. Coactive learning takes place between beings with a comparable level of expertise. Its strength lies in the dynamic of exchange between the two, based on hypotheses, tests and criticism.
Learning by instruction, in contrast, is carried out between two individuals with different levels of expertise. Complementing learning by action and by observation/imitation, the expert directs the student's progress by communicating with them. Finally, a last form of learning is learning to learn. Meta-learning makes it possible, by observing one's own learning processes, to activate the right learning mechanisms for each new situation, and eventually to create new ones.

At Télécom Bretagne, Mai Nguyen is developing adaptive robots in the framework of personal assistance services. The learning studied is, for example, active learning through artificial curiosity. This is what a child between 0 and 6 years old, who does not yet go to school, nevertheless does: the child plays, independently, and decides among all the possible games. Psychologists say that the child intuitively chooses the task that will allow them to learn the most. Every child wants to make progress, in their own way, and chooses what to learn for their own good. To this end, children have different learning strategies at their disposal, either through social guidance or independently. It is on the basis of these observations that the researcher imagines robots capable of learning from their physical environment.

The objective is that of a service robotics that aims to create intelligent robots through the aggregation of the best AI algorithms. A first challenge consists in aggregating these algorithms into a coherent and real-time system, on an embedded system. A second challenge is to build algorithms so that the robot can interact physically and emotionally with its physical environment and its users. A third challenge is to allow the robot to learn new tasks that had not been pre-programmed. This means going beyond the most well-known challenges, where the (unique) task as well as the test cases are defined in advance and precisely. Intelligent robotics undeniably benefits from recent advances in AI, but must also continue to find new paradigms to advance in its own challenges. For example, when these robots leave the factory, they are not yet finished, so they can later adapt to their environment. They therefore require a cognitive structure that allows them to learn.


Towards neuro-inspired computing

Computing has undergone considerable development over the last forty years, thanks to advances in physics and microelectronics. Algorithms have also progressed, particularly since the pioneering work of Donald Knuth, but at a much slower rate than microelectronics. Our ultra-fast, multi-core computers with vast memories have remained rudimentary programming machines, non-learning, non-adaptive, indifferent, in a word, unintelligent.
Two reasons can be given for the temporary failures of AI. First, as Yann LeCun has explained, too many scientists lacked pragmatism in developing methods that could have advanced AI: "you had to have a foolproof theory, and empirical results didn't count. This is a very bad way to approach an engineering problem." And if there is one area in which, in the current state of knowledge, no unifying theory has imposed itself, it is that of intelligence, whether natural or artificial. Too often, also, the top-down approach (from cognition to models) has been favoured to the detriment of the bottom-up approach (from circuits to cognition), which is less conducive to theorizing.
Secondly, and more importantly, the idea of artificial intelligence has until now been put forward only by computer experts (in the broadest sense of information science), who thought they could do without imitating the only available and working model: the circuits of our brain. Yet if we want to obtain superior cognitive properties from electronic machines, such as imagination, creation and judgement, we must stop following the model of the conventional computing machine to the letter, or rather try to find a companion model, the first taking care of heavy computation (which the human brain does not know how to do) and the second of intelligence. And we have only one source of inspiration for obtaining this artificial intelligence: our cortex.
It therefore seems imperative, in a new move towards AI, which will be called rather neuro-inspired computing, to have a good knowledge of the brain in its mesoscopic aspects, i.e. neural coding and neural circuits. This approach also has the advantage of addressing some of the current bottlenecks in the field, such as energy consumption or other issues related to embedded systems.

At Télécom Bretagne, Claude Berrou and his team are working within the European Neucod (Neural Coding) project to model the human brain using an approach hitherto unprecedented in this field: information theory. They start from the observation that the neocortex, this "propagation medium which allows biological processes to pass from one island of knowledge to another", has a structure very close to that of modern decoders. Their work has made it possible to develop codes for representing and storing information, which explains its robustness and durability. It all starts from an analogy between Shannon's information theory and the human brain. This analogy has enabled the development of robust associative memories with great learning diversity, thanks to the work of Vincent Gripon. It takes into account a world that is real, rich, complex and, above all, analogue. The phase of perception corresponds to source coding (removing redundancy from the environment), and the phase of memorization to channel coding (adding redundancy to make the signal more robust).
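The source-coding/channel-coding analogy can be illustrated with the simplest possible channel code, a repetition code (a Python sketch for intuition only, not the team's actual codes): redundancy is added before storage so that the message survives noise.

```python
def channel_encode(bits, n=3):
    """Channel coding sketch: add redundancy by repeating each bit n times."""
    return [b for bit in bits for b in [bit] * n]

def channel_decode(received, n=3):
    """Recover the message by majority vote over each group of n bits."""
    decoded = []
    for i in range(0, len(received), n):
        group = received[i:i + n]
        decoded.append(1 if sum(group) > n // 2 else 0)
    return decoded

message = [1, 0, 1, 1]
sent = channel_encode(message)   # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[1] = 0                      # noise flips one bit in storage
recovered = channel_decode(sent)
```

The majority vote repairs the flipped bit, which is the intuition behind "adding redundancy to make the signal more robust"; modern decoders achieve the same robustness with far less added redundancy.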

