
Artificial intelligence: where do we stand?

Could AI be the most significant event in the history of mankind? The question alone is reason enough to think about it lucidly and in an informed way right now. Whatever the outcome of this adventure, we are likely to learn as much about the essence of humanity as about the cognitive capacities of machines. A good starting point is the latest white paper from Weave, a consulting firm in operational strategy, which addresses the various facets of AI: its definition, its history and evolution, its ethical and legal aspects, and its influence on the consulting sector.
The study was conducted by three authors: Pirmin Lemberger, a data scientist and physicist by training, who directs Weave's data lab; Jonathan Lepan, Senior Manager within Weave's Business Technology entity; and Olivier Reisse, member of Weave's executive committee and founding partner of the Business Technology practice.
These three experts state: "It seems inadequate to consider AI as just another invention in an era already fertile in innovation. Instead, AI technologies are more like a value-creating resource, a foundation for future innovation in the same way as the web 25 years ago or electricity 150 years ago (Ng, 2016). As such, the disruptive potential of AI is probably difficult to overestimate.
Each technology opens up a new field of possibilities, with its risks and opportunities. The ancient Greeks had already formalized this idea in their concept of the pharmakon, the term they used to describe the ambivalence inherent in any technology, at once a poison and its own remedy. There is no reason why AI should be an exception."
The history of AI is marked by a succession of grandiose ambitions and disappointed hopes. In the 1950s, for example, the pioneers of the fledgling field imagined that significant progress on designing systems that think like humans could be achieved during a two-month seminar (see p. 21 of the study). Disparate technical and conceptual approaches have thus long coexisted and prevented AI from being conceived as a unified discipline.
Although the field is now better structured, there is no unanimity among experts on the objectives that should be assigned to AI, as the study shows (see p. 13) through four points of view developed by Russell and Norvig in their monumental work Artificial Intelligence: A Modern Approach (Russell & Norvig, 2009).
At the origin of this lack of unanimity is, naturally, the difficulty of defining what intelligence is, or even just some of its aspects, such as creativity, intuition or the capacity for abstraction.

AI is always the future

Another difficulty encountered when trying to define AI is the subjective nature of any such definition. The problems believed to require an "authentic" AI vary throughout history. Fifty years ago, a machine able to beat a chess grandmaster would have been considered "intelligent". Such a machine was actually built in 1997, when Deep Blue demonstrated its superiority in a match against the champion Kasparov. Very quickly, the feat went from technical prowess to mass-market product, and no one today would claim that a chess program running on a PC is endowed with "genuine" intelligence.
Only two years ago, few people would have bet on a computer beating a grand master at the game of Go, at least not for a good twenty years. The experts' opinion was that real mastery of this game required an intuition, an imagination (or even, for some, a mystical sensibility) inaccessible to a machine.
Yet it happened at the beginning of 2016, when Google's AlphaGo system defeated champion Lee Sedol in a historic match. Once again, the performance is already ceasing to be regarded as "real" AI. The examples could easily be multiplied.
In short, AI is often perceived as whatever has not yet been achieved. It is therefore highly likely that there will never be a eureka moment in the quest for AI, but rather a succession of phases of progress and habituation spanning several decades.

Illusion or reality?

How can we distinguish a general intelligence (sometimes called strong AI) from a mere simulacrum that acts so as to create an illusion? The idea that a simulated AI would inevitably be detected after a sufficiently long test is the essence of the famous Turing test, which probes a supposedly intelligent system to measure its ability to simulate human conversation without physical contact. Of course, no system to date has passed this test. Some applications, however, could already raise doubts about their ability to "understand the world". Is a system such as Google's software capable of describing the scene in a photo with a caption (see p. 34) intelligent in the sense we would like it to be? The answer is probably no. These are exciting times, however, because the answer to this question is no longer entirely obvious.

What goals for AI?

Given the difficulties in clearly defining what an AI is, one possibility is to define instead the objectives expected of such a system. In other words, the question to ask is what problems an AI should be able to solve. Approaches to AI can be broadly grouped into four categories along two axes, as shown in the table below: the vertical axis distinguishes between "thinking" and "acting", while the horizontal axis distinguishes the ideal targeted, "human" or "rational" behaviour (Russell & Norvig, 2009).

              Human ideal             Rational ideal
Thinking      Thinking like a human   Thinking rationally
Acting        Acting like a human     Acting rationally

Thinking like a human: The hope is that, one day perhaps, we will be able to reproduce our cognitive faculties in an artificial system. The cognitive sciences follow this path, developing experimental methods on humans and animals to build a model of the mind that can be tested empirically. Some early approaches to AI, such as the General Problem Solver developed at the end of the 1950s by Allen Newell and Herbert Simon, tried in this way to reproduce the steps of human reasoning.
Acting like a human: The Turing test offers an operational definition of intelligence without concern for the machine's internal processes. Building a system capable of passing such a test would, however, involve solving a cascade of the most difficult problems in AI: natural language processing, knowledge representation, reasoning and machine learning. To date, AI research has focused more on understanding the mechanisms underlying intelligence than on reproducing an intelligence capable of passing the Turing test. The approach is similar to that of aeronautics, whose successes rested on a thorough understanding of the laws of aerodynamics rather than on imitating bird flight.
Thinking rationally: Rational thinking is primarily based on logic. Its history begins with Aristotle and leads to modern mathematical logic. While arithmetic deals with the manipulation of numbers, mathematical logic formalizes sequences of propositions of more or less general scope. The logicist approach to AI starts from the premise that all mental operations ultimately reduce to logical operations. This approach, however, runs up against two main difficulties.
- On the one hand, it is difficult to logically formalize a poorly defined problem or even a problem with uncertain data.
- On the other hand, there is a gulf between solving a problem in principle, i.e. with a priori unlimited computing resources, and solving it in practice, i.e. using limited resources within an acceptable time to obtain an acceptable solution rather than the perfect solution that strictly logical reasoning would yield.
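To make the logicist approach concrete, here is a minimal sketch of one of its simplest forms: forward chaining over propositional if-then rules, deriving every fact that logically follows from an initial set. The facts and rules are purely illustrative and do not come from the white paper.

```python
# Forward chaining over Horn rules: (set of premises) -> conclusion.
# Repeatedly fire any rule whose premises are all known, until no
# new fact can be derived (a fixed point is reached).

def forward_chain(facts, rules):
    """Return the set of all facts entailed by the rules."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return known

# Illustrative knowledge base: rain makes the ground wet,
# and wet ground is slippery.
rules = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground"}, "slippery"),
]

print(forward_chain({"rain"}, rules))
# derives wet_ground and slippery from rain
```

The two difficulties above show up immediately even in this toy: the rules must be crisply stated (no room for uncertain data), and on large knowledge bases the naive fixed-point loop quickly becomes computationally expensive.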
Acting rationally: Rational action can be a consequence of rational reasoning... or not! In some emergency situations, for example, rational action is not the result of a long deliberation but of a reflex learned through experience. In still other situations, action is required even though no rational reasoning exists to infer it. The rational-action approach is therefore more general than the logicist approach. It also has the advantage of allowing a more scientific definition than the approaches that aim to act or think like a human being, which are difficult to define rigorously. This is the modern view of AI developed in (Russell & Norvig, 2009).
"AI is the science of designing rational agents, agents that optimize the expectation value of some notion of utility based on past perceptions of their environment." Rational action is in fact only a first step towards the more ambitious and realistic goal of acting in complex environments with bounded rationality, under time and resource constraints, as humans must do throughout their lives.
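The definition just quoted can be sketched in a few lines: a rational agent picks the action that maximizes expected utility given its beliefs about how actions lead to outcomes. The actions, probabilities and utilities below are illustrative assumptions, not taken from the study.

```python
# A minimal sketch of a rational agent: beliefs[action] maps each
# possible outcome to its probability P(outcome | action), as estimated
# from past perceptions; utility scores each outcome.

def expected_utility(action, beliefs, utility):
    """Sum of utility(outcome) weighted by P(outcome | action)."""
    return sum(p * utility(outcome) for outcome, p in beliefs[action].items())

def rational_action(actions, beliefs, utility):
    """Choose the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

# Illustrative example: carrying an umbrella guarantees staying dry;
# leaving it risks getting soaked.
beliefs = {
    "take_umbrella": {"dry": 1.0},
    "leave_umbrella": {"dry": 0.7, "soaked": 0.3},
}
utility = {"dry": 10, "soaked": -50}.get

print(rational_action(["take_umbrella", "leave_umbrella"], beliefs, utility))
# EU(take_umbrella) = 10; EU(leave_umbrella) = 0.7*10 + 0.3*(-50) = -8
```

Bounded rationality, mentioned above, amounts to relaxing this picture: real agents must choose well with incomplete beliefs and without the time to evaluate every action.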

Will AI contribute to increasing our potential for humanity or, on the contrary, subvert it? It is up to each of us to use what free will remains to make good use of the unprecedented extension of possibilities that AI will represent in the decades to come.
(Source: Weave Business Technology - 2017)

To go further:
- Listen to France Culture: "Catherine Malabou's not-so-artificial intelligence", on her book Metamorphoses of Intelligence: What to Do with Their Blue Brains (PUF, August 2017)
- Book: In the Disruption: How Not to Go Mad? by Bernard Stiegler (Éditions Les Liens qui Libèrent, 2016)
