The CNIL presented its report on the risks of AI, together with innovative recommendations on the ethical issues raised by algorithms, on Friday 15 December. The arrival of artificial intelligence in our daily lives brings multiple upheavals and new challenges. What, then, of preserving the autonomy of human decision-making in the face of machines sometimes perceived as infallible? The answer: remain in a state of permanent watchfulness, of methodical doubt. Explanations follow.
This report, entitled "How can humans keep the upper hand?", is the result of a public debate led by the CNIL. Between January and October 2017, 60 partners (associations, companies, administrations, trade unions, etc.) organized 45 events throughout France to identify the ethical concerns raised by algorithms and artificial intelligence, as well as possible solutions.
The first part of the report provides a pragmatic definition of algorithms and artificial intelligence, while presenting their main uses, in particular those currently in the public eye. Indeed, algorithms and artificial intelligence are in fashion: these words are everywhere today, sometimes not without confusion. The definitions and examples circulating in the public debate are often imprecise, and sometimes even contradictory.
This is due to the highly technical nature of subjects that quickly entered general circulation, spreading far beyond the circles of experts and specialists to which they had long been confined. Hence a general lack of knowledge among the French: although 83% of French people have already heard of algorithms, more than half (52%) do not know precisely what they are. 80% already perceive their presence in everyday life as massive, and 65% consider that this trend will become even more pronounced in the years to come*.
The main functions of algorithms and AI in different sectors:
*Survey conducted by the IFOP for the CNIL in January 2017 (with a sample of 1001 people, representative of the French population aged 18 and over) on the level of awareness of algorithms within the French population.
Why this report?
Public perception of algorithms and AI is marked by mistrust, according to a survey carried out as part of the public debate by the CFE-CGC, a management union, among 1,263 of its members (mainly from the "Metallurgy" and "Finance and Banking" federations).
Citizens seem to be primarily concerned about the new decision-making modalities and the dilution of accountability created by the algorithm. The potential "loss of competence" among physicians or employers who might come to rely too heavily on the algorithm was also highlighted.
Among the harmful consequences cited: a machine's "management of uncertainty" judged less effective than what humans are capable of; an inability to "handle exceptions"; and the "loss of a sense of humanity" (raised in particular with regard to the absence of recourse against "APB", the French university admissions platform).
The use of computer systems, sometimes autonomous ones, to make decisions raises the concern that responsibility for errors becomes "unclear", a worry voiced specifically about the medical sector. Regarding the "APB" case, some citizens criticized the lack of transparency, which explains why the algorithm serves as a "scapegoat that acts as a buffer between those who make political choices and those who complain about those choices".
The issue of informational personalization on social networks and its collective effects, mentioned in connection with the presidential election in the United States, also heightens the fear that "no one is really in charge of controlling the Internet anymore".
Though raised less often, the danger of algorithmic confinement was nevertheless cited by several participants in the "human resources" and "digital platforms" workshops. Citizens also mentioned the risk of "formatting" recruitment, and the consequent rationalization of a field that should not be so standardized, as well as the risk of being locked on the Internet "in a profile that would slow down our personal development".
For the public, algorithms and AI lead to a dilution of traditional figures of authority: decision-makers, officials, and even the authority of the rule of law itself.
Finally, the issue of bias, discrimination and exclusion deserves particular vigilance in the eyes of the participants, whether the biases in question are voluntary (in recruitment, there is concern that an algorithm may be coded "according to the objectives of employers at the expense of employees") or involuntary (the algorithmic tool is a cause for concern as to the errors it could generate).
The three most widely shared fears are the loss of human control (63% of members), normativity and confinement through uniform recruitment (56%), and the disproportionate collection of personal data (50%). 72% of respondents even consider it a threat that they could be recruited by algorithms based on an analysis of their profile and its compatibility with a given position.
Six major ethical issues
- The development and increasing autonomy of technical artefacts allow increasingly complex and critical tasks, reasoning, and decisions to be delegated to machines. Under these conditions, beyond the increase in humanity's power of action made possible by technology, is it not also human autonomy and free will that risk being eroded? Does the prestige and trust placed in machines, often considered infallible and "neutral", not risk tempting us to offload onto them the burden of exercising responsibility, judging, and making decisions? How should we understand the forms of dilution of responsibility likely to result from complex and highly segmented algorithmic systems?
- Algorithms and artificial intelligence can give rise to bias, discrimination, and even forms of exclusion. These phenomena can be deliberate. But the real issue, now that machine-learning algorithms are being developed, is that such biases can arise without any human even being aware of them. How can we deal with this?
- The digital ecosystem built around the Web, but also older actuarial techniques, have strongly exploited the potential of algorithms for personalization. Increasingly fine-grained profiling and segmentation provide many services to the individual. But this logic of personalization is also likely to affect, beyond individuals, collective goods essential to the life of our societies (democratic and cultural pluralism, risk pooling).
- Artificial intelligence, because it is based on learning techniques, requires huge amounts of data. Yet legislation promotes a logic of minimizing the collection and storage of personal data, reflecting an acute awareness of the risks that the constitution of large databases poses to individual and public liberties. Do the promises of AI justify revising the balance struck by the legislator?
- The choice of the data feeding an algorithmic model, whether their quantity is sufficient, and the existence of biases in the datasets used to train learning algorithms are major issues. They crystallize the need to maintain a critical attitude and not to place excessive confidence in the machine.
- The increasing autonomy of machines, as well as the emergence of forms of hybridization between humans and machines (hybridization at the level of action assisted by algorithmic recommendations, but soon also at the physical level), calls into question the idea of an irreducible human specificity. Is it necessary and possible to speak, in the proper sense, of "algorithmic ethics"? And how should we apprehend humanoid robots, a new class of objects that, though objects, are likely to arouse affect and attachment in humans?
Building an artificial intelligence at the service of mankind
Two new principles emerge as foundational. The first, substantive in nature, is the principle of loyalty, in a version that goes further than the one initially formulated by the Council of State for platforms.
Indeed, this version integrates a collective dimension of loyalty, aimed at ensuring that the algorithmic tool cannot betray the community it serves (whether of consumers or citizens), whether or not it processes personal data.
The second, more methodological in nature, is a principle of vigilance/reflexivity. It aims to respond over time to the challenge posed by the unstable and unpredictable nature of learning algorithms. It also responds to the forms of indifference, negligence, and dilution of responsibility that the highly compartmentalized and segmented nature of algorithmic systems can generate. Finally, it aims to recognize and counterbalance the cognitive bias that leads the human mind to place excessive trust in the verdicts of algorithms.
It is a question of organizing, through concrete procedures and measures, a form of regular, methodical, deliberative and fruitful questioning with regard to these technical objects on the part of all the actors in the algorithmic chain, from the designer to the end user, including those who train the algorithms.
These two principles appear to be the basis for regulating the complex tools and assistants that algorithms and AI are. They allow their use and development while ensuring their control by the community. They are complemented by new, targeted engineering measures built around two points: one rethinking the obligation of human intervention in algorithmic decision-making (Article 10 of the French Data Protection Act, "Informatique et Libertés"); the other organizing the intelligibility and accountability of algorithmic systems.
These principles are being operationalized in the form of six recommendations addressed both to the public authorities and to the various components of civil society (general public, businesses, associations, etc.).
Recommendations for managing a complex world
- Train all links in the "algorithmic chain" (designers, professionals, citizens) in ethics;
- Make algorithmic systems understandable by strengthening existing rights and organizing mediation with users;
- Work on the design of algorithmic systems in the service of human freedom;
- Establish a national platform for auditing algorithms;
- Encourage research on technical solutions to make France the leader in ethical AI, and launch a major national participatory cause around a research project of general interest;
- Strengthen the ethics function within companies.
Two founding principles therefore emerge from this report.
On the one hand, a substantial principle, the principle of loyalty of algorithms, in a formulation that deepens the one already elaborated by the Council of State. This formulation includes a dimension of loyalty towards users, not only as consumers, but also as citizens, and even towards collectives, communities whose existence could be affected by algorithms, whether they process personal data or not.
On the other hand, a more methodological principle: the principle of vigilance. This principle of vigilance must be understood, not as a vague incantation, but as a substantiated response to three central issues of the digital society:
Firstly, the evolutionary and unpredictable nature of algorithms in the age of machine learning.
Secondly, the highly compartmentalized nature of the algorithmic chains, leading to segmentation of the action, indifference to the impacts generated by the algorithmic system as a whole, dilution of responsibilities.
Thirdly and finally, the risk of over-reliance on the machine, which, owing to a form of human cognitive bias, is considered infallible and unbiased.
Through the principle of vigilance, the objective is to organize the permanent state of vigilance of our societies with regard to these complex and moving socio-technical objects that are algorithms or, strictly speaking, algorithmic systems or chains. A state of watchfulness, in other words a questioning, a methodical doubt.
This concerns first and foremost the individuals who make up the links in the algorithmic chains: it is a question of giving them the means to be lucid and active watchers of this digital society, always questioning.
This also applies to the other driving forces of our society: companies, of course, which must model virtuous algorithmic systems, but not only them.
These principles, given the universal approach from which they derive, could well belong to a new generation of principles and human rights for the digital age: the generation which, after those of civil liberties, economic rights, and social rights, would be that of "system rights" organizing the underlying dimension of our digital universe. Might they not be elevated to the level of general principles for the global governance of the Internet infrastructure? At a time when French and European positions on artificial intelligence are taking shape, the question deserves to be asked.