
Artificial intelligence: the complex question of ethics

The development of artificial intelligence raises many societal questions. In particular, how should ethics be defined in this field? Armen Khatchatourov, philosopher at Télécom École de Management and member of the "Values and policies of personal information" Chair at the IMT, carefully observes and analyses the answers proposed to this question. One of his main concerns is the way ethics is being standardised through a legislative framework.
 
In the unbridled race for artificial intelligence led by the GAFA, with ever more powerful algorithms and ever faster automated decisions, engineering reigns supreme, as the bearer of much-coveted innovation. So what place is left for the philosopher in a technological universe that puts progress at the centre of every objective? Perhaps that of a critical observer. Armen Khatchatourov, a researcher in philosophy at Télécom École de Management, describes his own approach as that of "observing with the necessary hindsight the general enthusiasm for all that is new". Having worked for several years on human-machine interactions and issues relating to artificial intelligence (AI), he has been examining the potential harmful effects of automation.
 
In particular, he examines the problematic aspects of the legislative framework for AI. His reflections focus on "ethics by design", an approach that consists of integrating ethical considerations from the very first design stages of an algorithm or, more broadly, of an intelligent machine. Although at first glance it may seem encouraging that manufacturers and developers attach importance to ethics, "this approach can, paradoxically, be harmful," according to the researcher.
"The risk is to lose all critical faculties."
Armen Khatchatourov

Ethics by design: the same limits as privacy by design?

To illustrate his point, Armen Khatchatourov takes the example of a similar concept in the field of personal data protection (privacy): just like ethics, this subject raises the question of how we behave towards others. "Privacy by design" emerged in the late 1990s as a reaction to the difficulty of regulating the digital environment. It presented itself as a global reflection on incorporating personal data protection into product development and business processes. "The main problem is that privacy by design is now being reduced to a text," regrets the philosopher, referring to the General Data Protection Regulation (GDPR) adopted last April by the European Parliament. "And reflections on ethics are taking the same path," he adds.
 
According to the researcher, the main harmful effect of such standardised regulation through a text is that it disempowers the actors involved. "On the one hand, engineers and designers risk simply being content to comply with the text," he explains. On the other hand, consumers stop thinking about their actions and trust the labels awarded by potential regulators. "Behind this trivialisation," he says, "there is a risk of losing all critical thinking. Are we really thinking about what we do on the Web every day, or are we guided by a normativity that is taking hold?" asks the philosopher.
 
The same threat applies to ethics. Formalising it in a text already betrays the reflexivity it carries. "It would amount to freezing ethical reflection," warns Armen Khatchatourov. He details his thinking with reference to the work of artificial intelligence developers. There always comes a point when the engineer has to translate ethics into a mathematical formula to be integrated into an algorithm. Concretely, this can take the form of an ethical decision based on a structured representation of knowledge (an ontology, in computing terms). "But if we reduce ethics to a problem of logic, it is more than problematic!" asserts the philosopher. "For a military drone, for example, would this mean defining a threshold number of civilian deaths below which the decision to fire is acceptable? Is that desirable? There is no ontology of ethics, and we must not allow ourselves to be drawn onto that terrain."
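To make this objection concrete, here is a deliberately crude sketch, in Python, of what "translating ethics into logic" tends to look like once it is written into code. All names, scores and thresholds are hypothetical; the point is only to show how the ethical judgement collapses into a fixed comparison between numbers.

    # Purely illustrative sketch: every name, rule and threshold is invented.
    # It shows how an "ethics by design" rule base freezes ethical reflection
    # into a static logical test, which is what the philosopher criticises.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        expected_benefit: float  # an engineered utility score
        estimated_harm: float    # an engineered harm score

    HARM_THRESHOLD = 0.2  # an arbitrary constant fixed at design time

    def is_ethically_acceptable(action: Action) -> bool:
        # Once written down, the "ethical" question is no longer deliberated:
        # it is reduced to comparing two numbers against a constant.
        return action.estimated_harm <= HARM_THRESHOLD and action.expected_benefit > 0

    if __name__ == "__main__":
        candidate = Action("hypothetical automated decision",
                           expected_benefit=0.8, estimated_harm=0.15)
        print(is_ethically_acceptable(candidate))  # True, whatever the real-world context

The code itself is not the point; what matters is that the reflexive, deliberative part of ethics has disappeared into a constant chosen once and for all at design time.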
 
Military drones are not the only area concerned. The development of autonomous cars, for example, raises many questions about how a decision should be made. Ethical considerations often lead to dilemmas. Typically: should a car heading towards a wall, which it can only avoid by running over a group of pedestrians, save its passenger, or should it sacrifice the passenger to spare the pedestrians? Opinions diverge. The pragmatic thinker will favour the number of lives saved. Others will want the car to protect its driver no matter what. The Massachusetts Institute of Technology (MIT) has developed a digital tool, the Moral Machine, which presents numerous concrete cases and confronts Internet users with such choices. The results vary greatly from one individual to another. They show that it is impossible to establish universal ethical rules, even for the single case of autonomous cars.
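A minimal sketch, with invented names and numbers, of the two positions described above shows why no single rule can satisfy everyone: a "pragmatic" count-based policy and a passenger-first policy give opposite answers to the very same scenario.

    # Illustrative only: a toy comparison of two decision policies for the
    # dilemma described above. The scenario encoding is entirely invented.

    def utilitarian_policy(passengers: int, pedestrians: int) -> str:
        # "Pragmatic" rule: minimise the total number of lives lost.
        return "hit the wall" if pedestrians > passengers else "swerve into the pedestrians"

    def passenger_first_policy(passengers: int, pedestrians: int) -> str:
        # Rule favoured by those who want the car to protect its occupants no matter what.
        return "swerve into the pedestrians"

    if __name__ == "__main__":
        # One passenger, a crowd of five pedestrians: the two policies disagree.
        print(utilitarian_policy(1, 5))      # hit the wall (sacrifice the passenger)
        print(passenger_first_policy(1, 5))  # swerve (sacrifice the pedestrians)

Which of the two outputs is "right" is precisely what the Moral Machine results leave unresolved.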

Ethics is not a product

Pursuing the analogy between ethics and data protection, Armen Khatchatourov raises another point, drawing on the reflections of IT security specialist Bruce Schneier. Schneier describes IT security as a process, not a product. As such, it cannot be fully ensured either by a technical approach or by legislation, since both are only valid at a given moment. Updates are possible, but they take time and are, by nature, out of step with current problems. "The lesson of IT security is that we cannot rely on a ready-made solution, and that we need to think in terms of processes and attitudes. If we may risk the parallel, the same goes for the ethical issues raised by AI," the philosopher points out.
 
Hence the value of thinking about how to frame processes such as privacy or ethics on a scale other than that of the law. Armen Khatchatourov nonetheless acknowledges the need for the latter: "A normative text is probably not the solution to everything, but if there is no legislative debate, reflecting a certain collective awareness, it is even more problematic." This shows the complexity of a problem to which no one is currently able to provide a solution.
 
Armen Khatchatourov, philosopher


 
