artificial intelligence

Is there any legal liability for robots?

Is it necessary to develop legal liability for robots? The European Commission is currently drafting a new law on robots, but more than 200 members of civil society are opposed to the creation of an "electronic personality". An open letter has just been published in an attempt to dissuade the European Commission from granting such a status.
Photo: "What moral framework should be adopted in an automated future?" ©Matthew Wiebe
For the 220 signatories of the open letter, "the legal status of a robot cannot derive from the model of the legal person, since it implies the existence of natural persons behind it to represent and direct it". Experts, researchers, doctoral students and professors from all over Europe have sent an open letter to the European Commission calling for an innovative and reliable legal framework for the development of both artificial intelligence and robotics. Like Parliament, the European Commission shares the view that a set of rules is needed, particularly on responsibility, transparency and accountability, without restricting research, development and innovation in the field of robotics.
The letter:
"We, experts in Artificial Intelligence and Robotics, industry leaders, lawyers, ethicists, and health professionals, affirm that the creation of European legal rules for robotics and artificial intelligence is relevant to ensure a high level of security for EU citizens while promoting innovation.
As the interaction between man and robot becomes more and more widespread, the European Union must provide the appropriate framework to reinforce the values of democracy and the European Union. Indeed, the legal framework of artificial intelligence and robotics must be explored not only through economic and legal aspects, but also through its societal, psychological and ethical impacts. In this context, we are concerned about the European Parliament Resolution on Civil Law Rules on Robotics and by its recommendation to the European Commission in paragraph 59 (f):
"The creation, in the long term, of a legal personality specific to robots, so that at least the most sophisticated autonomous robots could be considered as electronic persons responsible for repairing any damage caused to a third party; it would be conceivable to consider as an electronic person any robot that takes autonomous decisions or that interacts independently with third parties".
We argue that:
1. The economic, legal, societal and ethical impact of AI and robotics must be considered without haste or bias. The benefit for all Humanity should preside over the legal framework set by the European Union.
2. The creation of a legal status of "electronic person" for "autonomous", "unpredictable" and "self-learning" robots rests on the erroneous assertion that liability for damage caused would be impossible to prove.
From a technical point of view, this statement reflects many biases based on an overestimation of the real capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capabilities and, probably, a perception of robots distorted by science fiction and a few sensationalist press announcements.
From an ethical and legal point of view, creating a legal personality for a robot is inappropriate whatever the legal status envisaged:
a. A legal status for a robot cannot derive from the model of the natural person, since the robot would then hold human rights, such as the right to dignity, the right to integrity, the right to remuneration or the right to citizenship, thereby coming into direct conflict with human rights. This would be in total contradiction with the Charter of Fundamental Rights of the European Union and the Convention for the Protection of Human Rights and Fundamental Freedoms.
b. The legal status of a robot cannot be derived from the model of the legal person, since it implies the existence of natural persons behind it to represent and direct it. And this is not the case for a robot.
c. The legal status of a robot cannot derive from the Anglo-Saxon Trust model, known as Fiducie in France and Treuhand in Germany. This regime is extremely complex, requires highly specialised skills and would not solve the question of liability. More importantly, it would still imply the existence of a human being of last resort, the trustee, responsible for managing the robot placed in trust.
Accordingly, we declare that:
- The European Union must encourage the development of the Artificial Intelligence and Robotics industry in order to limit health risks and ensure the safety of human beings. The protection of robot users and third parties must be at the heart of all EU legal provisions.
- The European Union must create a workable framework for the development of innovative and reliable AI and robotics with the aim of creating great advances for the European people and the common market."
Complete list of signatories:
For Cédric Villani, artificial intelligence forces us to ask ourselves "what kind of society we want to live in". Indeed, the stakes, risks and potential drifts of artificial intelligence call for real vigilance. Elon Musk himself considers that "artificial intelligence is a fundamental existential risk to human civilization" and calls for public action. Bill Gates also urges governments to address these issues without delay. In February 2017 the European Parliament adopted a resolution on "Civil Law Rules on Robotics", calling on the European Commission, the Member States and the international community, through the UN, to equip themselves with appropriate legal tools. And will the General Assembly on Bioethics, which ends in a few days, be enough to prevent the dangers of AI?
When AI systems capable of fully autonomous decision-making exist, we will need to be legally well armed in order to integrate them into our daily lives.

To go further:
– "Robotics Law: Towards New Humanities?", published by Éditions Dalloz, based on Alain Bensoussan's speech at the conference "Towards New Humanities", held at the Cité des sciences et de l'industrie on 24 March 2017.

Anything to add? Share it in the comments.
