The little car moves at a slow pace in its lane on the highway. Further on, it detects an obstacle that will force it to change lanes. Blinker on, a check for available space, and off we go into the left lane. Bummer! A bus is coming, and there's a fender bender.
A fender bender like the thousands that happen on roads all over the world. Except that here we are in California, and this is the world's first reported accident involving a robomobile: Google's driverless car.
For the past seven years, these driverless cars, guided by laser vision and artificial intelligence, have been driving the roads between San Francisco and Los Angeles. According to Google, they had never caused a single accident. That changed on February 14: a minor accident, nothing worse than crumpled sheet metal, but one that raises many questions.
A classic misunderstanding
According to the accident report, the Google car was blocked in the right lane by sandbags. It first let several cars pass in the left lane before trying to move into that lane as a bus was approaching. The passenger in the autonomous vehicle saw the bus approaching in the left rearview mirror, but thought the bus would slow down to let the Google car in.
A classic misunderstanding on the road. But here, it was Google's artificial intelligence that misunderstood. The magazine The Verge obtained the accident report from Google. It reads: "This type of misunderstanding happens every day on the road between human drivers. It's a classic example of the negotiation that is an integral part of driving - we all try to predict each other's movements." Google acknowledges its responsibility in this text and promises that its software will be reworked to improve this negotiation game, by integrating the fact that "buses and other heavy vehicles are less able to give way to us than other types of vehicles".
No doubt Google will rework its algorithms to make its vehicles' driving ever safer.
A question of liability
What is notable is the issue of liability. In an accident involving a robot car, who is at fault? American authorities, through the National Highway Traffic Safety Administration (NHTSA), made their position known in early February. Autonomous vehicles such as Google's are recognized as "drivers" in the eyes of the law. The agency states that, provided self-driving vehicles meet technical safety standards, nothing prevents the artificial intelligence controlling the vehicle, in the absence of any human control, from being considered the driver and, consequently, responsible for its driving.
Who is sanctioned when an autonomous car violates a traffic rule?
Aware of the novelty of its decision, the agency takes care to specify that the driver will no longer be a driver in the traditional sense, as the term has been understood since the automobile first existed. If no human is driving the vehicle, it is more reasonable to identify the driver as whatever (as opposed to whoever) is doing the driving. Artificial intelligence is thus promoted, in law, to driver of a vehicle. The AI becomes an individual responsible for its actions. This is a first, and it should give us pause. A robot, since a Google autonomous car is nothing more than a robot, is therefore endowed with legal personality and liability.
This is not just a philosophical question. The small traffic accident reported at the beginning of this article was caused by a classic problem of interpretation. This is what happens every day on the roads. Choices and trade-offs are made, usually with safety at the top of the decision-making process. Sometimes we make life-and-death decisions.
What about vehicles driven by an AI? What is their decision-making process in borderline cases that pose a moral dilemma? Take an emergency situation requiring a split-second decision: should the car crash into a wall and risk killing its occupant, or continue on the road and mow down a crowd of pedestrians? What does the AI answer, and what does it do? Is the autonomous car programmed to kill as few people as possible, and therefore to smash into the wall, passenger and all?
No one can answer these questions today. Defenders of the autonomous car argue that its driving is incomparably more reliable than human driving, and that if it were to become widespread, it would significantly reduce the number of road deaths. So be it.
Will the driver's license be abolished soon?
Some, including certain insurers, even think that human driving will eventually be banned. We are not there yet, but the idea is out there.
Google and manufacturers like Tesla are determined to move forward in this market. For them, the average person's car will have no pedals, no steering wheel and no driver. It will be controlled by artificial intelligence assisted by sensors, cameras, radars and lasers, guided by ultra-sophisticated mapping systems, and able to understand its environment through connections to every possible warning and signalling system. Google is injecting massive investments into these projects and announces that its autonomous car will be available to the public as early as 2020 (four years from now), while Tesla promises it this very year!
The robomobile indisputably addresses many societal challenges: improving road safety, mobility for all, reinventing the city, the disappearance of car parks, the development of car-sharing, and so on. It is a revolution in the automobile for which America is writing the standards, rules and legal principles. Invented at the end of the 19th century in Europe, the history of the automobile turned robomobile is now being written in the United States.