Will Self-Driving Cars Be Programmed to Play God?
At a recent continuing education class, the topic of *self-driving cars* arose. The concern from an insurance standpoint is: who would be liable in the event of an accident? The driver, the auto manufacturer, or the system designer?
Who is driving the car?
To date, Google’s self-driving cars have driven in excess of 1,500,000 miles, and although there have been 12 accidents, none of them have been the fault of the self-driving car. As these self-driving cars, or autonomous vehicles, begin to roll out, however, both manually driven and autonomous vehicles will share the road during the transition. Here the problem arises. Imagine you are occupying a self-driving car: a semi truck is heading toward you, to your right is a vehicle occupied by an elderly couple, and to your left is a woman pushing a stroller with two infants. The computer system identifies the scenario and determines that someone has to die. It will be either you, an elderly couple without much life left to live, or a mother and her two children.
It stands to reason that these systems will need to be programmed with risk mitigation analysis to minimize the cost and severity of a claim. The question remains: will these scenarios be published with full transparency, clearly stating which outcome the system will choose when faced with such situations? With assisted self-driving cars rolling out as early as 2017 and fully autonomous vehicles slated to hit the streets as early as 2020, there are several obstacles that will need to be overcome. However the outcome, the severity of an incident will no longer be determined by the driver, and neither will the fate of your life; it will be determined by a computer programmer.
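To make concrete what "risk mitigation analysis" might look like in code, here is a toy sketch of an expected-harm calculation over the scenario above. This is purely hypothetical: the function names, probabilities, and harm model are invented for illustration, and no manufacturer has published logic like this. The point is simply that whatever weights a programmer chooses will decide the outcome.

```python
# Hypothetical sketch only: NOT any real vehicle's decision logic.
# The probabilities and the harm model are invented assumptions.

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected harm.

    options: list of (name, probability_of_fatality, people_at_risk)
    """
    def expected_harm(option):
        _, p_fatality, people = option
        # A crude model: expected fatalities = probability * people exposed.
        return p_fatality * people

    return min(options, key=expected_harm)

# The three choices from the scenario above (numbers are made up):
scenario = [
    ("continue: collide with the semi", 0.9, 1),      # the car's occupant
    ("swerve right: hit the elderly couple", 0.5, 2),
    ("swerve left: hit the mother and infants", 0.5, 3),
]

best = choose_maneuver(scenario)
print(best[0])  # the programmer's weights, not the driver, pick this
```

Under these invented numbers the car sacrifices its own occupant, since 0.9 × 1 is the smallest expected harm. Change any weight, say, valuing the occupant more heavily, and the "right" answer flips, which is exactly why transparency about these choices matters.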