Last week, we heard of the terrible accident between an Uber Autonomous Vehicle (AV) and pedestrian Elaine Herzberg. This is a tragedy, and our thoughts and prayers are with Elaine Herzberg’s family.
In current criminal and civil proceedings arising from motor vehicle accidents, the central question is whether the driver was negligent. This stands to reason, given they were in control of the vehicle. Even where the driver is found negligent, the system of ‘contributory negligence’ proportionally reduces the compensation awarded according to any negligence on the claimant’s part. For example, in Davis v Swift [2014] NSWCA 458 the claimant’s compensation was reduced to nil for the ‘conduct of walking backwards into the path of the vehicle’. A final category is ‘blameless accidents’: those not caused by the fault of the vehicle’s driver, owner or any other person.
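To make the mechanics concrete, contributory negligence works as a straight proportional reduction of the assessed damages. The sketch below illustrates this with invented figures; the amounts and percentages are hypothetical and not drawn from any case.

```python
# Hypothetical sketch of how contributory negligence reduces an award.
# The dollar amounts and percentages are illustrative only.

def reduced_damages(assessed_damages: float, claimant_negligence_pct: float) -> float:
    """Reduce damages in proportion to the claimant's share of negligence."""
    if not 0 <= claimant_negligence_pct <= 100:
        raise ValueError("negligence share must be between 0 and 100")
    return assessed_damages * (1 - claimant_negligence_pct / 100)

# A 25% contribution reduces a $100,000 award to $75,000.
print(reduced_damages(100_000, 25))   # 75000.0
# A 100% contribution, as in Davis v Swift, reduces the award to nil.
print(reduced_damages(100_000, 100))  # 0.0
```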
So what happens when there isn’t a driver in an AV? There are two main points of difference when considering the scenarios where the vehicle involved is autonomous and without a driver.
First is apportioning negligence to the autonomous driving system’s manufacturer and developer. In my view, there will need to be a ‘reasonably capable and uninhibited driver’ test, i.e. could a capable and uninhibited driver have prevented the accident? If so, it is likely the manufacturer and developer will be found at fault, as the system did not meet the standard of care required in operating the vehicle, and they would be deemed negligent.
It will be very difficult to win such a claim. The defendant will be a giant multinational technology and/or vehicle-manufacturing company that will defend vigorously, given the implications for its business. The sensor data needed as evidence will be held within systems developed by the defendant, creating a significant conflict of interest.
The second interesting point concerns blameless accidents. In my view, AVs will end up in more blameless accident scenarios, because the vehicles will simply function as intended, i.e. do whatever their algorithms instruct based on the data from their sensors. ‘At fault’ accidents will become very rare, as it will be very difficult to prove that the development of these algorithms was negligent. So we will end up with mostly blameless accidents.
Currently, the blameless accident provisions (section 7A) of the Motor Accidents Compensation Act 1999 (NSW) (‘the Act’) provide:
1) The death of or injury to a person that results from a blameless motor accident involving a motor vehicle that has motor accident insurance cover for the accident is, for the purposes of and in connection with any claim for damages in respect of the death or injury, deemed to have been caused by the fault of the owner or driver of the motor vehicle in the use or operation of the vehicle
Taken at face value, the owner’s insurance will end up liable for most accidents, because there isn’t a driver and it will be difficult to mount a case against the system manufacturer/developer. So yes, owners will be picking up a lot of the liability. And yes, your vehicle may cause a fatality for which you could be held liable while you are sitting in your lounge room.
So what does this mean for AV ownership? I doubt many private individuals will want to take on this risk. This means vehicle-sharing models will become much more prevalent, as large companies will be able to insure against this liability across large fleets of vehicles and build the cost into their pricing. Vehicle manufacturers may also choose to indemnify individuals against the risk, as Audi has done for some of the highly autonomous features on its latest models.
There is significant work to be done on the legal frameworks supporting AVs. But perhaps the recent terrible accident and the AV START Act before the US Congress will provide some clarity. Clarity in the legal framework will remove the largest barrier to adoption, as the technology itself is basically here now.
This is very interesting. It would also be worth exploring how the algorithms are to react when an accident is predicted, and whether this will be configurable by the driver/owner. If a pedestrian appears suddenly in front of the car, should we have the option either to swerve out of the way and possibly injure ourselves, or to just brake and risk injuring the pedestrian?
Your point on the manufacturers having control of the data is interesting. I wonder if it should be a legal requirement for all AV cars to contain something like a black box that would gather all metrics and be available for post-crash analysis by crash investigators.
I would expect an AV to have much better sensory data and reaction times than a human. I guess this would be difficult to prove without putting humans through the same scenario, as in this case: it could be replayed through a simulator with multiple test drivers to see how the scenario would play out under human control.
Thanks for sharing!
Thanks Chris – these are all very important questions for the autonomous vehicle industry and society as a whole. The decisions the algorithms make when predicting an accident are the source of much debate, and it comes back to the philosophical thought experiment known as ‘the trolley problem’. A tram is hurtling towards a group working on the tracks. You hold a lever that could switch the tram onto another line, but there is a distant, lone worker on that line. What is the right moral choice? Kill one to save many? There is no right answer.
At the end of the day, a programmer somewhere will have to codify this choice, an age-old ethical dilemma.
On data security, there is some discussion of storing the sensor data in a blockchain for reference after an accident.
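The core property such a scheme would need is tamper evidence: once recorded, a sensor reading cannot be quietly altered. A minimal hash-chained log (the basic building block of a blockchain) demonstrates the idea; the record fields and contents here are invented for illustration.

```python
# Minimal sketch of a tamper-evident (blockchain-style) sensor log.
# Record fields and sensor values are invented for illustration.

import hashlib
import json

def append_record(chain: list[dict], sensor_data: dict) -> None:
    """Append a record whose hash covers both the data and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"data": sensor_data, "prev": prev_hash}, sort_keys=True)
    chain.append({"data": sensor_data, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any altered record breaks the chain."""
    prev_hash = "0" * 64
    for rec in chain:
        payload = json.dumps({"data": rec["data"], "prev": prev_hash}, sort_keys=True)
        if rec["prev"] != prev_hash or rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True

log: list[dict] = []
append_record(log, {"t": 0, "lidar_range_m": 42.7})
append_record(log, {"t": 1, "lidar_range_m": 38.2})
print(verify(log))                        # True
log[0]["data"]["lidar_range_m"] = 99.9    # tamper with an old record
print(verify(log))                        # False
```

A crash investigator holding only the final hash could then detect any post-accident edits by the manufacturer, which goes some way towards addressing the conflict-of-interest concern raised above.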