Defending self-driving cars in the face of tragedy

Last month we reached the tragic, and long-dreaded, moment in the history of self-driving cars: the death of an individual who didn’t opt into using self-driving technology. (In this case, it was a pedestrian, but it could have been passengers in a non-self-driving car.)


This follows the May 7, 2016 Florida death of a driver using Tesla’s driving-assist technologies (“autopilot”), which are often confused with self-driving technology.

Two other deaths have also occurred while autopilot was engaged: a driver in China on January 20, 2016, and the recent crash here in the Bay Area on March 23rd.

Four tragic deaths, in four separate instances, using two different flavors of self-driving tech (driver assist & fully automated), but one common thread which we must, as a society and industry, address candidly: user error and — most confoundingly — the abuse or misuse of this technology.

I’m a strong believer, and investor, in self-driving technologies.

I’m a shareholder in all three of the major players in self-driving: Tesla, Uber and Alphabet (aka Google), which owns Waymo. Two of those names, I own blindly via my Wealthfront “robo-portfolio” (i.e., I don’t actively trade them and don’t know how much I own of each). I invested in Uber, which is still a private company, during their seed round.

I also own two Teslas with self-driving technology, the Model X and the Model 3, and I’ve logged over 20,000 miles on autopilot.

I use autopilot almost every day on the 101 freeway, the same road where the most recent death with autopilot engaged occurred. It’s important to note that I’m not saying “autopilot death” here, but rather a death that occurred with autopilot engaged.

This is an important distinction, because in all three autopilot cases (and I want to be careful not to blame the victims here) the users appear to have misused, or perhaps even abused, the technology.

The Frustrating Truth

The most disturbing and frustrating trend, in all four deaths, is that the human drivers played a significant role in them. From all of these crashes, we have a massive amount of data; in two, we have dashcam video, and in the tragic death of the pedestrian, we have video of the driver (which I believe is a first).

The unprecedented amount of information we are getting from these accidents is steering our collective discussion toward logic over emotion — which is a nice thing to see.

Here is what we know about each of the four crashes.

Florida (Autopilot): The driver was going nine miles per hour above the speed limit with autopilot engaged and had seven seconds to brake as a truck passed in front of him. The brake pedal was never touched. This means the driver was either incapacitated at the time or chose to look away for seven full seconds. There were various reports that a DVD player was found in the car, and the driver of the truck claimed that Harry Potter was still playing after the crash.

China (Autopilot): You can watch the dashcam video of this tragic accident ( https://youtu.be/fc0yYJ8-Dyo ), in which a Tesla on autopilot crashes into the back of a stationary road-sweeping truck. Based on the video, the driver had ample time to avoid the sweeper, even with autopilot engaged. Also, no self-driving technology can be perfect when dealing with stationary objects on the road (e.g., a boulder that rolls down a hill onto a highway). Finally, why on earth is a road-sweeping truck sitting in the passing lane with no lights, flares or safety vehicle behind it to alert drivers to the fact that it’s stationary or moving slowly?

Mountain View (Autopilot): In this case, Tesla quickly released the tragic news that the driver of the Model X had ignored warnings to keep their hands on the wheel while autopilot was engaged. Additionally, the driver was speeding and had the following-distance setting at one car length, when it should have been set to the max, which is seven. The driver had, according to Tesla, five long seconds to avoid the concrete divider, but again, no action was taken.

Arizona (Self-driving): It’s too early to know what failed with this tragic death, but based on the video of the driver, they were looking down — most likely at a smartphone — for most of the ~10 seconds of video that has been released.

We won’t know for a while, but there is a chance that if the driver — who was being paid to drive the car — had not been blatantly and knowingly breaking the law, they might have been able to apply the brakes in time. There is a chance that the self-driving technology failed in this situation, as well. If the technology did fail (and that’s a big if), this would wind up being a serious edge case: the technology AND the safety driver failed.  

The common thread we must address in all of these cases is that all four drivers (three primary drivers using autopilot and one safety driver overseeing a fully autonomous vehicle) were confirmed to have ignored what was happening on the road for many seconds.

If you look away from the road for five seconds at 65 MPH, which is what happened in three of these four accidents, you travel well over the length of a football field without looking (65 MPH = 95.3 feet per second; 95.3 feet per second x 5 seconds = ~477 feet).
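To make that arithmetic easy to check, here is a minimal sketch in Python (purely illustrative; the speed and duration pairs below are round example numbers, not figures taken from the crash investigations):

```python
# Illustrative only: how far a car travels while the driver is not watching the road.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def blind_distance_feet(speed_mph: float, seconds_not_looking: float) -> float:
    """Feet traveled at a constant speed during a period of inattention."""
    feet_per_second = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR
    return feet_per_second * seconds_not_looking

# 65 MPH for five seconds is ~477 feet, well past the length of a football field.
for mph, seconds in [(65, 5), (65, 7)]:
    print(f"{mph} MPH for {seconds} seconds -> {blind_distance_feet(mph, seconds):.0f} feet")
```

Run as-is, it prints roughly 477 feet for the five-second case and about 667 feet for seven seconds of inattention.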

How We Should Address Autopilot

Autopilot, as currently designed by Tesla, consists of two primary technologies that are available in many cars: adaptive cruise control and lane assist. Given that other players have been deploying the same building blocks, it’s clear Tesla is being held to a higher standard.

Some people have blamed the word “autopilot,” claiming that it gives naive users a false sense of security. Owning two cars with autopilot, I can tell you that the system makes it absurdly clear that you can’t take your hands off the wheel, and in fact, it disables itself for the rest of the drive if you ignore its warnings completely.

The fact is, smart people can take risks they shouldn’t, and sometimes smart people make inconceivably bad decisions, like watching a movie while driving or taking their eyes off the road for five seconds.

The only fixes I can think of for autopilot are window dressing. We could require everyone to take an hour-long course and sign even more waivers, but I don’t see either of those measures stopping someone from deliberately misusing the technology.

It’s just like how Honda is never going to get motorcycle riders to stop splitting traffic, popping wheelies and (back to smart people doing incomprehensibly inadvisable things) standing on the seats of their bikes. In fact, there is an entire genre of YouTube compilations of motorcycle riders doing very stupid things: https://youtu.be/QeKFc3BNtU4

How We Should Address Fully Autonomous, Self-Driving Trials

In order to instill massive confidence in self-driving, and in order to conservatively manage edge cases, we should require two “pilots,” one in the left seat and the other in the right seat, for all self-driving trials.

This will help with the boredom issues that solo test drivers have reported; peer interaction, plus the ability to audit each other, will (hopefully) eliminate them.

Airplanes can be flown by a single pilot, but the benefits of two pilots running checklists (read the awesome “The Checklist Manifesto” if you haven’t) and having a backup are well worth the expense.

We don’t need to save money in these trials; we need to refine the edge cases of the technology while inspiring massive confidence in it. The drivers in this scenario should swap pilot/co-pilot roles every hour, in order to keep people fresh and engaged.

If we do this, we will have a massive advantage over non-autonomous cars: two drivers AND an array of sensors backing them up.

This is an easy concession for the industry to make, in order to avoid a complete shutdown of self-driving trials — which is what most savvy people believe will happen if we have another tragic death.  

What About Other Causes of Road Fatalities?

The leading causes of death in cars are speeding, not wearing a seatbelt and distracted driving. All are behaviors that people choose, and all could be addressed with a combination of technology and enforcement, but as a society we choose to take a very light hand with them.

Speed limiters have existed for decades, yet we do not require them, despite the technology being cheap and readily available, and despite speed being a top contributor to road deaths.

When Ontario, Canada, required speed regulators on trucks, crashes dropped dramatically: by 25% to 73%, according to research I’ve read. (Sources: http://bit.ly/2uObH5J, http://bit.ly/2qaS0PL)

We allow consumers to decide if they want to speed every day, and have done so for almost a century.

If we want to reduce road deaths, the quickest path would be to put speed regulators on all cars this year, capped at whatever the highest speed limit is in your state or region.

Putting aside the ease with which speed regulator technology could be added to cars, our laws around speeding could be changed overnight to be so punitive that they would seriously dissuade people from speeding.

Imagine if you were tagged doing 20 miles per hour or 30% above the speed limit, with or without a speed regulator, and your car was impounded for a month. What if, the second time you were caught speeding at this level, your car was sold and the proceeds given to victims of car accidents?

The same highly punitive process could be deployed for distracted driving, which people don’t take very seriously.

Behavior would change quickly if you could lose your car for a month, or indefinitely. Sure, this is an extreme proposal, but that’s why I’m making it: we have the power as a society to change laws and reexamine our approach to long-standing traditions, like speeding.

So far, I’m very impressed with the reaction of the press and the public to these tragic deaths. We’re not overreacting (yet), and hopefully we’ll take a moment to consider the big picture when it comes to road fatalities, taking a fresh look at all aspects of how we might get to “zero road deaths.”

All the best, Jason