Here at Quoted, we’ve spent some time talking about autonomous vehicles—we’ve talked about the exciting new technology, detailed which big companies are getting in on the ground floor of autonomous driving, and we’ve presented safety advantages and concerns. But perhaps the most thrilling—and most mysterious—aspect of autonomous vehicles is how they’ll impact driving as we now know it. If there really are huge safety advantages to autonomous vehicles (and it looks like there are), things will certainly change. But how? Jay Samit over at TechCrunch recently set forth an elegant argument: Samit says that with the advent of mainstream driverless vehicles, “Driving a car will be illegal by 2030.” A simple and clever—and provocative—argument, certainly, but we at Quoted disagree.
The Argument for Outlawing Driving
Though autonomous vehicles are still in their infancy (or perhaps toddlerhood), Samit argues that the evidence so far demonstrates that driverless vehicles will be so much safer than human-driven ones that there will be no prudent choice but to outlaw driving. Samit’s compelling evidence for the superiority of robots to humans:
- “More than 1.2 million people are killed in car accidents globally each year.”
- “Human driver error is responsible for 94 percent of all crashes across the planet.”
- “According to the National Sleep Foundation, 69 percent of adult drivers report driving while drowsy at least once a month.”
- “The single biggest cause of traffic fatalities: human error.”
- “Alcohol is now responsible for more than a third of all traffic-related fatalities worldwide.”
- “Autonomous vehicles don’t drive drunk, don’t drive distracted and don’t fall asleep at the wheel.”
- “Self-driving cars are wired with cameras, infrared sensors, networked maps and a host of other software, which empowers them to accurately avoid dangers in ways humans can’t.”
- “[Self-driving cars] can brake faster, swerve quicker and anticipate changes in road conditions that are imperceptible to the human eye (such as obstacles beyond the visible range of headlights).”
- “Robots also communicate with each other more efficiently and effectively than human-navigated vehicles.”
And perhaps Samit’s most compelling argument for the superiority of autonomous vehicles: “Google’s autonomous vehicles have now logged more than one million miles on roads dominated by human-driven cars. Subjected to the same real-world road conditions as us mere mortals, self-driving cars have been through rain, sleet and snow. Autonomous vehicles have driven the equivalent of circumnavigating the globe 40 times — without incident. In fact, self-driving machines have been hit by human drivers 11 times [Google’s own number is 14], but never been the cause of a single accident.”
All excellent points. However, semantics here are important. An “autonomous vehicle” is not the same as a “driverless vehicle.” And this distinction is why drivers will still be behind the wheel in fifteen years and, we argue, why human driving won’t be outlawed at all, at least not driving as we know it today.
Autonomous Vehicle vs. Driverless Vehicle: An Important Distinction
Two Words: Safety Driver. It’s true, Google’s self-driving car has logged more than one million miles (the count is now at nearly two million), and other autonomous vehicles, like Delphi’s, have made impressive strides. But for each and every mile, there has been a driver behind the wheel, ready to take over when needed. From Google’s own FAQ page about their autonomous vehicles: “During this phase of our project, safety drivers are in all of our vehicles to watch over how the cars drive and take over driving if needed using removable manual controls.” And, “All safety drivers are professionally trained to take control of the vehicle at any time. They’re also trained to be conservative in deciding when they should take over—if they notice that the car is not being as safe as it could be, they’ll take over driving immediately.”
Google’s car is driving itself, yes, but its passengers are not kicking back with a good book (or a good game of Candy Crush); they are, at this point, just as alert as a driver in a regular old non-autonomous vehicle—perhaps even more alert than the average driver. Beyond Google, as we reported back in March, Delphi Global Communications Manager Kristen Kinley told Quoted, “No one has a fully autonomous vehicle where the driver does not at least need to be in the loop should something go wrong. This is several years into the future. Regulations, testing, liability, cost, infrastructure all have to be in place before we will see fully-autonomous, driverless cars on public roads.”
Autonomous Vehicles Still Need to Visit Oz
Our second issue with Samit’s argument for the quick advent of a world in which humans are no longer permitted to drive: ethics.
Sure, the evidence so far points to the superiority of autonomous vehicles when it comes to standard traffic safety—even on public roadways and even when unexpected curveballs are considered, like another car cutting the autonomous vehicle off, or treacherous weather conditions. But what about the all-too-common (and all-too-human) events like the one Stanford engineering professor Chris Gerdes presented (and we reported on): “A child suddenly dash[es] into the road, forcing the self-driving car to choose between hitting the child or swerving into an oncoming van.” Gerdes said, “As we see this with human eyes, one of these obstacles has a lot more value than the other. What is the car’s responsibility?” Gerdes continued, “If [it] would avoid the child, if it would save the child’s life, could we injure the occupant of the vehicle? These are very tough decisions that those that design control algorithms for automated vehicles face every day.”
Can we trust a machine to think like we do, flawed and often impaired as we are? Not yet. Humans are certainly not foolproof at negotiating events like the one above, but our millions of years of evolution have prepared us better than machines. (We’ll regret that dig when the robots take over.) We argue that the technology that would let a machine make quick decisions in incredibly complex situations the way a person can is nowhere near developed enough to see completely driverless fleets on our roadways in fifteen years.
Our Final Take
Don’t get us wrong: We are champing at the bit for autonomous vehicles just as much as the next car-tech-loving blog, but we just aren’t sure that ethical decision-making technology can advance enough in just fifteen years to justify outlawing driving completely.
What do we think will need to happen before the world Samit describes is realized—a world in which truly driverless cars take over and human drivers are not even allowed to drive if they want to? Major overhauls of all roadways, clear laws and regulations governing autonomous vehicles, and robots that can reason and make decisions about morally and ethically complex situations with the same processing time as the human brain (insert jokes about your dopey step-cousin-in-law here). Another complication: a human can be held accountable for driving decisions and errors, but we cannot yet hold a robot accountable. Auto insurance companies we’ve spoken to have no protocol in place for insuring driverless vehicles (so even you early adopters ought to plan on keeping up with your policy for now).
Perhaps most importantly: all of the evidence Samit presents for the superiority of autonomous vehicles is based on a human-machine pairing—and on highly trained, highly alert humans giving on-the-job levels of attention at that. So maybe the answer isn’t outlawing driving; maybe the answer is pairing robots and humans: one is clearly superior most of the time, but in crucial moments the other can step in and take the wheel.
Tell us what you think in the comments: Will legislators have an obligation to outlaw driving as soon as possible so as to reduce the overall deaths of humans? Or do the philosophical conundrums make enough of an argument for keeping human drivers in the game, at least for the foreseeable future?