A self-driving car may someday have to decide between your life and the lives of others. But how should the car choose? If you don’t know how to make that decision, that’s okay — Washington doesn’t either.
That’s one big takeaway from a new, lengthy document from the Department of Transportation that lays out options to make autonomous vehicles safer, and it represents the most public sign of the attention self-driving cars are getting from politicians despite their inability to vote.
Over just the past three months, a Tesla driver died when his car’s autopilot software failed to detect a turning tractor-trailer, Ford (F) began showing off its own autonomous (and exceedingly polite) vehicles, Lyft founder John Zimmer predicted that the majority of that ride-hailing service’s trips would involve self-driving cars by 2021, and Uber launched a trial of self-driving cars in Pittsburgh—in which human drivers remain seated up front, just in case.
It’s enough to make Google, once the most public advocate of driverless cars, look like it’s falling behind.
The rapid progress has also left government policy makers and auto-industry lawyers with their own catching up to do.
DOT on the spot
On Tuesday, the Obama administration set out its plan to bring national oversight to self-driving cars that, as President Obama argued in a Pittsburgh Post-Gazette op-ed, bring such benefits as “safer, more accessible driving” and “less congested, less polluted roads.”
Remember, we human drivers aren’t as good as we think. US motor-vehicle crashes killed 35,902 people in 2015, and driver choice or error caused 94% of those accidents.
The Department of Transportation’s proposed framework, as outlined in a 116-page National Highway Traffic Safety Administration document, stresses guidance over regulation.
NHTSA’s recommended “Safety Assessment” covers 15 criteria, from “Data Recording and Sharing” to “Object and Event Detection and Response.” The agency doesn’t stipulate metrics and in some cases tosses the hard choices for “Highly Automated Vehicles” to the industry.
For example, under “Ethical Considerations,” the paper shies away from a bright-line rule like, say, “A self-driving car may not injure a human being or, through inaction, allow a human being to come to harm.” Instead, it admits that when a self-driving car can only protect one person at the cost of another, its programming “will have a significant influence over the outcome for each individual.” Yes, it will.
NHTSA counsels against expecting people to take over after a software malfunction: “human drivers may be inattentive, under the influence of alcohol or other substances, drowsy or physically impaired in some other manner.”
That’s something Google learned early on, when it found that Google employees who’d volunteered to test self-driving cars started ignoring the road — even though cameras in test cars recorded their behavior.
Today, car manufacturers certify their own vehicles, after which NHTSA conducts spot checks and, if necessary, orders recalls. The paper devotes much of its length to exploring other alternatives, from the kind of pre-market testing the Federal Aviation Administration does to certify each new aircraft type to intermediate levels of regulation that might involve third-party testing.
My own prediction: NHTSA will gravitate towards enforcement mechanisms that don’t require new legislation, since we’ve all seen how inefficient Congress can be at moving forward with tech policy.
A panel discussion at a conference in New York revealed other potential complications, most involving the information that a self-driving or only partially-autonomous car must handle to do its job.
“Autonomous vehicles create and generate an enormous amount of data,” said Allison Hoff Cohen, managing counsel at Toyota (TM). For self-driving cars to take off, she said, that data must stay private by default, with clear incentives for customers who agree to share it.
Who would want that data? Car-insurance firms, for one. For years, some have offered discounts to motorists willing to have their driving habits tracked; panel moderator Jonathan Beckham, a lawyer with Greenberg Traurig, suggested insurers would line up to offer additional benefits if they could get more insight about drivers of partially autonomous vehicles.
State and local governments looking to ease traffic will also want to tap into the artificial brainpower of self-driving cars, observed Darius Withers, in-house counsel at Accenture LLP (and a regular on Washington’s Beltway). “The data is particularly valuable to them,” he said.
Until cars reach total autonomy — at which point the steering wheel goes away — we’ll also have to decide how much liability falls upon drivers who disable all or part of a car’s automated systems.
Toyota’s Cohen noted that a car that can read the roads could also read its occupants. She sketched out a future in which an autonomous car would drop a parent off at her job, then return home and take the kids to school, recognizing each family member automatically.
That could streamline many family errands, but it would also intersect with different privacy rules — which, as she said in a conversation after the panel, get particularly strict in Europe.
Starting off in first gear
The politics of all this are almost guaranteed to get weird. People have understandable hang-ups about yielding control to robots, even when humans are demonstrably worse drivers than the machines, and the high prices of many vehicles sold today with assisted-driving features threaten to add a little class resentment to the mix.
And it’ll only take one story about somebody behaving grossly irresponsibly in a self-driving car to set back the entire discussion.
But we have to figure this out. Tens of thousands of lives are at stake, year after year. I don’t know how long it will take to put self-driving cars into wide and accepted use, but I hope it’s less than 10 years from now — when my daughter will be old enough to get her driver’s license.