What If Your Autonomous Car Keeps Routing You Past Krispy Kreme?

The Atlantic

Alexis Madrigal

On a future road trip, your robot car decides to take a new route, driving you past a Krispy Kreme Doughnut shop. A pop-up window opens on your car’s display and asks if you’d like to stop at the store. “Don’t mind if I do,” you think to yourself. You press “yes” on the touchscreen, and the autonomous car pulls up to the shop.

Wait, how did the car know that you might want an original glazed doughnut? Because it has data on your driving habits, and you’re a serial offender when it comes to impulsive snacking. Your car is also linked to your online accounts at home, and you had recently “liked” Krispy Kreme’s Facebook page and visited its website. 

Is this future scenario convenient—or creepy? It’s one thing if a car’s driver-drowsiness detection system (which exists today) sees that you’re nodding off and suggests coffee. But to make your automated car divert from its usual course because some advertiser paid it to do so, well, that sounds like a mini-carjacking.

Whatever you think of it, this future may be coming up on the road ahead. At the Consumer Electronics Show (CES) earlier this month in Las Vegas, automakers announced deals to deliver online services or in-car apps to web-enabled cars of tomorrow. And where there are free or cheap online services, there’s online advertising—that train is never late. 

We don’t know what that advertising might look like: It could literally steer your future car, or it could be more familiar, such as streaming ads across your windshield in auto-driving mode (maybe too distracting in manual-driving mode). But because ad revenue is still the dominant e-business model, it’s a safe bet that advertising will be coming to a future car near you. After all, Google’s acquisition of Nest—maker of “smart” thermostats and other appliances—last week appears to be its first step toward owning the Internet of Things. If the technology giant is leaping the firewall of your personal computer to the rest of your home, why not also your car? Apple co-founder Steve Jobs reportedly had hoped to bring an “iCar” to market, essentially a huge iPhone with wheels. 

Could advertisers really influence the route taken by a self-driving car? It seems plausible, and legal, in at least some circumstances. Say there are multiple routes to your destination. Some may be shorter in terms of distance but longer in terms of travel time, or some routes are equidistant. In those cases, there’s no obviously “right” route to take, but advertiser money could be a “plus factor” that’s just enough to tip driving algorithms in their direction. 
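To make that "plus factor" concrete, here is a minimal sketch of how a routing algorithm could let advertiser money break ties only among near-equal routes. Everything here is invented for illustration: the function names, the two-minute tolerance, and the `sponsor_paid` field are assumptions, not features of any real navigation system.

```python
# Hypothetical sketch: advertiser money as a "plus factor" in route choice.
# All names, weights, and the tolerance are invented for illustration.

def pick_route(routes, tolerance_min=2.0):
    """Choose among candidate routes; sponsorship only breaks near-ties.

    Each route is a dict with an estimated travel time in minutes and an
    optional advertiser payment (dollars) for passing a sponsor's store.
    """
    best_time = min(r["eta_min"] for r in routes)
    # Routes within the tolerance count as "no obviously right choice".
    near_ties = [r for r in routes if r["eta_min"] - best_time <= tolerance_min]
    # Among near-equal routes, the best-sponsored one wins.
    return max(near_ties, key=lambda r: r.get("sponsor_paid", 0.0))

routes = [
    {"name": "highway", "eta_min": 18.0},
    {"name": "past_donut_shop", "eta_min": 19.5, "sponsor_paid": 0.25},
    {"name": "scenic", "eta_min": 26.0},
]
print(pick_route(routes)["name"])  # the sponsored route, 1.5 minutes slower
```

Note that the sponsored route wins only because it falls inside the tolerance; the much slower "scenic" route could not be bought this way, which is exactly what makes the practice hard for a passenger to notice.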

This practice doesn’t seem to be a big inconvenience for the car’s passengers, as long as the detour doesn’t add much extra time or distance to their trip. Some taxi drivers and hotel concierges are known to accept kickbacks from restaurants, casinos, strip clubs, and other establishments to steer business toward them. So this already happens today. But even if not illegal, it raises ethical questions and the need for transparency in a world run by algorithms most of us don’t understand.

 

More Ethical Potholes

Privacy is already a chief worry about in-car apps, and about robotics more generally, which some predict will be the next battleground for civil liberties. The doughnut scenario above speaks to that fear. In-car apps could also make distracted driving worse, as one hilarious video predicts. But there are other, less obvious problems to think about too:

A couple of weeks ago, a Massachusetts man was arrested after his Google+ account allegedly emailed invitations to everyone in his address book without his knowledge, including an ex-girlfriend who had a restraining order against him. Something similar could happen with robot cars: one might drive a registered sex offender right past a school when he isn’t supposed to come within 2,000 feet of one. Who would be to blame: the human behind the wheel, or the self-driving car?

As one automotive vice-president unwisely pointed out at CES, “We know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” This raises the issue of whether capability implies responsibility: Are you morally obligated to act on information that could prevent serious harm to someone? For instance, if an intelligence agency collects data that strongly suggest an impending terrorist attack, it seems wrong not to warn the public or try to stop the attack.

Applied to automated cars, it could become the duty of manufacturers to figure out not only where a car’s driver should go, but also where he or she should not go. In some distant future, if the locations of most people can be pinpointed through GPS and other methods, a robot car could tell when a driver is about to violate a restraining order and refuse to travel there. If manufacturers have the data to connect those dots, they arguably ought to do so when it matters.
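The screening step described above can be sketched in a few lines. This is a toy illustration, not any manufacturer's method: the coordinates are arbitrary, and the flat-earth distance approximation (which ignores how longitude degrees shrink away from the equator) is a simplification chosen for brevity.

```python
import math

# Hypothetical sketch: screening a planned route against restricted zones,
# e.g., a 2,000-foot buffer around a school. The coordinates and the crude
# flat-earth distance approximation are simplifications for illustration.

FEET_PER_DEGREE = 364_000  # rough feet per degree of latitude

def too_close(point, zone_center, radius_ft=2000):
    """True if a route point falls inside a restricted zone's radius."""
    dx = (point[0] - zone_center[0]) * FEET_PER_DEGREE
    dy = (point[1] - zone_center[1]) * FEET_PER_DEGREE
    return math.hypot(dx, dy) < radius_ft

def route_is_permitted(route_points, restricted_zones):
    """Reject any route that enters a restricted zone."""
    return not any(
        too_close(p, z) for p in route_points for z in restricted_zones
    )

school = (37.4275, -122.1697)
route = [(37.4300, -122.1700), (37.4276, -122.1698)]  # both points well inside the buffer
print(route_is_permitted(route, [school]))  # False: the route must be replanned
```

A production system would use proper geodesic distances and polygon geofences, but the ethical point survives the simplification: once the car can run this check, the question becomes whether it must.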

And legal reasons aren’t all that matter; other factors could be important to users of future wired cars. The owner of a shiny new robot car probably wouldn’t appreciate being deliberately driven, at advertisers’ behest, past fast-food restaurants if she’s on a diet, or by a cluster of bars if she’s a recovering alcoholic, or toward maternity stores if she hasn’t publicly revealed her pregnancy.

It could be that drivers and passengers will be able to instruct their cars to avoid certain destinations. Putting aside the question of why we should have to issue such instructions at all, if the car were to drive to those verboten destinations anyway, that seems plainly wrong. Recall that in Isaac Asimov’s novels, the second law of robotics is to always obey human orders (so long as they don’t violate the first law: never to cause or allow harm to humans).

However, resisting humans is a major point of autonomous cars: We humans are often error-prone and reckless, while algorithms and unblinking sensors can drive better than we can in most if not all cases. An automated vehicle is designed precisely to disregard our orders when they are imminently risky. That is to say, refusing human orders is sometimes a feature, not a bug. It’s unclear, then, whether opting out of certain destinations (or opting in) is reason enough for cars to comply with those commands.

 

* * *

 

The app itself is becoming the new killer app. The latest Windows 8 machines mimic the app dashboards on Apple OS and Android mobile phones. And we can expect online applications to be part of future cars, robotic or not. As apps on our mobile phones and computers already do, in-car apps will raise a host of legal and ethical dilemmas, with privacy only the first of them.

The problem I discussed at the beginning was related to advertising, but advertising itself isn’t the problem. At its best, an ad can be a helpful video clip or image that educates you about products and services you truly might want. At its worst, it’s an annoyance that interrupts your concentration while you’re absorbed in an essay, video, podcast, or game. Ads can push you to vote one way, or to buy a thing you don’t need. They could make you into a worse person, or a better one.

So while advertising gets plenty of criticism, ads seem to be a necessary evil if consumers want to pay as little as possible. That’s neither here nor there in our discussion; the real problem is the decision to let a car be controlled by third parties, directing the route in an advertiser’s interest rather than the car owner’s. Advertising inside a wired car isn’t just about showing you tantalizing stuff; it could be about physically driving you to that stuff. That paradigm shift would make ads even more invasive than critics today might imagine.

More seriously, manufacturers will also need to make hard life-and-death choices in programming autonomous cars, and those decisions should be made thoughtfully and openly to ensure a responsible product that millions will buy, ride in, and possibly be injured by. That’s all the more reason to focus on ethics, not just law, as we’re doing at the Center for Automotive Research at Stanford (CARS), in steering the future of transportation in the right direction.

 




