A challenge for automated vehicles...
Google and Apple are (currently, at least) leading the charge to build automated, driverless vehicles. Tesla is an up-and-coming contender, and some European makers are also making tentative moves. It seems almost certain to happen – there are simply too many compelling reasons for doing it. Perhaps the single most compelling reason is highway efficiency: with robotic drivers, traffic can run safely at full speed with much smaller intervals between vehicles, because robotic drivers have vastly faster reaction times. Other compelling benefits include increased safety and an interesting alternative to intra-city mass transit.
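To put a rough number on that efficiency claim: lane capacity is basically speed divided by the spacing between vehicles, and the spacing is dominated by reaction time. Here's a little back-of-the-envelope sketch of that arithmetic in Python – every figure in it (reaction times, vehicle length, safety margin) is an illustrative assumption, not a measured value.

```python
# Back-of-the-envelope lane capacity: vehicles per hour is roughly speed divided by the
# spacing between vehicles, and the spacing is dominated by reaction-time headway.
# All numbers here are illustrative assumptions, not measurements.

def lane_throughput(speed_mps: float, reaction_s: float,
                    vehicle_len_m: float = 4.5, margin_m: float = 2.0) -> float:
    """Vehicles per hour for one lane at a steady speed with a reaction-time-based gap."""
    spacing_m = vehicle_len_m + speed_mps * reaction_s + margin_m
    return 3600.0 * speed_mps / spacing_m

speed = 30.0  # metres/second, roughly 108 km/h
human = lane_throughput(speed, reaction_s=1.5)   # assumed human reaction time
robot = lane_throughput(speed, reaction_s=0.2)   # assumed robotic reaction time
print(f"human drivers: ~{human:.0f} vehicles/hour, robotic drivers: ~{robot:.0f} vehicles/hour")
```

With those (made-up) numbers, cutting the reaction time from 1.5 seconds to 0.2 seconds roughly quadruples the throughput of a single lane – which is why this argument keeps coming up.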
In the past few years, Google and Apple have made a lot of progress, to the point where they are now making test runs of driverless cars on real highways. So many of the problems have been solved (or have clearly reachable solutions) that developers are now starting to face a thornier problem: how should a robotic vehicle behave when there is a moral or ethical element to a driving decision? There are many examples of such decisions; here's a simple one. Suppose you're alone in your car. You come around a turn on a twisty two-lane road, and you see an impassable cliff on the left, a truck coming toward you in the left lane, a giant boulder in the right lane, and a meadow full of people on the right. You can't go off the road to the left, because the cliff blocks you. If you move into the left lane, the oncoming truck will crush you. If you hit the boulder, you'll be killed. If you drive off the road to the right, you'll probably mow down ten people, killing them, but you'll be fine. Most human drivers would choose to drive off the road to the right, because they'll live – but ten other people will die. What should the robotic driver do? Should it take the action that kills the fewest people? In that case you're going to die, because the robotic driver will choose the boulder. Should it act to save itself and you? In that case the ten people are going to die. The only real certainty is that no matter what the robot chooses to do, the car manufacturer is going to be sued.
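Just to make the trade-off concrete, here's a toy sketch (in Python) of how two different decision policies play out on the boulder scenario above – a "minimize total fatalities" rule versus a "protect the occupant first" rule. The outcome numbers and policy names are purely hypothetical illustrations; nothing here is what any manufacturer actually implements.

```python
# A toy model of the boulder scenario above. The Outcome numbers and the two policies
# are hypothetical illustrations only; no manufacturer's actual logic is being shown.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    occupant_deaths: int
    bystander_deaths: int

options = [
    Outcome("hit the boulder", occupant_deaths=1, bystander_deaths=0),
    Outcome("swerve into the oncoming truck", occupant_deaths=1, bystander_deaths=0),
    Outcome("leave the road into the meadow", occupant_deaths=0, bystander_deaths=10),
]

def minimize_total_fatalities(opts):
    # Pick whichever action kills the fewest people overall.
    return min(opts, key=lambda o: o.occupant_deaths + o.bystander_deaths)

def protect_occupant_first(opts):
    # Protect the car's occupant first, and only then consider bystanders.
    return min(opts, key=lambda o: (o.occupant_deaths, o.bystander_deaths))

print(minimize_total_fatalities(options).action)  # "hit the boulder" – the occupant dies
print(protect_occupant_first(options).action)     # "leave the road into the meadow" – ten bystanders die
```

The code isn't the hard part, of course – the hard part is that somebody has to decide which of those two functions (or some third one) the car should run, and then defend that choice in court.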
Right now there are no laws or regulations covering that sort of moral or ethical decision-making by robots. If you're a science fiction reader, you'll probably remember that this is a problem Isaac Asimov foresaw in his robot stories and novels, and that it formed the basis of several of them. We have enough trouble figuring out the right thing for people to do in these situations; when the decision maker is a robot, it gets much harder. The main driver today is the manufacturers' desire to avoid litigation and damage to their reputations, but the government is almost certainly going to get involved at some point – and we'll end up with a giant set of regulations as impenetrable and useless as the tax regulations are today. This article is a good introduction to the issue. I'm certain this is an area that will evolve quickly, so it should be interesting to watch over the next decade or so...