Thursday, August 13, 2015
If you, like us, are dog lovers...
I knew these new-fangled semiconductor thingies were just a fad!
Vacuum tubes are back...
French cop arrests Brit...
Via my pistol-packing, switched-to-Mac mama...
Study the picture carefully first, then read on... For my American readers who have never been exposed to British cars: not only do the Brits drive on the wrong side of the road, they have their driver's side on the wrong side of the car! This is most confusing to any non-Brit who rents a car over there, as I can attest from personal experience. The Brits aren't alone in this – most of their former colonies (Australia, New Zealand, Singapore, etc.) also drive on the wrong side, as does Japan. The map at right (click to embiggen) shows all these backwards-driving countries in blue...
This actually happened to an Englishman in France who was totally drunk.
A French policeman stops the Englishman's car and asks if he has been drinking.
With great difficulty, the Englishman admits that he has been drinking all day, that his daughter got married that morning, and that he drank champagne and a few bottles of wine at the reception, and many single malt scotches thereafter.
Quite upset, the policeman gives the Englishman a breath test and verifies that he is indeed totally sloshed.
He asks the Englishman if he knows why, under French Law, he is going to be arrested.
The Englishman answers with a bit of humour,
"No sir, I do not! But while we're asking questions, do you realize that this is a British car and that my wife is driving . . . . . on the other side?"
A challenge for automated vehicles...
Google and Apple are (currently, at least) leading the charge to build automated, driverless vehicles. Tesla is an up-and-coming contender, and some European makers are also making tentative moves. It seems like something that is almost certainly going to happen – there are just too many compelling reasons for it. Perhaps the single most compelling reason is highway efficiency: with robotic drivers, traffic can run safely at full speed with much smaller intervals between vehicles, because the robotic drivers will have vastly faster reaction times. Other compelling benefits include increased safety and an interesting alternative to intra-city mass transit.
In the past few years, Google and Apple have made a lot of progress, to the point where they are now making test runs of driverless cars on real highways. So many of the problems have been solved (or have clearly reachable solutions) that developers are now starting to face a thornier problem: how should a robotic vehicle behave when there is a moral or ethical element to a driving decision? There are many examples of such decisions. Here's one simple example. Suppose you're alone in your car. You drive around a turn on a twisty two-lane road, and you see an impassable cliff on the left, a truck coming toward you in the left lane, a giant boulder in the right lane, and a meadow with lots of people in it on the right. You can't go off the road to the left, because the cliff is in the way. If you go into the left lane, the oncoming truck will crush you. If you hit the boulder, you'll be killed. If you drive off the road to the right, you'll probably mow down ten people, killing them, but you'll be fine. Most human drivers would choose to drive off the road to the right, because you'll live – but ten other people will die. What should the robotic driver do? Should it take the action that kills the fewest people? In that case, you're gonna die, because the robotic driver will choose the boulder. Should it act to save itself and you? In that case the ten people are gonna die. The only real certainty is that no matter what the robot chooses to do, the car manufacturer is going to be sued.
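Just to make the two decision rules concrete, here's a toy sketch in Python. This is entirely invented for illustration – no real autonomous-vehicle software looks like this, and the maneuver names and casualty estimates are all made up – but it shows how the two policies diverge on the exact same facts:

```python
# Toy sketch of the two candidate policies discussed above.
# All names and numbers are hypothetical, purely for illustration.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_deaths: int   # expected deaths inside our vehicle
    bystander_deaths: int  # expected deaths outside our vehicle

# The scenario above: cliff and oncoming truck to the left,
# boulder dead ahead, crowded meadow to the right.
options = [
    Maneuver("swerve left into the truck", occupant_deaths=1, bystander_deaths=1),
    Maneuver("hit the boulder", occupant_deaths=1, bystander_deaths=0),
    Maneuver("swerve right into the meadow", occupant_deaths=0, bystander_deaths=10),
]

def minimize_total_deaths(options):
    """Utilitarian rule: fewest total deaths, occupants included."""
    return min(options, key=lambda m: m.occupant_deaths + m.bystander_deaths)

def protect_occupants(options):
    """Self-preservation rule: fewest occupant deaths first, bystanders second."""
    return min(options, key=lambda m: (m.occupant_deaths, m.bystander_deaths))

print(minimize_total_deaths(options).name)  # -> "hit the boulder" (one death: yours)
print(protect_occupants(options).name)      # -> "swerve right into the meadow" (ten deaths)
```

The utilitarian rule sacrifices the driver; the self-preservation rule sacrifices the crowd. Neither answer feels obviously right, and the choice between them is a one-line change – which is exactly the problem.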
Right now there are no laws or regulations regarding that sort of moral or ethical decision-making by robots. If you're a science fiction reader, you'll probably remember that this is a problem Isaac Asimov foresaw in his robot stories and novels, and which formed the basis of several of them. We have plenty of trouble figuring out the right thing for people to do, and when the decision maker is a robot it gets much more challenging. The main driver today is the manufacturers' desire to avoid litigation and damage to their reputations, but almost certainly the government is going to get involved at some point – and we'll end up with a giant set of regulations as impenetrable and useless as the tax regulations are today. This article is a good introduction to the issue. I'm certain this is an area that will evolve quickly, so it should be interesting to watch over the next decade or so...
Morning walk in Paradise...
It was a clear, crisp morning here; not a cloud in the sky. I'd been up a bit earlier, hoping to see some of the Perseid meteor shower, but had no luck on that. They must be avoiding northern Utah for some reason :) We were out so early that, other than a few flitting birds, we didn't see any animal life at all, though we certainly heard a lot of bird chatter on the way back.