Sunday, November 2, 2014
“But would not any program that helps to reduce the shame of sending innocent people to prison be worth trying?”
“But would not any program that helps to reduce the shame of sending innocent people to prison be worth trying?” That's the final sentence of an essay by Jed Rakoff, a U.S. District Judge. He's talking about a proposal for reforming one of the most shameful aspects of the U.S. system of justice, one that no other country uses so pervasively: plea bargains. A sample:
...the prosecutor-dictated plea bargain system, by creating such inordinate pressures to enter into plea bargains, appears to have led a significant number of defendants to plead guilty to crimes they never actually committed. For example, of the approximately three hundred people that the Innocence Project and its affiliated lawyers have proven were wrongfully convicted of crimes of rape or murder that they did not in fact commit, at least thirty, or about 10 percent, pleaded guilty to those crimes. Presumably they did so because, even though they were innocent, they faced the likelihood of being convicted of capital offenses and sought to avoid the death penalty, even at the price of life imprisonment. But other publicized cases, arising with disturbing frequency, suggest that this self-protective psychology operates in non-capital cases as well, and recent studies suggest that this is a widespread problem. For example, the National Registry of Exonerations (a joint project of Michigan Law School and Northwestern Law School) records that of 1,428 legally acknowledged exonerations that have occurred since 1989 involving the full range of felony charges, 151 (or, again, about 10 percent) involved false guilty pleas.
Do go read the whole thing...
The Obama Doctrine...
The Obama Doctrine... As tweeted by Garry Kasparov:
Do as little as necessary to appear to be doing something without actually committing to a cause or course of action.
You know, that's completely plausible...
Surprising graphics...
Surprising graphics... Twenty-one more like the one at right. Not all of these were surprising (at least, not to me), but they all are ponder-worthy...
Vintage ads...
Vintage ads... A couple dozen of 'em, some funny, some shocking. Most of these “vintage” ads are from within my lifetime, which I don't find amusing at all!
“One thing is for sure: politics is in for a major overhaul.”
“One thing is for sure: politics is in for a major overhaul.” Gary Shapiro is president of the Consumer Electronics Association. He notes that software for smartphones is becoming available that can analyze emotions from images of faces, and can determine from the sound of a voice whether someone is telling the truth or is lying. Then he ponders the implications of such technologies, particularly if they get better than they are today.
To anyone who reads science fiction, this will seem like old territory. Sci-fi authors have been pondering questions like this for about 80 years now. I'm going to guess that Gary Shapiro does not read science fiction, or this wouldn't all seem like new territory to him :)
Something he doesn't mention at all, though, is much more concerning to me: this software will be wrong very often – maybe even most of the time. Imagine an app that listens to someone you're talking to on the phone and tells you whether they're telling the truth. If your caller says “I’d love to go to dinner with you!” and the app says they're lying – but they're actually telling the truth – that's going to be a problem. Or, if your caller says “That’s too expensive for me!” and the app says they're telling the truth – but they're actually lying – that's also going to be a problem. And these apps will make such errors, because assessing emotional state or veracity is not an exact science. As with “lie detectors”, there's no solid evidence that these things work outside of carefully controlled (and possibly carefully constructed) test scenarios. The current state of the art here isn't much different from a Ouija board – which means that about half of Americans will believe these apps are infallible.
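To put rough numbers on that worry, here's a little back-of-the-envelope sketch. All the figures in it are assumptions I picked for illustration – nobody publishes real accuracy numbers for these apps – but the arithmetic shows why even a seemingly accurate detector would mislabel most of the "lies" it reports:

```python
# Hypothetical numbers, chosen only to illustrate the base-rate problem:
# even a detector that is right 90% of the time gets most of its "lie"
# flags wrong when actual lies are rare in ordinary conversation.

sensitivity = 0.90   # P(flagged as lie | actually lying)      -- assumed
specificity = 0.90   # P(flagged truthful | actually truthful) -- assumed
lie_rate = 0.05      # assume 1 in 20 statements is a lie      -- assumed

# Bayes' rule: of everything flagged as a lie, how much really is one?
p_flag = sensitivity * lie_rate + (1 - specificity) * (1 - lie_rate)
p_lie_given_flag = sensitivity * lie_rate / p_flag
print(round(p_lie_given_flag, 3))  # 0.321 -- two of three "lie" flags are wrong
```

So under these (generous!) assumptions, an accusation from the app is wrong about twice as often as it's right.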
This I find very worrisome...
“Their little heads are exploding.”
“Their little heads are exploding.” A sixth grade class (with an unusually gifted teacher!) attacks a simple but interesting numerical problem. Their result differed from the one my intuition produced, but the blow-by-blow description of the class's work is fascinating...
Trap a cat in a circle...
Trap a cat in a circle... This is weird, and I'm skeptical. I've asked Debbie to see if she can verify this experimentally...
Some non-deterministic software makes me uncomfortable...
Some non-deterministic software makes me uncomfortable... Reading this article about flaws in neural networks got me thinking about some of my own experiences with non-deterministic software.
There are two big areas of software I'm aware of whose outputs are not (necessarily) determined by their inputs: many kinds of AI, and genetic programming.
Some kinds of so-called artificial intelligence (AI) use non-deterministic “learning” mechanisms that produce a system whose outputs follow from its inputs in a way no human understands. I'm not saying these systems aren't useful – they demonstrably are. I first ran into one while interviewing with a company that made software to evaluate the risk in credit applications. During the interview process I was left alone for a few hours with a computer running a development version of their software, which let me play around with various inputs and see how they changed the output – and I very quickly spotted some things that looked like big problems to me. For instance, if I varied the applicant's annual income upward in $100 increments, the output would swing between “recommend approval” and “recommend disapproval” in an apparently random way. It's hard to imagine any real-world justification for that! When I asked about it, I was told that nobody knew exactly why it happened; they just knew the “network” had somehow learned that behavior. As a software developer, this makes me uncomfortable, and distrustful of the results. This is the kind of application that (it seems to me) should be deterministic – but it's not.
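It's easy to reproduce the shape of that behavior. The sketch below is not their software – the "network" here is a tiny scoring function whose weights I picked by hand (a real network's weights would come from training, which is exactly why nobody could explain them) – but it shows how a wiggly learned decision boundary flips a recommendation back and forth as one input is nudged in small steps:

```python
import math

# Hand-built stand-in for an opaque learned credit scorer. The weights
# below are chosen by hand purely for illustration; the point is the
# shape: saturating units that switch sign at different income levels
# give a wiggly decision boundary, so nudging income upward in $100
# increments flips the recommendation back and forth.

def score(income):
    # three saturating units, each switching sign at a different income
    return (math.tanh((income - 30_500) / 50)
            - 2.0 * math.tanh((income - 30_700) / 50)
            + 1.5 * math.tanh((income - 31_200) / 50))

# sweep income from $30,000 to $32,000 in $100 steps
decisions = ["approve" if score(30_000 + i * 100) > 0 else "deny"
             for i in range(21)]
flips = sum(a != b for a, b in zip(decisions, decisions[1:]))
print(flips)  # 3 -- the recommendation reverses three times over $2,000
```

A loan officer staring at those flips would be just as baffled as I was – and with a trained network, there would be no tidy formula like `score` above to inspect.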
Genetic programming attempts to find solutions to problems through the same sort of mechanisms by which life has evolved. The basic method is to try randomly selected variations and let the winners survive through some sort of Darwinian filtering. I find the concept fascinating, as it has the potential to solve problems for which no human software engineer has ever found a solution. Implementing genetic programming is tricky, and I've only played around with it a little. Only twice did I manage to “evolve” a program that actually accomplished something (that is, produced the desired results) – and in one of those cases, the result was better (meaning faster, in this case) than what I wrote by hand.
That better result was a sorting program written in Java. The test that drove the evolution took 10 datasets as inputs and checked that the evolved program sorted them all correctly. I ran the evolution for about a month on a Linux server, where it consumed one core the entire time – that's a lot of computing! The result was three times faster than my hand implementation of Quicksort, and four times faster than the standard Java library's sort method. The code itself was complete gobbledegook: a couple thousand lines of incomprehensible lunacy that I spent a week trying to reverse-engineer, and totally failed.
Somewhere during that week it occurred to me to try other datasets on the evolved program, and here I found some fascinating results. On some datasets, it never terminated – bad. On some, it sorted correctly but slower than Quicksort – moderately bad. On most datasets, it sorted very quickly – but incorrectly. Very bad! How could one ever trust the result of a genetically derived sort algorithm? I don't think you could.
My conclusion to that is that genetic programming is really only useful when the entire set of possible inputs can be tested – and that's a very limited set of problems, indeed.
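There is at least one sorting-flavored case where that bar can actually be met. The toy below (my illustrative sketch, not the program described above) evolves a comparator network for 3-element arrays, and its fitness test enumerates every 0/1 input vector. By the zero-one principle for sorting networks, a comparator network that sorts all binary vectors sorts all inputs – so here the "entire set of possible inputs" really is testable, and the evolved result can be trusted:

```python
import random

# Illustrative genetic-programming-style sketch: evolve a comparator
# network that sorts 3-element arrays. Fitness enumerates ALL 2^3
# binary inputs; by the zero-one principle, a comparator network that
# sorts every 0/1 vector sorts every input whatsoever.

random.seed(1)
COMPARATORS = [(0, 1), (0, 2), (1, 2)]   # possible compare-swap pairs, i < j
TESTS = [[(b >> k) & 1 for k in range(3)] for b in range(8)]

def apply_network(network, data):
    out = list(data)
    for i, j in network:                 # compare-and-swap each pair in order
        if out[i] > out[j]:
            out[i], out[j] = out[j], out[i]
    return out

def fitness(network):
    return sum(apply_network(network, t) == sorted(t) for t in TESTS)

def mutate(network):
    child = list(network)
    op = random.choice(["add", "drop", "change"])
    if op == "add" and len(child) < 5:
        child.insert(random.randrange(len(child) + 1), random.choice(COMPARATORS))
    elif op == "drop" and len(child) > 1:
        child.pop(random.randrange(len(child)))
    else:
        child[random.randrange(len(child))] = random.choice(COMPARATORS)
    return child

# hill-climb: accept equal-fitness mutants so the search can drift sideways
best = [random.choice(COMPARATORS)]
while fitness(best) < len(TESTS):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child

print(fitness(best) == len(TESTS))  # True: perfect on the full input space
```

This only works because three elements give a tiny, fully enumerable test space – exactly the "very limited set of problems" in question. For my evolved Java sort, tested on just 10 datasets, no such guarantee existed.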
I'm much more comfortable with deterministic software. Some would argue, though, that all software is non-deterministic – because of the bugs lurking within. They may be right :)
Scott Adams is a fascinating person...
Scott Adams is a fascinating person... I first ran across his work through his Dilbert cartoon. Reading the day's strip is still a morning ritual for me, and more often than not results in a hearty laugh-out-loud for this geek. He captures the office dynamics so perfectly, and finds a way to make them funny.
Then I started reading his blog, and later a couple of books. Both of these are full of meaty, interesting ideas and concepts – somehow not something I ever expected from a cartoonist. I don't know where or how I acquired the notion that a cartoonist wouldn't be that interesting a person, but clearly I did – because reading Adams' prose surprised me. A lot.
This morning I read another great example, on his blog. A key paragraph:
The winner worldview is that you have responsibility for your own life and it is irrelevant who is at fault if the people at fault can't or won't fix the problem. I've noticed over the course of my life that winners ignore questions of blame and fault and look for solutions they can personally influence. Losers blame others for their problems and expect that to produce results.
This resonates very strongly with me, as I had a similar revelation early in my adult life. In my case, it happened while I was in the Navy. The trigger was a very clear one: my bosses told me that I was doing such a great job that I couldn't go on vacation (the Navy calls this “leave”). It was clear they weren't going to change their minds – so I went on a sort of intellectual strike: I stopped knowing how to fix computers (my job). They got the message, and we made an explicit agreement: I'd get my vacation, and I would start knowing how to fix computers again. That was a formative experience – after that, there were many occasions when I reached an objective by finding my own way to get it done.
If you enjoy reading someone with a unique and refreshing perspective on the world, I recommend Scott Adams' blog and books. His cartoons aren't bad, either :)
“The first thing we do, let’s kill all the lawyers.”
But I'd be OK with banning lawyers from public office, including (especially) Congress and the Presidency. There was a time when Supreme Court justices were often not lawyers, as well – and I'm not at all sure that was a bad idea.
So I absolutely loved the ad at right!