Not so well, apparently. Gallup reports that Obama's approval ratings have slipped by nine percentage points.
Oh, well. Just nine points. Not so bad, really.
Except that it's the biggest slip in Presidential ratings that Gallup has ever measured...
Thursday, October 22, 2009
Business Failure Fail...
So you start a little espresso shop of your own, in a town that has Tim Hortons and McDonald's locations. Those chains capture most of the trade, and eventually your little business falters and dies. Your shop closes, the signs come down, and you're looking for work. But wait – your web site is still there.
So you leave a little message for your customers...
What a maroon!
Fast Inverse Square Root...
If you're not a geeky math type, there's nothing to see here. Move along.
Wikipedia has a very readable article on the fast inverse square root algorithm, which I first heard about in connection with Quake III Arena's graphics engine. Turns out its roots go back a bit further than I'd thought.
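For the geeky math types who stayed: here's roughly what the trick looks like in C. The magic constant 0x5f3759df and the single Newton-Raphson step are straight out of the widely circulated Quake III source; I've swapped the original pointer type-punning for memcpy, which modern compilers treat more kindly. Consider it a sketch, not the canonical listing.

    #include <stdint.h>
    #include <string.h>

    /* Approximate 1/sqrt(x): reinterpret the float's bits as an integer,
       shift and subtract from a magic constant to get a first guess,
       then refine with one Newton-Raphson iteration. */
    float fast_rsqrt(float x)
    {
        float half = 0.5f * x;
        uint32_t i;
        memcpy(&i, &x, sizeof i);        /* read the float's bit pattern  */
        i = 0x5f3759df - (i >> 1);       /* magic first approximation     */
        memcpy(&x, &i, sizeof x);
        x = x * (1.5f - half * x * x);   /* one Newton-Raphson step       */
        return x;
    }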
This is a great example of a phenomenon that has largely disappeared: clever little algorithms that found very fast ways to perform difficult (and ordinarily time-consuming) computations. The need for them has faded as computing platforms have grown ever more powerful. The inverse square root is now essentially a single-instruction operation on gigahertz GPUs, with multiple pipelined cores reducing the effective cost even further.
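The same story holds even on ordinary CPUs: x86 chips have offered a hardware reciprocal-square-root approximation since SSE, so the whole hack collapses to one intrinsic. A minimal illustration, not a benchmark:

    #include <xmmintrin.h>

    /* RSQRTSS: hardware approximation of 1/sqrt(x), good to roughly 12 bits. */
    float hw_rsqrt(float x)
    {
        return _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));
    }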
The need for these clever algorithms hasn't completely disappeared – you'll still find them in things like cryptographic systems, compression, etc. But these are far more complex algorithms than something like an inverse square root.
In my earliest days as an engineer, in the late '70s or early '80s (I forget which!), I remember working with a company here in San Diego whose product needed 16-bit pseudo-random numbers. One of the principal engineers there had developed a teensy little algorithm – just a few lines of assembly language – that did the job for them. Nobody there knew how or why this algorithm worked (despite its simplicity!). All they really knew was that (a) it passed all their testing, and (b) it was very, very fast. This made it quite valuable to them.
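I never did learn what those few lines of assembly actually were, and I won't guess at their code here. But a 16-bit Galois LFSR is exactly the kind of tiny, fast, "nobody remembers why it works" generator that fits the description, so here's a sketch of the species in C (the seed and tap mask are the textbook example, not anything from that product):

    #include <stdint.h>

    /* 16-bit Galois LFSR with taps at bits 16, 14, 13, 11 (mask 0xB400):
       a maximal-length generator that cycles through 65535 nonzero states. */
    static uint16_t lfsr = 0xACE1u;          /* any nonzero seed works */

    uint16_t lfsr_next(void)
    {
        uint16_t lsb = lfsr & 1u;            /* bit about to be shifted out */
        lfsr >>= 1;
        if (lsb)
            lfsr ^= 0xB400u;                 /* flip the tap bits */
        return lfsr;
    }

A cryptographer would wince, but for a product that just needed something fast and "random enough," a shift and an XOR per number is hard to beat on hardware of that era.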
Today one can find many off-the-shelf pseudo-random number generators, some of them well-understood and documented. They're all vastly faster than the one that firm was so proud of, and fast enough (on modern hardware) for all but the most unusual, demanding applications.
That story has been repeated many times, and one consequence is that these days it's actually a little unusual to find a software engineer (especially a younger one) who spends much time thinking about algorithmic efficiency at all. Most of the time this makes no difference – in fact, I suspect it lets these engineers produce better code in less time (optimization is a notorious complexity adder).
But I can't help but pine (a little bit) for the days when even the most pedestrian accounting application could be made visibly more responsive through the work of a clever software engineer who could squeeze more computational power out of the slow computers of the day...