Paradise ponders: snow and router table edition... The router table is now complete except for the actual router and its lifter (they're supposed to arrive today). The rest of the kit went together just as perfectly as the first third. The only challenging part turned out to be putting that cast iron top on. That thing was heavy (I'm guessing about 70 pounds), and the bolts that attach it to the stand go up through the bottom of it. That was a real booger to get on, because the top's mounting holes had to be precisely aligned with the corresponding holes in the stand, and because I couldn't see a darned thing inside the cabinet (no space to get my fat head in there). The first photo below shows the wood chip box that's supposed to make this router table nearly dust and chip free. We'll see how well that works!
Our weather today is really hard to believe for being nearly May. We've had rain, hail, sleet, and snow today, all day long. Right now it's raining. It's about 35°F outside, so the snow and sleet aren't sticking very long, thankfully. Our trees (at right) had some snow on their leaves, though. But once again, it's wet as heck outside – there's mud everywhere. Dang.
Friday, April 28, 2017
AI and deep learning... My long time readers will know that I am very skeptical of artificial intelligence (AI) in general. These days that skepticism means two separate things, as the field of AI has split into two rather different pursuits over the past few years.
The first (and oldest) meaning was the general idea of making computers “intelligent” in generally the same way as humans are intelligent. Isaac Asimov's I, Robot stories and their derivatives perfectly illustrated this idea. The progress researchers have made on this sort of AI is roughly the same as the progress they've made on faster-than-light space travel: nil. I am skeptical that they ever will make any, as everything I know about computers and humans (more about the former than the latter!) tells me that they don't work the same way at all.
The second, more recent kind of AI is generally known by the moniker “deep learning”. I think that term is a bit misleading, but never mind that. For me the most interesting thing about deep learning is that nobody knows how any system using deep learning actually works. Yes, really! In this sense, deep learning systems are a bit like people. An example: suppose you spent a few days learning how to do something new – say, candling eggs. You know what the process of learning is like, and you know that at the end you will be competent to candle eggs. But you have utterly no idea how your brain is giving you this skill. Despite this, AI researchers have made enormous progress with deep learning. Why? Relative to other kinds of AI, it's easy to build. It's enabled by powerful processors, and we're getting really good at building those. And, probably most importantly, there are a large number of relatively simple tasks that are amenable to deep learning solutions.
We know how to train deep learning systems (which are programs, sometimes running on special computers), but we don't know how the trained result works – just like with people. You put one through a process to train it on some specific task, and then (if you've done it right) it knows how to do that task. What's fascinating to me as a programmer is that no programming was involved in teaching the system how to do its task – just training of a general purpose deep learning system. And there's a consequence to that: no programmer (or anyone else) knows how that deep learning system actually does its work. There's not even any way to figure that out.
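To make that "training, not programming" point concrete, here's a toy sketch in plain Python. This is nothing like a real deep learning system – it's a single perceptron, and all the names (`train`, `AND`, `OR`) are mine, not from any library. The point it illustrates is that the training loop contains no task-specific code at all; feed the same code different examples and it acquires different skills.

```python
import random

def train(examples, epochs=200, lr=0.1, seed=0):
    """Generic trainer: learns weights from (inputs, target) examples.

    Nothing in here is specific to any one task -- the learned behavior
    comes entirely from the training data we feed in.
    """
    random.seed(seed)
    n = len(examples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out                      # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# The same trainer, fed different examples, learns different tasks:
AND = train([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
OR  = train([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)])
print(AND((1, 1)), AND((1, 0)))  # 1 0
print(OR((1, 0)), OR((0, 0)))    # 1 0
```

Note also that the "knowledge" the trainer produces is just a list of numeric weights – which is exactly why nobody can read a trained system and explain how it does its job.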
There are a couple of things about that approach that worry me.
First, there's the problem of how that deep learning system will react to new inputs. There's no way to predict that. My car, a Tesla Model X, is a great example of such a deep learning system. It uses machine vision (a video camera coupled with a deep learning system) to analyze the road ahead and decide how to steer the car. In my own experience, it works very well when the road is well-defined by painted lines, pavement color changes, etc. It works much less well otherwise. For instance, not long ago I had it in “auto-steer” on a twisty mountain road whose edges petered off into gravel. To my human perception, the road was still perfectly clear – but to the Tesla it was not. Auto-steer tried at one point to send me straight into a boulder! :) I'd be willing to bet you that at no time in the training of its deep learning system was it ever presented with a road like the one I was on that day, and therefore it really didn't know what it was seeing (or, therefore, how to steer). The deep learning method is very powerful, but it's still missing something that human brains are adding to that equation. I suspect it's related to the fact that the deep learning system doesn't have a good geometrical model of the world (as we humans most certainly do), which is the subject of the next paragraph.
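That failure mode is easy to demonstrate even without a neural network, and this sketch assumes nothing about Tesla's actual system. Fit a straight-line model to samples of a curve drawn from a narrow training range: inside that range the model looks fine, but on an input unlike anything it saw during training it is wildly wrong – and nothing in the model warns you.

```python
# Fit a straight line y = a*x + b to samples of the true curve y = x**2,
# drawn only from the "training range" 0.0 .. 1.0 (closed-form least squares).
xs = [i / 10 for i in range(11)]          # training inputs: 0.0 .. 1.0
ys = [x * x for x in xs]                  # true curve

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
model = lambda x: a * x + b

# Inside the training range, the line is a decent approximation...
print(abs(model(0.5) - 0.25))   # small error
# ...but on an input far outside anything it trained on, it fails badly.
print(abs(model(5.0) - 25.0))   # huge error
```

A human looking at the samples would see a curve and extrapolate accordingly; the model has no such picture of the world, which is roughly the gap between my perception of that gravel-edged road and the Tesla's.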
Second, there's the problem of insufficiency, alluded to above. Deep learning isn't the only thing necessary to emulate human intelligence. It likely is part of the overall human intelligence emulation problem, but it's far from the whole thing. This morning I ran across a blog post talking about the same issue, in this case with respect to machine vision and deep learning. It's written by a programmer who works on these systems, so he's better equipped to make the argument than I am.
I think AI has a very long way to go before Isaac Asimov would recognize it, and I don't see any indications that the breakthroughs needed are imminent...
Paradise ponders: of ovens and arduous kits... Yesterday morning we finally got our new oven (Bosch photo at right) delivered and installed. We were supposed to get it this past Monday, but after looking at it Darrell realized he was going to need some help – the beast weighs 180 pounds! Darrell (the owner of Darrell's Appliances), with Brian to help him, had it installed within an hour of arriving. I had already removed our old oven and wired an electrical box, which saved them some work. However, there was a challenge for them: the opening in our cabinetry was 3/8" too narrow. Darrell set up some tape as a guide, whipped out his Makita jig saw, and had that problem fixed in a jiffy. There was a bit of a struggle to get it off the floor and into the hole, but once they did everything else went smoothly. It seems to work fine. The controls are a breeze to use. We haven't cooked anything yet, but I'm sure Debbie will pop something in there soon! :) One big surprise for us: it comes with a meat thermometer that plugs into a jack inside the oven. How very convenient!
Yesterday I started building the new router table I bought (Rockler photo at right). When I took delivery of the table, I thought there had been some kind of mistake: it came in two flat boxes, just a couple inches thick and not all that big. No mistake, though – it's just a serious kit. By that I mean that what you get is some pieces of sheet metal, cut, drilled, bent, and threaded as needed, along with a bag of nuts and bolts. A big bag of nuts and bolts! The packing was ingenious – a whole lot of metal pieces packed into a very small volume. I was a bit dismayed upon seeing the kit after unpacking it, as my general experience in assembling kits made mostly of sheet metal is ... pretty bad. Things usually don't fit right, and I end up using pliers, hacksaws, rubber mallets, and nibblers to get everything to fit. Often I have to drill and thread my own holes. Worse, usually the edges of the sheet metal are as sharp as a razor blade and I end up looking like the losing side of a knife fight.
I'm about one third of the way through the assembly, and so far I am very pleasantly surprised. With just a single exception, every part has fit precisely correctly, first try. Better yet, the edges of the sheet metal don't seem quite so sharp – no cuts yet (and no bloodstains on the goods!). The one exception was a minor one: I needed one gentle tap of the rubber mallet to get a recalcitrant threaded hole to line up with a drilled hole. The directions are crystal clear, with great illustrations. I hope the rest of the assembly is just as nice!