Most of the time, we software engineers can safely think of our code as not life-threatening. But the proliferation of computers into devices of almost every kind is changing that.
The first time I ever ran into code that could kill, the example was a very straightforward one: I interviewed a fellow who had just quit his job at a medical device manufacturer here in San Diego. When I asked him why he had quit, he told me that he had worked on the firmware for an automated insulin pump his company was making. This pump would be worn by severe diabetics, and it would automatically keep their blood sugar at an appropriate level – no matter what the person ate, or how he exercised. Firmware that he wrote had a bug in it, one that wasn't detected during normal testing. One of the patients trialing the device did something to provoke the bug – and the pump flooded his body with insulin, killing him. My candidate quit his job that afternoon, and vowed to work for a company where that couldn't happen.
At the time, I was working for Stac Electronics. The team I led was building Stacker, a disk compression product. We didn't think of that product as life-threatening – but suppose someone used Stacker in a computer that ran some vital piece of equipment. If Stacker failed and caused the computer to crash, that could conceivably result in harm to a patient. We had language in our license agreement designed to avoid this situation, but still...
These days, such situations are far more common. Computers (which necessarily have software or firmware to run them) are embedded in just about anything you can imagine. Here's a story about computers in a passenger airliner's flight control systems, and a software bug that nearly caused a crash. In this case, it's obvious that the computer could potentially cause life-threatening malfunctions – but that's not always the case...
Interesting world we live in. Flight control bug story via reader Doug S.