The way we perceive color is very different from what you might think (if you've ever thought about this at all!). When someone with normal color vision looks around them, they see a world full of colors – an infinite variety of hues and brightnesses. This perception is created by a relatively simple set of color sensors (in the retina at the back of your eyeball) and a lot of fancy interpretation by your brain.
Consider a simple-sounding color perception example: the color of a tiny patch of clear blue sky. You perceive it as a specific color, a bright, light blue. In fact, the "color" of the sky is much more complex than that. It is not a high intensity of a single wavelength of visible light (as a laser's light is); instead, it is a mix of a broad range of wavelengths, from short ultraviolet all the way to long infrared. The blue end of that range is slightly higher in intensity than the red end, and we perceive the mix as the bright, light blue of the sky.
The physical mechanism by which our eyes sense the various wavelengths of light is well understood. Our retinas have "sensors" (the so-called "cones") that respond to three overlapping sections of the visible light spectrum. The response curves of a typical person are quite irregular: they overlap heavily and are not spaced evenly across the spectrum. In the case of our hypothetical blue sky, all three sensors would respond: the S cones (blue) most intensely, the M cones (green) slightly less, and the L cones (red) slightly less still. Our brain takes that combination and interprets it as the bright, light blue of the sky.
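To make that concrete, here's a minimal sketch of trichromatic sensing in Python. The Gaussian "cone" curves and the straight-line "sky" spectrum below are invented stand-ins (real cone sensitivities are measured empirically and are lumpier), but the mechanism is the same: each cone's response is the whole spectrum weighted by that cone's sensitivity curve.

```python
import numpy as np

wl = np.arange(400, 701, 5)  # visible wavelengths in nm, sampled every 5 nm

def cone(center, width):
    """A hypothetical cone sensitivity: a Gaussian bump, normalized to unit area."""
    g = np.exp(-0.5 * ((wl - center) / width) ** 2)
    return g / (g.sum() * 5.0)

# Rough stand-ins near the real S/M/L peak wavelengths (~445, ~545, ~565 nm)
S, M, L = cone(445, 20), cone(545, 35), cone(565, 40)

# A crude "blue sky": broad-spectrum light, tilted slightly toward the blue end
sky = np.linspace(1.0, 0.9, wl.size)

# Each cone's response is the integral of (spectrum x sensitivity)
for name, c in [("S", S), ("M", M), ("L", L)]:
    print(name, round((c * sky).sum() * 5.0, 3))
# Prints S slightly above M, and M slightly above L: the "bright light blue" triple
```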
A digital camera has sensors that mimic those of our eyes; the better cameras do so quite closely. A camera taking a photo of our example's patch of sky would record the color as intensities of red, green, and blue: something like 100% blue, 99.6% green, and 99.4% red. If you were to view that photo on your computer monitor, three tiny dots (one red, one green, one blue; look at your screen with a magnifying glass and you can see them) would each light up brightly, with the blue one slightly brighter than the others. Our eyes see that mix of three narrow wavelength bands as nearly identical to the broad range of wavelengths in the real blue sky, and we're successfully tricked into seeing the same color.
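That trick has a name: metamerism. Two physically different spectra that excite the cones identically look identical. Reusing the same invented cone curves and sky spectrum (repeated here so the sketch stands alone), we can solve a small linear system for the intensities of three narrowband primaries, at hypothetical display wavelengths, that produce the same cone responses as the broadband sky:

```python
import numpy as np

wl = np.arange(400, 701, 5)                       # visible wavelengths, nm

def band(center, width):
    """A normalized Gaussian band: used for both toy cones and toy primaries."""
    g = np.exp(-0.5 * ((wl - center) / width) ** 2)
    return g / (g.sum() * 5.0)

S, M, L = band(445, 20), band(545, 35), band(565, 40)  # toy cone sensitivities
sky = np.linspace(1.0, 0.9, wl.size)                   # broadband "sky" spectrum

def respond(spectrum):
    """The (S, M, L) cone responses to a given spectrum."""
    return np.array([(c * spectrum).sum() * 5.0 for c in (S, M, L)])

# Three narrowband primaries at hypothetical display wavelengths (R, G, B)
primaries = [band(610, 10), band(540, 10), band(465, 10)]

# Solve the 3x3 system: which mix of primaries matches the sky's cone responses?
A = np.column_stack([respond(p) for p in primaries])
mix = np.linalg.solve(A, respond(sky))
print(mix)  # primary intensities that reproduce the sky's (S, M, L) triple
```

The mix's spectrum is three narrow spikes, nothing like the sky's smooth curve, yet to the three cones the two are the same.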
A different kind of camera sensor is under development today: one that can record the actual spectrum at every tiny piece ("pixel") of a photo. For scientific purposes, this is incredibly valuable information. It is often possible to identify a particular substance (a specific metal or mineral, say) from the spectrum of light it reflects. Such a camera mounted on a robot space explorer could identify all the minerals it could see on the surface of a planet; on a military vehicle, it could tell the difference between natural objects and camouflaged ones. One could imagine all kinds of interesting things to do with images carrying that much information. For example, software might analyze such images to locate wildlife, so someone might build a pair of binoculars that automatically pointed the wildlife out to you!
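To give a feel for what such software could do, here's a small sketch of the simplest identification scheme: compare each pixel's recorded spectrum against a library of reference spectra and label the pixel with the closest match. Everything in it (the band count, the three "materials," the noise level) is invented for illustration; real systems use large libraries of lab-measured spectra, but matching by spectral similarity, much like the "spectral angle mapper" used in remote sensing, is the same basic idea.

```python
import numpy as np

rng = np.random.default_rng(0)
bands = 64                                    # spectral samples per pixel (invented)

# Hypothetical reference spectra for three materials; a real library would hold
# thousands of lab-measured spectra for minerals, vegetation, paints, and so on
library = {
    "basalt":   np.abs(np.sin(np.linspace(0, 3, bands))),
    "hematite": np.linspace(0.2, 1.0, bands),
    "ice":      np.exp(-np.linspace(0, 2, bands)),
}
names = np.array(list(library))
refs = np.stack(list(library.values()))       # shape (3, bands)

# A fake 4x4 hyperspectral "image": noisy copies of randomly chosen references
truth = rng.integers(0, len(names), size=(4, 4))
image = refs[truth] + rng.normal(0.0, 0.05, size=(4, 4, bands))

# Cosine similarity between every pixel spectrum and every reference spectrum
pix = image / np.linalg.norm(image, axis=-1, keepdims=True)
lib = refs / np.linalg.norm(refs, axis=-1, keepdims=True)
labels = (pix @ lib.T).argmax(axis=-1)        # index of the best-matching material

print(names[labels])                          # a material name for every pixel
print("matches ground truth:", (labels == truth).mean())
```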
Currently there is no display system I'm aware of that can reproduce such an image with its full spectrum of wavelengths. But if such a system existed, a spectral camera's photos could be viewed exactly as the scene originally appeared. Such an image would be indistinguishable from the real thing, even to sensitive scientific instruments.
The next few years of imaging technology promise to be very interesting indeed. Many companies and scientists are working on this, some aimed at consumer cameras and others at more exotic objectives. But anything useful is almost certain to end up in readily purchasable cameras, as this is an intensely competitive field…
I want those binoculars!