Fig. 13.1 Generic block diagram of a Computational Imaging system consisting of light source,
aperture, optics, sensor and processor
the scene and detect an object in the first place. By creating a camera system that sees better, we enable more robust embedded vision systems to be built in the future.
The field of computational imaging comprises more than just basic image processing, where traditional pixel-manipulation techniques such as filtering, color interpolation, image compression, and digital watermarking are applied. Techniques for artistic image effects such as tone mapping, color negatives, and distortion correction are not included because they deal only with direct pixel manipulation. In contrast, computational imaging takes a holistic view of the illumination, optics, sensor, and processing to affect the output image [2].
Next, we will discuss the elements of the computational imaging system (Fig. 13.1). We provide a sampling of current, published approaches, and we note that an exhaustive survey of the entire landscape is beyond the scope of this one chapter. Our intent is to provide sufficient background on the area in order to better appreciate the computational imaging platform, its embedded framework, and the overall design implications.
13.1.1 Illumination
A computational imaging system can take into consideration how a scene is illuminated in order to improve the capture process. For example, with active illumination, coded lighting at different intensities and angles can produce strong features and responses for calculating the BRDF (bidirectional reflectance distribution function) of an object, which can be applied to material classification in a recycling plant so that valuable materials are sorted during processing [3].
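To make the idea concrete, below is a minimal Python sketch, not taken from the cited work, that assumes N grayscale captures of the same scene under known directional lights and fits a simple Lambertian reflectance model per pixel by least squares. A full BRDF estimate would use a richer model and many more coded lighting conditions, but the recovered albedo and normals already provide the kind of per-pixel features a material classifier could consume. The function name fit_lambertian and the synthetic data are hypothetical.

import numpy as np

def fit_lambertian(images, light_dirs):
    # images:     (N, H, W) grayscale intensities captured under N lights
    # light_dirs: (N, 3) unit light directions (known from the coded lighting rig)
    # Returns per-pixel albedo (H, W) and unit surface normals (H, W, 3).
    N, H, W = images.shape
    I = images.reshape(N, -1)                                  # (N, H*W)
    # Lambertian model: I = L @ g, where g = albedo * normal at each pixel.
    g, _, _, _ = np.linalg.lstsq(light_dirs, I, rcond=None)    # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)                         # (H*W,)
    normals = (g / np.maximum(albedo, 1e-8)).T                 # (H*W, 3)
    return albedo.reshape(H, W), normals.reshape(H, W, 3)

# Hypothetical usage with synthetic stand-in data for three coded captures.
light_dirs = np.array([[0.0, 0.0, 1.0],
                       [0.5, 0.0, 0.866],
                       [0.0, 0.5, 0.866]])
images = np.random.default_rng(0).random((3, 64, 64))
albedo, normals = fit_lambertian(images, light_dirs)
print(albedo.shape, normals.shape)   # (64, 64) (64, 64, 3)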
A computational imaging system may also detect the spectral response of an object's material and its emissive properties. For example, ultraviolet lighting is provided