so that a spectral response in the near infrared can be detected based on the material's luminescence properties. Applications such as forensic criminal investigation may utilize such a system to detect fingerprints and bodily fluids left at a crime scene [4].
Structured illumination is also commonly used for geometry recovery. The basic principle involves projecting a narrow band of light onto a 3D object such that the reflected illumination on the surface appears distorted. By analyzing the distortion from various perspectives, the geometric surface shape can be reconstructed. The Kinect camera from Microsoft is one of the first consumer-grade camera systems to use a pattern of projected infrared light points to generate a dense 3D image [5].
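To make the triangulation behind such pattern-projection systems concrete, the Python sketch below inverts a simplified depth-from-shift relation. The geometry is an assumed parallel-axis model, and the function name and all numeric parameters (baseline, focal length, reference depth) are illustrative rather than drawn from any particular device.

```python
import numpy as np

def depth_from_shift(d_px, z0, baseline, focal_px):
    """Triangulate depth from the pixel shift of a projected pattern.

    d_px     : observed shift of the pattern (pixels) relative to a
               reference image captured at known depth z0; the sign
               convention here makes a positive shift mean "closer"
    z0       : depth of the reference plane (metres)
    baseline : projector-to-camera separation (metres)
    focal_px : camera focal length (pixels)

    With disparity D = f*b/z, the shift against the reference is
    d = f*b*(1/z - 1/z0), which inverts to z = 1/(1/z0 + d/(f*b)).
    """
    d_px = np.asarray(d_px, dtype=float)
    return 1.0 / (1.0 / z0 + d_px / (focal_px * baseline))

# Points on the reference plane (d = 0) come back at z0 = 2 m; a shift
# of a few pixels reveals a surface bulging toward the sensor.
print(depth_from_shift([0.0, 2.0, 5.0], z0=2.0, baseline=0.075, focal_px=580.0))
```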
Other computational imaging applications where illumination is controlled include
image deblurring and object relighting.
13.1.2 Optics and Sensors
The optical elements in a camera system are used to channel light onto the image sensor. While the most basic function is to focus light using a lens, other elements can be used to split, block, smear, or divert light according to the physical sensor arrangement. Careful attention is paid to the spectral phenomenology of these optical elements to ensure proper transmission of light.
In coded aperture imaging, for example, a mask is applied so that a controlled amount of light is captured at each location. Example applications include imaging spectroscopy, where a 3D spectral datacube (a three-dimensional data array) is mapped to a 2D focal plane sensor [6]. In coded exposure imaging, the on/off state of the shutter is purposefully manipulated in certain patterns and at a high rate. Example applications include motion deblurring in high-frame-rate processing [7, 8].
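As a rough illustration of why coding the exposure helps, the Python sketch below blurs a 1D signal with both a conventional box shutter and a random binary flutter code, then inverts each blur in the frequency domain. The code pattern, test signal, and regularization constant are all invented for this example; the published flutter codes in [7, 8] are chosen more carefully to maximize invertibility.

```python
import numpy as np

rng = np.random.default_rng(0)

# A sharp 1D signal standing in for one scan line of a moving scene.
n = 256
sharp = np.zeros(n)
sharp[60:70] = 1.0
sharp[120:180] = np.linspace(0.0, 1.0, 60)

# Shutter kernels over a 31-sample exposure: a conventional exposure is
# an all-ones box; a coded exposure chops the shutter open and closed.
box = np.ones(31)
code = rng.integers(0, 2, size=31).astype(float)
code[0] = 1.0  # ensure the shutter opens at least once

def blur(signal, kernel):
    """Circular motion blur: convolve the scene with the shutter kernel."""
    k = np.zeros_like(signal)
    k[:len(kernel)] = kernel / kernel.sum()
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(k)))

def deblur(blurred, kernel, eps=1e-3):
    """Regularized inverse filter (a simple Wiener-style division)."""
    k = np.zeros_like(blurred)
    k[:len(kernel)] = kernel / kernel.sum()
    K = np.fft.fft(k)
    return np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(K) / (np.abs(K) ** 2 + eps)))

# The box kernel's spectrum has near-zeros that destroy information; the
# broadband code preserves all frequencies and inverts far more cleanly.
for name, k in (("box", box), ("coded", code)):
    err = np.abs(deblur(blur(sharp, k), k) - sharp).max()
    print(f"{name:5s} kernel: max reconstruction error = {err:.4f}")
```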
In sparse aperture imaging, an array of lenses or sensors is used to capture light field information of a scene. Each light measurement can be used to calculate distance and phase in order to reconstruct a high-resolution image. For example, in a light-field camera, also known as a plenoptic camera, an array of microlenses is used to simultaneously capture all light field information of a scene. Scene reconstruction is possible by analyzing the corresponding measured light properties of each pixel. Because distance and optical wavefront can be estimated, the camera can, from a single shot, produce multiple images refocused at different distances. In some recently announced light-field cameras [9, 10], a mask-based design is used based on the principle of optical heterodyning, where a printed film is placed close to the image sensor. In others, such as [11], a plenoptic camera system with an n-lens array is proposed for use in smartphones, replacing the conventional camera module and achieving a much thinner overall form factor.
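The refocusing step itself reduces to a shift-and-add over the sub-aperture views, as in the Python sketch below. The array layout and the shift_per_view parameter are assumptions made for the example; real plenoptic pipelines also calibrate the lenslet geometry and interpolate sub-pixel shifts.

```python
import numpy as np

def refocus(subviews, shift_per_view):
    """Synthetic refocusing by shift-and-add (a minimal sketch).

    subviews       : array of shape (U, V, H, W) holding the sub-aperture
                     images extracted from the microlens array
    shift_per_view : pixels of shift per unit of aperture position;
                     varying this value refocuses at different depths

    Each sub-aperture image is translated in proportion to its (u, v)
    position in the aperture, then all views are averaged. Points at the
    chosen depth align across views and stay sharp; everything else blurs.
    """
    U, V, H, W = subviews.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round((u - (U - 1) / 2) * shift_per_view))
            dv = int(round((v - (V - 1) / 2) * shift_per_view))
            out += np.roll(subviews[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Usage: a 5x5 grid of hypothetical 64x64 sub-aperture views; sweeping the
# shift produces a focal stack from a single light-field exposure.
views = np.random.rand(5, 5, 64, 64)
focal_stack = [refocus(views, s) for s in (-2, -1, 0, 1, 2)]
```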
In these examples, the captured images are optically coded, requiring computational decoding to produce new images. More specifically, in computational imaging, light is manipulated and mapped to the sensor to offer fundamentally new ways to produce higher-quality images beyond the capability of a standard lens and image sensor.