charge-coupled device (CCD) or complementary metal-oxide-semiconductor
(CMOS) technologies. Advances in digital imaging quickly matured the field to
extremely high quality, as described below, yet limitations always exist.
The amount of information that can be gathered from an image determines its
quality. A few fundamental properties characterize an image and its quality. At
this point, we refer to images that do not contain any color information, so that
they can be considered gray-level images: each point of the image, or pixel,
holds a value that represents the brightness of that point on the object. This may
be the total integrated intensity over the whole spectral range, or the intensity
as measured through a color filter, a polarizing filter, or another optical element.
We describe here some of the fundamental properties of the imaging detector
itself, including the setting parameters that are crucial for achieving high image
quality:
1. Spatial resolution
Spatial resolution determines the closest distinguishable features in the acquired
object. It depends mainly on the wavelength (λ), the numerical aperture (NA) of
the objective lens through which the image is acquired, the magnification, and the
pixel size of the array detector, usually a CCD or CMOS camera. The latter two
play a major role because they determine the sampling frequency, which must be
sufficiently high to achieve full resolution; a minimal check is sketched below.
Spatial resolution also depends on the signal quality [1].
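To make the sampling requirement concrete, here is a minimal Python sketch (the function names and the numerical values are illustrative assumptions, not taken from the text) that estimates the Rayleigh resolution limit and checks whether the effective pixel size at the sample plane satisfies Nyquist sampling:

```python
def rayleigh_limit_um(wavelength_um: float, na: float) -> float:
    """Closest resolvable distance at the sample plane (Rayleigh criterion)."""
    return 0.61 * wavelength_um / na

def nyquist_ok(pixel_um: float, magnification: float,
               wavelength_um: float, na: float) -> bool:
    """True if the pixel size projected onto the sample plane samples the
    optical resolution at or above the Nyquist rate (>= 2 pixels per r)."""
    effective_pixel = pixel_um / magnification   # pixel size at the sample
    return effective_pixel <= rayleigh_limit_um(wavelength_um, na) / 2

# Example: 6.5 um camera pixels, 60x / NA 1.4 objective, 520 nm emission
print(rayleigh_limit_um(0.52, 1.4))    # ~0.227 um resolution limit
print(nyquist_ok(6.5, 60, 0.52, 1.4))  # True: 0.108 um/pixel <= 0.113 um
```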
2. Lowest detectable signal
This term defines the lowest signal in the image that can be detected. It depends
on the quantum efficiency of the detector (the higher the better), the noise level
of the system (the lower the better), the numerical aperture of the optics (the
higher the better), and the quality of the optics. It is especially important in
applications where the number of photons is limited, or where the total acquisition
time available for the measurement is limited (e.g., due to fluorescence quenching
by oxygen or in live-cell imaging).
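As a hedged illustration (the quantum efficiency, read noise, and dark-current figures below are assumed example values, not specifications from the text), the detectability of a weak signal can be reasoned about through the signal-to-noise ratio of a single pixel:

```python
import math

def snr(photons: float, qe: float, read_noise_e: float,
        dark_e_per_s: float, exposure_s: float) -> float:
    """Single-pixel signal-to-noise ratio, assuming Poisson shot noise,
    Poisson dark current, and Gaussian read noise added in quadrature."""
    signal = qe * photons                 # detected photoelectrons
    dark = dark_e_per_s * exposure_s      # accumulated dark electrons
    noise = math.sqrt(signal + dark + read_noise_e ** 2)
    return signal / noise

# Example: 100 incident photons, QE = 0.7, 2 e- read noise,
# 0.1 e-/s dark current, 0.1 s exposure
print(snr(100, 0.7, 2.0, 0.1, 0.1))  # ~8.1, comfortably detectable
```

Higher quantum efficiency raises the numerator while lower system noise shrinks the denominator, matching the "higher the better" and "lower the better" rules above.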
3. Dynamic range
Dynamic range specifies the number of different intensity levels that can be
detected in an image. For a CCD or CMOS camera, it depends on the maximal
possible number of electrons at each pixel and on the lowest detectable signal
(in practice, it is the ratio of these two values). If, however, the measured signal
is low, so that the associated pixel well is only partially filled, the dynamic range
is limited accordingly. For example, if a CCD well is filled to only 10% of its
maximum capacity, the dynamic range is reduced to 10% of its nominal value.
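The ratio described above can be computed directly; the full-well and read-noise figures below are assumed values typical of a scientific CCD, not numbers from the text:

```python
import math

def dynamic_range(full_well_e: float, noise_floor_e: float,
                  fill_fraction: float = 1.0) -> float:
    """Usable dynamic range: the largest storable signal (scaled by how
    full the well actually is) divided by the noise floor."""
    return (full_well_e * fill_fraction) / noise_floor_e

full_well, read_noise = 30000.0, 6.0   # assumed example figures
dr = dynamic_range(full_well, read_noise)
print(dr, 20 * math.log10(dr))         # 5000:1, ~74 dB
# Well filled to only 10% of capacity: dynamic range drops tenfold
print(dynamic_range(full_well, read_noise, fill_fraction=0.1))  # 500:1
```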
4. Field of view (FOV)
The field of view determines the size of the object that can be viewed under given
conditions, such as the magnification or the settings of the system's fore-optics.
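A rough sketch of the object-side field-of-view calculation (the sensor size and magnification below are assumed example values):

```python
def field_of_view_mm(sensor_mm: float, magnification: float) -> float:
    """Object-side field of view: sensor dimension divided by total magnification."""
    return sensor_mm / magnification

# Example: a 13.3 mm sensor dimension behind a 60x objective
print(field_of_view_mm(13.3, 60))  # ~0.22 mm of the sample spans the sensor
```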
5. Exposure time range
The exposure time range is usually determined by the detector and its electronic
control. For applications that involve moving objects, a short exposure time is
required to avoid motion blur, as illustrated in the sketch below.
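As one common rule of thumb (an assumption for illustration, not a criterion stated in the text), the exposure can be bounded so that a moving object smears by less than one effective pixel during the acquisition:

```python
def max_exposure_s(object_speed_um_per_s: float, pixel_um: float,
                   magnification: float) -> float:
    """Longest exposure for which a moving object smears by less than
    one effective pixel at the sample plane."""
    effective_pixel = pixel_um / magnification
    return effective_pixel / object_speed_um_per_s

# Example: a cell moving at 1 um/s imaged with 6.5 um pixels at 60x
print(max_exposure_s(1.0, 6.5, 60))  # ~0.11 s before one-pixel blur
```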