and reflective properties, as well as the sensors themselves,
through their spectral and temporal sensitivity. It is well known that acquiring an image entails the loss of one spatial dimension (the distance to the camera). A closer look at the geometry of image formation reveals that the natural framework for analyzing the projection is that of projective geometry rather than that of Euclidean geometry. Thus, the scene can be reconstructed up to a 3D projective transformation.
The principle of optical systems with passive markers is, therefore, to determine the spatial coordinates in real space of a point (the center of a marker) from its two-dimensional coordinates in the image planes of at least two cameras, which requires prior calibration of the cameras.
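To make this principle concrete, the sketch below reconstructs the 3D position of a marker center from its coordinates in two calibrated cameras by linear (DLT) triangulation. The projection matrices, image coordinates, and function name are illustrative assumptions rather than values from the text.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # P1, P2: 3x4 projection matrices obtained from prior camera calibration.
    # x1, x2: (u, v) coordinates of the marker center in each image plane.
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector associated
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to obtain the 3D point

# Hypothetical setup: a reference camera and a second camera shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
x1 = np.array([0.10, 0.05])   # marker center seen by camera 1
x2 = np.array([0.05, 0.05])   # marker center seen by camera 2
print(triangulate(P1, P2, x1, x2))   # -> approximately [0.4, 0.2, 4.0]
```

With more than two cameras, the same construction simply stacks two rows per view, which is how the added redundancy improves the reconstruction.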
In the image plane of a camera, the markers are
visualized, by appropriate thresholding, in the form of
distinct spots. In digital cameras, which are the most
common, an image corresponds to a map of pixels. Subpixel
operators are, therefore, frequently used to increase the
spatial resolution of the camera's physical sensor and
precisely locate the center of each marker in the image
plane, taking into account the shape of the marker (a
spherical marker projects into the image plane in the form of
a circle). Currently, for the majority of systems, these operators allow a resolution of 1/20th to 1/50th of a pixel to be reached.
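A minimal sketch of such a subpixel operator is given below, assuming a simple intensity-weighted centroid of the thresholded spot (real systems refine this further, for example by fitting the expected circular spot shape); the image values and threshold are hypothetical.

```python
import numpy as np

def subpixel_center(image, threshold):
    # Keep only the pixels of the spot that exceed the threshold.
    rows, cols = np.nonzero(image >= threshold)
    weights = image[rows, cols].astype(float)
    # Intensity-weighted centroid: yields coordinates between pixel centers.
    v = np.sum(rows * weights) / np.sum(weights)
    u = np.sum(cols * weights) / np.sum(weights)
    return u, v

# Hypothetical 5x5 spot whose true center lies slightly off the pixel grid.
spot = np.array([
    [0, 10, 20, 10, 0],
    [10, 60, 90, 60, 10],
    [20, 90, 120, 95, 20],
    [10, 60, 95, 65, 10],
    [0, 10, 20, 10, 0],
], dtype=float)
print(subpixel_center(spot, threshold=50.0))   # non-integer (u, v) center
```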
Cameras are composed of lenses, filters, electronic
circuits, etc. A complete model would require the description
of all these components and their contributions to the
formation of images. We will focus on the geometric aspects
only, that is to say, how a point of the scene is projected onto
a pixel in the image. The most frequently used camera model is very simple: the "sténopé" model (or pinhole model), based on the geometric properties of thin lenses. Under this model, a point M in the 3D scene is projected along a straight line through the optical center onto the image plane.
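As an illustration of this projection, the sketch below applies the pinhole model with an intrinsic matrix K and a rigid world-to-camera transform (R, t); the numerical values and the function name are hypothetical, not taken from the text.

```python
import numpy as np

def pinhole_project(M, K, R, t):
    # Express the scene point M in the camera frame, then project it.
    m_cam = R @ M + t
    m_hom = K @ m_cam
    # Perspective division: the depth (distance to the camera) is lost here.
    return m_hom[0] / m_hom[2], m_hom[1] / m_hom[2]

# Hypothetical calibration: 1000-pixel focal length, principal point (320, 240).
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
M = np.array([0.1, 0.05, 2.0])   # a scene point 2 m in front of the camera
print(pinhole_project(M, K, R, t))   # -> (370.0, 265.0)
```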