15.2.1.2. Reconstruction of 3D Trajectories. Once the
images of the tracers are recorded, the goal is to recon-
struct the 3D trajectories of as many particles as possible.
This operation requires three steps:
1. Particle Detection. Each image (at each time t) of
each camera is analyzed to determine the position of the
center of each visible particle. This step yields maps
of the 2D positions of the particle centers on each
frame of each camera.
2. 3D matching. The second step consists of combining,
at each time t, the 2D maps of particle centers from
the N cameras in order to reconstruct (by stereomatching)
the 3D positions of the particle centers with the highest
possible accuracy.
3. Lagrangian tracking. Finally, once the 3D positions
of the particles are known at all time steps, an appropriate
tracking algorithm reconnects them into trajectories.
We briefly describe the key points of these steps
in the following paragraphs. Further details and useful
information can be found in the work of Ouellette et al.
[2005].
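As a minimal illustration of step 3, the frame-to-frame linking can be sketched with a greedy nearest-neighbor scheme. This is a deliberate simplification (the function name, the displacement threshold, and the greedy matching are illustrative assumptions); practical trackers such as the predictive algorithm of Ouellette et al. [2005] are more elaborate:

```python
import numpy as np

def track_nearest_neighbor(frames, max_disp=1.0):
    """Greedy nearest-neighbor linking of 3D positions into trajectories.

    frames   : list of (N_t, 3) arrays of particle centers, one per time step.
    max_disp : maximum allowed displacement between consecutive frames.
    Returns a list of trajectories, each a list of 3D positions.
    """
    trajectories = [[p] for p in frames[0]]
    active = list(range(len(trajectories)))   # indices of still-growing tracks
    for pts in frames[1:]:
        taken = set()
        still_active = []
        for ti in active:
            last = trajectories[ti][-1]
            d = np.linalg.norm(pts - last, axis=1)
            for j in taken:                   # already-claimed candidates
                d[j] = np.inf
            j = int(np.argmin(d)) if len(d) else -1
            if j >= 0 and d[j] <= max_disp:
                trajectories[ti].append(pts[j])
                taken.add(j)
                still_active.append(ti)
        # unmatched particles start new trajectories
        for j, p in enumerate(pts):
            if j not in taken:
                trajectories.append([p])
                still_active.append(len(trajectories) - 1)
        active = still_active
    return trajectories
```

A real implementation would also handle occlusions (particles lost for a few frames) and use a predicted position rather than the last observed one.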
Figure 15.3. Optical tracking: 3D matching procedure.
3D Matching. While the detection of particles can be
performed in the image space of each camera, 3D positioning
and Lagrangian tracking must be performed in real space
(which is common to all cameras). The most widely used
method to define the transformation, for each camera,
between image space (in pixels) and real space (in real units)
is based on the calibration method developed by Tsai [1987].
Each camera (say camera i) is represented by a projection
model defined by an optical axis Δi, an optical center Oi,
and a projection plane Pi.
The image of a particle X on the sensor of camera i is then
simply given by the intersection of the line OiX with the
plane Pi (see Figure 15.3). The model is generally defined
by at least nine parameters for each camera: six external
parameters for the absolute position of each camera (three
coordinates for Oi and three angles for the orientation
of the optical axis Δi) and three internal parameters (the
distance OiPi, a coefficient for geometric aberrations,
and the aspect ratio of the pixels). Refinements of this
basic model can be considered, for instance by including
several aberration coefficients (transverse and longitudinal).
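The projection model just described can be sketched as a simplified pinhole camera. This is an illustrative reduction of Tsai's model: aberration is ignored and the names and parameterization below are assumptions, not the published calibration:

```python
import numpy as np

def project(X, R, T, f, aspect=1.0):
    """Project a 3D point X (world frame) onto a camera's image plane.

    Simplified pinhole model in the spirit of Tsai's calibration:
    R, T   : rotation matrix and translation vector (the six external
             parameters: camera position and orientation),
    f      : distance from the optical center Oi to the projection plane Pi,
    aspect : pixel aspect ratio.  Geometric aberration is ignored here.
    Returns the 2D image-plane coordinates (same units as f).
    """
    Xc = R @ X + T              # world -> camera coordinates
    x = f * Xc[0] / Xc[2]       # perspective division along the optical axis
    y = f * Xc[1] / Xc[2] * aspect
    return np.array([x, y])
```

Calibration amounts to adjusting R, T, f (and any aberration coefficients) so that projections of the mask's known points match their measured images.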
The parameters of the model are determined from the
images of a calibration mask with known geometric prop-
erties. Once the parameters of the model for each camera
are determined, the 3D matching is performed as follows
(see Figure 15.3): Take the center of a particle xi as pre-
viously determined in pixels on the projection plane
of one of the cameras; the real position Xi of the parti-
cle in real space then lies somewhere along the line of view
Oixi. The intersection of such lines of view from two (or
more) cameras defines the absolute 3D position of the
particle in real space. In theory two cameras are suffi-
cient to determine this intersection. In practice, however,
the lines of view rarely intersect, owing to slight imprecision
in the calibration of the Tsai model. The 3D position is
then defined as the point in real space that minimizes
the distance to the different lines of view. Whenever a
camera is added to the system, the redundancy of infor-
mation provided by the additional line of view further
constrains the possible 3D position of the particle. This
greatly improves the effective spatial resolution of the 3D
system. Ouellette et al. have shown that using three cam-
eras instead of two gives an effective resolution of the
order of one-tenth of an "equivalent pixel" (that is to say, one-
tenth of the spatial dimension whose image is the size of
one pixel).
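The point minimizing the summed squared distance to the lines of view has a closed-form least-squares solution. A minimal sketch, assuming the calibrated optical centers and viewing directions are already available (names are illustrative):

```python
import numpy as np

def intersect_lines(origins, directions):
    """Least-squares 'intersection' of N lines of view in 3D.

    Line i passes through origins[i] (the optical center Oi) with
    direction directions[i].  Returns the point minimizing the sum of
    squared distances to all lines, i.e. the matched 3D particle position.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for O, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
        A += P
        b += P @ np.asarray(O, dtype=float)
    return np.linalg.solve(A, b)
```

With two cameras this reduces to the midpoint of the shortest segment joining the two lines; each additional camera adds another projector to A and b, which is why extra views tighten the solution.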
Particle Detection. Ouellette et al. [2005] have tested dif-
ferent algorithms for the detection of particle centers in
2D images. The choice of the best algorithm is a compro-
mise between computation time and quality of the detec-
tion. The latter is quantified both by the accuracy with
which the position of the center of the particles is deter-
mined and by the number of particles correctly detected. The
first step is to identify the local intensity maxima in
the image, each indicating the presence of a particle. Then, the
image around each maximum is analyzed to determine,
as accurately as possible, the location of the center of the particle.
For small particles (as generally used to seed the flow with
tracers), the image of each individual particle does not
exceed a few pixels. Under these conditions, simple algo-
rithms based on the center of mass of intensity around the
maximum are not sufficiently accurate. Algorithms based
on neural networks can be very accurate, especially when
images are very noisy, but they are relatively slow. A good
compromise consists of fitting the local intensity profile with
two 1D Gaussians (one vertical and one horizontal), whose
maxima define the center of the particle. Two 1D Gaussian
fits are preferred to a single 2D Gaussian fit because they are
computationally significantly more efficient for almost the
same accuracy. Ouellette et al. have shown that this method
is typically capable of detecting 95% of the particles and
determining their positions with subpixel accuracy.
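The 1D Gaussian fit admits a classical closed-form three-point estimator, exact when the intensity profile is truly Gaussian. A minimal sketch (the function name and interface are illustrative, not from the original work):

```python
import numpy as np

def subpixel_center(img, i, j):
    """Refine an integer-pixel intensity maximum (i, j) to subpixel accuracy.

    Fits a 1D Gaussian through three samples in each direction using the
    three-point log estimator: for intensities I(-1), I(0), I(+1) around
    the maximum, the Gaussian peak lies at an offset
        dx = 0.5 * (ln I(-1) - ln I(+1)) / (ln I(-1) - 2 ln I(0) + ln I(+1)).
    Returns the refined (row, column) center.
    """
    ln = np.log
    c = ln(img[i, j])
    dx = 0.5 * (ln(img[i, j - 1]) - ln(img[i, j + 1])) / (
        ln(img[i, j - 1]) - 2 * c + ln(img[i, j + 1]))
    dy = 0.5 * (ln(img[i - 1, j]) - ln(img[i + 1, j])) / (
        ln(img[i - 1, j]) - 2 * c + ln(img[i + 1, j]))
    return i + dy, j + dx
```

Applying the horizontal and vertical estimators separately is what makes the two-1D-fit approach so much cheaper than a full 2D nonlinear fit.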