Figure 4.36 Detecting motion by differencing: (a) difference image D; (b) first image; (c) second image
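Differencing as in Figure 4.36 can be sketched in a few lines. This is a minimal illustration, not the book's implementation: the frames are assumed to be greyscale NumPy arrays, and the threshold value is illustrative.

```python
import numpy as np

def difference_image(frame1, frame2, threshold=25):
    """Detect motion by differencing two greyscale frames.

    Pixels whose absolute intensity change exceeds the threshold
    are marked as moving (255); all others are set to 0.
    """
    # Cast to a signed type so the subtraction cannot wrap around
    d = np.abs(frame1.astype(np.int16) - frame2.astype(np.int16))
    return np.where(d > threshold, 255, 0).astype(np.uint8)

# Two synthetic 8x8 frames: a bright square moves one pixel to the right
f1 = np.zeros((8, 8), dtype=np.uint8)
f2 = np.zeros((8, 8), dtype=np.uint8)
f1[2:5, 2:5] = 200
f2[2:5, 3:6] = 200
D = difference_image(f1, f2)
```

Only the leading and trailing edges of the moving square appear in D; the overlapping interior cancels out, which is why differencing detects motion but not the full moving region.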
4.7.1
Area-based approach
When a scene is captured at different times, 3D elements are mapped into corresponding
pixels in the images. Thus, if image features are not occluded, they can be related to each
other and motion can be characterised as a collection of displacements in the image plane.
The displacement corresponds to the projected movement of the objects in the scene and is
referred to as the optical flow. Given an image and its optical flow, you should be able to
construct the next frame in the image sequence. Optical flow is thus a measure of velocity:
the movement in pixels per unit of time, or more simply pixels per frame.
Optical flow can be found by looking for corresponding features in images. We can consider
alternative features such as points, pixels, curves or complex descriptions of objects.
The problem of finding correspondences in images has motivated the development of
many techniques that can be distinguished by the features, the constraints imposed and by
the optimisation or searching strategy (Dhond, 1989). When features are pixels, the
correspondence can be found by observing the similarities between intensities in image
regions (local neighbourhood). This approach is known as area-based matching and it is
one of the most common techniques used in computer vision (Barnard, 1987). In general,
pixels in non-occluded regions can be related to each other by means of a general
transformation of the form

P(t + 1)_{x+δx, y+δy} = P(t)_{x,y} + H(t)_{x,y}    (4.70)
where the function H ( t ) x,y compensates for intensity differences between the images, and
(δx, δy) defines the displacement vector of the pixel at time t + 1. That is, the intensity of
the pixel in the frame at time t + 1 is equal to the intensity of the pixel in the position
( x , y ) in the previous frame plus some small change due to physical factors and temporal
differences that induce the photometric changes in images. These factors can be due, for
example, to shadows, specular reflections, differences in illumination or changes in observation
angles. In a general case, it is extremely difficult to account for the photometric differences,
thus the model in Equation 4.70 is generally simplified by assuming that
1. the brightness of a point in an image is constant; and
2. neighbouring points move with similar velocity.
According to the first assumption, we have that H(t)_{x,y} ≈ 0. Thus,

P(t + 1)_{x+δx, y+δy} = P(t)_{x,y}    (4.71)
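Under the brightness-constancy model of Equation 4.71, the displacement (δx, δy) can be estimated by area-based matching: for each pixel, a small intensity window in the first frame is compared against shifted windows in the second frame, and the best-matching shift is taken as the displacement. The following is a minimal sketch under that assumption; the window and search sizes are illustrative, and the sum of squared differences is used as the similarity measure.

```python
import numpy as np

def block_match(prev, curr, x, y, win=2, search=3):
    """Estimate the displacement (dx, dy) of the pixel at (x, y) by
    comparing a (2*win+1)^2 window in `prev` against shifted windows
    in `curr`, using the sum of squared differences (SSD)."""
    ref = prev[y - win:y + win + 1, x - win:x + win + 1].astype(np.float64)
    best, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy - win:y + dy + win + 1,
                        x + dx - win:x + dx + win + 1].astype(np.float64)
            ssd = np.sum((ref - cand) ** 2)
            if ssd < best:
                best, best_d = ssd, (dx, dy)
    return best_d  # displacement in pixels per frame

# A textured patch shifted by (2, 1) between frames
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (32, 32)).astype(np.uint8)
curr = np.zeros_like(prev)
curr[1:, 2:] = prev[:-1, :-2]  # shift down by 1 row, right by 2 columns
dx, dy = block_match(prev, curr, 16, 16)
```

Note that this works because the random texture makes the match unambiguous; in uniform regions many shifts give equally low SSD, which is one reason the second assumption (neighbouring points move with similar velocity) is needed in practice.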