pixels closest to the median. This will provide both a mean and variance per pixel.
Such approaches assume that each pixel is covered by objects less than half the time
in the training period.
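The per-pixel mean and variance idea can be sketched as follows. This is a minimal illustration, not code from the text: the function names, the threshold parameter k, and the grayscale input format are all assumptions.

```python
import numpy as np

def train_background(frames):
    """Estimate a per-pixel mean and variance from a stack of training
    frames of shape (T, H, W). Assumes each pixel shows the true
    background most of the time during training."""
    stack = np.asarray(frames, dtype=np.float64)
    return stack.mean(axis=0), stack.var(axis=0)

def subtract_background(frame, mean, var, k=2.5):
    """Classify pixels deviating more than k standard deviations from
    the per-pixel mean as object (foreground) pixels."""
    std = np.sqrt(var) + 1e-6  # avoid division by zero for constant pixels
    return np.abs(np.asarray(frame, dtype=np.float64) - mean) > k * std
```

A pixel that stays near its training-time mean is kept as background; a large deviation relative to the learnt variance marks it as an object pixel.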
Another problem, especially apparent when processing outdoor video, is that a
pixel may cover more than one background. Say we have a background pixel from
a gray road. Imagine now that the wind sometimes blows so that a leaf covers
the same pixel. This results in two very different backgrounds for this pixel: a
greenish color and a grayish color. If we compute the mean for this pixel we will
end up with something between green and gray, with a huge variance. This will
result in poor segmentation of this pixel during background subtraction. A better
approach is therefore to define two different background models for this pixel: one
for the leaf and one for the road; see [12, 18] for specific examples and [9] for a
general discussion.
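The multi-model idea can be sketched per pixel in the spirit of the mixture-of-Gaussians approaches referenced above. All parameter names and values here are illustrative assumptions, not details from [12, 18]:

```python
import numpy as np

class MultiModalPixel:
    """Minimal sketch of a per-pixel background model with several modes,
    e.g. one for the gray road and one for the green leaf."""

    def __init__(self, n_modes=2, init_var=100.0, alpha=0.05):
        self.means = np.zeros(n_modes)
        self.vars = np.full(n_modes, init_var)
        self.weights = np.zeros(n_modes)  # support counts for each mode
        self.alpha = alpha                # learning rate (assumed value)

    def update(self, value, k=2.5):
        """Return True if `value` matches a known background mode."""
        d = np.abs(value - self.means)
        m = int(np.argmin(d))
        if d[m] <= k * np.sqrt(self.vars[m]):
            # matched: refine this mode with a running average
            self.means[m] += self.alpha * (value - self.means[m])
            self.vars[m] += self.alpha * ((value - self.means[m]) ** 2
                                          - self.vars[m])
            self.weights[m] += 1
            return True
        # no match: replace the least-supported mode with a new one
        w = int(np.argmin(self.weights))
        self.means[w], self.vars[w], self.weights[w] = value, 100.0, 1
        return False
```

After seeing both the road and the leaf a few times, the pixel is classified as background whenever its value is close to either mode, instead of being compared against a single mean that lies uselessly between the two.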
Yet another problem in outdoor video is shadows due to strong sunlight. Such
shadow pixels can easily appear different from the learnt background model and
hence be incorrectly classified as object pixels. Different approaches can be fol-
lowed in order to avoid such misclassifications. First of all, a background pixel in
shadow tends to have the same color as when not in shadow, only darker. A more
detailed version of this idea is based on the notion that a pixel in shadow is often
not exposed to direct sunlight, but rather illuminated by the sky. Since the sky
tends to be bluish, the color of a background pixel in shadow can be expected to
be more bluish too. Secondly, one can group neighboring object pixels together
and analyze the layout of the edges within that region. If that layout is similar to
the layout of the edges in the background model, then the region is likely to be a
shadow rather than an object. For more information please refer to [6, 15].
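The first cue (darker, slightly more bluish than the background) can be sketched as a per-pixel test. The threshold values and the RGB channel order are assumptions chosen for illustration; they are not taken from [6, 15]:

```python
import numpy as np

def is_shadow(pixel, background, dark_low=0.4, dark_high=0.9):
    """Heuristic shadow test for one RGB pixel against its background model.

    A shadow pixel is assumed to be an attenuated version of the background
    (average intensity ratio between dark_low and dark_high), with the blue
    channel attenuated the least because shadowed areas are lit by the
    bluish sky rather than direct sunlight."""
    pixel = np.asarray(pixel, dtype=np.float64)
    background = np.asarray(background, dtype=np.float64) + 1e-6
    ratio = pixel / background              # per-channel attenuation
    darker = dark_low <= ratio.mean() <= dark_high
    bluer = ratio[2] >= ratio[:2].mean()    # blue dims least (RGB order)
    return bool(darker and bluer)
```

Pixels passing this test can be reclassified from object to background before further processing, e.g. before grouping object pixels into regions for the edge-layout test.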
8.6 Exercises
Exercise 1: Explain the following concepts: framerate, compression, background
subtraction, local vs. global thresholding, image differencing, ghost object.
Exercise 2: What is the compression factor of the following sequence of pixels if
we apply entropy coding? 14, 14, 14, 7, 14, 14, 14, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
4, 4, 4, 4.
Exercise 3: A camera has a framerate of 125 Hz. How many images does the cam-
era capture per minute?
Exercise 4: A camera captures a new image every 125 ms. What is the framerate
of the camera?
Exercise 5: A function is defined as y = abs(x − 1). Draw this function for
x ∈ [−10, 10].
Exercise 6: The reference image r(x,y) in background subtraction is updated
gradually with a weight (α) of 0.9. At one point in time a pixel at position (50,50)
in the reference image has the value 100, that is, r(50,50) = 100. In the next five
images we have: f(50,50) = 10, f(50,50) = 12, f(50,50) = 12, f(50,50) = 14,
f(50,50) = 15. What is r(50,50) after these five frames?