BLOB Analysis (Introduction to Video and Image Processing) Part 1

Before describing what is meant by the somewhat strange title of this topic, let us look at a few examples. In the first example the task is to design an algorithm which can figure out how many circles are present in the image in Fig. 7.1(a). Obviously the answer is three, but how do we make the computer figure this out? Another example could be to find the position of the person in the image in Fig. 7.1(b). How can we make the computer calculate this? The answer to both questions is twofold. First we have to separate the different objects in the image and then we have to evaluate which object is the one we are looking for, i.e., circles and humans, respectively. The former process is known as BLOB extraction and the latter as BLOB classification. BLOB stands for Binary Large OBject and refers to a group of connected pixels in a binary image. The term “Large” indicates that only objects of a certain size are of interest and that “small” binary objects are usually noise.

The title of the topic refers to analyzing binary images by first extracting the BLOBs, then representing them compactly, and finally classifying the type of each BLOB. These three topics are described in more detail below.

BLOB Extraction

The purpose of BLOB extraction is to isolate the BLOBs (objects) in a binary image. As mentioned above, a BLOB consists of a group of connected pixels. Whether or not two pixels are connected is defined by the connectivity, that is, which pixels are neighbors and which are not. The two most often applied types of connectivity are illustrated in Fig. 7.2. The 8-connectivity is more accurate than the 4-connectivity, but the 4-connectivity is often applied since it requires fewer computations, hence it can process the image faster. The effect of the two different types of connectivity is illustrated in Fig. 7.2 where the binary images contain either one or two BLOBs depending on the connectivity.
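In code, the two types of connectivity can be represented simply as lists of neighbor offsets. A minimal sketch in Python (the (x, y) offset convention is an assumption, not from the text):

```python
# Neighbor offsets (dx, dy) defining which pixels count as connected.
# 4-connectivity: only the horizontal and vertical neighbors.
CONNECTIVITY_4 = [(1, 0), (0, 1), (-1, 0), (0, -1)]

# 8-connectivity: the 4-neighbors plus the four diagonal neighbors.
CONNECTIVITY_8 = CONNECTIVITY_4 + [(1, 1), (1, -1), (-1, 1), (-1, -1)]
```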


A number of different algorithms exist for finding the BLOBs and such algorithms are usually referred to as connected component analysis or connected component labeling. In the following we describe one of these algorithms known as the Grass-fire algorithm. We use 4-connectivity for simplicity.


Fig. 7.1 (a) A binary image containing different shapes. (b) A binary image containing a human and some noise


Fig. 7.2 4- and 8-connectivity. The effect of applying the two different types of connectivity

The Recursive Grass-Fire Algorithm

The algorithm starts in the upper-left corner of the binary image. It then scans the entire image from left to right and from top to bottom, as seen in Fig. 4.28.

At some point during the scan an object pixel (white pixel) is encountered and the notion of grass-fire comes into play. In the binary image in Fig. 7.3 the first object pixel is found at the coordinate (2, 0). At this point you should imagine yourself standing in a field covered with dry grass. Imagine you have four arms (!) and are holding a burning match in each hand. You then stretch out your arms in four different directions (corresponding to the neighbors in the 4-connectivity) and simultaneously drop the burning matches. When they hit the dry grass they will each start a fire which again will spread in four new directions (up, down, left, right) etc. The result is that every single straw which is connected to your initial position will burn. This is the grass-fire principle. Note that if the grass field contains a river the grass on the other side will not be burned.

Returning to our binary image, the object pixels are the “dry grass” and the non-object pixels are the water. So, the algorithm looks in four different directions and if it finds a pixel which can be “burned”, meaning an object pixel, it does two things.


Fig. 7.3 The grass-fire algorithm. The “big” numbers indicate the order in which the neighbors are visited. The small numbers indicate the label of a pixel

Firstly, in the output image it gives this pixel an object label (basically a number) and secondly it “burns” the pixel in the input image by setting it to zero (black). Setting it to zero indicates that it has been burned and will therefore not be part of yet another fire. In the real grass field the fire will spread simultaneously in all directions. In the computer, however, we can only perform one action at a time, and the grass-fire is therefore performed as follows.

Let us apply the principle on Fig. 7.3. The pixel at the coordinate (2, 0) is labeled 1, since it is the first BLOB, and then burned (marked by a 1 in the lower right corner). Next the algorithm tries to start a fire at the first neighbor (3, 0), by checking if it is an object pixel or not. It is indeed an object pixel and is therefore labeled 1 (same object) and “burned”. Since (3, 0) is an object pixel, it now becomes the center of attention and its first neighbor (4, 0) is investigated. Again, this is an object pixel and is therefore labeled 1, “burned” and made center of attention. The first neighbor of (4, 0) is outside the image and therefore by definition not an object pixel. The algorithm therefore investigates its second neighbor (4, 1). This is not an object pixel and the third neighbor of (4, 0) is therefore investigated (3, 0). This has been burned and is therefore no longer an object pixel. Then the last neighbor of (4, 0) is investigated (4, -1).


Fig. 7.4 Two examples of extracted BLOBs. Each BLOB has a unique color

This is outside the image and therefore not an object pixel. All the neighbors of (4, 0) have now been investigated and the algorithm therefore traces back and looks at the second neighbor of (3, 0), namely (3,1). This is an object pixel and is therefore labeled 1, burned and becomes the new focus of attention. In this way the algorithm also finds (3,2) to be part of object 1 and finally ends by investigating the fourth neighbor of (2, 0).

All pixels which are part of the top object have now been labeled with the same label 1, meaning that this BLOB has been segmented. The algorithm then moves on following the scan path in Fig. 4.28 until it meets the next object pixel (1, 3), which is then labeled 2, and starts a new grass-fire. The result will be the image shown in Fig. 7.3, where each BLOB has a unique label. In Fig. 7.4 the BLOBs from Fig. 7.1 have been extracted and color coded according to their BLOB label.

The algorithm can be implemented very efficiently by a function calling itself. Such an algorithm is said to be recursive, and care should be taken to ensure that the program terminates properly, as recursive algorithms have no built-in termination strategy. Another danger is that a computer has a limited amount of memory allocated to function calls, and since the grass-fire algorithm can have many thousands of function calls queued up, the computer can run out of allocated memory.
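To make the principle concrete, here is a minimal recursive sketch in Python. It assumes the image is a NumPy array where object pixels are non-zero and the background is zero; the function and variable names are illustrative, not the book's code, and for large BLOBs it will hit exactly the recursion-depth problem described above.

```python
import numpy as np

# 4-connectivity neighbor offsets (dx, dy)
CONNECTIVITY_4 = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def recursive_grass_fire(img, connectivity=CONNECTIVITY_4):
    """Label connected components using the recursive grass-fire principle.

    img: 2D array, object pixels > 0, background = 0.
    Returns a label image of the same size where each BLOB gets a unique label.
    """
    img = img.copy()                      # the input is "burned", so work on a copy
    labels = np.zeros(img.shape, dtype=int)
    height, width = img.shape

    def burn(x, y, label):
        # Outside the image or not an (unburned) object pixel: nothing to do.
        if not (0 <= x < width and 0 <= y < height) or img[y, x] == 0:
            return
        labels[y, x] = label              # give the pixel its BLOB label
        img[y, x] = 0                     # burn it so it is never visited again
        for dx, dy in connectivity:       # spread the fire to the neighbors
            burn(x + dx, y + dy, label)

    current_label = 0
    for y in range(height):               # scan top-to-bottom, left-to-right
        for x in range(width):
            if img[y, x] > 0:             # unburned object pixel found: new BLOB
                current_label += 1
                burn(x, y, current_label)
    return labels
```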

The Sequential Grass-Fire Algorithm

The grass-fire algorithm can also be implemented in a sequential manner. This is less efficient from a programming point of view, but it does not suffer from the problems mentioned above for the recursive grass-fire algorithm.

The sequential grass-fire algorithm also scans the image from the top left to the bottom right. When an object pixel is found it does two things. Firstly, in the output image it gives this pixel an object label, 1, and secondly it “burns” the pixel in the input image by setting it to zero (black). The next step is to check the neighbors (four or eight pixels depending on the connectivity) and see if any of them are object pixels. So far this is exactly the same as the recursive grass-fire algorithm, but now comes the difference. Any neighbors that are object pixels are labeled 1 in the output image and set to zero (burned) in the input image. They are also placed in a list. The next step is to take the first pixel in the list and check its neighbors. If any are object pixels they are labeled in the output, set to zero in the input and placed in the list. This is continued until all pixels in the list have been investigated. The algorithm then continues following the scan path in Fig. 4.28 until it meets the next object pixel, which is then labeled 2, and starts a new grass-fire.
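A sketch of this list-based version is shown below, under the same assumptions about the image representation as in the recursive sketch; here the list of pixels still to be checked is kept as a FIFO queue, which is one possible choice.

```python
from collections import deque
import numpy as np

CONNECTIVITY_4 = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def sequential_grass_fire(img, connectivity=CONNECTIVITY_4):
    """Sequential (list-based) grass-fire labeling.

    Neighbors still to be checked are stored in a queue instead of on the
    call stack, so the algorithm does not risk exhausting the stack memory.
    """
    img = img.copy()
    labels = np.zeros(img.shape, dtype=int)
    height, width = img.shape
    current_label = 0

    for y in range(height):
        for x in range(width):
            if img[y, x] == 0:
                continue
            current_label += 1            # new BLOB found
            labels[y, x] = current_label
            img[y, x] = 0                 # burn the seed pixel
            pixel_list = deque([(x, y)])
            while pixel_list:             # keep going until the list is empty
                cx, cy = pixel_list.popleft()
                for dx, dy in connectivity:
                    nx, ny = cx + dx, cy + dy
                    if 0 <= nx < width and 0 <= ny < height and img[ny, nx] > 0:
                        labels[ny, nx] = current_label
                        img[ny, nx] = 0
                        pixel_list.append((nx, ny))
    return labels
```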

BLOB Features

Extracting the BLOBs is the first step when confronted with examples like those presented in Fig. 7.1. The next step is now to classify the different BLOBs. For the example with the circles, we want to classify each BLOB as either a circle or not a circle, and, for the other example, human vs. non-human BLOBs. The classification process consists of two steps. First, each BLOB is represented by a number of characteristics, denoted features, and second, some matching method is applied to compare the features of each BLOB with the features of the type of object we are looking for. For example, to find circles we could calculate the circularity of each BLOB and compare that to the circularity of a perfect circle. Below we will present how we can extract different features and in the next section then show how to compare features.

Feature extraction is a matter of converting each BLOB into a few representative numbers. That is, keep the relevant information and ignore the rest. But before calculating any features we first want to exclude every BLOB which is connected to the border of the image. The reason is that we in general have no information about the part of the object outside the image. For example, the semi-circle to the right of the human in Fig. 7.1 might look like a circle, but it might as well be the top of the head of a person lying down! Therefore, exclude all such BLOBs.

Having done so, a number of features can be calculated for each BLOB. Here follows a description of the most common features, but many others exist and new ones can be defined.

Area of a BLOB is the number of pixels the BLOB consists of. This feature is often used to remove BLOBs that are too small or too big from the image. For example, in Fig. 7.1(b) the human can be segmented by simply saying that all BLOBs with an area smaller than a certain value are ignored.

Bounding box of a BLOB is the minimum rectangle which contains the BLOB, see Fig. 7.5. It is defined by going through all pixels for a BLOB and finding the four pixels with the minimum x-value, maximum x-value, minimum y-value and maximum y-value, respectively. From these values the width of the bounding box is given as xmax − xmin and the height as ymax − ymin. A bounding box can be used as a ROI.
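Assuming a label image like the one produced by the grass-fire sketches above, the area and the bounding box of one BLOB follow directly from its pixel coordinates. A small sketch (the function name and return layout are my own choices):

```python
import numpy as np

def area_and_bounding_box(labels, blob_label):
    """Area and bounding box of one BLOB in a label image.

    Returns (area, (xmin, ymin, xmax, ymax), width, height).
    """
    ys, xs = np.nonzero(labels == blob_label)   # coordinates of the BLOB pixels
    area = xs.size                              # area = number of pixels
    xmin, xmax = int(xs.min()), int(xs.max())
    ymin, ymax = int(ys.min()), int(ys.max())
    width = xmax - xmin                         # width of the bounding box
    height = ymax - ymin                        # height of the bounding box
    return area, (xmin, ymin, xmax, ymax), width, height
```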

Bounding circle of a BLOB is the minimum circle which contains the BLOB, see Fig. 7.5. It is found by first locating the center of the BLOB with one of the methods described below. Next we search from the center and outwards in one direction until we find the point where the BLOB ends. The distance between this point and the center is the radius in this direction. We do this for all possible directions (for example with an angular resolution of 10°) and the biggest radius defines the radius of the minimum circle.
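The radius search can be sketched as follows. The BLOB is assumed to be given as a boolean mask and the center as an (xc, yc) estimate from one of the center features described later; the 10° angular resolution comes from the text, while the step length of one pixel is an assumption.

```python
import numpy as np

def bounding_circle_radius(mask, center, angle_step_deg=10):
    """Estimate the bounding-circle radius of a BLOB.

    mask: boolean 2D array, True for BLOB pixels.
    center: (xc, yc) estimate of the BLOB center.
    For each direction we step outwards until the BLOB ends; the largest
    distance found over all directions is the radius of the bounding circle.
    """
    height, width = mask.shape
    xc, yc = center
    radii = []
    for angle in np.arange(0.0, 360.0, angle_step_deg):
        dx, dy = np.cos(np.radians(angle)), np.sin(np.radians(angle))
        r = 0
        while True:
            x = int(round(xc + r * dx))
            y = int(round(yc + r * dy))
            if not (0 <= x < width and 0 <= y < height) or not mask[y, x]:
                break                     # we have stepped off the BLOB
            r += 1
        radii.append(r)
    return max(radii)
```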


Fig. 7.5 (a) Bounding box. (b) Bounding circle. (c) Convex hull

Convex hull of a BLOB is the minimum convex polygon which contains the BLOB, see Fig. 7.5. It corresponds to placing a rubber band around the BLOB. It can be found in the following manner. From the topmost pixel on the BLOB search to the right along a horizontal line. If no BLOB pixel is found, increase (clockwise) the angle of the search line and repeat the search. When a BLOB pixel is found the first line of the polygon is defined and a new search is started based on the angle of the previous search line. When the search reappears at the topmost pixel, the convex hull is completed. Note that morphology can also be applied to find the convex hull of a BLOB.
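The rotating search described above is essentially the classical gift-wrapping (Jarvis march) idea. The sketch below is one possible realization operating directly on the BLOB's pixel coordinates; it is not the book's implementation, and the helper names are my own.

```python
def convex_hull(points):
    """Gift-wrapping (Jarvis march) convex hull of a list of (x, y) points."""
    points = [tuple(p) for p in points]
    if len(points) < 3:
        return points

    def cross(a, b, c):
        # > 0: c lies counter-clockwise of the line a->b; < 0: clockwise; 0: collinear
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    start = min(points)                   # leftmost (and lowest) point is on the hull
    hull = [start]
    current = start
    while True:
        # Rotate to the most clockwise point seen so far (ties broken by distance).
        candidate = points[0] if points[0] != current else points[1]
        for p in points:
            if p == current:
                continue
            c = cross(current, candidate, p)
            if c < 0 or (c == 0 and dist2(current, p) > dist2(current, candidate)):
                candidate = p
        if candidate == start:
            break                         # the rubber band is back at the start
        hull.append(candidate)
        current = candidate
    return hull
```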

Bounding box ratio of a BLOB is defined as the height of the bounding box divided by the width. This feature indicates the elongation of the BLOB, i.e., whether the BLOB is tall, wide or neither.

Compactness of a BLOB is defined as the ratio of the BLOB’s area to the area of the bounding box. This can be used to distinguish compact BLOBs from non-compact ones. For example, a fist vs. a hand with outstretched fingers.

$$\text{Compactness} = \frac{\text{Area of BLOB}}{\text{width}_{bb} \cdot \text{height}_{bb}}$$
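Both ratio features are one-liners once the area and bounding box are known; a small sketch reusing the values from the bounding-box example above (the names are assumptions):

```python
def bounding_box_ratio(width, height):
    """Height of the bounding box divided by its width (elongation measure)."""
    return height / width

def compactness(area, width, height):
    """Ratio of the BLOB's area to the area of its bounding box.

    Assumes a bounding box with non-zero width and height.
    """
    return area / (width * height)
```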

Center of mass (or center of gravity or centroid) of a physical object is the location on the object where you should place your finger in order to balance the object. The center of mass for a binary image is similar. It is the average x- and y-position of the binary object. It is defined as the point whose x-value is calculated by summing the x-coordinates of all pixels in the BLOB and then dividing by the total number of pixels. Similarly for the y-value. In mathematical terms the center of mass, (xc, yc), is calculated as

$$x_c = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad y_c = \frac{1}{N}\sum_{i=1}^{N} y_i \qquad (7.2)$$

where N is the number of pixels in the BLOB and x_i and y_i are the x- and y-coordinates of the N pixels, respectively. In situations where the BLOB contains “appended parts” the median can replace Eq. 7.2. An example could be if you want to find the center of a person’s torso. The configuration of the arms will affect the result of Eq. 7.2, but the median is less affected. The median is more computationally demanding than the center of mass. An alternative to the median is to erode the BLOB with a large structuring element and then calculate the center of mass.
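A sketch of the centroid and its median-based alternative, again assuming the label-image representation used in the earlier sketches:

```python
import numpy as np

def center_of_mass(labels, blob_label):
    """Centroid (xc, yc) of one BLOB: the mean of its pixel coordinates (Eq. 7.2)."""
    ys, xs = np.nonzero(labels == blob_label)
    return float(xs.mean()), float(ys.mean())

def center_median(labels, blob_label):
    """Median-based center, less affected by 'appended parts' such as arms."""
    ys, xs = np.nonzero(labels == blob_label)
    return float(np.median(xs)), float(np.median(ys))
```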

Center of the bounding box is a fast approximation of the center of mass. In mathematical terms the center of the bounding box, (xbb,ybb) is calculated as

$$x_{bb} = x_{\min} + \frac{x_{\max} - x_{\min}}{2}, \qquad y_{bb} = y_{\min} + \frac{y_{\max} - y_{\min}}{2}$$

Perimeter of a BLOB is the length of the contour of the BLOB. This can be found by scanning along the rim (contour) of an object and summing the number of pixels encountered. A simple approximation of the perimeter is to first find the outer boundary using the method from Sect. 6.3.4 (or another edge detection algorithm). Following this we simply count the number of white pixels in the image.
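A sketch of the contour-pixel count; here the BLOB is assumed to be a boolean mask, and a pixel is counted as contour if at least one of its 4-neighbors is background, which is one way of realizing the approximation described above.

```python
import numpy as np

def perimeter(mask):
    """Approximate perimeter: number of BLOB pixels that touch the background.

    mask: boolean 2D array, True for BLOB pixels. The mask is padded so that
    pixels on the image border also count as touching the background.
    """
    padded = np.pad(mask, 1, constant_values=False)
    inner = padded[1:-1, 1:-1]
    # A contour pixel has at least one background pixel among its 4-neighbors.
    background_neighbor = (~padded[:-2, 1:-1] | ~padded[2:, 1:-1] |
                           ~padded[1:-1, :-2] | ~padded[1:-1, 2:])
    return int(np.count_nonzero(inner & background_neighbor))
```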

Circularity of a BLOB defines how circular a BLOB is. Different definitions exist based on the perimeter and area of the BLOB. Heywood’s circularity factor is, for example, defined as the ratio of the BLOB’s perimeter to the perimeter of the circle with the same area:

$$\text{Circularity} = \frac{\text{Perimeter of BLOB}}{\text{Perimeter of circle with same area}} = \frac{\text{Perimeter}}{2\sqrt{\pi \cdot \text{Area}}}$$

A different way of calculating the circularity is to find the different radii as described for the bounding circle. The variance of the radii gives an estimate of the circularity. The smaller the variance the more circular the BLOB is.
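Both circularity estimates can be sketched in a few lines, reusing the perimeter, area and radii obtained with the earlier sketches. The denominator 2·sqrt(π·Area) follows from the fact that a circle with area A has radius sqrt(A/π) and hence perimeter 2·sqrt(π·A).

```python
import numpy as np

def heywood_circularity(perimeter, area):
    """Heywood's factor: BLOB perimeter over the perimeter of the equal-area circle.

    A perfect circle gives 1.0; less circular BLOBs give larger values.
    """
    return perimeter / (2.0 * np.sqrt(np.pi * area))

def radius_variance_circularity(radii):
    """Alternative estimate: variance of the radii measured in different directions.

    The smaller the variance, the more circular the BLOB.
    """
    return float(np.var(radii))
```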

In Fig. 7.6 two of the feature values are illustrated for the BLOBs in Fig. 7.4 (left). So after extraction of features a binary image has been converted into a number of feature values for each BLOB. The feature values can be collected in a so-called feature vector. For the BLOBs in Fig. 7.6, the feature vector for BLOB number one is

$$\vec{f}_1 = \begin{bmatrix} \text{Circularity}_1 \\ \text{Area}_1 \end{bmatrix}$$

Since we have seven BLOBs, we will also have seven feature vectors:

$$\vec{f}_1, \vec{f}_2, \ldots, \vec{f}_7$$

Fig. 7.6 (a) The figure illustrates two features: Bounding Box and Center-of-Mass. (b) The table illustrates two other features: Circularity and Area. Note the order in which the BLOBs are labeled. This is a result of the scan-pattern in Fig. 4.28
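Putting the pieces together, the feature vectors for all BLOBs in a label image can be collected as sketched below; the helper functions are the ones introduced in the earlier sketches and are assumptions about naming, not the book's code.

```python
import numpy as np

def extract_feature_vectors(labels):
    """Return one [circularity, area] feature vector per BLOB in a label image."""
    feature_vectors = []
    for blob_label in range(1, int(labels.max()) + 1):
        mask = labels == blob_label
        area, _, _, _ = area_and_bounding_box(labels, blob_label)
        circ = heywood_circularity(perimeter(mask), area)
        feature_vectors.append(np.array([circ, area], dtype=float))
    return feature_vectors
```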


Fig. 7.7 2D feature space and position of the seven BLOBs from Fig. 7.6(a). The “x” represents the feature values of the prototype. The dashed rectangle: box classifier. Red circle: Euclidean distance classifier. Blue ellipse: weighted Euclidean distance classifier
