Estimation Techniques For The Fractal Dimension In Gray-Scale Images (Biomedical Image Analysis)

In Section 10.2, estimation methods for the fractal dimension were introduced that work on sets. To obtain a set, the image must be segmented, that is, divided into the feature (set) and background. The notion of fractal dimension can be extended to gray-scale images. In this case, additional information from the image (the gray-value information) is retained. Whereas binary images allow quantitative analysis of the shape of a feature, the methods that operate on gray-scale images focus more on the texture. As demonstrated in Figure 10.8, the gray value (the image value) of a pixel can be interpreted as elevation. The image can be thought of as a mountainscape, with white pixels being the points with the highest elevation and dark regions being the valleys. In this interpretation, the image becomes a surface embedded in three-dimensional space, much as a jagged line (such as the Koch curve) is embedded in two-dimensional space. Correspondingly, the resulting value of the fractal dimension can be expected to lie between two and three.

Blanket Dimension

The most widely applied estimator of the fractal dimension in gray-scale images is a variation of the box-counting dimension often referred to as the blanket dimension. The surface of the landscape is covered tightly with a blanket, and the surface area of this blanket is computed. Each pixel forms a triangle with two adjoining pixels, and since each pixel has its own elevation, the area of each triangle differs. The sum of all triangular areas is the total surface area. In the next iterative step, four adjoining pixels are averaged to form a single pixel, so that the triangles in the averaged image have twice the base length. Since the averaging process has a smoothing effect, the averaged surface is less jagged. For an irregular surface, the total triangle area of the averaged image is therefore smaller than that of the original image, although the size of the individual triangles has increased. This process is illustrated in Figures 10.17 and 10.18. Figure 10.17 shows a cross-sectional CT image of segmented trabecular bone and the corresponding three-dimensional landscape representation. In Figure 10.18, neighboring pixels have been averaged to form larger blocks of size 1 (original resolution), 2, 4, and 8. The resulting surface becomes less jagged with a larger block size, and the surface area decreases correspondingly. Analogous to Equation (10.7), the scaling law of the binary box-counting dimension, the surface area A relates to the box size s through a scaling law with noninteger exponent H:


$$A(s) \propto s^{H}$$

The surface dimension Ds is related to H through Ds = 2 - H [Equation (10.9)].

The slope H is obtained through linear regression of the log-transformed data pairs, that is, the surface area Ak and the corresponding size of the averaging box sk for each averaging step k. The log-transformed data pairs corresponding to the surface plots in Figure 10.18 and the regression line are shown in Figure 10.19. It can be seen that the scaling law does not hold uniformly over the box sizes analyzed, so the scaling behavior needs to be examined carefully. Several factors may account for a change of the slope over different scales. Noise in medical images makes its own contribution to the apparent dimension at small scales, because noise exists at the pixel level; normally, noise increases the apparent fractal dimension at very small scales. Conversely, if the image resolution is higher than the scale of the image details, a reduction in the apparent fractal dimension at small scales is seen, and Figure 10.19 is an example of this behavior. The loss of self-similarity at small scales can be demonstrated with interpolated images: because interpolation does not provide new information, the fractal dimension is smaller at the interpolated scales. A careful choice of the scales at which the image is analyzed is therefore essential for an accurate estimate of the fractal dimension. In addition, segmented objects with a jagged outline may contribute an additional element of self-similarity, because the outline itself influences the surface area. An advanced algorithm to compute the blanket dimension would therefore exclude any boxes that are not filled completely with pixels belonging to the feature.
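As a minimal sketch of this regression step, assuming the log-transformed data pairs are already available as arrays, the following Python function fits the slope H and returns Ds = 2 - H; the optional mask use is introduced here only to illustrate restricting the fit to the scales over which the scaling law actually holds:

```python
import numpy as np

def blanket_dimension_from_fit(ls, lnA, use=None):
    """Estimate the blanket dimension Ds = 2 - H from log-log data pairs.

    ls  : array of log(box size s_k)
    lnA : array of log(surface area A_k)
    use : optional boolean mask selecting the scales included in the fit,
          e.g. to exclude the smallest scales where noise or limited
          resolution distorts the apparent dimension.
    """
    ls, lnA = np.asarray(ls, dtype=float), np.asarray(lnA, dtype=float)
    if use is not None:
        ls, lnA = ls[use], lnA[use]
    H, _intercept = np.polyfit(ls, lnA, 1)   # slope H of the regression line
    return 2.0 - H
```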


FIGURE 10.17 Elevation landscape representation of the segmented spongy area of a vertebral CT image. The left panel shows one cross-sectional CT slice through the vertebra with the spongy area highlighted. The right image is the corresponding elevation map. The white arrow indicates the view direction.


FIGURE 10.18 Four steps in the process of calculating the blanket dimension. Starting with the top left image, the landscape is drawn with one pixel per box (s = 1), whereas in subsequent iterations 4 pixels (s = 2), 16 pixels (s = 4), and 64 pixels (s = 8) are averaged. As a consequence, the elevation landscape becomes less jagged and the surface area decreases.


FIGURE 10.19 Double-logarithmic plot of the surface area as a function of the averaged box size, corresponding to s = 1 through s = 4 in Figure 10.18. The slope of the fitted line is -0.22, corresponding to a blanket dimension of 2.22. The line shows a poor fit, revealing that the surface area does not scale with the same factor at small box sizes.

Two principles are available for computing the blanket dimension. The first uses the tessellation of the surface into triangles of increasing projected size s; this is the principle applied in Figure 10.18 and Algorithm 10.4. Similar results are obtained when the pixels are interpreted as elongated cuboids with a projected size of s x s pixels in the xy-plane and a height corresponding to the image value. The exposed surface area of the cuboid sides is limited by the height of the neighboring pixels, that is, only the portion rising above a neighbor's height contributes. Similar to Algorithm 10.4, the total surface area is computed and its scaling behavior with increasing cuboid size is determined. This notion is often termed the Manhattan dimension because of the resemblance of the pixel cuboids to Manhattan skyscrapers (a sketch of this cuboid-based surface computation follows the triangle-area formula below). Note that Algorithm 10.4 requires the definition of a function that computes the area of a triangle given by its three vertices (triangle_area). If the vertex coordinates are designated x_i, y_i, and z_i with 1 ≤ i ≤ 3 (and passed to the function in this order), the area A can be computed:

$$A = \frac{1}{2}\,\bigl\|(\mathbf{p}_2 - \mathbf{p}_1) \times (\mathbf{p}_3 - \mathbf{p}_1)\bigr\|, \qquad \mathbf{p}_i = (x_i, y_i, z_i),\; i = 1, 2, 3$$
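One possible implementation of triangle_area, a minimal sketch based on the cross-product form of this formula, is:

```python
import numpy as np

def triangle_area(p1, p2, p3):
    """Area of a triangle in 3-D space, given its three vertices.

    Each vertex is an (x, y, z) triple; the area is half the magnitude of
    the cross product of two edge vectors.
    """
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    return 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))
```

The cuboid ("Manhattan") interpretation described above can be sketched in the same spirit. This version sums the top faces of all cuboids and the side faces exposed by height differences between neighboring cuboids; for simplicity, it ignores the faces along the outer image border:

```python
def manhattan_surface_area(img, s=1):
    """Surface area of the image interpreted as cuboids with an s x s footprint.

    Top faces contribute s*s per pixel; side faces contribute s times the
    absolute height difference between horizontally and vertically adjacent
    pixels. Faces along the outer image border are ignored in this sketch.
    """
    img = np.asarray(img, dtype=float)
    top = img.size * s * s
    sides = s * (np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum())
    return top + sides
```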


Algorithm 10.4 Blanket dimension. In this basic version, the input image IM(x,y) with the pixel dimensions xm and ym is assumed to be completely filled by the feature, so that the feature boundary cannot introduce multifractal elements. The output consists of two corresponding tables of log box size ls() and log surface area lnA(). The blanket dimension Ds is computed by linear regression: with H being the slope of the regression line of lnA over ls, the dimension is Ds = 2 - H. This algorithm relies on a function that computes the area of a triangle given by its three vertices (triangle_area).
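Since the listing itself is not reproduced here, the following Python sketch follows the description in the caption: the image is treated as an elevation map, every square cell of four neighboring pixels contributes two triangles to the blanket surface, and a 2 x 2 block averaging doubles the box size between iterations. It relies on the triangle_area helper sketched above; the number of averaging steps is an assumed parameter.

```python
import numpy as np

def average_blocks(img):
    """Halve the resolution by averaging non-overlapping 2 x 2 pixel blocks."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def blanket_surface_area(img, s):
    """Total area of the triangulated elevation surface for box size s."""
    area = 0.0
    for y in range(img.shape[0] - 1):
        for x in range(img.shape[1] - 1):
            # corner elevations of one cell, split into two triangles
            p00 = (x * s, y * s, img[y, x])
            p10 = ((x + 1) * s, y * s, img[y, x + 1])
            p01 = (x * s, (y + 1) * s, img[y + 1, x])
            p11 = ((x + 1) * s, (y + 1) * s, img[y + 1, x + 1])
            area += triangle_area(p00, p10, p01) + triangle_area(p10, p11, p01)
    return area

def blanket_dimension(img, n_steps=4):
    """Return log box sizes ls, log areas lnA, and the blanket dimension Ds."""
    img = np.asarray(img, dtype=float)
    ls, lnA, s = [], [], 1
    for _ in range(n_steps):
        ls.append(np.log(s))
        lnA.append(np.log(blanket_surface_area(img, s)))
        img = average_blocks(img)     # smooth the surface ...
        s *= 2                        # ... and double the box size
    H, _ = np.polyfit(ls, lnA, 1)     # slope H of lnA over ls
    return np.array(ls), np.array(lnA), 2.0 - H
```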

Gray-Scale Generalization of the Minkowsky and Mass Dimensions

The definition of morphological dilation, which is the basis for the Minkowsky dimension, allows its application to gray-scale images. In gray-scale dilation, the center pixel is replaced by the maximum value of its 3 x 3 neighborhood. To determine the scaling behavior, the image mean value is calculated and then recalculated after each iterative dilation. The image mean value increases with each dilation, but the increase becomes less pronounced because of the loss of detail associated with each dilation. If a scaling behavior analogous to Equation (10.10) can be observed, with the number of pixels n_p,k now substituted by the image mean value, the measured exponent H can be used to compute the gray-scale Minkowsky dimension through DM = 2 - H. In an analogous definition of the mass enclosed by a circle of radius r, the number of pixels in Equation (10.11) needs to be substituted by the sum of the image intensity values. Algorithms 10.2 and 10.3 can be generalized to operate on gray-scale images with minimal changes. One advantage of the generalized algorithms is the elimination of the segmentation step. However, the gray-scale versions of the algorithms report self-similar properties of the texture, whereas the binary versions report self-similar properties of the shape of the feature. Therefore, each of the two versions has its specific applications.
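As an illustration, the following Python sketch estimates the gray-scale Minkowsky dimension under the assumption that the analog of Equation (10.10) relates the image mean value after k dilations to k through a power law, so that H is the slope of log(mean) over log(k). The 3 x 3 maximum operation of scipy's grey_dilation serves as the gray-scale dilation described above, and the number of dilations is a freely chosen parameter:

```python
import numpy as np
from scipy.ndimage import grey_dilation

def grayscale_minkowski_dimension(img, n_dilations=8):
    """Sketch of a gray-scale Minkowsky dimension estimate.

    Assumes the scaling law has the form mean_k ~ k**H, where mean_k is the
    image mean value after k gray-scale dilations with a 3 x 3 neighborhood,
    and returns D_M = 2 - H.
    """
    img = np.asarray(img, dtype=float)
    log_k, log_mean = [], []
    for k in range(1, n_dilations + 1):
        img = grey_dilation(img, size=(3, 3))   # replace each pixel by its 3 x 3 maximum
        log_k.append(np.log(k))
        log_mean.append(np.log(img.mean()))
    H, _ = np.polyfit(log_k, log_mean, 1)
    return 2.0 - H
```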
