Current Trends in Biomedical Imaging

Once computers started to play an instrumental role in image formation in modalities such as CT and MRI, the next step was indeed a small one: to use the same computers for image enhancement. Operations such as contrast enhancement, sharpening, and noise reduction became integrated functions in the imaging software. A solid body of image processing operations has been developed over the last 30 years, and many of these operations provide the foundation for today’s advanced image processing and analysis operations. A continuous long-term goal, however, remains: to use computers to aid a radiologist in diagnosing a disease. Much progress toward this goal has been made. As mentioned above, established computer-aided imaging methods to determine bone density, and therefore to indicate the degree of osteoporosis, are in clinical use. CT scans can be used to find colon polyps and help diagnose colon cancer. Optical coherence tomography has rapidly been established in ophthalmology to diagnose retinal diseases. Yet there are many more areas where increasing computing power combined with more elaborate computational methods holds some promise of helping a radiologist with the diagnosis, but where the trained observer still proves superior to computerized image analysis. An example is computerized mammography, where considerable progress has been made but no single method has entered mainstream medical practice. With the vision of computer-aided radiology, in which computers provide an objective analysis of images and assume tedious parts of the image evaluation, advanced biomedical image analysis is, and will remain for a long time, an area of intense research activity.


Progress in image analysis is aided by a dramatic increase in the memory and processing power of today’s computers. Personal computers with several gigabytes of memory are common, and hard disks have reached beyond the terabyte limit. Storing and processing a three-dimensional image of 512 × 512 × 512 bytes is possible with almost any off-the-shelf personal computer. This processing power benefits not only computerized image analysis but also image acquisition. Volumetric imaging modalities generate images of unsurpassed resolution, contrast, and signal-to-noise ratio. Acquisition speed has also improved, giving rise to real-time imaging that allows motion measurements, for example, of the heart muscle.54,89
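
To put these figures in perspective, the storage arithmetic is straightforward; the short sketch below (plain Python, with illustrative bit depths) shows that even a 512 × 512 × 512 volume occupies only a few hundred megabytes.

```python
# Rough storage arithmetic for a volumetric image (illustrative values only).

def volume_megabytes(nx, ny, nz, bytes_per_voxel):
    """Storage requirement of an nx x ny x nz volume in megabytes."""
    return nx * ny * nz * bytes_per_voxel / (1024 ** 2)

for bytes_per_voxel, label in [(1, "8-bit"), (2, "16-bit"), (4, "32-bit float")]:
    size = volume_megabytes(512, 512, 512, bytes_per_voxel)
    print(f"512^3 volume, {label}: {size:.0f} MB")
# Output: 128 MB, 256 MB, and 512 MB -- well within reach of an off-the-shelf PC.
```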

As the availability of tomographic scanners increases, multimodality imaging becomes more popular. Clinicians and researchers aim to obtain as much information on the tissue under examination as possible. Most commonly, one imaging modality that provides a high-resolution image of the tissue or organ (normally CT or MRI) is combined with a functional imaging modality, such as PET.51 Multimodality imaging is particularly popular in cancer diagnosis and treatment, because it is possible to place a radioactive label on tumor-specific antibodies. PET and, to a lesser extent, SPECT are used to image antibody uptake by the tumor, whereas the exact localization of the tumor is found by combining the PET or SPECT image with MRI or CT (see pertinent reviews33,41,53,55). Multimodality imaging produces two or more images, generally with different resolution and often with different patient positioning between images. The different images need to be matched spatially, a process called image registration. Dual-modality imaging devices are available (e.g., a combined PET/CT scanner), but software registration is most often used, and new registration techniques are an active field of research.71
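
As a rough illustration of intensity-based registration, the sketch below aligns two single slices by maximizing their mutual information over a pure translation. The variable names (ct_slice, pet_slice) are hypothetical, the two arrays are assumed to be resampled to the same grid, and a clinical workflow would use a full rigid or deformable transform and a dedicated registration toolkit.

```python
import numpy as np
from scipy import ndimage, optimize

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero]))

def register_translation(fixed, moving):
    """Find the (dy, dx) shift that best aligns 'moving' to 'fixed'."""
    def cost(shift):
        shifted = ndimage.shift(moving, shift, order=1, mode="nearest")
        return -mutual_information(fixed, shifted)   # maximize MI
    result = optimize.minimize(cost, x0=[0.0, 0.0], method="Powell")
    return result.x

# Hypothetical usage: align a PET slice (moving) to a CT slice (fixed).
# shift = register_translation(ct_slice, pet_slice)
# registered = ndimage.shift(pet_slice, shift, order=1)
```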

Another recent field of study is optical imaging, more specifically tomographic imaging with visible or near-infrared light. A prerequisite for this development was the introduction of new light sources (specifically, lasers) and new mathematical models to describe photon propagation in diffusive tissues.12,23 Unlike the x-rays used in CT, visible light does not travel along a straight path in tissue, because of the high scattering coefficient of tissue and because of tissue regions with different refractive indices. Optical coherence tomography (OCT)39 is often considered the optical equivalent of ultrasound imaging because the image is composed of A-mode scans. OCT has a low penetration depth of a few millimeters, but it provides good spatial resolution. Optical coherence tomography has found wide application in dermatology and ophthalmology (see reviews22,38,63,68,84), but its poor signal-to-noise ratio calls for advanced image enhancement methods. Optical transillumination tomography, the optical equivalent of CT, faces major challenges because of refractive index changes along the light rays. Progress has been made in using optical transillumination tomography to image bone and soft tissues,74,87 but spatial resolution and contrast remain limited. Attempts have been made to correct the refractive index mismatch in software30 and to reject scattered photons,11,34 but major improvements are needed before this modality enters medical practice. The third major optical tomography method is diffuse optical tomography.27 Its main challenge is the mathematical modeling of light-wave propagation, which is a prerequisite for image reconstruction.16 Presently, diffuse optical tomography requires substantial improvements in spatial resolution and signal-to-noise ratio before it becomes applicable in biomedical imaging. These challenges notwithstanding, optical imaging methods are the subject of strong research efforts because they promise fast and radiation-free image acquisition with relatively inexpensive instrumentation.
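
To give a flavor of the modeling problem in diffuse optical tomography, the sketch below solves a toy version of the steady-state diffusion approximation, D∇²Φ − μaΦ = −S, on a 2D grid by Jacobi iteration. The optical parameters and the zero-fluence boundary condition are illustrative assumptions only; practical forward models use finite-element solvers, realistic boundary conditions, and time- or frequency-domain formulations.

```python
import numpy as np

def diffuse_fluence(source, D=0.03, mu_a=0.01, h=0.1, iterations=5000):
    """
    Toy forward model for diffuse optical tomography: solve the steady-state
    diffusion approximation  D * laplacian(phi) - mu_a * phi = -source
    on a 2D grid with zero fluence at the boundary (Jacobi iteration).
    D: diffusion coefficient [cm], mu_a: absorption [1/cm], h: grid spacing [cm]
    (illustrative values, not tissue-specific constants).
    """
    phi = np.zeros_like(source, dtype=float)
    for _ in range(iterations):
        neighbors = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                     np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = (D / h**2 * neighbors + source) / (4 * D / h**2 + mu_a)
        phi[0, :] = phi[-1, :] = phi[:, 0] = phi[:, -1] = 0.0   # boundary
    return phi

# Example: point source in the middle of a 64 x 64 tissue slab.
src = np.zeros((64, 64))
src[32, 32] = 1.0
fluence = diffuse_fluence(src)
```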

A special focus of imaging and image analysis is the brain. Brain imaging studies have been driven in part by the availability of MRI, which does not expose study subjects to ionizing radiation, and in part by functional imaging techniques, which make it possible to localize areas of brain activity.40,83 Another important subject is the development of anatomical brain atlases, which allow mapping of images with high interindividual variability onto known anatomical models.3,56 Although our understanding of the brain is still rudimentary, biomedical imaging has helped enormously to find the loci of brain activity, to understand cognitive functions, and to link images to disease (see pertinent articles and reviews5,7,46,52,66).

On the general image processing side, new methods and operators of higher complexity also tend to be more application- and modality-specific. Three recent articles highlight the challenges: Masutani et al.50 review image modalities and image processing methods specifically for the diagnosis and treatment of liver diseases; Hangartner31 demonstrates how a key step in segmentation, threshold selection, affects the quantitative determination of density and geometry in CT images; and Sinha and Sinha70 present MRI-based imaging techniques for breast lesions. All three examples have in common that their methods and conclusions cannot readily be translated into other modalities or applications. The main reason for this very common phenomenon is the inability of computers to understand an image in the same way that a human observer does. A computer typically examines a limited pixel neighborhood and attempts to work its way up toward more global image features. Conversely, a human observer examines the entire scene and discovers features in the scene in a top-down approach. The problem of image understanding, that is, allowing computers to recognize parts of an image much as a human observer does, has been approached with algorithms that involve learning8 and, more recently, with a top-down analysis of statistical properties of the scene layout57 and with imitation of the human visual system through genetic algorithms.88 Image understanding is not limited to biomedical imaging but also affects the related fields of computer vision and robotics and is therefore another area of intensive research.
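
The bottom-up nature of conventional processing is easy to illustrate: in the minimal sketch below, every decision is derived from a 3 × 3 pixel neighborhood (Sobel gradients) and then pooled by a single global threshold, with no notion of what the scene actually contains. The example is purely illustrative and not taken from the cited work.

```python
import numpy as np
from scipy import ndimage

def bottom_up_edges(image, threshold=0.2):
    """
    Minimal bottom-up pipeline: each pixel is judged from a small neighborhood
    (3x3 Sobel gradients), and the results are aggregated into a global edge
    map by a single, context-free threshold.
    """
    img = np.asarray(image, dtype=float)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-12      # normalize to [0, 1]
    return magnitude > threshold              # no scene-level knowledge used
```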

Related to image understanding is the problem of image segmentation. Meaningful unsupervised image segmentation requires a certain degree of image understanding by the computer. The scope of most image segmentation algorithms is limited to special cases (e.g., where the object of interest differs in intensity from the background). The main reason is the extreme variability of medical images, which makes it difficult to provide a consistent definition of successful segmentation. Learning algorithms, artificial neural networks, and rule-based systems are examples of state-of-the-art approaches to segmentation.17,90 More recently, new methods to compare segmentation algorithms objectively have been proposed,81 and a database with benchmark segmentation problems has been created.49 These examples illustrate the ongoing search for a more unified segmentation paradigm.
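
For the special case mentioned above, where object and background form two separable intensity populations, a classic solution is Otsu's threshold; the self-contained sketch below implements it directly in NumPy. It is meant only to illustrate how narrow this assumption is, not as a general-purpose medical segmentation method.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """
    Otsu's method: pick the threshold that maximizes the between-class variance
    of the gray-value histogram. It presumes exactly two intensity populations
    (object and background), i.e., the special case discussed above.
    """
    hist, edges = np.histogram(np.asarray(image).ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # weight of the "background" class
    w1 = 1.0 - w0                           # weight of the "object" class
    cum_mean = np.cumsum(p * centers)
    m0 = cum_mean / np.maximum(w0, 1e-12)                    # background mean
    m1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)   # object mean
    between_var = w0 * np.maximum(w1, 0.0) * (m0 - m1) ** 2
    return centers[np.argmax(between_var)]

# mask = image > otsu_threshold(image)   # binary object/background segmentation
```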

The development of new filters is another area of research. Early spatial- and frequency-domain filters used fixed filter parameters. Subsequently, filter parameters became dependent on the local properties of the image. These filters are called adaptive filters. Many recently developed filters are tuned toward specific modalities, with examples such as a noise filter for charge-coupled-device (CCD) cameras,21 an adaptive filter to remove noise in color images,47 a speckle reduction filter for optical coherence tomography,58 and a fuzzy filter for the measurement of blood flow in phase-contrast MRI.76 Novel filters are highly sought after because a good filter can often make an otherwise impossible segmentation or quantification task feasible.
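
As a small, generic example of the adaptive-filter idea (not one of the cited filters), the sketch below implements a Lee-type local-statistics filter for additive noise: the strength of the smoothing at each pixel is set by the ratio of estimated signal variance to total local variance, so flat regions are averaged heavily while edges are largely preserved.

```python
import numpy as np
from scipy import ndimage

def adaptive_lee_filter(image, size=5, noise_var=None):
    """
    Lee-type adaptive filter for additive noise. The filter parameter (the
    blending weight between the local mean and the original pixel) is computed
    from local statistics, which is what makes the filter adaptive.
    """
    img = np.asarray(image, dtype=float)
    local_mean = ndimage.uniform_filter(img, size)
    local_sq_mean = ndimage.uniform_filter(img ** 2, size)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    if noise_var is None:
        noise_var = np.median(local_var)        # crude global noise estimate
    signal_var = np.maximum(local_var - noise_var, 0.0)
    weight = signal_var / np.maximum(local_var, 1e-12)
    return local_mean + weight * (img - local_mean)
```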

A final example of an emerging area in image processing is the use of image data for modeling. The use of finite-element models to predict bone strength80 is a particularly good example, because CT image values depend strongly on mineral content, and bone strength is hypothesized to be closely related to mineral content. However, these models need further improvement before they become useful in clinical practice.4 Image-based computational fluid dynamics can be used to compute blood flow and wall shear stress in arteries that are frequently affected by arteriosclerosis.86 One recent example is a study by Sui et al.75 in which MR images of the carotid artery were used to calculate wall shear stress. Experiments with cell cultures indicate that shear stress gradients enhance cell proliferation and therefore contribute to arteriosclerosis,85 and image-based flow simulations are a suitable tool to further elucidate the disease and perhaps aid in its prediction and early diagnosis.77
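
For orientation only, the relationship between flow and wall shear stress can be written in closed form for the idealized case of fully developed laminar flow in a straight, rigid tube (Poiseuille flow), τw = 4μQ/(πR³). The numbers below are illustrative assumptions, not values from the cited studies, which resolve the full three-dimensional flow field from the image data.

```python
import math

def poiseuille_wall_shear_stress(flow_rate, radius, viscosity=3.5e-3):
    """
    Idealized wall shear stress for fully developed laminar flow in a straight,
    rigid tube: tau_w = 4 * mu * Q / (pi * R^3).
    flow_rate in m^3/s, radius in m, viscosity in Pa*s; result in Pa.
    """
    return 4.0 * viscosity * flow_rate / (math.pi * radius ** 3)

# Illustrative numbers: ~6 mL/s mean flow through a 3 mm lumen radius gives
# a wall shear stress on the order of 1 Pa.
print(f"{poiseuille_wall_shear_stress(flow_rate=6e-6, radius=3e-3):.2f} Pa")
```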
