The history of biomedical imaging is comparatively short. In 1895, Wilhelm Conrad Röntgen discovered a new type of radiation, which he called the x-ray. The discovery caused a revolution in medicine, because for the first time it became possible to see inside the human body without surgery. Use of x-rays in medical centers spread rapidly, but despite their vast popularity, little fundamental progress was made for over half a century. Soon after the discovery of x-rays, materials were found that exhibit visible-light fluorescence when illuminated by x-rays. With such materials, the quantum efficiency of film-based x-ray imaging could be improved and the exposure of patients to radiation thus reduced. Contrast agents were introduced around 1906 to allow imaging of some soft tissues (namely, intestines), which show low x-ray contrast. For about six decades, x-ray tubes, film, and x-ray intensifying materials were improved incrementally, but no fundamental innovation was made.
After World War II, the next important development in biomedical imaging finally arrived—ultrasound imaging. The medical technology was derived from military technology: namely, sonar (sound navigation and ranging), which makes use of sound propagation in water. Applying the same principles to patients, sound echoes made visible on oscilloscope-like cathode-ray screens allowed views into a patient’s body without the use of ionizing radiation. The relative simplicity of creating sound waves and amplifying reflected sound made it possible to generate images with analog electronics—in the early stages with vacuum tubes. Electronic x-ray image intensifiers were a concurrent development. X-ray image intensifiers are electronic devices that are based on a conversion layer that emits electrons upon x-ray exposure.
These electrons are collected and amplified, then directed on a luminescent phosphor. Here, the image is formed with visible light and can be picked up by a video camera. Electronic intensifiers made it possible to further reduce patient exposure to x-rays and speed up the imaging process to a point where real-time imaging became possible. At this time, video cameras could be used to record x-ray images and display them instantly on video screens. Interventional radiology and image-guided surgery became possible.
The next major steps in biomedical imaging required an independent development: the evolution of digital electronics and the microprocessor. Milestones were the invention of the transistor (1948),1 the integrated circuit as a prerequisite for miniaturization (1959), and the first single-chip microprocessor (1971).20 Related to these inventions was the first integrated-circuit random-access memory (RAM; 1970).62 Although the microprocessor itself was built on the principle of the programmable computer devised by Konrad Zuse in 1936, the miniaturization was instrumental in accumulating both computing power and memory in a reasonable space. Early digital computers used core memory, which got its name from small ferrite rings (cores) that could store 1 bit of information because of their magnetic remanence. Core memory was already a considerable achievement, with densities of up to 100 bits/cm². Early RAM chips held 10 times the memory capacity on the same chip surface area. In addition, integrated-circuit RAM did away with one disadvantage of core memory: the fact that a core memory read operation destroyed the information in the ferrite rings. Consequently, read and write operations with integrated-circuit RAM were many times faster. For four decades, integration density, and with it both memory storage density and processing power, has grown exponentially, a phenomenon known as Moore’s law. Today’s memory chips easily hold 1 trillion bits per square centimeter.*
The evolution of digital electronic circuits and computers had a direct impact on computer imaging. Image processing is memory-intensive and requires a high degree of computational effort. With the growing availability of computers, methods were developed to process images digitally. Many fundamental operators15,18,24,32,36,43,64,72 were developed in the 1960s and 1970s. Most of these algorithms are in common use today, although memory restrictions at that time prevented widespread use. A medical image of moderate resolution (e.g., 256×256 pixels) posed a serious challenge for a mainframe computer with 4096 words of core memory, but today’s central processing units (CPUs) would effortlessly fit the same image in their built-in fast cache memory without even having to access the computer’s main memory. A convolution of the 256×256-pixel image with a 3×3 kernel requires almost 600,000 multiplications and the same number of additions. Computers in the 1970s were capable of executing on the order of 100,000 to 500,000 instructions per second (multiplication usually requires multiple instructions), so the convolution above would have cost several seconds of CPU time. On today’s computers, the same convolution operation would be completed within a few milliseconds.
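The operation count above follows directly from the image and kernel sizes: one multiplication (and one addition) per kernel element per output pixel. The short sketch below, written in Python with NumPy for illustration (the array names and the mean-filter kernel are invented for this example, not taken from the text), reproduces both the arithmetic and a naive direct convolution:

```python
import numpy as np

# Operation-count estimate from the text: a 3x3 convolution of a
# 256x256 image needs one multiply-add per kernel element per pixel
# (border handling ignored for simplicity).
rows, cols, k = 256, 256, 3
multiplications = rows * cols * k * k
print(multiplications)  # 589824, i.e., "almost 600,000"

# Naive direct convolution over the valid region; the kernel here is
# an arbitrary 3x3 mean filter, chosen only for illustration.
image = np.random.rand(rows, cols)
kernel = np.ones((k, k)) / (k * k)
output = np.zeros((rows - k + 1, cols - k + 1))
for i in range(output.shape[0]):
    for j in range(output.shape[1]):
        output[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
```

A 1970s machine executing a few hundred thousand instructions per second would need seconds for these ~1.2 million multiply-adds; a modern CPU finishes them in milliseconds, as the text notes.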
The availability of early mainframe computers and minicomputers for data processing enabled revolutionary new imaging modalities. In 1917, mathematician J. Radon demonstrated that a manifold can be represented (and recovered) by an infinite number of line integrals.60 Almost 50 years later, when mainframe computers became widely accessible, A. M. Cormack developed an algorithm based on Radon’s idea,13,14 which in turn helped G. Hounsfield develop the computed tomography (CT) scanner.37 Cormack and Hounsfield shared a Nobel prize in 1979 for development of the CT scanner. In fact, CT was a completely new type of imaging modality because it requires computational data processing for image formation: The x-ray projections collected during a CT scan need to be reconstructed to yield a cross-sectional image, and the reconstruction step takes place with the help of a computer.42 Other imaging modalities, such as single-photon emission computed tomography (SPECT) and magnetic resonance imaging (MRI), also require the assistance of a computer for image formation.
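Radon’s line integrals are easy to picture for the two axis-aligned projection angles, where each integral is simply a row sum or a column sum of the image. The minimal sketch below (the 4×4 “phantom” is invented for illustration; a real CT scanner collects such projections at many angles and reconstructs the cross-section from them) shows two projections of a small test object:

```python
import numpy as np

# Illustrative 4x4 phantom: a dense 2x2 square in an empty field.
phantom = np.zeros((4, 4))
phantom[1:3, 1:3] = 1.0

# Line integrals at 0 and 90 degrees reduce to column and row sums.
projection_0 = phantom.sum(axis=0)   # integrate along each column
projection_90 = phantom.sum(axis=1)  # integrate along each row
print(projection_0)   # [0. 2. 2. 0.]
print(projection_90)  # [0. 2. 2. 0.]
```

Reconstruction algorithms such as filtered back-projection invert this mapping, recovering the phantom from many such projections—this is the computational step that made CT a fundamentally new, computer-dependent modality.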
Another important development in biomedical imaging resulted from the use of radioactively labeled markers. One such example is indium pentetreotide, a compound that acts as an analog for somatostatin and tends to accumulate in neuroendocrine tumors of the brain.69 Indium pentetreotide can be labeled with radioactive 111In, a gamma emitter. Another example is fluorodeoxyglucose, a glucose analog. Fluorodeoxyglucose accumulates at sites of high metabolic activity. When fluorodeoxyglucose is labeled with 18F, it becomes a positron emitter. Radiation emission becomes stronger near active sites where the radiolabeled markers accumulate, and with suitable devices, tomographic images of the concentration of the radioactive compounds can be gathered. The use of positron emitters that create gamma rays as a consequence of electron-positron annihilation events was proposed in 195178 and eventually led to positron emission tomography (PET).6 With radiolabeled physiologically active compounds (radiopharmaceuticals), it became possible to obtain images of physiological processes. These imaging methods not only improved the diagnosis of carcinomas, but also helped in our understanding of physiological processes, most notably brain activity. Functional imaging has become a key tool in medical diagnosis and research.
Subsequent research and development aimed at the improvement of image quality (e.g., higher resolution, better contrast, less noise). Current trends also include the increased use of three-dimensional imaging and a growing involvement of computers in image processing and image analysis. A detailed overview of current trends is given in section 1.3.