Applications of Spectral Imaging and Reproduction to Cultural Heritage (Digital Imaging) Part 1

Introduction

Color images code color information according to three channels, corresponding to the red, green, and blue components of the image acquisition device. In recent years, the field of digital imaging has extended the traditional trichromatic RGB paradigm to more than three dimensions, introducing what is called spectral or multispectral imaging. The aim of multispectral imaging is to acquire, process, reproduce, and store images with a higher color quality; it is therefore oriented towards those application sectors that require high-quality treatment of color information [1,2]. Many successful applications of spectral imaging to cultural heritage have been reported, this being a field where the acquisition and reproduction of accurate color information are two fundamental processes.

Traditionally, artifacts are captured with three-channel devices, and the resulting RGB images are processed within the framework of colorimetry in order to accomplish faithful reproduction across different devices or media. Indeed, the colorimetric approach satisfies – with its inherent limitations – cross-device color communication, but it is far from providing a consistent color reproduction for different viewing conditions. The attempts made at exploiting multispectral imaging for the acquisition of cultural heritage artifacts have revealed the several advantages of this approach [3]-[14]. Different hardware configurations have been used in the state of the art. In [3] a cooled monochrome digital camera with a liquid-crystal tunable filter is used. In [4] and [5], instead, a monochrome digital camera and a filter wheel with seven broadband Gaussian filters are used. Cupitt et al. [6] adopted a combination of micro- and macro-scanning, using a CCD area sensor; the sensor is equipped with a color mosaic mask with filter characteristics closely matched to a linear combination of the CIE-1931 XYZ spectral responses. Ribes et al. [7] set up a linear CCD array camera equipped with a built-in half-barrel mechanism that automatically positions a set of thirteen interference filters, ten filters covering the visible spectrum and the other three covering the near infrared. In [8] a monochrome CCD camera is used together with a multispectral lighting system composed of a slide projector with six color filters. In [10] a combination of a high-resolution photographic image and a low-resolution multispectral image is used: the multispectral image is captured using a trichromatic digital camera system with two color filters, for a total of nine color channels. In [12] a monochromatic CCD camera with three to six color filters is used, while in [14] a cooled CCD digital camera with a fast-tunable liquid-crystal filter is adopted.


Multispectral imaging not only extends traditional trichromatic imaging to a higher dimension, but also aims to provide a description of the reflective or transmissive properties of the surface. A more precise color analysis makes multispectral images suitable for monitoring and restoration of artworks, and for any research activity that requires high-quality color information. These images can also be rendered for specific viewing conditions, devices, and reproduction media, in order to be disseminated through different communication channels. A generic framework for the application of multispectral imaging in cultural heritage is illustrated in Figure 7.1. Usually multispectral images are reproduced on colorimetric devices, but several works investigate how to exploit the multispectral information in the reproduction phase, in order to achieve faithful reproduction across different viewing conditions [15,16].

Colorimetric and Multispectral Color Imaging

Multispectral imaging offers many advantages over colorimetric imaging [4, 17]. First of all, it is quite straightforward to produce a corresponding colorimetric version of a multispectral image once viewing conditions are assigned, but it is theoretically impossible to reverse this process. In RGB images a pixel is a triplet of integers that codes the amount of R, G, and B digital counts of an RGB device. During the acquisition process, these data are a measure of radiance integrated over the wavelength domains of the device's red, green, and blue sensors. The original spectral information is thus critically under-sampled with conventional color acquisition devices, and the reproduced colors suffer from metamerism. In a multispectral image a pixel is a vector of real numbers that represents a physical property defined on a per-wavelength basis. In the case of reflective media, like a painting or photograph, each pixel stores the reflectance spectrum at the corresponding point on the artwork surface. Typically thirty-one samples are considered, corresponding to a sampling of the visible spectrum from 400 to 700 nm with steps of 10 nm, as recommended by the Commission Internationale de l'Eclairage (CIE) [18]. The number of samples is not standardized, although several works have investigated the minimum number of sensors to use [19]-[24]. In [20] the use of five to seven filters is suggested; in [22] the behavior of a multispectral system having two to six filters is studied. In [23,24] the use of eight and nine filters, respectively, is suggested.

FIGURE 7.1

A general framework for the use of multispectral imaging in the cultural heritage field.

The multispectral image constitutes a fundamental physical description of the artifact, independent of the environment and observer, which can be targeted to any desired description specific to a given observer and viewing conditions. Being device independent, a multispectral image is invariant across different acquisition devices, allowing comparison of artifacts whose images are taken with different devices. Note, however, that since the reflective properties of a surface generally depend on the geometry of the illumination, it is assumed that the illumination and acquisition geometry are controlled, as would be the case if the artifact surface were measured by a spectrophotometer. Color information captured with RGB devices cannot generate a fully accurate colorimetric representation, because the sensitivities of the sensors employed do not correspond to those of the standard colorimetric observer [25]. If a multispectral image is available, precise colorimetric coordinates can be computed for each pixel in the image. In addition, multispectral images can also show details in the artifacts that are hard to see, if not impossible to detect, in RGB images.
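The computation of colorimetric coordinates from a reflectance spectrum mentioned above can be sketched as follows. This is a minimal illustration only: the illuminant and the Gaussian-shaped stand-ins for the color-matching functions are placeholders, not the tabulated CIE 1931 values a real application would load.

```python
import numpy as np

# Wavelength sampling recommended by the CIE: 400-700 nm in 10 nm steps.
wavelengths = np.arange(400, 701, 10)          # 31 samples

# Placeholder data: a flat illuminant and Gaussian stand-ins for the CIE
# standard observer's color-matching functions (illustrative only).
E = np.ones_like(wavelengths, dtype=float)

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

x_bar, y_bar, z_bar = gaussian(600, 40), gaussian(550, 40), gaussian(450, 30)

def reflectance_to_xyz(r):
    """Sum E * R * cmf over the sampled spectrum, normalized so that a
    perfect white (R = 1 everywhere) has Y = 100."""
    k = 100.0 / np.sum(E * y_bar)
    X = k * np.sum(E * r * x_bar)
    Y = k * np.sum(E * r * y_bar)
    Z = k * np.sum(E * r * z_bar)
    return X, Y, Z

# A spectrally flat 50% gray patch maps to Y = 50 under any illuminant.
gray = np.full(31, 0.5)
print(round(reflectance_to_xyz(gray)[1], 6))
```

With real CIE tables substituted for the placeholders, the same summation yields the colorimetric coordinates for each pixel of a multispectral image.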

Capturing a Multispectral Image

Two different approaches exist for multispectral imaging, called respectively narrow-band and wide-band image capture [26]. They differ in the way they sample the wavelengths of the visible spectrum. In the narrow-band approach the acquisition of radiance information is obtained by a set of narrow-band filters, one centered, in principle, at each wavelength sample. Various technologies are available to produce spectrally narrow filters. One possibility is to place a filter wheel with narrow bandpass glass filters in front of a camera. This system usually requires costly custom-made filters.

Moreover, a filter wheel is an electro-mechanical tool with several inherent drawbacks: slow band switching, small number of filters, sequential access to color bands, cumbersome design, and limited versatility [27].

It is more convenient to realize narrow-band systems using a tunable filter. The spectral transmission of this device can be electronically controlled through the application of a voltage or an acoustic signal. Tunable filters can provide finer spectral sampling, and rapid and random switching between color bands, because there are no moving parts and no discontinuity in the spectral transmission range [27]. In particular, the solid-state Liquid Crystal Tunable Filter (LCTF) has been widely used [15,28-30]. The peak wavelengths of the LCTF can be controlled, permitting a fine spectral sampling and usually producing thirty-one peaks in the range from 400 to 720 nm [31]. One of the most important advantages of this system is its robustness to arbitrary spectral shapes. In fact a sampling rate of 10 nm (in the ideal case of infinitesimally narrow band filters) permits one to reconstruct spectral features which are at least as wide as twice the sampling rate [26]. The LCTF has the advantage of being solid state and reliably repeatable, and can be easily controlled by a computer for efficient, automated, and relatively fast imaging. On the other hand, a large storage space is required for each acquired target, and registration of the thirty-one images is a serious issue. Moreover, this system has severe drawbacks in terms of size, cost, and unwieldiness. The so-called wide-band approach has been developed to realize simpler multispectral systems. In these systems, the visible spectrum is sampled with a coarser step and each adopted sensor is sensitive to light energy in a sufficiently large wavelength interval. Several works have demonstrated that five to eight basis vectors seem to be sufficient for accurate spectral reconstruction [20, 32-36].

Thus it is possible to significantly reduce the number of filters (from the thirty-one adopted in the narrow-band approach) while still recovering accurate target reflectances. Wide-band systems have the advantage that they can be assembled from "off the shelf" hardware components typical of scientific research and professional photography. With respect to narrow-band acquisition systems, they are much more easily deployed, manageable, flexible in their use, and comparatively cheap. However, such systems do not perform a direct measure of reflectance, but rather produce data that must be further processed to obtain the true multispectral image [37].

A multispectral acquisition system is composed of a multispectral camera, a processing module to derive reflectance from the acquired radiance images, and a transformation module for the conversion into a colorimetric space, suitable for colorimetric reproduction on common output devices (Figure 7.2). No true multispectral reproduction devices exist other than prototypes, while multispectral characterization of colorimetric devices is still in its early stages [37]. Several multispectral acquisition systems have been developed and tested. These systems differ fundamentally in the number of sensors employed. Usually multispectral cameras rely on a standard B/W digital camera and a set of colored filters. A typical wide-band system uses optical filters to simulate sensors of different sensitivity. Either traditional filters like those used in standard photography or a tunable filter can be adopted. Burns and Berns [34] used a monochrome digital camera equipped with seven interference filters, while Imai et al. [38] combined a monochrome camera with a filter wheel containing six absorption filters. Methods have also been proposed that use commonly available optical filters and trichromatic digital still cameras [39]. Imai et al. [40] adopted a conventional trichromatic digital camera combined with a series of absorption filters. Seven filter combinations were placed in front of the digital RGB color CCD camera. One was no filter at all, while the remaining six were combinations of Wratten filters. A system based on a commercial color-filter array (CFA) digital camera coupled with a two-position filter slider containing absorption filters has been adopted to facilitate multispectral imaging of cultural heritage [13, 41-43]. For each target, two RGB images are taken, one through each filter, so there are in total six channels for this camera system.
Figure 7.3 illustrates typical processes involved in the acquisition of a multispectral image using a wide-band system [37]. First, as the acquired images are usually affected by some form of hardware noise, such noise must be estimated and modeled, so that a noise correction procedure can be established and applied. If the illumination is not evenly distributed across the scene, its uneven effect must then be discounted; this can be done by acquiring a reference image to estimate the effect and correct it.
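The two pre-processing steps just described, noise correction and compensation of uneven illumination, can be sketched as below. This is a minimal illustration under simplifying assumptions: additive noise estimated by a dark frame, and the illumination pattern estimated from a reference image of a uniform white target; the function and variable names are hypothetical.

```python
import numpy as np

def correct_channel(raw, dark, flat_raw, flat_dark):
    """Subtract the estimated sensor noise (dark frame), then divide out
    the uneven illumination estimated from a uniform reference target."""
    signal = raw.astype(float) - dark            # remove additive noise
    flat = flat_raw.astype(float) - flat_dark    # illumination pattern
    flat /= flat.mean()                          # keep the overall scale
    return signal / np.clip(flat, 1e-6, None)    # discount the unevenness

# Synthetic example: a uniform scene under illumination that falls off
# linearly across the image, plus a constant dark offset.
h, w = 4, 6
illum = np.linspace(1.0, 0.5, w)[None, :].repeat(h, axis=0)
dark = np.full((h, w), 10.0)
raw = 100.0 * illum + dark                       # simulated acquisition
flat_raw = 200.0 * illum + dark                  # reference (white) image

corrected = correct_channel(raw, dark, flat_raw, dark)
print(np.allclose(corrected, corrected[0, 0]))   # evenly lit after correction
```

Each spectral channel is corrected independently in the same way before being passed to the characterization model.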

FIGURE 7.2

A system for the acquisition and reproduction of multispectral images.

FIGURE 7.3

The acquisition process for a typical wide-band multispectral acquisition system.

After this pre-processing is done, the acquired images can be fed to the characterization model, which reconstructs the true reflectances of the points in the scene. This model can be built based on an analysis of the system behavior, or empirically derived from the acquisition and measurement of the reflectances of a suitable training set. The need to acquire large or high-resolution images may force the operator to acquire different parts ("tessels") of the scene separately, and then resort to mosaicking to obtain the whole image [44].
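The mosaicking step can be sketched in its simplest form: two adjacent tiles that share an overlap region are stitched together, blending linearly across the overlap. This is an illustrative sketch only; real mosaicking also has to register the tiles geometrically and photometrically, which is omitted here.

```python
import numpy as np

def mosaic_pair(left, right, overlap):
    """Stitch two horizontally adjacent tiles whose last/first `overlap`
    columns image the same strip, blending linearly across the overlap."""
    w = np.linspace(0.0, 1.0, overlap)[None, :]        # blend weights
    blended = (1 - w) * left[:, -overlap:] + w * right[:, :overlap]
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

# Two tiles of a smooth synthetic scene, acquired with a 4-column overlap.
scene = np.tile(np.linspace(0, 1, 16), (3, 1))
left, right = scene[:, :10], scene[:, 6:]              # overlap = 4
print(np.allclose(mosaic_pair(left, right, 4), scene))
```

For a multispectral image the same stitching is applied band by band, so that corresponding pixels of all channels stay registered.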

Imaging and Signal Processing Techniques

In general, the acquisition performed using a given i-th sensor at a single 2-D point x will return a value a_i(x) of the form [45]:

$$a_i(x) = \int_{\lambda_1}^{\lambda_2} E(\lambda, x)\, R(\lambda, x)\, S_i(\lambda)\, d\lambda \qquad (7.1)$$

This value integrates contributions from the energy E that reaches the physical sample observed, the spectral reflectance R of the sample, and the "sensitivity" S_i of the i-th sensor. The integration with respect to the wavelength λ is performed over the range λ_1 to λ_2 of the sensor's sensitivity. If this range exceeds that of the visible light spectrum, then appropriate steps must be taken to filter the unwanted radiation out. As introduced before, two different approaches are currently used in multispectral imaging to obtain the reflectance estimate: narrow-band and wide-band.
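Numerically, the integral of Equation 7.1 is evaluated as a sum over the sampled wavelengths. The sketch below uses illustrative placeholder spectra for E, R, and S_i; in practice these would be measured.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)  # nm, 31 samples
dl = 10.0                              # sampling step (nm)

# Illustrative spectra (placeholders for measured data): illuminant E,
# sample reflectance R, and the i-th sensor's sensitivity S_i.
E = np.ones(31)
R = np.linspace(0.2, 0.8, 31)
S_i = np.exp(-0.5 * ((wavelengths - 550) / 30.0) ** 2)

# Riemann-sum approximation of Equation 7.1:
# a_i(x) = integral over [l1, l2] of E(l, x) R(l, x) S_i(l) dl
a_i = np.sum(E * R * S_i) * dl
print(a_i > 0.0)
```

The same sum, written once per sensor, is the discrete system that the wide-band approach inverts later in this section.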

“Narrow-Band” Multispectral Imaging

In the case of narrow-band systems, the device's sensors are sensitive to a very narrow wavelength interval, or the light sources employed show a very narrow emission spectrum. In both cases, assuming that the selective property can be modeled as a delta function, the value a_i(x) obtained from an acquisition at a single point x can be interpreted as the value of the function E(λ, x)R(λ, x)S(λ) at the specific wavelength λ_i, so that, by changing sensors or light sources, different values of this function can be estimated over the whole visible light spectrum. For a given wavelength λ_i, Equation 7.1 then becomes:

$$a_i(x) = E(\lambda_i, x)\, R(\lambda_i, x)\, S(\lambda_i) \qquad (7.2)$$

and if the properties of the illuminant and sensor(s) are known or can be measured, then the values E(λ_i, x) and S(λ_i) are known and R(λ_i, x) can be computed. As an alternative, the output values a_i(x) can be compared with the corresponding values previously obtained from the acquisition of a reference physical sample whose reflectance is known [44]. If the result of this previous acquisition is indicated with ã_i(x), then:

$$\tilde{a}_i(x) = E(\lambda_i, x)\, \tilde{R}(\lambda_i, x)\, S(\lambda_i) \qquad (7.3)$$

where R̃(λ_i, x) is the (known) reflectance of the reference sample. The value of R(λ_i, x) can then be computed using the following equation:

$$R(\lambda_i, x) = \frac{a_i(x)}{\tilde{a}_i(x)}\, \tilde{R}(\lambda_i, x) \qquad (7.4)$$
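Equation 7.4 amounts to a per-band ratio against the reference acquisition, which cancels the unknown illuminant and sensor terms. A minimal sketch with synthetic data (the function name and the four-band example are illustrative):

```python
import numpy as np

def reflectance_from_reference(a, a_ref, r_ref):
    """Equation 7.4: reflectance recovered band by band by ratioing each
    acquisition a_i against the acquisition of a reference sample of
    known reflectance (often a calibrated white tile)."""
    return (np.asarray(a, float) / np.asarray(a_ref, float)) * r_ref

# Synthetic check: E and S never need to be known, the ratio cancels them.
E_S = np.array([3.0, 5.0, 2.0, 4.0])     # E(l_i) * S(l_i), never observed
r_true = np.array([0.2, 0.5, 0.7, 0.9])  # sample reflectance to recover
r_ref = np.full(4, 0.95)                 # known reference reflectance
a = E_S * r_true                         # acquisition of the sample
a_ref = E_S * r_ref                      # acquisition of the reference

print(np.allclose(reflectance_from_reference(a, a_ref, r_ref), r_true))
```

This is why narrow-band systems are often calibrated with a white reference in the scene: the unknown factors of Equation 7.2 drop out of the ratio.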

FIGURE 7.4

Model of a pure reflective material (left) and of a translucent material in which subsurface light scattering occurs (right).

If necessary, the model can be further improved to cope with such phenomena as subsurface light scattering and photoluminescence effects such as fluorescence and phosphorescence. Subsurface light scattering is a mechanism of light transport in which light penetrates the surface of a translucent object, is scattered by interacting with the material, and exits the surface at a different point. The light will generally penetrate the surface and be reflected a number of times at irregular angles inside the material, before passing back out of the material at an angle other than the angle it would have if it had been reflected directly off the surface (see Figure 7.4).

“Wide-Band” Multispectral Imaging

The second approach to multispectral acquisition is based on wide-band sensors. In this case, each sensor is sensitive to light energy in a sufficiently large wavelength interval, and the emission of the light source considered has a sufficiently broad spectrum, so that the values a_i(x) obtained from the acquisition cannot be associated with specific wavelengths and do not permit a direct measure of reflectance [44]. Wide-band approaches require a correlation method, learned from a suitable training set, to relate the output of the multispectral camera at some pixel with the reflectance spectrum of the corresponding surface point in the scene. The output of a generic multispectral camera may be denoted as:

$$\mathbf{a}(x) = [a_1(x), a_2(x), \ldots, a_M(x)]^T \qquad (7.5)$$

where i is an index that varies with the filter used (or the spectral band examined), and x is a two-dimensional vector identifying the point considered within the acquired scene. If M filters are used, then a(x) is an M-dimensional vector. The reflectance of the object at point x is a function of the wavelength λ, and can be denoted as r(λ, x); however, since in practice it is not easy (or even always possible) to give an analytical form to r, a sampling of its values is customarily considered instead. The light spectrum is then sampled at a discrete number of values of λ, and the reflectance is expressed as:

$$\mathbf{r}(x) = [r(\lambda_1, x), r(\lambda_2, x), \ldots, r(\lambda_N, x)]^T \qquad (7.6)$$

where j is an index that varies with the sampled wavelengths. If N sample values of λ are considered, then r(x) is an N-dimensional vector. To establish a correlation between the system output and the corresponding reflectance, the system characterization function:

$$f: \mathbf{a}(x) \mapsto \mathbf{r}(x) \qquad (7.7)$$

must be described or estimated in some way. If the value of R(λ_j, x) at N different wavelength values λ_j is wanted, then the discrete form of Equation 7.1 can be written for the i-th sensor (filter) as:

$$a_i(x) = \sum_{j=1}^{N} E(\lambda_j, x)\, R(\lambda_j, x)\, S_i(\lambda_j) \qquad (7.8)$$

M such equations can be written to form a linear system, where M is the number of sensors (filters) used. In matrix notation, this system can be written as:

$$\mathbf{a} = D\, \mathbf{r} \qquad (7.9)$$

with:

$$D = \begin{bmatrix} E(\lambda_1)\,S_1(\lambda_1) & \cdots & E(\lambda_N)\,S_1(\lambda_N) \\ \vdots & \ddots & \vdots \\ E(\lambda_1)\,S_M(\lambda_1) & \cdots & E(\lambda_N)\,S_M(\lambda_N) \end{bmatrix} \qquad (7.10)$$

and if matrix D were known, then Equation 7.9 could be solved with respect to r by means of some system inversion technique.

Methods that perform this inversion belong to what is called the direct reconstruction approach [46]. The simplest but least accurate method directly inverts Equation 7.9 by using a pseudoinverse approach or ordinary least squares regression. This method, adopted by Tominaga [47] to recover the spectral distribution of illumination from a six-channel imaging system, is rarely applied in practice because the solution is very sensitive to noise [48]. Herzog et al. [49] have proposed a weighted least squares regression based on a weighting matrix to invert the system characteristics under a smoothness constraint. Hardeberg [48] has proposed a method based on a priori knowledge of a spectral reflectance database, but it does not consider camera noise.
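A minimal sketch of the direct (pseudoinverse) reconstruction follows. The system matrix D here is synthetic, built from Gaussian-shaped wide-band sensitivities, standing in for a characterized camera; it is not any particular system from the literature.

```python
import numpy as np

M, N = 8, 31                             # sensors, wavelength samples
wl = np.arange(400, 701, 10)

# Synthetic system matrix D (entries E(l_j) S_i(l_j)): smooth wide-band
# Gaussian sensitivities stand in for a characterized camera.
centers = np.linspace(410, 690, M)
D = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 40.0) ** 2)

r_true = 0.5 + 0.3 * np.sin(wl / 60.0)   # a smooth test reflectance
a = D @ r_true                           # noise-free acquisition (Eq. 7.9)

r_hat = np.linalg.pinv(D) @ a            # direct pseudoinverse inversion
a_hat = D @ r_hat

print(np.allclose(a_hat, a))             # reproduces the measurements...
# ...but with M < N the system is underdetermined, and small perturbations
# of a can change r_hat considerably, which is why regularized or
# learning-based methods are preferred in practice.
```

The assertion that the reconstruction reproduces the measurements, while the recovered spectrum remains non-unique and noise-sensitive, is exactly the weakness the text attributes to this approach.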

However, direct reconstruction is not widely used, as it requires spectral characterization of the whole imaging system. Estimation of the illuminant E and of the sensitivity S_i is not straightforward, and a complex illumination geometry (such as multiple and possibly different light sources used together from different angles) would require costly computations as well. The analysis in the frequency domain of reflectance spectra and color signals motivates the recourse to dimensionality reduction techniques involving, in the most common approach, empirical linear models. In general, when representative data are available, linear models are defined on the basis of statistical information, applying Principal Component Analysis (PCA) [50] and Independent Component Analysis (ICA) [51] to estimate the relationship between the acquisition output a and the sampled reflectance function r. If P samples are available, and their corresponding a_k and r_k vectors (with k ranging from 1 to P) are considered, then:

$$\mathbf{a}_k = D\, \mathbf{r}_k, \qquad k = 1, \ldots, P \qquad (7.11)$$

and therefore

$$A = D\, R \qquad (7.12)$$

with:

$$A = [\mathbf{a}_1, \ldots, \mathbf{a}_P], \qquad R = [\mathbf{r}_1, \ldots, \mathbf{r}_P] \qquad (7.13)$$

The (pseudo-)inverse D^- of matrix D can then be computed by inverting Equation 7.12 with some chosen technique, and the reflectance r for a generic acquisition output a can thus be computed using the relationship:

$$\mathbf{r} = D^{-}\, \mathbf{a} \qquad (7.14)$$

Learning-based reconstruction is the most popular approach for spectral reflectance recovery. Several methods have been implemented to solve Equation 7.14 based on learning processes (see for instance [34] and [52]-[54]). Zhao and Berns [42] have developed a method named "Matrix R Method" to reconstruct spectral reflectance accurately while simultaneously achieving high colorimetric performance for a given illuminant and observer. None of these methods requires knowledge of the spectral characteristics of the imaging system. However, as they are learning-based techniques, their performance is greatly affected by the choice of the calibration target.
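The learning-based route can be sketched as follows: the reconstruction operator D^- of Equation 7.14 is estimated by least squares directly from training pairs (a_k, r_k), with no knowledge of E, S_i, or D. The synthetic camera and the smooth low-dimensional family of training reflectances below are illustrative assumptions, not any published system.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, P = 8, 31, 200                     # sensors, wavelengths, samples
wl = np.arange(400, 701, 10)

# Hidden ground-truth system (unknown to the method): wide-band sensors.
centers = np.linspace(410, 690, M)
D = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 40.0) ** 2)

# Training set: smooth random reflectances r_k and their acquisitions a_k.
basis = np.stack(
    [np.ones(N), np.sin(wl / 60), np.cos(wl / 60), np.sin(wl / 30)], axis=1)
R_train = 0.5 + 0.15 * rng.normal(size=(P, 4)) @ basis.T    # rows are r_k
A_train = R_train @ D.T                                     # rows are a_k

# Learn D^- by least squares on the training pairs (Equation 7.14),
# without ever using E, S_i, or D itself.
D_minus, *_ = np.linalg.lstsq(A_train, R_train, rcond=None)  # M x N

# Reconstruct an unseen reflectance from the same smooth family.
r_new = 0.5 + 0.15 * rng.normal(size=4) @ basis.T
r_hat = (r_new @ D.T) @ D_minus

print(float(np.max(np.abs(r_hat - r_new))) < 1e-6)
```

The reconstruction is accurate here because the test spectrum lies in the same low-dimensional family as the training set; this is precisely why the representativeness of the calibration target, discussed next, is critical.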

Some studies that outline the theoretical bases for the choice of the "training set," which is the set of colors used to build the empirical model, were recently published [55]-[58]. This set must be "sufficiently representative" of the whole range of possible colors, which intuitively means that the resulting model can actually be extended to any other color. This is not a clear-cut notion, but specific targets that include a varied selection of sample colors, such as the Macbeth ColorChecker and the ColorChecker DC [59], are available.

Moreover, as the system data are obtained using a finite number of different optical filters that sample the wavelengths of the visible spectrum, an important issue is the selection of the filters, in terms of both shape and number. Assuming that any system noise in the acquired data has been properly corrected, it is reasonable to expect that the best quality will be obtained using all the available filters. It is also reasonable that choosing to employ different subsets of the available filters will give different results, and not all possible subsets will lead to acceptable results when reflectances are reconstructed from the acquired data. However, minimizing the number of filters used is important to reduce operational costs and acquisition time, as well as the amount of data needed to store the acquired spectral images. Furthermore, since it is generally not guaranteed that all noise can be corrected, a greater number of filters may introduce a greater amount of noise and actually lead to biased results.
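One simple way to study the filter-selection trade-off is to score each candidate subset of a filter bank by the reconstruction error it achieves on a training set. The brute-force sketch below does this for every 4-filter subset of a hypothetical bank of 8 Gaussian filters; the filter bank and the reflectance family are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
wl = np.arange(400, 701, 10)
N = wl.size

# Hypothetical bank of 8 wide-band filters (Gaussian sensitivities).
centers = np.linspace(410, 690, 8)
filters = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 45.0) ** 2)

# Evaluation reflectances drawn from a smooth low-dimensional family.
basis = np.stack(
    [np.ones(N), np.sin(wl / 60), np.cos(wl / 60), np.sin(wl / 25)], axis=1)
R = 0.5 + 0.1 * rng.normal(size=(300, 4)) @ basis.T

def rmse_for_subset(idx):
    """Train a linear reconstruction on a filter subset, return its RMSE."""
    D = filters[list(idx)]                 # system matrix for these filters
    A = R @ D.T                            # simulated acquisitions
    D_minus, *_ = np.linalg.lstsq(A, R, rcond=None)
    return float(np.sqrt(np.mean((A @ D_minus - R) ** 2)))

# Exhaustively score every 4-filter subset and keep the best one.
best = min(itertools.combinations(range(8), 4), key=rmse_for_subset)
print(len(best) == 4)
```

Exhaustive search is feasible only for small banks; with many filters, greedy or heuristic selection is used instead, but the scoring idea is the same.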
