luminance comparison is computed only at the highest scale. The overall MSSIM
measure is obtained by combining the measures at different scales.
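The usual formulation raises the per-scale contrast and structure terms to exponent weights and multiplies them together with the highest-scale luminance term. A minimal sketch of that combination step, assuming the per-scale terms have already been computed elsewhere (the helper name and the five-scale weights shown are the commonly quoted values, not necessarily those of the cited implementation):

```python
import numpy as np

def msssim_combine(cs_vals, luminance_M, weights=None):
    """Combine per-scale contrast-structure terms with the
    highest-scale luminance term (illustrative helper; the
    per-scale terms come from an SSIM-style computation)."""
    if weights is None:
        # Commonly used five-scale exponents
        weights = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]
    cs = np.asarray(cs_vals, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Product of weighted terms; luminance enters only at the coarsest scale
    return float(luminance_M ** w[-1] * np.prod(cs ** w))
```

When all terms equal 1 (identical images) the combined score is 1; any degraded term lowers it.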
VSNR (Visual Signal-to-Noise Ratio) [33] operates via a two-stage approach.
In the first stage, contrast thresholds for the detection of distortions in the
presence of natural images are computed by wavelet-based models of visual
masking and visual summation. If the distortions are suprathreshold, the second
stage is applied; it operates on the low-level visual property of perceived
contrast and the mid-level visual property of global precedence. These two
properties are measured as Euclidean distances in a distortion-contrast space of
a multi-scale wavelet decomposition, and VSNR is computed from a simple linear
sum of these distances.
VIF (Visual Information Fidelity) [34] quantifies the loss of image information
caused by the distortion process, based on natural scene statistics, the human
visual system, and an image distortion model in an information-theoretic framework.
UQI (Universal Quality Index) [35] is similar to SSIM; it models image
distortions as a combination of three factors: loss of correlation, luminance
distortion, and contrast distortion.
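For a given window, UQI reduces to a closed-form product of those three factors. A simplified sketch, computed globally over the whole image rather than in the sliding window the original index uses:

```python
import numpy as np

def uqi(x, y):
    """Universal Quality Index over whole arrays: the product of a
    correlation term, a luminance term, and a contrast term.
    (Simplified global version; the original averages the index
    over a sliding window.)"""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # 4*cov*mx*my / ((vx+vy)*(mx^2+my^2)) folds the three factors together
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))
```

Identical inputs yield a score of 1; a luminance shift or contrast change pulls the score below 1.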
IFC (Information Fidelity Criterion) [36] is a predecessor of VIF. It models
the natural scene statistics of the reference and distorted images in the
wavelet domain using a steerable pyramid decomposition [37].
NQM (Noise Quality Measure) [38] assesses the quality of images degraded by
additive noise, taking into account variation in contrast sensitivity,
variation in local luminance, contrast interaction between spatial frequencies,
and contrast masking effects.
WSNR (Weighted Signal-to-Noise Ratio) [38] computes a weighted signal-to-noise
ratio in the frequency domain. The difference between the reference and
distorted images is transformed into the frequency domain using a 2-D Fourier
transform and then weighted by the contrast sensitivity function.
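The weighting scheme can be sketched as follows. Note that `csf_weight` here is a rough band-pass-shaped stand-in for illustration only, not the actual contrast sensitivity model of [38]:

```python
import numpy as np

def csf_weight(shape):
    """Rough radial contrast-sensitivity weighting (a hypothetical
    stand-in: rises from DC, then decays at high frequencies)."""
    u = np.fft.fftfreq(shape[0])[:, None]
    v = np.fft.fftfreq(shape[1])[None, :]
    f = np.sqrt(u**2 + v**2)
    return (0.2 + f) * np.exp(-4.0 * f)

def wsnr(ref, dist):
    """Weighted SNR: signal and error spectra are both weighted by
    the CSF before forming the ratio, in decibels."""
    ref = np.asarray(ref, dtype=float)
    err = ref - np.asarray(dist, dtype=float)
    w = csf_weight(ref.shape)
    sig = np.abs(np.fft.fft2(ref)) * w
    noi = np.abs(np.fft.fft2(err)) * w
    return 10 * np.log10((sig**2).sum() / (noi**2).sum())
```

As expected of an SNR-style measure, scaling the error up lowers the score.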
PHVS (PSNR based on the Human Visual System) [39] is a modification of PSNR
based on a model of visual between-coefficient contrast masking of discrete
cosine transform (DCT) basis functions. This model calculates, for each DCT
coefficient, the maximal distortion that remains invisible due to
between-coefficient contrast masking.
The JND (Just Noticeable Distortion) model [40] integrates spatial masking
factors into a nonlinear additivity model for masking effects to estimate the
just noticeable distortion. The JND estimator applies to all color components
and accounts for the compound impact of luminance masking, texture masking, and
temporal masking. Finally, a modified PSNR is computed by excluding the
imperceptible distortions from the computation of the traditional PSNR.
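The final step can be sketched as a PSNR whose error term keeps only the supra-threshold part of the difference. The `jnd_map` input stands in for the output of the masking model described above, which is not reproduced here:

```python
import numpy as np

def jnd_psnr(ref, dist, jnd_map, peak=255.0):
    """PSNR that discounts sub-threshold differences: error at or
    below the per-pixel JND threshold is treated as zero.
    (Simplified sketch; jnd_map would come from the luminance,
    texture, and temporal masking model of [40].)"""
    ref = np.asarray(ref, dtype=float)
    diff = np.abs(ref - np.asarray(dist, dtype=float))
    # Keep only the perceptible (supra-threshold) part of the error
    perceptible = np.maximum(diff - np.asarray(jnd_map, dtype=float), 0.0)
    mse = np.mean(perceptible**2)
    if mse == 0:
        return float('inf')  # all distortion is below threshold
    return 10 * np.log10(peak**2 / mse)
```

With a zero threshold map this degenerates to the traditional PSNR; with a threshold above the largest pixel difference the distortion is judged invisible.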
Because four typical distortion types were adopted in the subjective quality
assessment on the stereoscopic images in Section 2, we will also investigate
the performance of these IQMs on 2D images with the same distortion types. The
source 2D images and the corresponding subjective evaluation results were
collected from the LIVE image quality database [15, 31, 41], and the
distortions are as follows:
Gaussian blur: The R, G, and B color components were filtered using a
circular-symmetric 2-D Gaussian kernel of standard deviation σB pixels. The
three color components of the image were blurred using the same kernel. The
values of σB ranged from 0.42 to 15 pixels.
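A sketch of this blurring procedure, exploiting the fact that a circular-symmetric 2-D Gaussian is separable into two 1-D passes applied identically to each color plane (the kernel radius of 3σ is our assumption, not stated in the database description):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Sampled 1-D Gaussian, normalized to sum to 1.
    Radius defaults to 3*sigma (an assumed truncation)."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_rgb(img, sigma_b):
    """Blur each color plane of an (H, W, 3) image with the same
    separable Gaussian kernel of standard deviation sigma_b pixels."""
    img = np.asarray(img, dtype=float)
    k = gaussian_kernel1d(sigma_b)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        # Filter along rows, then columns, channel by channel
        tmp = np.apply_along_axis(np.convolve, 0, img[:, :, c], k, mode='same')
        out[:, :, c] = np.apply_along_axis(np.convolve, 1, tmp, k, mode='same')
    return out
```

Because every channel shares one kernel, no color fringing is introduced; the blur only attenuates spatial detail, which shows up as a reduced pixel variance.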