distortion as the differences between the filter responses along computed motion
trajectories. A similar framework is proposed in Moorthy and Bovik (2010), which calculates the temporal distortion as the SSIM index between motion-compensated video patches. Owing to its block-based motion compensation, this method is quite computationally efficient. In Wang et al. (2012), a new methodology is proposed to deal with the motion information. Instead of explicitly calculating the optical flow and independently modeling distortions in the temporal and spatial domains, structural descriptors are extracted from localized space-time regions to account for spatial and temporal distortions simultaneously. The largest eigenvalue and its corresponding eigenvector of the 3-D structure tensor are used as the descriptors, because they well represent the spatiotemporal structural features of the localized space-time regions.
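The descriptor just described can be sketched in a few lines: build the 3-D structure tensor of a space-time patch from its gradients and take its dominant eigenpair. This is a minimal illustration of the idea, not the authors' implementation; the patch shape and gradient scheme are assumptions.

```python
# Sketch (not the code of Wang et al. 2012): spatiotemporal structural
# descriptor from the 3-D structure tensor of a localized space-time region.
# Assumes the patch is a NumPy array with axes ordered (t, y, x).
import numpy as np

def structure_tensor_descriptor(patch):
    """Return the largest eigenvalue and its eigenvector of the 3-D
    structure tensor of a space-time patch."""
    gt, gy, gx = np.gradient(patch.astype(np.float64))
    # One row per voxel: (gx, gy, gt) gradient samples.
    G = np.stack([gx.ravel(), gy.ravel(), gt.ravel()], axis=1)
    tensor = G.T @ G                           # 3x3 sum of gradient outer products
    eigvals, eigvecs = np.linalg.eigh(tensor)  # eigenvalues in ascending order
    return eigvals[-1], eigvecs[:, -1]         # dominant eigenvalue / eigenvector
```

Because the tensor is a sum of outer products it is symmetric positive semidefinite, so `eigh` is the appropriate (and numerically stable) eigensolver here.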
In Ou et al. (2011), the impact of the frame rate and quantization distortion on the final video quality is investigated, and it is observed that the temporal correction factor follows an inverted falling exponential function of the frame rate, whereas the quantization effect on the compressed frames can be modeled by a sigmoid function of the peak signal-to-noise ratio (PSNR). In Ma et al. (2011), a rate-distortion model is further proposed in terms of a quality-frame rate-quantization model, and rate-constrained scalable bitstream adaptation and frame-rate-adaptive rate control are developed from the rate-quality models.
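The two functional forms reported above can be illustrated with a small sketch: a temporal correction factor shaped as an inverted falling exponential of the frame rate, multiplied by a sigmoid of PSNR for the quantization effect. The parameter values and the multiplicative combination below are illustrative assumptions, not the fitted constants or exact model of Ou et al. (2011).

```python
# Illustrative shapes only; a, b, c, q_max are placeholder parameters.
import math

def temporal_correction_factor(frame_rate, max_rate=30.0, c=4.0):
    """Inverted falling exponential: rises toward 1 as frame_rate -> max_rate."""
    return (1 - math.exp(-c * frame_rate / max_rate)) / (1 - math.exp(-c))

def spatial_quality_factor(psnr, a=0.3, b=30.0):
    """Sigmoid of PSNR: quality saturates for lightly quantized frames."""
    return 1.0 / (1.0 + math.exp(-a * (psnr - b)))

def predicted_quality(frame_rate, psnr, q_max=100.0):
    """Assumed multiplicative combination of the two factors."""
    return q_max * temporal_correction_factor(frame_rate) * spatial_quality_factor(psnr)
```

Such a separable model is what makes rate-constrained adaptation tractable: for a given bit budget, one can search over (frame rate, quantization) pairs and pick the one maximizing the predicted quality.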
NR and RR VQA have also been studied in the literature. The NR IQA method of Brandão and Queluz (2008) is further extended to NR VQA (Brandão and Queluz 2010) to evaluate the visual quality of AVC/H.264 coded videos. In Huynh-Thu and Ghanbari (2009), an NR VQA method is proposed for assessing the perceptual quality of frame-freezing impairments. In Ma et al. (2012), a reduced-reference quality assessment algorithm is proposed based on the spatial information loss and the temporal statistical characteristics of the interframe histogram.
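The temporal side of such a reduced-reference scheme can be sketched as follows: summarize each video by the histogram of its inter-frame differences, transmit only the reference histogram, and compare it against the distorted video's histogram at the receiver. The chi-square-style histogram distance used here is an assumption for illustration, not the exact statistic of Ma et al. (2012).

```python
# Hedged sketch of a reduced-reference temporal feature: only the
# interframe histogram (a few dozen numbers) needs to be sent, not the video.
import numpy as np

def interframe_histogram(video, bins=64):
    """Histogram of frame-to-frame differences; video axes are (t, h, w)."""
    diffs = np.diff(video.astype(np.float64), axis=0)
    hist, _ = np.histogram(diffs, bins=bins, range=(-255, 255), density=True)
    return hist

def temporal_feature_distance(ref, dist, eps=1e-12):
    """Chi-square-like distance between the two interframe histograms."""
    h_ref, h_dist = interframe_histogram(ref), interframe_histogram(dist)
    return 0.5 * np.sum((h_ref - h_dist) ** 2 / (h_ref + h_dist + eps))
```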
11.2.2.1 MOVIE
The Motion-based Video Integrity Evaluation (MOVIE) index is a general, spatiospectrally localized, multiscale framework for evaluating dynamic video fidelity that combines the spatial, temporal, and spatiotemporal aspects of distortion assessment. The block diagram of the MOVIE index is shown in Fig. 11.3. First, the reference and distorted videos are decomposed into spatiotemporal bandpass channels using a Gabor filter family. Then the spatial quality and the temporal quality are measured separately; the temporal quality is calculated using motion information from the reference video sequence. Finally, the spatial and temporal scores are pooled into one overall video integrity score known as the MOVIE index.
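The first stage of the pipeline above can be sketched as a 3-D Gabor decomposition: each channel is the complex magnitude of the video convolved with one spatiotemporal Gabor kernel. The kernel size, scale, and center frequencies below are illustrative assumptions and not the actual filterbank used by the MOVIE index.

```python
# Sketch of spatiotemporal bandpass decomposition with 3-D Gabor filters.
import numpy as np
from scipy.ndimage import convolve

def gabor3d(size=7, sigma=2.0, freq=(0.1, 0.2, 0.2)):
    """Complex 3-D Gabor kernel: Gaussian envelope times a complex carrier.
    freq holds the (temporal, vertical, horizontal) center frequencies."""
    ax = np.arange(size) - size // 2
    t, y, x = np.meshgrid(ax, ax, ax, indexing="ij")
    envelope = np.exp(-(t**2 + y**2 + x**2) / (2 * sigma**2))
    carrier = np.exp(2j * np.pi * (freq[0] * t + freq[1] * y + freq[2] * x))
    return envelope * carrier

def bandpass_channels(video, filters):
    """Complex-magnitude response of the video to each Gabor filter.
    Real and imaginary parts are convolved separately."""
    v = video.astype(np.float64)
    return [np.abs(convolve(v, np.real(k)) + 1j * convolve(v, np.imag(k)))
            for k in filters]
```

A real filterbank would tile scales and orientations in the spatiotemporal frequency domain; this sketch only shows the mechanics of producing one channel.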
Let r(i) and d(i) denote the reference and distorted videos, where i = (x, y, t) is a vector of the spatiotemporal location in the video. Denote the Gabor filtered reference video by f(i, k) and the Gabor filtered distorted video by g(i, k), where k = 1, 2, ..., K indexes the filters in the Gabor filterbank. Let f(k) be a vector of dimension N which is composed of the complex magnitude of N elements of