Database, in which both MPEG-2 and H.264 compression as well as different types of
network impairments are considered. The EPFL-PoliMI Video Quality Assessment
Database contains H.264 compressed videos corrupted by simulated packet loss
due to transmission over an error-prone network. The database includes one set of
78 video sequences at CIF spatial resolution and another set of 78 sequences at
4CIF spatial resolution. The LIVE Video Database contains videos distorted by MPEG-2 compression, H.264 compression, and simulated transmission of H.264 compressed bitstreams through error-prone IP wired and wireless networks. A set of 150 distorted
videos were created and each video was assessed by 38 human subjects. The LIVE
Mobile Database contains 200 distorted videos created from 10 HD reference videos.
A key feature of the LIVE Mobile Database is that it incorporates dynamically varying
distortions that change as a function of time, such as frame-freezes and temporally
varying compression rates. The Poly-NYU Video Quality Databases contain four
separate but related tests using videos with different frame rates and quantization
parameters. The distorted videos are generated with different temporal, spatial, and
SNR resolutions. The TUM 1080p25 Data Set contains 48 different videos in HDTV
1080p25, which are encoded with two AVC/H.264 encoder settings at four different
rate points between 5 and 30 Mbit/s.
11.2.2 Video Quality Assessment
The straightforward way of conducting VQA is to perform IQA on each individual
frame and then pool the frame scores to obtain a composite score. Many
image quality metrics have been directly extended to VQA metrics using such a
frame-by-frame approach. A proper temporal pooling scheme can also be employed
to assign each frame a weighting factor. For example, a motion weighting model (Wang et al. 2004b) was proposed to produce a video quality score from the individual frame quality scores, accounting for the fact that the accuracy of visual perception is significantly reduced during high-speed motion, and in Wang and Li (2007) an alternative weighting scheme based on human perception of motion information is utilized. Although motion information is explored to some extent in these temporal weighting schemes, temporal distortions were not taken into account.
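To make the frame-by-frame approach with temporal pooling concrete, the following Python sketch computes a per-frame quality score with an off-the-shelf image metric (SSIM from scikit-image) and pools the scores with a simple motion-dependent weight derived from mean absolute frame differences. The weighting form and the function name are illustrative assumptions; this is not the weighting model of Wang et al. (2004b) or Wang and Li (2007).

import numpy as np
from skimage.metrics import structural_similarity as ssim

def motion_weighted_score(ref_frames, dist_frames):
    """Frame-by-frame IQA with a simple motion-weighted temporal pooling.

    ref_frames, dist_frames: sequences of grayscale frames (H x W arrays, 8-bit).
    Returns a single pooled video quality score.
    Illustrative sketch only; not the weighting model of the cited papers.
    """
    scores, weights = [], []
    prev_ref = None
    for ref, dist in zip(ref_frames, dist_frames):
        ref = ref.astype(float)
        dist = dist.astype(float)
        # Spatial quality of the current frame (any frame-level IQA metric could be used here).
        scores.append(ssim(ref, dist, data_range=255.0))
        # Crude motion estimate: mean absolute difference between consecutive reference frames.
        motion = 0.0 if prev_ref is None else np.mean(np.abs(ref - prev_ref))
        # Down-weight frames with large motion, reflecting the reduced accuracy of visual
        # perception during high-speed motion (assumed weighting form).
        weights.append(1.0 / (1.0 + motion))
        prev_ref = ref
    weights = np.asarray(weights)
    return float(np.sum(weights * np.asarray(scores)) / np.sum(weights))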
In recent years, temporal distortion has drawn more and more attention from
the VQA researchers. Video quality metric (VQM) proposed by NTIA (Pinson and
Wolf 2004) is a popular VQA metric that was included in the Recommendation
ITU-T J.144 (ITU-T 2004) as a normative FR VQA model. This metric extracts
seven features from spatiotemporal blocks to compute the video distortion. Frame
differences are embedded into one feature to account for the interaction between
motion and spatial distortion. In Ninassi et al. (2009), temporal distortion is defined
as temporal evolution of the spatial distortion in a spatiotemporal “tube,” since the
perception of spatial distortions over time can be largely modified by their temporal
changes. Seshadrinathan et al. (2010) proposed a motion-based video integrity evaluation (MOVIE) index, where they defined the temporal
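As a rough illustration of how a temporal-activity feature can be extracted from spatiotemporal blocks in the spirit of such metrics (this is not the NTIA VQM feature set itself; the block size and function name are arbitrary assumptions), the following Python sketch averages absolute frame differences within fixed-size space-time blocks.

import numpy as np

def st_block_frame_difference(frames, block=(8, 32, 32)):
    """Mean absolute frame difference within spatiotemporal blocks.

    frames: array of shape (T, H, W), grayscale video.
    block:  (frames, rows, cols) size of each spatiotemporal block (illustrative choice).
    Returns an array of per-block temporal-activity features.
    """
    frames = np.asarray(frames, dtype=float)
    # Absolute difference between consecutive frames captures temporal change.
    diffs = np.abs(np.diff(frames, axis=0))          # shape (T-1, H, W)
    bt, bh, bw = block
    T, H, W = diffs.shape
    feats = []
    for t in range(0, T - bt + 1, bt):
        for y in range(0, H - bh + 1, bh):
            for x in range(0, W - bw + 1, bw):
                feats.append(diffs[t:t + bt, y:y + bh, x:x + bw].mean())
    return np.asarray(feats)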