Motion characterization and background segmentation are the most important concepts in this study. Motion characterization yields the video representation formalism, while background segmentation provides the background reconstruction that is integrated into scene change detection. These two concepts, together with color histogram intersection, form the fundamental approach for calculating the similarity of scenes.
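For concreteness, a minimal sketch of color histogram intersection is given below. The joint RGB quantization, the bin count, and the normalization by the second histogram are assumptions made for this sketch, not details taken from the cited work; in the scene-similarity setting, the two histograms would typically come from representative frames or reconstructed backgrounds of the scenes being compared.

```python
import numpy as np

def color_histogram(frame_rgb, bins_per_channel=8):
    """Quantize an H x W x 3 RGB frame into a joint color histogram."""
    quantized = frame_rgb // (256 // bins_per_channel)  # each channel mapped to 0..bins-1
    flat = (quantized[..., 0] * bins_per_channel + quantized[..., 1]) * bins_per_channel + quantized[..., 2]
    return np.bincount(flat.ravel(), minlength=bins_per_channel ** 3).astype(float)

def histogram_intersection(hist_a, hist_b):
    """Normalized histogram intersection: 1.0 means identical color distributions."""
    return np.minimum(hist_a, hist_b).sum() / hist_b.sum()

# two scenes are considered similar when the intersection of their histograms is high
frame_a = np.random.randint(0, 256, (120, 160, 3))
frame_b = np.random.randint(0, 256, (120, 160, 3))
print(histogram_intersection(color_histogram(frame_a), color_histogram(frame_b)))
```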
The study of Sand and Teller [12] presents a new approach to motion estimation in video scenes. Video motion is represented by a set of particles, where each particle is an image point together with its trajectory and other features. To optimize the particle trajectories, appearance stability along the trajectories and distortion between the particles are measured. The resulting motion representation can be used in many areas and cannot be constructed with standard methods such as optical flow or feature tracking. Optical flow is a spatiotemporal motion feature describing the motion of visual features, and optical flow-based representations are especially strong for video segment classification. References [13, 14] present methods for representing video segments with optical flow. Lertniphonphan et al. [13] propose a representation structure based on direction histograms of optical flow. In Ref. [14], video segments are represented with histograms of oriented optical flow (HOOF); with the help of this representation, human actions are recognized by classifying HOOF time series, and for this purpose a generalization of the Binet-Cauchy kernels to nonlinear dynamical systems is proposed.
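To make the optical flow histogram idea concrete, the sketch below bins dense flow vectors by orientation, weighted by magnitude. The Farneback flow estimator, the bin count, and the magnitude weighting are illustrative assumptions, not the exact formulations of Refs. [13, 14].

```python
import cv2
import numpy as np

def flow_direction_histogram(prev_gray, next_gray, n_bins=32):
    """Histogram of optical flow directions between two consecutive grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # angle in [0, 2*pi)
    # each flow vector votes for its orientation bin, weighted by its magnitude
    hist, _ = np.histogram(angle.ravel(), bins=n_bins, range=(0, 2 * np.pi),
                           weights=magnitude.ravel())
    total = hist.sum()
    return hist / total if total > 0 else hist

# a temporal segment then becomes a time series of per-frame-pair histograms,
# which is the kind of HOOF time series classified in Ref. [14]
```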
Temporal video segment classification is an important subproblem in content-based video information retrieval, and in our study it addresses video action classification. By definition, it is the classification of the scenes in a video. The classification depends strongly on the representation of temporal video information and on the classification methods that operate on this representation. The authors of Refs. [15-18] propose approaches based on 3D interest points. These methods tackle the problem of video segment classification by putting forward new interest points or visual features enriched with the time dimension; the features in these studies can therefore be conceptualized as space-time shapes.
The methods proposed in Refs. [19, 20] view the problem in terms of spatiotemporal words: the segments are treated as bags of features, and classification is performed according to the resulting code words.
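A minimal sketch of this bag-of-features step is shown below, under the assumption that local spatiotemporal descriptors have already been extracted; the toy k-means codebook and the codebook size are illustrative choices, not the procedures of Refs. [19, 20].

```python
import numpy as np

def build_codebook(descriptors, n_words=100, n_iters=20, seed=0):
    """Toy k-means codebook over local spatiotemporal descriptors (shape: N x D)."""
    rng = np.random.default_rng(seed)
    codebook = descriptors[rng.choice(len(descriptors), n_words, replace=False)].astype(float)
    for _ in range(n_iters):
        # assign every descriptor to its nearest code word, then recompute the centers
        labels = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2).argmin(axis=1)
        for w in range(n_words):
            members = descriptors[labels == w]
            if len(members):
                codebook[w] = members.mean(axis=0)
    return codebook

def bag_of_words(segment_descriptors, codebook):
    """Represent one segment as a normalized histogram of code word assignments."""
    labels = np.linalg.norm(segment_descriptors[:, None, :] - codebook[None, :, :], axis=2).argmin(axis=1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The resulting code word histograms can then be fed to any conventional classifier.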
The authors of Refs. [13, 14, 21-23] present optical flow-based methods for video segment classification. Optical flow histograms are constructed and used to represent the segments, and classification is then carried out on this representation.
3 Temporal segment representation
Temporal video segment representation is the problem of representing video scenes as temporal video segments. Although this problem in general involves all of the video information, including visual, audio, and textual features, our study deals with visual features only. The problem originates from representing temporal information. Temporal information carries a combined meaning, composed of a time and a magnitude, for a logical or physical entity. Robot sensor data, web logs, weather, video motion, and network flows are common examples of temporal information. Independent of the domain, both the representation and the processing methods of temporal information are important for the resulting models. Regarding processing, prediction, classification, and mining are the primary tasks applied to temporal information. In most cases, the representation is also a part of the processing method, since it is shaped by the specific problem. Although representation and processing are handled together, the focus is usually on the processing methods rather than on the representation.
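As a small, purely illustrative data structure for this notion of time plus magnitude, one could pair each time stamp with a per-frame descriptor; the names below are invented for the sketch.

```python
from dataclasses import dataclass
from typing import List

import numpy as np

@dataclass
class TemporalSample:
    """One time-stamped observation: a time and a magnitude (here a feature vector)."""
    time_sec: float        # when the observation was made
    magnitude: np.ndarray  # e.g., an optical flow histogram for the frame pair at time_sec

# a temporal video segment is then an ordered sequence of such samples
Segment = List[TemporalSample]
```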
 