given by Dubois and Konrad in [13] where they state that even though motion tra-
jectories are often nonlinear, accelerated and complex, a simple linear flow model
will suffice in many cases. In [14] Chahine and Konrad state that complex motion
modeling can improve objective results (PSNR). The work by Brox et al. [15] shows
how variational optic flow algorithms can model and compute complex flows.
A problem somewhat more complex than the interpolation of new frames is the creation of a new, arbitrary-viewpoint 2D sequence from a multi-camera recording of a scene, as done by Vedula et al. in [16]. The TSR used in that work consists of flow computation with the method from [17], followed by simple intensity interpolation. Shechtman et al. [18] use a multiple-camera approach to TSR (and spatial SR) in which all cameras are assumed to be spatially close, or the scene is assumed to be planar, allowing simple registration to replace flow computation. This technique cannot be used on standard single-camera recordings of film/television/video.
Using patches to represent salient image information is well known [19, 20], and an extension to spatiotemporal image sequences, video epitomes, is presented and used for TSR by Cheung et al. in [21]. It is unclear from [21] whether video epitome TSR can handle more than simple, small motion, and its learning strategy is (still) computationally very costly.
1.4 Motion Compensated Frame Rate Conversion with Simultaneous Flow and Intensity Calculations
The traditional approach to temporal super resolution is to first compute the flow of the sequence and then interpolate the intensity values in the new frames. The simplest TSR methods use linear interpolation along the flow trajectories: they weight the contribution of each of the two original input frames inversely by its temporal distance to the new frame being interpolated. Simple TSR gives perfect results if the computed flow field is always reliable and precise, but this is rarely the case. Thus a fall-back option is needed, often simple temporal averaging that uses no motion information; Dane and Nguyen, for example, report 4-42% fall-back in [11].
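As an illustration of this inverse-distance weighting, the following is a minimal NumPy sketch of motion-compensated linear interpolation with a temporal-averaging fall-back. It is not the method of any of the cited works; the function name and the reliability mask are hypothetical, and the flow is treated, as a simplifying approximation, as if it were defined on the pixel grid of the new frame.

    import numpy as np

    def interpolate_frame(frame0, frame1, flow01, t, reliable=None):
        # frame0, frame1: (H, W) grayscale frames at times 0 and 1.
        # flow01:         (H, W, 2) flow from frame0 to frame1, (dx, dy) per pixel.
        # t:              temporal position of the new frame, 0 < t < 1.
        # reliable:       optional (H, W) boolean mask of trustworthy flow vectors.
        H, W = frame0.shape
        ys, xs = np.mgrid[0:H, 0:W]

        # Follow the trajectory through each new-frame pixel back to time 0 and
        # forward to time 1 (nearest-neighbour sampling keeps the sketch short).
        x0 = np.clip(np.round(xs - t * flow01[..., 0]), 0, W - 1).astype(int)
        y0 = np.clip(np.round(ys - t * flow01[..., 1]), 0, H - 1).astype(int)
        x1 = np.clip(np.round(xs + (1 - t) * flow01[..., 0]), 0, W - 1).astype(int)
        y1 = np.clip(np.round(ys + (1 - t) * flow01[..., 1]), 0, H - 1).astype(int)

        # Weight each original frame inversely by its temporal distance to t.
        motion_compensated = (1 - t) * frame0[y0, x0] + t * frame1[y1, x1]

        # Fall back to plain temporal averaging where the flow is unreliable.
        if reliable is None:
            return motion_compensated
        return np.where(reliable, motion_compensated, 0.5 * (frame0 + frame1))

A new frame at t = 0.5 thus receives equal contributions from both originals, while a frame closer to one original is dominated by it.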
When the flow computed between two known frames is interpolated or warped onto a new frame (or frames) positioned between them, not every pixel position of the new frame will necessarily receive a flow vector. A fall-back as above could be used, but one could also fill in neighboring flow vectors in the hope that they are correct; this is a very complex strategy, as seen in [8]. Without knowing the intensities of the new frame(s) it is impossible to tell whether the guessed flow is correct, but to get the intensities we need to know the flow! This case of two unknowns, each depending on the other, is a true chicken-and-egg problem.
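To make the hole problem concrete, the following NumPy sketch forward-projects a flow field onto the pixel grid of a new frame at time t and returns a mask of positions that received no vector. The function name is hypothetical and collisions are resolved by simply keeping the last trajectory that lands on a pixel; the unfilled positions are exactly where a fall-back or neighbor filling would be required.

    import numpy as np

    def warp_flow_to_new_frame(flow01, t):
        # flow01: (H, W, 2) flow between the two known frames, defined on the
        #         first frame's pixel grid; t: position of the new frame in (0, 1).
        H, W, _ = flow01.shape
        new_flow = np.zeros_like(flow01)
        hit = np.zeros((H, W), dtype=bool)

        ys, xs = np.mgrid[0:H, 0:W]
        # Each trajectory passes through the new frame at its time-t position.
        xt = np.round(xs + t * flow01[..., 0]).astype(int)
        yt = np.round(ys + t * flow01[..., 1]).astype(int)
        inside = (xt >= 0) & (xt < W) & (yt >= 0) & (yt < H)

        new_flow[yt[inside], xt[inside]] = flow01[inside]
        hit[yt[inside], xt[inside]] = True

        # Pixels never reached by any trajectory have no flow vector at all.
        holes = ~hit
        return new_flow, holes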
In order to work around this problem, we use an approach that aims at recovering both the image sequence and the motion field simultaneously. In actual computations a purely simultaneous approach would become very complex, so instead we use an iterative procedure: given an estimate of the image sequence, we can update the estimate of the motion field, and given an estimate of the motion field, we can produce a new estimate of the image sequence. This procedure is embedded into a