
problem. This section also describes the Euler-Lagrange equations, which are important tools for solving problems based on the calculus of variations. The design of the weighting function and the formulation of the problem in a variational framework are provided in Sect. 6.3. Sections 6.4 and 6.5 provide the implementation details and the corresponding results of fusion over a couple of hyperspectral datasets. A summary is presented in Sect. 6.6.

6.2 Calculus of Variations

Computer vision usually deals with problems that are ill-posed in the Hadamard sense. Hadamard defined a mathematical problem to be well-posed when the solution [139] (i) exists, (ii) is unique, and (iii) depends continuously on the input data. The problems that are not well-posed in the Hadamard sense are said to be ill-posed. Generally, inverse problems are ill-posed. A few examples of ill-posed problems are estimation of the structure underlying the scene, computed tomography, and super-resolution.
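The instability of such inverse problems is easy to demonstrate numerically. As a minimal sketch (the 2×2 matrix and the perturbation below are illustrative choices, not taken from the text), a nearly singular model A lets a tiny change in the data produce a wildly different solution:

```python
import numpy as np

# A nearly singular model matrix: direct inversion is ill-conditioned.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
print(np.linalg.cond(A))  # very large condition number

rho = np.array([2.0, 2.0001])             # clean data
rho_noisy = rho + np.array([0.0, 1e-3])   # tiny perturbation of the data

zeta = np.linalg.solve(A, rho)
zeta_noisy = np.linalg.solve(A, rho_noisy)

# A change of about 0.005% in the data shifts the solution by an O(10) amount.
print(zeta)        # [1. 1.]
print(zeta_noisy)
```

This violation of Hadamard's third condition (continuous dependence on the data) is exactly what the constraints discussed next are meant to repair.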

The ill-posed problems are often converted into well-posed problems by introducing appropriate constraints. This process forces the solution to lie in the subspace where it is well defined [139]. This process is known as regularization, which reformulates the problem for a better numerical analysis. Consider an ill-posed problem of finding ζ from the data ρ, where they are related by the model A, such that

Aζ = ρ.

(The ill-posedness of the problem could be due to either the non-existence or the ill-conditioning of A⁻¹, in which case the solution ζ may not exist.) The most commonly used regularization technique in such cases is to select a suitable norm, and introduce an additional constraint over ζ in the form of a stabilizing functional Γ. This process is known as the Tikhonov regularization. The regularization term is often weighted in order to decide the preference between the particular constraint and the data fitting term. Thus, the overall problem can be formulated as the minimization of the following functional:

||Aζ − ρ||₂² + λ ||Γζ||₂²,

where the scalar λ controls the relative weightage given to the constraint against the data fitting term. It is known as the regularization parameter. The choice of the regularization functional is guided by a physical analysis of the system and its mathematical considerations.
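For a linear model, the minimizer of the Tikhonov functional satisfies the normal equations (AᵀA + λΓᵀΓ)ζ = Aᵀρ, obtained by setting the gradient of the functional to zero. A minimal sketch in Python (the matrices, the identity stabilizer, and the value of λ are illustrative choices, not from the text):

```python
import numpy as np

def tikhonov_solve(A, rho, G, lam):
    """Minimize ||A z - rho||^2 + lam * ||G z||^2 via the normal equations
    (A^T A + lam * G^T G) z = A^T rho."""
    lhs = A.T @ A + lam * (G.T @ G)
    return np.linalg.solve(lhs, A.T @ rho)

# An ill-conditioned 2x2 system with noisy data (illustrative values).
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
rho = np.array([2.0, 2.0011])   # data with a small perturbation
G = np.eye(2)                    # identity stabilizer Gamma

z_raw = np.linalg.solve(A, rho)              # unregularized: wildly off
z_reg = tikhonov_solve(A, rho, G, lam=1e-3)  # regularized: near [1, 1]
print(z_raw)
print(z_reg)
```

Even a small λ tames the ill-conditioning: the regularized solution stays close to the balanced answer while the direct inverse is thrown far off by the data perturbation.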

Since the early days of computer vision, smoothness has been one of the most commonly employed constraints. Natural surfaces are generally smooth, and so are natural images except at edges. This choice is popular because of its (i) realistic and practical nature, (ii) physical interpretation, and (iii) mathematical elegance. The last point refers to the fact that smoothness can be easily incorporated by penalizing large deviations, i.e., by penalizing the higher values of the derivative. The smoothness constraint has been adopted in several algorithms, a few examples of which include shape from shading [76, 204], stereo analysis, optical flow estimation [8, 125], image compositing [108, 145], active contours [85], and super-resolution [82].
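As a minimal sketch of penalizing the derivative (the signal, noise level, and λ below are illustrative choices, not from the text), one can take the stabilizer Γ to be a first-order finite-difference operator D and denoise a 1-D signal by minimizing ||ζ − ρ||² + λ||Dζ||², i.e., the Tikhonov functional with A as the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
t = np.linspace(0.0, 1.0, n)
clean = np.sin(2.0 * np.pi * t)              # smooth underlying signal
rho = clean + 0.3 * rng.standard_normal(n)   # noisy observation

# D approximates the derivative: (D z)[i] = z[i+1] - z[i].
D = np.diff(np.eye(n), axis=0)

# Minimize ||z - rho||^2 + lam * ||D z||^2  =>  (I + lam * D^T D) z = rho.
lam = 10.0
z = np.linalg.solve(np.eye(n) + lam * (D.T @ D), rho)

# The smoothness-regularized estimate is closer to the clean signal
# than the raw noisy data.
print(np.linalg.norm(rho - clean), np.linalg.norm(z - clean))
```

Penalizing ||Dζ||² suppresses the rapidly oscillating noise while leaving the slowly varying signal largely intact, which is the sense in which smoothness makes the estimation well-posed.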
