$$\tilde{X} = CX, \qquad \tilde{Y} = CY \qquad \text{and} \qquad \tilde{Z} = CZ,$$
where $\circ$ represents the element-wise matrix multiplication and $\theta$ is a real number that can be seen as a normalisation factor. A convenient choice for $\theta$ is:
$$\theta = \|CZ\|_2 \qquad (15)$$
where $\|\cdot\|_2$ represents the $L_2$ norm of the vector $CZ$.
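For illustration, this choice of $\theta$ is a one-liner in NumPy (a sketch, not from the paper; the matrix $C$ and the vector $Z$ are filled with placeholder random values here, since their construction is defined earlier in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
C = rng.standard_normal((n, n))  # placeholder for the matrix C defined earlier
Z = rng.standard_normal(n)       # placeholder for the Z-coordinates

theta = np.linalg.norm(C @ Z)    # theta = ||CZ||_2, eq. (15)
```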
$J_X$ and $J_Y$ are each the sum of two terms. The first one minimises the sum of the surfaces of the triangles formed by two connected points and the projection of one of the points on the $Z$-axis. As a result, the density of the points increases in the areas where the variations in $Z$ are large. The second terms ($\theta (X - \bar{X})^t Q_X (X - \bar{X})$ or $\theta (Y - \bar{Y})^t Q_Y (Y - \bar{Y})$, where $\bar{X}$ and $\bar{Y}$ denote the initial coordinates) weight the solution with respect to the initial coordinates of the points, to avoid large movements in the grid.
4.2 Convergence
Consider the optimisation problem of the function $J_X$ (the convergence of $J_Y$ can be proven in a similar fashion). The element-wise product of $X$ by $X$ can be written as follows:
$$X \circ X = \sum_i G_i X \, g_i^t X \qquad (16)$$
where $G_i$ is a square matrix whose elements are null except the $i$-th element of the diagonal, which is equal to 1, and $g_i$ is a vector whose elements are null except the $i$-th, which is equal to 1.
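The decomposition (16) is straightforward to check numerically; a minimal NumPy verification, with the matrices $G_i$ and vectors $g_i$ built exactly as defined above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
X = rng.standard_normal(n)

lhs = X * X                    # element-wise product X ∘ X
rhs = np.zeros(n)
for i in range(n):
    g = np.zeros(n)
    g[i] = 1.0                 # g_i: null except the i-th element, equal to 1
    G = np.outer(g, g)         # G_i: null except the (i, i) element, equal to 1
    rhs += (G @ X) * (g @ X)   # G_i X g_i^t X
assert np.allclose(lhs, rhs)   # eq. (16) holds
```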
If $u_i^t = (\tilde{Z} \circ \tilde{Z})^t G_i$, $J_X$ can be expressed as
$$J_X = \frac{1}{2}\left[\sum_i u_i^t \tilde{X} \, g_i^t \tilde{X} + \theta (X - \bar{X})^t Q_X (X - \bar{X})\right] \qquad (17)$$
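Since $u_i^t \tilde{X} \, g_i^t \tilde{X} = (CZ)_i^2 (CX)_i^2$, the sum in (17) can be evaluated without an explicit loop over $i$. A minimal sketch of the cost, assuming $C$, $Q_X$, $\theta$ and the initial coordinates $\bar{X}$ are given (the function name and signature are illustrative, not from the paper):

```python
import numpy as np

def j_x(X, X_bar, Z, C, Q_x, theta):
    """Cost J_X of eq. (17): triangle-surface term plus displacement weighting."""
    X_t = C @ X                        # X~ = CX
    Z_t = C @ Z                        # Z~ = CZ
    surface = np.sum(Z_t**2 * X_t**2)  # sum_i u_i^t X~ g_i^t X~
    d = X - X_bar                      # displacement from the initial coordinates
    return 0.5 * (surface + theta * d @ Q_x @ d)
```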
The gradient is then equal to
$$\nabla_X J_X = \sum_i u_i g_i^t C X + \theta Q_X (X - \bar{X}) \qquad (18)$$
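The gradient (18) maps to NumPy in the same way, using $\sum_i u_i g_i^t = \mathrm{diag}(\tilde{Z} \circ \tilde{Z})$ (same illustrative conventions as the sketch above):

```python
import numpy as np

def grad_j_x(X, X_bar, Z, C, Q_x, theta):
    """Gradient of J_X, eq. (18)."""
    Z_t = C @ Z
    D = np.diag(Z_t**2)                # sum_i u_i g_i^t
    return D @ (C @ X) + theta * Q_x @ (X - X_bar)
```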
At the optimum, $\nabla_X J_X = 0$:
$$\sum_i u_i g_i^t C X_{opt} + \theta Q_X (X_{opt} - \bar{X}) = 0 \qquad (19)$$
$$\left(\sum_i u_i g_i^t C + \theta Q_X\right) X_{opt} = \theta Q_X \bar{X} \qquad (20)$$
The inverse of $\left(\sum_i u_i g_i^t C + \theta Q_X\right)$ exists, and for small-size problems the above equation may be solved directly to give
$$X_{opt} = \left(\sum_i u_i g_i^t C + \theta Q_X\right)^{-1} \theta Q_X \bar{X} \qquad (21)$$
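For a small problem, (21) corresponds to a single linear solve; solving the system of eq. (20) is cheaper and numerically safer than forming the inverse explicitly. A sketch under the same illustrative conventions:

```python
import numpy as np

def solve_x_opt(X_bar, Z, C, Q_x, theta):
    """Closed-form optimum of eq. (21), via a linear solve rather than an explicit inverse."""
    D = np.diag((C @ Z)**2)          # sum_i u_i g_i^t
    A = D @ C + theta * Q_x          # sum_i u_i g_i^t C + theta Q_X, eq. (20)
    return np.linalg.solve(A, theta * Q_x @ X_bar)
```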
For large-size problems, the inversion of the matrix is computationally too expensive. The gradient descent method can then be applied; its update can be written as
$$X^{(k+1)} = X^{(k)} - \alpha \, \nabla_X J_X\left(X^{(k)}\right)$$
where $\alpha$ is the step size.
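A plain fixed-step gradient-descent loop suffices; the step size, tolerance and iteration cap below are illustrative choices, not values from the paper (grad_j_x is the sketch given after eq. (18)):

```python
import numpy as np

def gradient_descent(X_bar, Z, C, Q_x, theta, alpha=1e-3, tol=1e-8, max_iter=10_000):
    """Minimise J_X by gradient descent, starting from the initial coordinates."""
    X = X_bar.astype(float).copy()
    for _ in range(max_iter):
        g = grad_j_x(X, X_bar, Z, C, Q_x, theta)
        X -= alpha * g                 # X <- X - alpha * grad J_X
        if np.linalg.norm(g) < tol:    # stop once the gradient is small
            break
    return X
```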