Image Processing Reference
In-Depth Information
similar patterns in the input space are grouped together and associated with a single
neuron. We will briefly review the use of self-organizing maps (SOM) [58]. These
are single-layered networks constituted by a two-dimensional lattice of nodes: each
node is a pattern in the input space. At the beginning of the training process of the
network, the nodes are initialized randomly, and then the centers are updated
through an iterative process: an element of the input space is selected at
random, the winner node, i.e., the node whose center is closest to the input
element, is found, and all the centers of the lattice are updated. The rule
is that the centers modified most heavily are those of the winner node and its
neighbors. The update function for all the cluster centers c_i is

c_i(t + 1) = c_i(t) + h(t, c_i)(x - c_i)    (16.16)
where x is the input data element and h(t, c_i) is the neighborhood function,
which takes into account the distance between centers in the lattice and thereby
selects and modifies the neighboring centers. The neighborhood function must
decrease as the learning process evolves in order to achieve convergence.
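The training loop just described can be sketched as follows. This is a minimal illustration, not a reference implementation: the lattice size, learning-rate schedule, and Gaussian form of the shrinking neighborhood function are illustrative assumptions, and the input data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices: a 10x10 lattice of nodes, 2-D input patterns
grid_h, grid_w, dim = 10, 10, 2
centers = rng.random((grid_h, grid_w, dim))       # random initialization
# lattice coordinates of each node, used for the neighborhood distance
coords = np.stack(
    np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"),
    axis=-1,
)

data = rng.random((500, dim))                     # toy input space
n_iter = 2000
for t in range(n_iter):
    x = data[rng.integers(len(data))]             # random input element
    # winner node: the node whose center is closest to x
    dists = np.linalg.norm(centers - x, axis=-1)
    win = np.unravel_index(np.argmin(dists), dists.shape)
    # neighborhood function h(t, c_i): Gaussian on lattice distance,
    # shrinking over time so that the process converges
    sigma = 3.0 * np.exp(-t / n_iter)
    lr = 0.5 * np.exp(-t / n_iter)
    lat_d2 = np.sum((coords - np.array(win)) ** 2, axis=-1)
    h = lr * np.exp(-lat_d2 / (2 * sigma**2))
    # update all centers: c_i(t+1) = c_i(t) + h(t, c_i)(x - c_i)
    centers += h[..., None] * (x - centers)
```

Because the update is a convex step toward the selected input, the winner and its lattice neighbors drift toward dense regions of the input space while distant nodes are barely modified, which is what preserves the lattice topology.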
The SOM algorithm results in a topology-preserving algorithm, facilitating the
merging operation of neighboring nodes to create super clusters from smaller
clusters whose distances are small [22]. This operation allows the separation of
both small and large clusters from the same data set, overcoming some limitations
of other algorithms, such as k-means, which tends to produce homogeneous
clusters. Moreover, this reduces the importance of the initial choice regarding the
number of expected clusters. In Reference 23 the algorithm was modified in order
to take into account spatial proximity among the voxels in the original image.
16.4 PCA
This is a multivariate technique that decomposes the data into a set of linearly
independent components ordered according to explained data variance. This
method is related to the Karhunen-Loève transform or the Hotelling transform
[59], and was first proposed by Pearson [60]. PCA has found applications in data
compression, image and statistical data analysis, and has been used in fMRI data
analysis in order to explore and decompose the correlations in spatial or temporal
domains present in the data set [27]. This analysis results in eigenimages and
associated time vectors. The images can be seen as maps of functional
connectivity [28,61,62] because they share the same temporal pattern. PCA in functional
neuroimaging studies was first applied to PET data in Reference 62: the temporal
resolution of PET allowed the acquisition of 12 images, alternating between a letter
repetition task and a word generation task. The analysis of regional cerebral blood
flow (rCBF) by means of PCA resulted in an eigenimage, or spatial distribution
of voxel values, with positive loadings in regions involved in verbal fluency. The
associated time pattern can be seen as a modulating function of the loadings and
showed high levels during the verbal fluency task and low levels during the letter
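The decomposition described above can be sketched with a singular value decomposition of the centered data matrix. The matrix shape and data here are synthetic stand-ins: with fMRI or PET data the rows would be scans and the columns voxels, so the right singular vectors play the role of eigenimages and the left singular vectors, scaled by the singular values, give the associated time courses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_vox = 12, 200                    # e.g., 12 scans, 200 voxels
X = rng.standard_normal((n_time, n_vox))   # synthetic data matrix

Xc = X - X.mean(axis=0)                    # center each voxel's time course
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

eigenimages = Vt                           # spatial patterns, one per component
time_courses = U * S                       # associated temporal patterns
explained_var = S**2 / np.sum(S**2)        # fraction of variance per component
```

The components are mutually orthogonal (hence linearly independent) and come out ordered by explained variance, matching the description at the start of this section.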