2.3 Selected Soft-Computing Methods
In the following, we focus our investigations on two method groups. With the
objective of achieving a (semi)automatic monitoring and control system, selected
classification methods are considered first. These serve the purpose of
automatically assessing or classifying online generated process data with regard
to its relevance and potential storage, as well as determining the current
process state within the process window. As the second group, methods for
offline exploratory data analysis are considered. We focus on relevant methods
of dimensionality reduction and interactive visualization that allow us to
extract nonobvious structure and underlying dependencies from the database. The
results obtained with these methods also provide the baseline for the design of
effective (semi)automatic classification methods.
2.3.1 Novelty or Anomaly Detection
For the (semi)automatic classification task, powerful decision units are
required that can deal with complex, nonlinearly separable, nonparametric,
and potentially multimodal data. For instance, k-nearest-neighbor classifiers
(kNN), multi-layer perceptrons (MLP), radial-basis-function networks (RBF),
or, more recently, support-vector machines (SVM) are attractive candidates
for this task. In the context of the regarded application, predominantly
decision trees, adaptive-resonance-theory (ART) networks, and MLPs have been
applied so far (see, e.g., [2.53], [2.36], [2.3]). However, especially RBF
networks are intriguing for this application due to numerous salient features.
In addition to being universal function approximators, RBF networks provide
iterative topology learning, rapid training, fast convergence, and excellent,
predictable generalization capabilities [2.4], [2.43], [2.44]. In contrast to
MLPs, the hidden layer of RBF networks comprises distance computation units
equipped with a radially declining nonlinearity. The Euclidean distance and
the Gaussian function are typical choices for RBF networks, which are closely
related to the Parzen-window technique [2.41]. However, storing all
sample patterns is a significant burden with regard to storage and computation
requirements. Thus, generalized RBF networks [2.4], i.e., networks with fewer
hidden neurons N than training patterns, are typically applied. For the case
of a one-dimensional function s(x), such a network is given by

    s(x) = Σ_{i=1}^{N} w_i φ_i(||x − t_i||),   x ∈ ℝ^M.        (2.1)
Here, t_i denotes the centroid vector of the i-th basis function, φ_i denotes
the radial basis function, w_i denotes the weight for the linear combination
of the basis function outputs by the output neuron, x denotes an input vector,
M denotes the dimension of the input vector, and N denotes the number of
hidden neurons. Judicious and efficient choice of a sufficient but minimum
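As an illustration, the forward pass of Eq. (2.1) with the Euclidean distance and a Gaussian basis function can be sketched as follows; the function name, the array shapes, and the single shared width sigma are illustrative assumptions, not part of the original formulation:

```python
import numpy as np

def rbf_forward(x, centroids, weights, sigma=1.0):
    """Sketch of a generalized RBF network output, Eq. (2.1):
    s(x) = sum_i w_i * phi_i(||x - t_i||), with Gaussian phi.

    x:         input vector, shape (M,)
    centroids: basis-function centroids t_i, shape (N, M)
    weights:   output weights w_i, shape (N,)
    sigma:     shared Gaussian width (an assumption; in general
               each phi_i may have its own width)
    """
    # Hidden layer: Euclidean distances ||x - t_i|| to all centroids
    d = np.linalg.norm(centroids - x, axis=1)
    # Radially declining nonlinearity: Gaussian of the distance
    phi = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
    # Output neuron: linear combination of the basis-function outputs
    return float(weights @ phi)

# Example: two centroids in M = 2 dimensions, N = 2 hidden neurons
centroids = np.array([[0.0, 0.0], [1.0, 1.0]])
weights = np.array([2.0, 3.0])
s = rbf_forward(np.array([0.0, 0.0]), centroids, weights)
```

With the input sitting on the first centroid, its distance is 0 (so its Gaussian evaluates to 1), while the second centroid at distance √2 contributes exp(−1), giving s = 2 + 3·exp(−1).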