The learning algorithm for ANFIS is a hybrid algorithm that combines the gradient descent method and the least squares method for identifying the nonlinear input parameters {a_i, b_i, c_i} and the linear output parameters {p_i, q_i, r_i}, respectively. The ANFIS modeling was performed using the subtractive fuzzy clustering function because of its good performance with a small number of rules. Further detailed information on ANFIS can be found in Jang [39].
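The hybrid update described above can be sketched in a few lines of NumPy. This is a minimal illustration, not Jang's full ANFIS: a first-order Sugeno model with two Gaussian rules on a scalar input, where finite differences stand in for the analytic gradients, and the toy target function and all variable names are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of one ANFIS hybrid-learning pass (illustrative only):
# Gaussian premise parameters {a_i (width), c_i (centre)} play the role of
# the nonlinear input parameters; the rule consequents y_i = p_i*x + r_i
# hold the linear output parameters, solved by least squares.

x = np.linspace(0.0, 1.0, 50)
y = np.tanh(4.0 * (x - 0.5))          # assumed toy target to fit

a = np.array([0.3, 0.3])              # membership widths
c = np.array([0.2, 0.8])              # membership centres

def design_matrix(a, c):
    """Normalised firing strengths -> least-squares design matrix."""
    w = np.exp(-((x[:, None] - c) ** 2) / (2.0 * a ** 2))
    wbar = w / w.sum(axis=1, keepdims=True)
    return np.hstack([wbar * x[:, None], wbar])    # columns: w*x, w

def sse(a, c):
    """Forward pass: solve the linear consequents by least squares."""
    A = design_matrix(a, c)
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)  # [p1, p2, r1, r2]
    r = y - A @ theta
    return float(r @ r)

# Gradient-descent half on the premise centres (finite differences stand
# in for the analytic derivatives used in the real algorithm).
eps, lr = 1e-6, 0.05
grad = np.array([(sse(a, c + eps * np.eye(2)[i]) - sse(a, c)) / eps
                 for i in range(2)])
c = c - lr * grad

rmse = np.sqrt(sse(a, c) / len(x))
print(f"RMSE after one hybrid pass: {rmse:.4f}")
```

Alternating these two half-steps — least squares for the consequents with the premises frozen, then gradient descent on the premises — is the essence of the hybrid rule.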
4.6.4 Support Vector Machines
Support vector machines (SVMs) are state-of-the-art 'kernel method'-based classification tools introduced in 1992 by Boser, Guyon, and Vapnik. SVMs are essentially a sub-discipline of machine learning. Applications of SVMs have produced promising results in various fields of science. However, they have not yet been fully exploited in engineering. SVMs are derived from the structural risk minimization principle, which minimizes both the empirical risk and the confidence interval of the learning machine, and which in turn helps to produce good generalization capability.
SVM-based modeling is becoming popular in the field of hydrology, just like other soft computing techniques. SVM was developed at AT&T Bell Laboratories by Vapnik and co-workers in the early 1990s [11]. SVMs for regression were first introduced in Vapnik [93], a paper at the COLT conference, and the first applications were reported in the late 1990s [22]. Just as with ANNs, an SVM can be represented as a two-layer network (where the weights are nonlinear in the first layer
and linear in the second). It has two major advantages: (1) it can generate nonlinear decision boundaries using linear classifiers, and (2) the use of kernel functions allows the user to apply a classifier to data that have no obvious fixed-dimensional vector space representation [9]. SVM maps the original input data sets from the input space to a high- or even infinite-dimensional feature space, which makes the classification problem much simpler [96]. SVMs were originally developed as a classification approach for linearly separable classes of objects. As shown in Fig. 4.15, SVM tries to find the unique hyperplane that produces the maximum margin between the hyperplane and the nearest data points of each class. The objects of either class (H1 with * objects and H2 with + objects in Fig. 4.15) which fall exactly on the hyperplanes H1 and H2 are termed support vectors, and these are the most important training points in the whole input space. In real-life problems, the original input space can always be mapped to some higher-dimensional feature space (known as a Hilbert space) using nonlinear functions called feature functions (Raghavendra and Deka 2014). The feature space is high dimensional, however, so it is not practically feasible to construct the separating hyperplane directly in the feature space. In this scenario, kernels can be applied for nonlinear mapping.
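These ideas can be illustrated compactly with scikit-learn's `SVC` (the choice of library, kernel parameters, and the XOR-style toy data are assumptions for the example, not part of the text): an RBF kernel lets a maximum-margin hyperplane in the implicit feature space separate points that no hyperplane in the original input space can.

```python
import numpy as np
from sklearn.svm import SVC

# XOR-style data: no single hyperplane in the 2-D input space separates
# the classes, but the RBF kernel implicitly maps the points to a feature
# space where a maximum-margin separating hyperplane exists.
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="rbf", gamma=1.0, C=10.0)
clf.fit(X, y)

# The points lying on the margin hyperplanes are the support vectors.
print("training accuracy:", clf.score(X, y))
print("number of support vectors:", len(clf.support_vectors_))
```

With `kernel="linear"` the same model cannot reach full accuracy on these four points, which is the practical content of the kernel-mapping argument above.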