

Figure 7-4 Support vector machine: (a) two-dimensional hyperplane, (b) with support vectors.

As shown in Figure 7-4(a), the line that divides the target values Attriter and Non-attriter is called a hyperplane. A hyperplane is an equation that divides the space along N attribute dimensions, where N is the number of predictor attributes. To understand the concept of support vectors, we look at two-dimensional space. In Figure 7-4(b), the data points closest to the hyperplane that separates Attriters from Non-attriters, those that the margin pushes up against, are called support vectors. The margin is the minimal distance between the data points and the hyperplane that divides Attriters and Non-attriters.
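To make these definitions concrete, the following sketch (not from the book) computes a linear SVM decision function w·x + b and the geometric margin of a point in two-dimensional space. The weight vector, bias, and data point are made-up illustrations, not values produced by any JDM implementation.

```java
// Sketch: decision function and margin of a linear SVM in 2-D.
// The hyperplane is the set of points x with w.x + b = 0; a point's
// geometric margin is |w.x + b| / ||w||. Support vectors are the
// training points whose margin equals the minimum over the data set.
public class LinearSvmSketch {

    // Signed distance-like score: positive on one side of the
    // hyperplane (e.g., Attriter), negative on the other.
    static double decision(double[] w, double b, double[] x) {
        double s = b;
        for (int i = 0; i < w.length; i++) {
            s += w[i] * x[i];
        }
        return s;
    }

    // Geometric margin of a single point: |w.x + b| / ||w||.
    static double geometricMargin(double[] w, double b, double[] x) {
        double norm = 0;
        for (double wi : w) {
            norm += wi * wi;
        }
        return Math.abs(decision(w, b, x)) / Math.sqrt(norm);
    }

    public static void main(String[] args) {
        double[] w = {1.0, -1.0};  // hypothetical learned weights
        double b = 0.0;            // hypothetical bias
        double[] point = {3.0, 1.0};
        // decision = 1*3 + (-1)*1 = 2.0, so the point is classified
        // on the positive side of the hyperplane.
        System.out.println(decision(w, b, point) > 0 ? "Attriter" : "Non-attriter");
        System.out.println(geometricMargin(w, b, point)); // ~1.414
    }
}
```

The SVM training procedure chooses w and b so that this minimal margin is maximized, which is why the hyperplane in Figure 7-4 is called the maximum-margin hyperplane.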

SVM allows the selection of a kernel function. Kernel functions map the data into a high-dimensional vector space and look for relations in that space. Many kernel functions have been introduced in the data mining community; JDM includes kLinear, kGaussian, hypertangent, polynomial, and sigmoid. For more details about SVM and kernel functions, refer to [Cristianini/Shawe-Taylor 2000].
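As a sketch of what two of these kernels compute, the code below implements the standard formulas for a linear kernel (a plain dot product) and a Gaussian (RBF) kernel. This is illustrative only, not the JDM API; the width parameter sigma is an assumed name, not a JDM setting.

```java
// Sketch: the mathematical form of two common SVM kernels.
// kLinear(a, b)   = a . b
// kGaussian(a, b) = exp(-||a - b||^2 / (2 * sigma^2))
public class KernelSketch {

    // Linear kernel: similarity as a dot product in the input space.
    static double linear(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) {
            s += a[i] * b[i];
        }
        return s;
    }

    // Gaussian (RBF) kernel: similarity decays with squared
    // Euclidean distance; identical points score 1.0.
    static double gaussian(double[] a, double[] b, double sigma) {
        double d2 = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            d2 += d * d;
        }
        return Math.exp(-d2 / (2 * sigma * sigma));
    }

    public static void main(String[] args) {
        double[] x = {1.0, 2.0};
        double[] y = {2.0, 0.0};
        System.out.println(linear(x, y));        // 1*2 + 2*0 = 2.0
        System.out.println(gaussian(x, x, 1.0)); // identical points -> 1.0
        System.out.println(gaussian(x, y, 1.0)); // between 0 and 1
    }
}
```

The Gaussian kernel corresponds to an implicit mapping into an infinite-dimensional space, which is what lets an SVM separate classes that are not linearly separable in the original attribute space.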

Feed Forward Neural Networks

Multilayer feed-forward neural networks with a back-propagation learning algorithm are among the most popular neural network techniques used for supervised learning. Although neural networks often take longer to build than other algorithms and do not produce interpretable results, they are popular for their predictive accuracy and high tolerance of noisy data.

Overview

A neural network is an interconnected group of simulated neurons that represents a computational model for information processing. A simulated neuron is a mathematical model that takes one or more inputs and produces one output. The output is calculated by multiplying each input by a corresponding weight, and combining them to

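The weighted-sum neuron described above can be sketched as follows. The page cuts off mid-sentence, so the sigmoid activation and the bias term below are common assumptions about how the combination step finishes, not details taken from the text.

```java
// Sketch: a single simulated neuron. Each input is multiplied by its
// weight, the products are summed (with an assumed bias term), and the
// sum is passed through an activation function (an assumed logistic
// sigmoid) to produce the single output.
public class NeuronSketch {

    // Logistic sigmoid: squashes any real number into (0, 1).
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // One neuron: weighted sum of inputs plus bias, then activation.
    static double neuron(double[] inputs, double[] weights, double bias) {
        double z = bias;
        for (int i = 0; i < inputs.length; i++) {
            z += inputs[i] * weights[i];
        }
        return sigmoid(z);
    }

    public static void main(String[] args) {
        // Hypothetical inputs and weights for illustration.
        double out = neuron(new double[]{1.0, 0.5},
                            new double[]{0.4, -0.2},
                            0.0);
        System.out.println(out); // single output, strictly between 0 and 1
    }
}
```

In a multilayer feed-forward network, the outputs of one layer of such neurons become the inputs of the next, and back-propagation adjusts the weights to reduce prediction error.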