base is an advantage from the viewpoint of ANN. An adaptive neuro-fuzzy
IDS is proposed in Shah et al.8 Abraham et al.31 proposed three fuzzy
rule-based classifiers to detect intrusions in a network. Further, a
distributed soft-computing-based IDS was modeled by Abraham et al.31 as
a combination of different classifiers to build lightweight and more accurate
(heavyweight) IDSs.
MARS is an innovative approach that automates the building of
accurate predictive models for continuous and binary dependent variables.
It excels at finding optimal variable transformations and interactions, and
the complex data structures that often hide in high-dimensional data. An
IDS based on MARS technology is proposed in Mukkamala et al.7 LGP is a
variant of the conventional genetic programming (GP) technique that acts
on linear genomes. An LGP-based IDS is presented in Mukkamala et al.6
Intrusion detection systems based on the human immune system
have been proposed in Esponda et al.32 and Hofmeyr and Forrest.33
Hofmeyr and Forrest proposed a formal framework for anomaly detection
in computer systems, inspired by the characteristics of the natural immune
system. Hofmeyr and Forrest33 applied concepts derived from the natural
immune system to design and test an artificial immune system that detects
network intrusions.
5.3. Preliminaries
5.3.1. Naive Bayesian classifier
Classification is considered as the task of assigning a sample to one of the k
classes, {C_1, C_2, ..., C_k}, based on the n-dimensional observed feature
vector x. Let p(x | C_i) be the probability density function for the feature
vector x when the true class of the sample is C_i. Also, let P(C_i) be
the relative frequency of occurrence of class C_i in the samples. If no feature
information is available, the probability that a new sample will be of class
C_i is P(C_i); this probability is referred to as the a priori or prior probability.
Once the feature values are obtained, we can combine the prior probability
with the class-conditional probability for the feature vector, p(x | C_i), to
obtain the posterior probability that a pattern belongs to a particular
class. This combination is done using Bayes' theorem:
P(C_i \mid x) = \frac{p(x \mid C_i)\, P(C_i)}{\sum_{j=1}^{k} p(x \mid C_j)\, P(C_j)}    (5.1)
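As a rough illustration of Eq. (5.1), the sketch below computes the posterior P(C_i | x) for a small hypothetical two-class problem, assuming Gaussian class-conditional densities p(x | C_i); the priors, means, and covariances are invented for the example and are not values from this chapter.

# Minimal sketch of the posterior computation in Eq. (5.1).
# Assumes Gaussian class-conditional densities; all numbers below are
# illustrative placeholders, not data from the text.
import numpy as np
from scipy.stats import multivariate_normal

def posterior(x, priors, means, covs):
    """Return P(C_i | x) for every class i via Bayes' theorem (Eq. 5.1)."""
    # Class-conditional likelihoods p(x | C_i), modeled here as Gaussians.
    likelihoods = np.array([
        multivariate_normal.pdf(x, mean=m, cov=c) for m, c in zip(means, covs)
    ])
    joint = likelihoods * np.asarray(priors)   # p(x | C_i) P(C_i)
    return joint / joint.sum()                 # divide by the sum over all classes

# Hypothetical two-class, two-feature example.
priors = [0.7, 0.3]                            # P(C_1), P(C_2)
means = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
covs = [np.eye(2), np.eye(2)]
print(posterior(np.array([1.0, 1.2]), priors, means, covs))

The classifier then assigns the sample to the class with the largest posterior; because the denominator in Eq. (5.1) is the same for every class, only the products p(x | C_i) P(C_i) need to be compared in practice.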