probability distribution is fixed. The second available information source is the ORACLE: in its basic version, when an example is submitted, it tells the learner whether the example is a positive example of the concept. Assume that X is an example space and that a concept is a subset of X. If an example lies in the concept, it is a positive example; otherwise it is a negative example. A concept representation is a description of a concept, and a concept class is a set of concept representations. The learning model is concerned with the effective learnability of a concept class.
Valiant's learning theory only requires that a good approximation to the target concept be produced with high probability. It permits the concept description generated by the learner to differ from the target concept by a small error ε, which is an input argument of the learning algorithm. In addition, the probability with which the learner is allowed to fail is δ, which is also an input argument. The difference between two concepts c1 and c2 is evaluated using the probability distribution D over the example space X:
diff_D(c1, c2) = Σ_{x ∈ X : c1(x) ≠ c2(x)} D(x)        (7.37)
According to the protocol, a concept class C is learnable if and only if there exists an algorithm which, using the protocol, for every target concept c* ∈ C and every distribution D satisfies:
(1) Its running time is polynomial in 1/ε, 1/δ, the size of c*, and the other arguments.
(2) The concept c it outputs satisfies diff_D(c, c*) < ε with probability at least 1 − δ.
In Valiant's learning theory, there are two complexity measures. One is sample complexity, the number of random examples needed to achieve high probability and low error. The other is computational complexity, defined as the worst-case computing time needed to generate a hypothesis from a given number of samples.
Assume that L is a learning algorithm and that C is a class of target concepts over the example space X. For any 0 < ε, δ < 1, S_L(ε, δ) denotes the minimum number of samples m such that for every target concept c ∈ C and every distribution D on X, given m samples of c, L generates a hypothesis whose error is at most ε with probability at least 1 − δ. S_L(ε, δ) is referred to as the sample complexity of L on the target class C.
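For a finite concept class, a consistent learner's sample complexity is bounded by the well-known PAC bound m ≥ (1/ε)(ln |C| + ln(1/δ)). This bound comes from standard PAC-learning theory rather than from the text above; the sketch below evaluates it for conjunctions, whose class size 3^n is a standard count (each of the n attributes appears positively, negatively, or not at all).

```python
import math

def sample_bound(class_size, eps, delta):
    """Smallest integer m with m >= (1/eps) * (ln|C| + ln(1/delta)),
    the standard PAC sample-complexity bound for a finite class."""
    return math.ceil((math.log(class_size) + math.log(1.0 / delta)) / eps)

# Conjunctions over n boolean attributes: |C| = 3**n.
n = 10
print(sample_bound(3 ** n, eps=0.1, delta=0.05))  # -> 140
```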
In the following, we discuss the sample complexity of two learning algorithms.
a) Classification algorithm for learning conjunctive concepts
The algorithm is as follows:
(1) For a given sample, find the minimum main atom of each attribute. These
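Since step (1) is cut off here, the following is only a hedged sketch of the classic elimination approach to learning a conjunction over boolean attributes: start from the conjunction of all 2n literals and delete every literal contradicted by a positive example. The function names and the example data are illustrative assumptions, not the author's exact procedure.

```python
def learn_conjunction(positive_examples, n):
    """Keep, for each attribute i, flags for the literals x_i and not-x_i;
    delete a literal as soon as a positive example contradicts it."""
    literals = [[True, True] for _ in range(n)]  # [keep x_i, keep not-x_i]
    for ex in positive_examples:                 # ex: tuple of n 0/1 values
        for i, value in enumerate(ex):
            if value:
                literals[i][1] = False           # not-x_i is contradicted
            else:
                literals[i][0] = False           # x_i is contradicted
    return literals

def predict(literals, ex):
    """True iff the example satisfies every surviving literal."""
    return all((not pos or v) and (not neg or not v)
               for (pos, neg), v in zip(literals, ex))

h = learn_conjunction([(1, 0, 1), (1, 1, 1)], n=3)
print(predict(h, (1, 0, 1)))   # consistent with training data -> True
print(predict(h, (0, 0, 1)))   # violates surviving literal x_1 -> False
```

The learned hypothesis is the most specific conjunction consistent with the positive examples, which is why the bound sketched earlier applies to it.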