where centers µ_k and widths σ_k were generated at random¹¹ and remained fixed. Therefore we have a set of functions linear in the parameters (w_0, w_1, ..., w_K). As one can see, the values of f were constrained to ±1. For the classification learning task, the decision boundary arose as the solution of f(x, w_0, w_1, ..., w_K) = 0. For regression estimation, we simply looked at the values of f(x, w_0, w_1, ..., w_K). Examples of functions from this set are shown in Figures 2 and 3.
Fig. 2. Illustration of the set of functions for classification
Fig. 3. Illustration of the set of functions for regression estimation
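
To make this concrete, below is a minimal sketch of how such a set of functions could be generated, assuming (50) denotes a Gaussian radial-basis expansion f(x, w_0, ..., w_K) = w_0 + Σ_k w_k exp(−||x − µ_k||²/(2σ_k²)); the exact form of (50) is not reproduced in this excerpt, and clipping to [−1, 1] is only one possible way to realize the constraint on the values of f. The name make_random_function and the defaults K and rng are illustrative.

    import numpy as np

    def make_random_function(K=10, rng=None):
        # Sample one function from the set: random Gaussian bumps, linear
        # in the parameters (w_0, w_1, ..., w_K). The radial-basis form and
        # the clipping are assumptions; centers and widths follow footnote 11.
        rng = np.random.default_rng() if rng is None else rng
        mu = rng.uniform(0.0, 1.0, size=(K, 2))    # centers mu_k in [0, 1]^2
        sigma = rng.uniform(0.02, 0.1, size=K)     # widths sigma_k in [0.02, 0.1]
        w = rng.uniform(-1.0, 1.0, size=K + 1)     # fixed parameters (w_0, ..., w_K)

        def f(x):
            # x: array of shape (n, 2); returns values of f(x, w_0, ..., w_K)
            d2 = ((x[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)  # (n, K)
            phi = np.exp(-d2 / (2.0 * sigma ** 2))                     # Gaussian bumps
            return np.clip(w[0] + phi @ w[1:], -1.0, 1.0)

        return f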
4.2 System and Data Sets
As a system y(x) we picked at random a function from a class similar to (50), but broader in the sense that the number K was greater and the range of randomness of σ_k was larger. Data sets for both classification and regression estimation were obtained by sampling the system according to the joint probability density p(x, y) = p(x) p(y|x), where we set p(x) = 1, the uniform distribution on the domain [0, 1]², and p(y|x) = (1/(√(2π) σ)) exp(−(y − y(x))²/(2σ²)), i.e., normal noise with σ = 0.1.
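
A sketch of this sampling scheme follows, under the same assumptions as above; the helper sample_data and the sign-based labelling for the classification task are illustrative guesses, since the text only specifies the joint density p(x, y).

    import numpy as np

    def sample_data(system, n, noise_sigma=0.1, task="regression", rng=None):
        # x is drawn from p(x) = 1, the uniform density on [0, 1]^2;
        # y is drawn from p(y|x), normal noise around y(x) with sigma = 0.1.
        rng = np.random.default_rng() if rng is None else rng
        x = rng.uniform(0.0, 1.0, size=(n, 2))
        y = system(x) + rng.normal(0.0, noise_sigma, size=n)
        if task == "classification":
            # Assumption: class labels are taken as the sign of the noisy response.
            y = np.sign(y)
        return x, y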
4.3 Algorithm of the Learning Machine
In the case of a finite set of N functions, the learning machine simply chose the best function as f(ω_I), where I = arg min_{j=1,2,...,N} R_emp(ω_j), or, when cross-validation was used, as the analogous minimizer of the empirical risk computed on each training fold.
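
A minimal sketch of this selection rule is given below, assuming a squared loss for R_emp (the text does not specify the loss) and a simple contiguous fold split; choose_best and choose_best_cv are hypothetical names.

    import numpy as np

    def choose_best(functions, x, y):
        # Return the index I of the empirical-risk minimizer over the
        # finite set of N candidate functions (squared loss assumed).
        risks = [np.mean((f(x) - y) ** 2) for f in functions]
        return int(np.argmin(risks))

    def choose_best_cv(functions, x, y, n_folds=5):
        # Cross-validation variant: within each fold, pick the minimizer
        # of the empirical risk on the training part of that fold.
        folds = np.array_split(np.arange(len(x)), n_folds)
        picks = []
        for k in range(n_folds):
            train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            picks.append(choose_best(functions, x[train], y[train]))
        return picks  # one selected index per fold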
¹¹ Random intervals: µ_k ∈ [0, 1]², σ_k ∈ [0.02, 0.1].