Because ω → ∞, we can use the central limit theorem again, this time with µ = ωq_i and σ² = ωq_i(1 − q_i):

\[
\lim_{\omega \to \infty,\; q_i < 1/2} \sum_{k=\lceil \omega/2 \rceil}^{\omega} \binom{\omega}{k} q_i^{k} (1-q_i)^{\omega-k}
 = \lim_{\omega \to \infty,\; q_i < 1/2} p\!\left( Z > \frac{\omega/2 - \omega q_i}{\sqrt{\omega q_i (1-q_i)}} \right)
 = \lim_{\omega \to \infty,\; q_i < 1/2} p\!\left( Z > \frac{\sqrt{\omega}\,(1/2 - q_i)}{\sqrt{q_i (1-q_i)}} \right)
 = p(Z > \infty) = 0.
\tag{13.10}
\]
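As a numerical sanity check on Eq. (13.10), the short Python sketch below compares the exact binomial tail with its normal approximation as ω grows. It assumes the reading that q_i is the probability that a single ensemble member selects feature i (so the sum is the probability that a majority of the ω members selects it); the value q_i = 0.3 and the SciPy-based setup are illustrative choices, not part of the original text.

import math
from scipy.stats import binom, norm

q_i = 0.3                      # assumed per-member selection probability, q_i < 1/2
for omega in (11, 101, 1001):  # odd ensemble sizes avoid ties at exactly omega/2
    # Exact tail: P(X > omega/2) for X ~ Binomial(omega, q_i)
    exact = binom.sf(omega // 2, omega, q_i)
    # Normal approximation from Eq. (13.10)
    z = math.sqrt(omega) * (0.5 - q_i) / math.sqrt(q_i * (1 - q_i))
    print(f"omega={omega:5d}  exact={exact:.2e}  approx={norm.sf(z):.2e}")

Both columns shrink rapidly toward zero as ω grows, as the limit predicts.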
13.5.1 Independent Algorithmic Framework
Roughly speaking, the feature selectors in the ensemble can be created
dependently or independently. In the dependent framework, the outcome
of one feature selector affects the creation of the next feature selector.
In the independent framework, each feature selector is built independently,
and the resulting feature subsets are then combined in some fashion. Here,
we concentrate on the independent framework. Figure 13.1 presents the
proposed algorithmic framework. This simple framework receives the
following arguments as input:
(1) A training set (S) — a labeled dataset used by the feature selectors.
(2) A set of feature selection algorithms {FS_1, ..., FS_ξ} — a feature
selection algorithm is an algorithm that obtains a training set and
outputs a subset of relevant features. Recall that we employ non-
wrapper and non-ranker feature selectors.
(3) Ensemble size (ω).
(4) Ensemble generator (G) — this component is responsible for gen-
erating a set of ω pairs of feature selection algorithms and their
corresponding training sets. We refer to G as a class that implements
a method called "generateEnsemble".
(5) Combiner (C) — the combiner is responsible for creating the feature
subsets and combining them into a single subset. We refer to C as a
class that implements the method "combine".
The proposed algorithm simply uses the ensemble generator to create
a set of ω pairs of feature selection algorithms and their corresponding
training sets. It then calls the combine method of C to execute the feature
selection and merge the resulting subsets into a single feature subset.
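To make the control flow concrete, here is a minimal Python sketch of the independent framework under stated assumptions: the book leaves G and C abstract, so the bootstrap-sampling generator and the majority-vote combination rule below are hypothetical choices for illustration, and each feature selector is modeled as a callable that maps a training set to a set of features.

import random

class BootstrapGenerator:
    """G: produces omega (feature selection algorithm, training set) pairs."""
    def generateEnsemble(self, S, algorithms, omega):
        pairs = []
        for _ in range(omega):
            fs = random.choice(algorithms)                      # pick a selector
            sample = [random.choice(S) for _ in range(len(S))]  # bootstrap sample of S
            pairs.append((fs, sample))
        return pairs

class MajorityCombiner:
    """C: runs every selector and merges the omega subsets into one."""
    def combine(self, pairs):
        votes = {}
        for fs, sample in pairs:
            for feature in fs(sample):   # each selector returns a feature subset
                votes[feature] = votes.get(feature, 0) + 1
        # Keep a feature only if a majority of the omega selectors chose it,
        # which is exactly the event analyzed in Eq. (13.10).
        return {f for f, v in votes.items() if v > len(pairs) / 2}

def ensemble_feature_selection(S, algorithms, omega, G, C):
    pairs = G.generateEnsemble(S, algorithms, omega)
    return C.combine(pairs)

With this majority-vote combiner, a feature that each selector picks with probability q_i < 1/2 is almost never retained once ω is large, which is the behavior Eq. (13.10) formalizes.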