Those inputs are the secondary variables ζ_i (i = 1 to q = p(p+1)/2 + p + 1); the set of secondary variables includes the primary variables. Then the model can be written as
g(ζ, w) = ζ^T w = Σ_{i=1}^{q} w_i ζ_i,
where ζ^T is the transpose of the vector ζ, whose q components are the ζ_i (in the present chapter, the superscript T stands for the transposition of a vector or a matrix).
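As an illustration, the mapping from the p primary variables to the q = p(p+1)/2 + p + 1 secondary variables of a degree-2 polynomial model can be sketched as follows (the function name is mine, not from the text):

```python
import numpy as np

def secondary_variables(x):
    """Map a vector of p primary variables to the q = p(p+1)/2 + p + 1
    secondary variables of a degree-2 polynomial model: the constant 1,
    the p primary variables, and the p(p+1)/2 products x_i * x_j, i <= j."""
    p = len(x)
    zeta = [1.0]                       # constant term
    zeta.extend(x)                     # the p primary variables
    for i in range(p):
        for j in range(i, p):
            zeta.append(x[i] * x[j])   # degree-2 monomials, i <= j
    return np.array(zeta)

# p = 2 primary variables give q = 2*3/2 + 2 + 1 = 6 secondary variables
z = secondary_variables(np.array([2.0, 3.0]))
```

The model g(ζ, w) = ζ^T w is then linear in the parameters w, even though it is a polynomial in the primary variables.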
Assume that N measurements of each input are available, together with
the corresponding measurements of the quantity to be modeled. We define
the N -dimensional space (called observation space ) in which each candidate
variable is represented by a vector whose components are the N measured
values of that input, and where, similarly, the process output is represented
by the vector whose components are the measured values of the latter. We
denote by ξ_i the vector whose components are the N values of the ith variable of the polynomial model, and by y_p the vector whose components are the N measured values of the quantity of interest. If the model is linear with respect to the parameters, the angle between the vector representing the ith variable and the vector representing the output decreases as the correlation between the ith variable and the output increases.
- If that angle is zero, i.e., if the output is proportional to variable i, the latter completely explains the output.
- If that angle is π/2, i.e., if the output is fully uncorrelated with variable i, the latter has no influence on the output.
Observation space is different from input space; the dimension of input space
is equal to the number of variables of the model, whereas the dimension of
observation space is equal to the number of measurements performed on the
process prior to modeling.
In order to rank the inputs in order of decreasing relevance, it is not necessary to compute the angle θ_i between the vector that represents input i and the vector that represents the output y_p: it is more convenient to compute the quantity

cos² θ_i = ((ξ_i)^T y_p)² / [((ξ_i)^T ξ_i) ((y_p)^T y_p)].
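As a sketch, cos² θ_i can be computed directly from the measurement vectors in observation space; the data below are an illustrative assumption (N = 100 measurements of three candidate inputs, with the output driven mainly by the second input):

```python
import numpy as np

def squared_cosine(xi, yp):
    """cos^2 of the angle, in observation space, between the vector xi of
    the N measured values of an input and the output vector yp."""
    return (xi @ yp) ** 2 / ((xi @ xi) * (yp @ yp))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                     # N = 100, 3 candidate inputs
yp = 2.0 * X[:, 1] + 0.1 * rng.normal(size=100)   # output mostly explained by input 1
scores = [squared_cosine(X[:, i], yp) for i in range(3)]
```

A value of cos² θ_i near 1 corresponds to an angle near zero (the input explains the output almost completely), and a value near 0 to an angle near π/2.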
In order to rank the inputs in order of decreasing relevance, the following orthogonalization procedure can be used [Chen 1989]:
- Choose the input that is most correlated with the output (the one with the largest cos² θ).
- Project the output vector and all the other candidate inputs onto the subspace orthogonal to the selected input.
- Iterate in that subspace.
The procedure terminates when all candidate inputs are ranked, or when a maximal number of inputs have been ranked (for models with many inputs, the full ranking may not be necessary).
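The procedure above can be sketched as a Gram-Schmidt-style loop; this is a minimal illustration, and the function name and stopping tolerance are my assumptions, not from [Chen 1989]:

```python
import numpy as np

def rank_inputs(X, yp, max_ranked=None):
    """Rank candidate inputs (columns of X, shape (N, q)) by decreasing
    relevance to the output yp: pick the input with the largest cos^2 theta,
    project yp and the remaining inputs onto the subspace orthogonal to the
    selected input, and iterate in that subspace."""
    X = X.astype(float).copy()
    y = yp.astype(float).copy()
    remaining = list(range(X.shape[1]))
    ranked = []
    for _ in range(max_ranked or X.shape[1]):
        cos2 = [(X[:, i] @ y) ** 2 / ((X[:, i] @ X[:, i]) * (y @ y))
                for i in remaining]
        best = remaining[int(np.argmax(cos2))]
        ranked.append(best)
        remaining.remove(best)
        u = X[:, best] / np.linalg.norm(X[:, best])
        y = y - (u @ y) * u                # project output onto the orthogonal subspace
        for i in remaining:                # project the remaining candidate inputs
            X[:, i] = X[:, i] - (u @ X[:, i]) * u
        if not remaining or np.linalg.norm(y) < 1e-12:
            break                          # all ranked, or output fully explained
    return ranked

# Illustrative data: the output depends mainly on input 2, weakly on input 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
yp = 3.0 * X[:, 2] + 0.5 * X[:, 0] + 0.1 * rng.normal(size=200)
order = rank_inputs(X, yp)
```

Projecting onto the orthogonal subspace at each step prevents an input that is merely correlated with an already-selected input from being ranked highly a second time.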