With the form of the model established, we can fit it to the experimental data.
The most widely used fitting technique is known as the method of least squares,
which calculates the coefficients from the system response and inputs such that
the sum of the squares of the errors (ε_i) is minimized. Omitting the derivation,
the least squares fit algorithm yields the following equation, which provides an
estimate of β that satisfies the least squares criterion:
\[
b = (X^{T} X)^{-1} X^{T} y \tag{14-5}
\]
where b is the k × 1 vector containing an estimate of β, the vector of true model
fit coefficients. The fitted regression model is then expressed as
\[
\hat{y} = X b \tag{14-6}
\]
where ŷ is the vector of estimated system responses for the given input matrix,
X, and vector of estimated fit coefficients, b.
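As a concrete illustration, the sketch below fits a model of this form using NumPy. The function names are our own, and we use numpy.linalg.lstsq, which solves the least squares problem without explicitly forming (X^T X)^{-1}; this is a sketch of the technique, not the text's implementation.

```python
import numpy as np

def least_squares_fit(X, y):
    """Estimate the coefficient vector b of equation (14-5),
    b = (X^T X)^{-1} X^T y, i.e. the b minimizing the sum of squared errors.

    X : (n, k) input matrix, one row per observation, one column per model term
    y : (n,)   vector of measured system responses
    """
    # lstsq solves the least squares problem via SVD, which is more
    # numerically stable than inverting X^T X directly.
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

def predict(X, b):
    """Fitted regression model of equation (14-6): y_hat = X b."""
    return X @ b
```

Forming (X^T X)^{-1} explicitly can be ill-conditioned when columns of X are nearly collinear, which is why library routines based on QR or SVD factorizations are generally preferred.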
When fitting the model, we have multiple choices for the form of the model
factors and responses. Rather than fitting to the raw data, regression tools often
fit the model to a transformed version of the data. Throughout this chapter we
fit the model to input variables that are coded according to
\[
\hat{x}_{ik} = \operatorname{round}\!\left( \frac{2\,(x_{ik} - \bar{x}_{k})}{x_{k,\mathrm{max}} - x_{k,\mathrm{min}}} \right) \tag{14-7}
\]
where x_{ik} is the value of the i-th observation of the input for the k-th model
term, x̄_k the mean value of the observations for the k-th model term, x_{k,max} the
maximum value of the observations for the k-th model term, x_{k,min} the minimum
value of the observations for the k-th model term, and round(x) rounds x to the
nearest integer.
The variable coding maps the input variables such that the minimum, nominal,
and maximum values for each variable correspond to coded values of −1, 0,
and +1, respectively. By fitting the model to the coded variables, we reduce the
variation in magnitude of the individual coefficients to avoid instabilities in the
model.
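A minimal sketch of the coding step in equation (14-7) is shown below, assuming NumPy; the function name is illustrative. Like the mapping described above, it presumes the nominal value of each variable coincides with the mean of its observations.

```python
import numpy as np

def code_variable(x_k):
    """Code one input variable per equation (14-7): map its minimum,
    nominal (mean), and maximum observed values to -1, 0, and +1.

    x_k : 1-D array of raw observations for the k-th model term
    """
    x_k = np.asarray(x_k, dtype=float)
    x_bar = x_k.mean()                    # mean of the observations
    span = x_k.max() - x_k.min()          # x_{k,max} - x_{k,min}
    return np.rint(2.0 * (x_k - x_bar) / span).astype(int)

# Example: a variable observed at 10, 20, and 30 codes to -1, 0, +1.
print(code_variable([10, 20, 30, 10, 30]))   # -> [-1  0  1 -1  1]
```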
Case Study Application. We now apply the least squares fitting procedure to
our case study. A frequently used experiment design for second-order response
surface models is a central composite design [Montgomery, 2005]. For a set of
five input variables, the central composite experiment requires a total of n = 28
observations, which are summarized in the first eight columns of Table 14-3.
Since we have five independent variables, equation (14-3) specifies that the input
matrix will have 21 columns. The form of the input matrix is shown in equation
(14-8), and the expressions for the individual columns are defined in Table 14-4.
For example, the term x_{1,1} would be equal to the R_Tx value of zero from run 0 in
Table 14-3, x_{1,2} would be equal to the R_Tx value of −1 from run 1, x_{1,3} would …
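For readers who want to reproduce the input matrix, the sketch below builds the n × 21 matrix for the full second-order model from the five coded variables. The column ordering used here (intercept, linear, two-factor interaction, then quadratic terms) is an assumption for illustration and may differ from the ordering defined in Table 14-4.

```python
import numpy as np
from itertools import combinations

def second_order_design_matrix(X_coded):
    """Build the n x 21 input matrix for a full second-order model in
    five coded variables: 1 intercept + 5 linear + 10 two-factor
    interaction + 5 pure quadratic columns.

    X_coded : (n, 5) array of coded inputs, one column per variable
              (e.g. R_Tx first, as in Table 14-3; ordering is illustrative).
    """
    X_coded = np.asarray(X_coded, dtype=float)
    n, k = X_coded.shape
    cols = [np.ones(n)]                                   # intercept
    cols += [X_coded[:, j] for j in range(k)]             # linear terms
    cols += [X_coded[:, i] * X_coded[:, j]                # interactions
             for i, j in combinations(range(k), 2)]
    cols += [X_coded[:, j] ** 2 for j in range(k)]        # quadratic terms
    return np.column_stack(cols)                          # shape (n, 21)
```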