Microwave sounder observations have a linear relationship between temperature
and the Planck radiance. This means that a model first-guess field is all that is
needed to compute a good temperature profile. This type of processing is
advantageous for monitoring temperature trends. Because the method remains
useful in data-sparse regions, model output can be used as the first-guess field
for both the atmosphere and the surface. The data have been selected to cover
a large geographic area, from 55° E to 110° E in longitude and from 0° N to
32° N in latitude.
2.2 Neural Networks
A neural network is a computer model composed of individual processing
elements called neurons. The neurons are connected by weighted links. A
neural network may consist of multiple layers of neurons, with each layer
interconnected with the neurons of adjacent layers. These layers are referred
to as the input layer, the hidden layer(s), and the output layer. The inputs and
the interconnection weights are processed by a weighted summation function
to produce a sum that is passed to a transfer function. The output of the transfer
function is the output of the neuron. A neural network is trained with input and output pattern
examples. It then constructs a nonlinear numerical model of a physical process
in terms of network parameters. The weights and the biases in the network are
determined during the training process. They are obtained using a back-
propagation algorithm that is described in detail in Shi (2001). In order to
retrieve unique temperature profiles, we have chosen a network
that is capable of modelling nonlinear data from examples and is able to
generalize and interpolate. The generalization capability of the network makes
it possible to train it on a representative set of input/output pairs and get good results
without training the network on all possible input/output pairs. The weights
and biases are adjusted iteratively to reduce the difference between the actual
training set output vectors and the estimated output vectors calculated by the
network using the input vectors of the training set. The structure of a three-
layer backpropagation neural network is illustrated in Fig. 1.
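The weighted summation and transfer function of a single neuron, as described above, can be sketched as follows. This is a minimal illustration, not code from the study; the input, weight, and bias values are made up.

```python
import numpy as np

def tansig(x):
    # Tan-sigmoid (hyperbolic tangent) transfer function.
    return np.tanh(x)

def neuron_output(inputs, weights, bias, transfer=tansig):
    # Weighted summation of the inputs plus a bias term,
    # passed through the transfer function to give the
    # neuron's output.
    s = np.dot(weights, inputs) + bias
    return transfer(s)

# Illustrative (made-up) input vector, weights, and bias.
x = np.array([0.5, -1.2, 0.3])
w = np.array([0.4, 0.1, -0.7])
b = 0.2
y = neuron_output(x, w, b)
print(y)
```

During training, the weights `w` and bias `b` of every neuron are adjusted iteratively to reduce the error between the network output and the training targets.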
In the figure, three layers, namely the input, hidden, and output layers, have
been used. The neurons of the input layer are represented by T_B (brightness
temperatures, where 1, 2, 3, …, n indexes the AMSU channels). The number of
neurons in the hidden layer is determined during network architecture design
and adjusted to achieve better network performance. The neurons in the output
layer represent the retrieved temperature profile, i.e., T_A. A number of transfer functions
have been examined in constructing the network architecture and it was found
that using a tan-sigmoid transfer function to propagate to the hidden layer and
a linear transfer function to propagate to the output layer in a three-layer
backpropagation architecture gives the optimum network performance for the
type of data we used in this study. The training algorithm is the basic Levenberg-Marquardt
method, which has been used to minimize the mean-square-error
criterion. The ANN technique proposed in this paper is based on the three-layer
feed-forward backpropagation network described in Nath et al. (2008).
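A forward pass through such a three-layer network (tan-sigmoid hidden layer, linear output layer) might look like the following NumPy sketch. The layer sizes, the randomly initialised weights, and the function name `retrieve_profile` are illustrative assumptions, not values from the study, and the Levenberg-Marquardt training step itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): n AMSU channels in,
# h hidden neurons, m retrieved pressure levels out.
n, h, m = 15, 20, 40

# Randomly initialised parameters stand in for the weights and
# biases that would be determined by Levenberg-Marquardt training
# on matched T_B / T_A pairs.
W1 = 0.1 * rng.standard_normal((h, n))
b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal((m, h))
b2 = np.zeros(m)

def retrieve_profile(tb):
    # Map a vector of brightness temperatures T_B (n channels)
    # to a retrieved temperature profile T_A (m levels):
    # tan-sigmoid hidden layer followed by a linear output layer.
    hidden = np.tanh(W1 @ tb + b1)
    return W2 @ hidden + b2

tb = 250.0 + 10.0 * rng.standard_normal(n)  # synthetic T_B vector (K)
ta = retrieve_profile(tb)
print(ta.shape)
```

The choice of a bounded nonlinearity in the hidden layer with a linear output layer lets the network fit smooth nonlinear mappings while leaving the output range unconstrained, which suits regression targets such as temperature profiles.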