Fig. 5.5 Entropy information based on 6-hourly records in the Beas Basin. The scenarios are [1000], [0100], [0010], [0001], [0011], [1001], [1100], [0101], [1010], [0110], [0111], [1011], [1101], [1110*] and [1111], where each mask indicates a different combination of inputs (1 for inclusion, 0 for exclusion), ordered from left to right as horizon extraterrestrial radiation (ETR), air dry bulb temperature (DT), wet bulb temperature (WT) and atmospheric pressure (p).
extraterrestrial radiation, air dry bulb temperature, and wet bulb temperature can make a good model (i.e. Scenario 14), comparable to the combination comprising all inputs. The worst model with three inputs is Scenario 11 (i.e. the model excluding horizon extraterrestrial radiation). The significance of the atmospheric pressure data set was relatively small compared with the other input sets, since eliminating these inputs produced little variation in the transinformation values. The variation of the marginal, joint and conditional entropies for the different scenarios can be found in Fig. 5.5.
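As a rough illustration of how such a scenario comparison can be set up, the sketch below estimates transinformation (mutual information) between each masked input combination and an output series from binned histograms. The synthetic series, the bin count and the helper names are assumptions for illustration only; the results in Fig. 5.5 come from the actual 6-hourly Beas Basin records.

```python
# Minimal sketch: transinformation T(X; y) = H(X) + H(y) - H(X, y) for every
# input mask [ETR DT WT p], estimated from equal-width binned histograms.
# The data below are synthetic placeholders, not the Beas Basin records.
import numpy as np

def entropy(counts):
    """Shannon entropy (nats) of a histogram given as counts."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def transinformation(X, y, bins=10):
    """Mutual information between the columns of X (jointly) and y."""
    # Discretise each input column and the output into bin indices.
    Xd = np.column_stack(
        [np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1]) for x in X.T]
    )
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    joint_x = np.unique(Xd, axis=0, return_counts=True)[1]
    joint_xy = np.unique(np.column_stack([Xd, yd]), axis=0, return_counts=True)[1]
    return entropy(joint_x) + entropy(np.bincount(yd)) - entropy(joint_xy)

# Synthetic 6-hourly series standing in for ETR, DT, WT and p (assumed).
rng = np.random.default_rng(0)
n = 1200
ETR, DT, WT, p = rng.normal(size=(4, n))
output = 0.5 * ETR + 0.3 * DT + 0.2 * WT + 0.05 * rng.normal(size=n)
inputs = {"ETR": ETR, "DT": DT, "WT": WT, "p": p}

# Evaluate every non-empty mask over the four inputs, e.g. [1110] = ETR, DT, WT.
names = list(inputs)
for mask in range(1, 16):
    chosen = [names[i] for i in range(4) if mask & (1 << (3 - i))]
    X = np.column_stack([inputs[c] for c in chosen])
    print(f"[{mask:04b}] {'+'.join(chosen):>12s}  T = {transinformation(X, output):.3f}")
```

Ranking the masks by their transinformation values reproduces the kind of comparison made above: masks that drop the pressure column change the value little, while masks that drop the dominant inputs lose most of the transferred information.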
In the previous paragraphs, we have seen the capability of Entropy Theory to assess the influential data series in a model. Another major problem that decides the data efficiency of a predictive model is how many data points are needed to calibrate the correct model sufficiently from a vast amount of data in a noisy environment. The quantity of input data available to predict the desired output was analyzed using entropy information. The transinformation value would help us to determine whether there were sufficient data to provide the maximum knowledge about the system and, subsequently, a reliable model. The transinformation result is shown in Fig. 5.6, which indicates how much information was actually transferred between variables and how much information was still left to be transferred. The test produced a maximum transinformation value of 0.774 at around 1,010 data points. Figure 5.7 shows the variation of the marginal entropy and joint entropy with the number of data points. As can be seen, the joint entropy of the data set increases, but the marginal entropy of the input variables becomes stable after 700 points. One can also observe that the conditional entropy follows the same decreasing trend as the transinformation curve after 1,010 data points.
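The data-sufficiency test can be sketched in the same spirit: transinformation is recomputed on progressively longer subsets of the record, and the sample size at which the curve levels off marks the point beyond which additional data add little new information. The sketch below continues from the previous one (it reuses the hypothetical transinformation() helper and the synthetic series), and the saturation tolerance is an assumed value; the plateau of 0.774 at about 1,010 points reported above is the study's own result from Fig. 5.6.

```python
# Data-sufficiency sketch: recompute transinformation on growing subsets of the
# record and flag where the curve effectively stops increasing.
# Assumes transinformation(), ETR, DT, WT, p and output from the previous sketch.
import numpy as np

X_all = np.column_stack([ETR, DT, WT, p])
sample_sizes = range(100, len(output) + 1, 100)
curve = [transinformation(X_all[:m], output[:m]) for m in sample_sizes]

# Flag the first sample size after which the gain in transinformation stays
# below a small (assumed) tolerance, i.e. the curve has effectively saturated.
tol = 0.01
for m, t_now, t_next in zip(sample_sizes, curve, curve[1:]):
    if abs(t_next - t_now) < tol:
        print(f"Transinformation saturates near {m} records (T = {t_now:.3f})")
        break
```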
 