that the sequence should not be generated from a function starting from an index of
0 or 1. Two features of the hypothesis (i.e. the coefficients and the largest exponent)
give a clue that the sequence may instead be generated from a function starting with
index 4. This information is returned to the retroduction step so it can be used in
refining the hypothesis. If no such refinement is necessary, criteria assessment is
carried out.
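The chapter gives no code for this step, but the clue might be computed along the following lines. This is a minimal Python sketch only: the function name, the coefficient-simplicity score, and the candidate list are assumptions for illustration, not the model's actual mechanism.

```python
import numpy as np

def refine_start_index(seq, degree, candidate_starts):
    """For each candidate starting index i0, fit a polynomial f(i) of the
    given degree to the sequence and prefer the i0 whose coefficients are
    simplest (smallest in magnitude) -- the clue returned to retroduction."""
    best, best_score = None, float("inf")
    for i0 in candidate_starts:
        xs = np.arange(i0, i0 + len(seq))
        coeffs = np.polyfit(xs, seq, degree)    # least-squares fit
        score = float(np.sum(np.abs(coeffs)))   # simpler coefficients win
        if score < best_score:
            best, best_score = i0, score
    return best

# 16, 25, 36, 49 is i**2 for i = 4..7: starting at 4 the fit is (1, 0, 0),
# starting at 0 it is (1, 8, 16), so index 4 is preferred over 0 or 1.
print(refine_start_index([16, 25, 36, 49], degree=2, candidate_starts=[0, 1, 4]))
```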
If the factual information is exhausted before a suitable hypothesis reaches a
stage of confidence or achieves a satisfactory pass of the criteria, a second stage
of hypothesis generation begins. Under these conditions, it is assumed that the
facts are generated (can be explained) by more than one hypothesis. A set of
strategies based on symmetry attempts to examine sub-sequences. The controller
breaks the main sequence apart into sub-sequences (e.g. taking alternate values as
two independent sequences), where each sub-sequence is subjected to the same
rigorous process of creation and validation.
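As a minimal sketch, assuming the simplest such symmetry (alternate positions form two sub-sequences); the function name is illustrative:

```python
def split_alternating(seq):
    """Split a sequence into its even-position and odd-position values,
    treating each as an independent sub-sequence."""
    return seq[0::2], seq[1::2]

# 1, 10, 2, 20, 3, 30 interleaves two simple series; each half is then
# put through the same cycle of hypothesis creation and validation.
print(split_alternating([1, 10, 2, 20, 3, 30]))  # ([1, 2, 3], [10, 20, 30])
```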
4. The alternating sign (+ and −) suggests the most likely index.

12.4 Experiments and Discussion
We carried out two sets of experiments to test the model (see Fig. 12.5 and also
Chap. 2, Fig. 2.10). The purpose of the first set of experiments is to test intel-
ligence using Direct Learning (DL1, DL2). The 'D' indicates that all the information
in a series is used: every given number in a test sequence is considered. This is
similar to 'A' in Fig. 12.2, except that every example is given from the beginning.
The '1' indicates that no previous learning has been done on other training sets,
whereas the '2' indicates that previous learning on example series has been done
before the final test sequence is given.
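Operationally, Direct Learning can be pictured as scoring each concept against every number in the series at once. The following sketch assumes this reading; the predictor and the scoring rule are invented for illustration and are not the chapter's code.

```python
def direct_learning_score(seq, concepts):
    """Score each concept over the whole series: for every position t,
    the concept sees all earlier values and predicts seq[t]."""
    return {name: sum(predict(seq[:t]) == seq[t] for t in range(1, len(seq)))
            for name, predict in concepts.items()}

# A constant-difference predictor gets 2 of 3 predictions right on 2, 4, 6, 8
# (it cannot infer the step from the single value 2).
arith = lambda p: p[-1] + (p[-1] - p[-2]) if len(p) > 1 else p[-1]
print(direct_learning_score([2, 4, 6, 8], {"constant-difference": arith}))
```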
The alternative learning method is Window Learning (WL1, WL2), or
running probability, as described in Chap. 11 and shown as 'b' in Fig. 12.5. In order
to see the behavior of the running probabilities clearly, all hypotheses were processed
at all stages through a numeric sequence. Thus the model attempts to work out
each next number in the given test sequence using each potential concept shown in
Chap. 11, Table 11.2. The best of the competing beliefs is chosen to formulate
and present an answer.
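The update rule is not specified here, so the following Python sketch only illustrates the general shape of Window Learning as described: beliefs over the competing concepts are reweighted after each revealed number, and the strongest current belief formulates the answer. The multiplicative weights (1.1 and 0.9) are arbitrary placeholders.

```python
def window_learning(seq, concepts, priors):
    """concepts: name -> predictor taking a prefix and returning the next
    value; priors: name -> initial probability. After each revealed number,
    beliefs are reweighted by predictive success and renormalised."""
    beliefs = dict(priors)
    answers = []
    for t in range(1, len(seq)):
        prefix, actual = seq[:t], seq[t]
        best = max(beliefs, key=beliefs.get)       # strongest current belief
        answers.append(concepts[best](prefix))     # ... formulates the answer
        for name, predict in concepts.items():     # simple likelihood update
            beliefs[name] *= 1.1 if predict(prefix) == actual else 0.9
        total = sum(beliefs.values())
        beliefs = {k: v / total for k, v in beliefs.items()}
    return answers

concepts = {
    "constant-difference": lambda p: p[-1] + (p[-1] - p[-2]) if len(p) > 1 else p[-1],
    "constant-ratio": lambda p: p[-1] * 2 if len(p) == 1 else p[-1] * p[-1] // p[-2],
}
# On 2, 4, 8, 16 the ratio belief quickly overtakes the difference belief.
print(window_learning([2, 4, 8, 16], concepts,
                      {"constant-difference": 0.5, "constant-ratio": 0.5}))
```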
In the first experiment of the first set (DL1), the probability of each concept was
initially set to 0.2, so that all five potential concepts shown in Fig. 11.2
have an equal chance of being selected. In the second experiment of this set
(DL2), the model was first trained using 85 sequences obtained from Eysenck
(1974a, b) as the training set. The role of this initial training is to bias each concept
towards the expected normal distribution implied by Eysenck's collection of test
sequences.
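The two initializations can be sketched as follows, assuming the priors are simple relative frequencies; the concept names are placeholders for those in Table 11.2 and the label list is invented:

```python
from collections import Counter

CONCEPTS = ["c1", "c2", "c3", "c4", "c5"]   # stand-ins for Table 11.2

def uniform_prior():
    """DL1: every concept starts equally likely (0.2 each for five)."""
    return {c: 1.0 / len(CONCEPTS) for c in CONCEPTS}

def trained_prior(training_labels):
    """DL2: bias each concept by how often it generated a training
    sequence (e.g. the 85 sequences taken from Eysenck 1974a, b)."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    return {c: counts[c] / total for c in CONCEPTS}

print(uniform_prior())                                  # 0.2 everywhere
print(trained_prior(["c1", "c1", "c2", "c3", "c1"]))    # biased towards c1
```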
In the second set of experiments, the running conditional probabilities WL1 and WL2
were calculated for each initial condition, and the results obtained are shown in the
graphs (see 'b' in Fig. 12.2).