(and past) decision states gleaned from different sources is a set-valued rather than point-
valued feature (Sicilia, 2006).
A three-valued extension of classical (i.e. binary) logic was proposed by Smarandache (2002), who coined the term “neutrosophic logic” for a generalization of fuzzy logic to situations where it is impossible to de-fuzzify the original fuzzy-valued variables, via some tractable membership function, into either the set T or its complement T^C, where both T and T^C are considered crisp sets. In these cases one has to allow for the possibility of a third, unresolved state intermediate between T and T^C. As an example one may cite the well-known “thought experiment” in quantum metaphysics of Schrödinger's cat (Schrödinger, 1935): the cat in a closed box is in limbo between the two states “dead” and “alive”, and it is impossible to tell which unless one opens the box, at which point the effect of observer participation is said to intervene and cause the indeterminate state to collapse into a classical state of either a dead or an alive cat. But as long as observer participation is completely absent, one cannot in any way disentangle these two crisp sets!
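To make the three-valued idea concrete, the following minimal Python sketch models a neutrosophic truth value as an independent (t, i, f) triple; the class name, the min/max connectives and the behaviour of indeterminacy under negation are illustrative assumptions rather than Smarandache's canonical operators.

from dataclasses import dataclass

@dataclass(frozen=True)
class NeutrosophicValue:
    """A truth value with degrees of truth (t), indeterminacy (i) and falsity (f).

    Unlike a single fuzzy membership degree, t, i and f are independent
    components in [0, 1]; i captures the unresolved third state
    intermediate between T and its complement T^C.
    """
    t: float  # degree of membership in T
    i: float  # degree of indeterminacy (neither T nor T^C resolvable)
    f: float  # degree of membership in T^C

    def __post_init__(self):
        for x in (self.t, self.i, self.f):
            if not 0.0 <= x <= 1.0:
                raise ValueError("components must lie in [0, 1]")

    def neg(self) -> "NeutrosophicValue":
        # Negation swaps truth and falsity; indeterminacy is left unchanged
        # (an illustrative choice).
        return NeutrosophicValue(self.f, self.i, self.t)

    def conj(self, other: "NeutrosophicValue") -> "NeutrosophicValue":
        # min/max connectives, one common choice among several in the literature.
        return NeutrosophicValue(min(self.t, other.t),
                                 max(self.i, other.i),
                                 min(self.f, other.f))

# Schrödinger's cat before the box is opened: fully indeterminate.
cat = NeutrosophicValue(t=0.0, i=1.0, f=0.0)
print(cat.neg())  # still fully indeterminate: negation cannot resolve it

Note that negating a fully indeterminate value leaves it fully indeterminate, which is exactly the sense in which the two crisp sets cannot be disentangled without observation.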
This brings us to the final form of uncertainty that an artificially intelligent decision system ought to be able to resolve, something which we christen here as “comprehension uncertainty”. While some elements of “comprehension uncertainty” are handled (often unknowingly) by the designers of intelligent systems using one or more tools targeted at resolving either temporal or knowledge uncertainty, the concept of “comprehension uncertainty” has not yet been adequately described and addressed in the contemporary AI literature. That is why we depict this form of uncertainty using a dashed rather than a continuous connector in the above chart. The question mark in the chart denotes the fact that there is no known repository of theoretical knowledge (not necessarily limited to the discipline of AI) that addresses such a form of uncertainty. The purpose of this chapter is therefore to posit a scientific theory of “comprehension uncertainty”.
2. The meaning of “comprehension uncertainty”
While all the other forms of uncertainty discussed above necessarily originate from and deal with the contents/specification of an elementary set of interest, which is a subset of the universal set, by the term “comprehension uncertainty” we mean and include any form of uncertainty that originates from and deals with the contents/specification of the universal set itself. Only if the stock of our entire knowledge about a problem is universal (i.e. there is absolutely nothing else that is 'fundamentally unknown' about that problem) can we claim to fully comprehend the problem, so that no “comprehension uncertainty” exists.
There is a need here to distinguish between “complete knowledge” and “universal knowledge”. The knowledge about a problem can be said to be complete if it consists of the entire stock of current knowledge that is pertinent to that particular problem. However, the current stock of knowledge, even in its entirety, may not be the universal knowledge, simply because ways of adding to that current stock of knowledge could lie beyond the current limits of comprehension; i.e. the universal set could itself be ill-defined. If intelligent systems are primarily intended to emulate natural intelligence, and treat “functional comparability” with natural intelligence as the most desirable outcome, then the limits to comprehension for natural intelligence should translate to similar limits for such systems as well.
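Operationally, the distinction can be phrased as a toy membership query, as in the minimal Python sketch below; the three-way answer type and the explicit “modelled universe” parameter are our illustrative devices, assuming a simple set-based reading of the paragraph above, not a prescribed implementation.

from enum import Enum

class Answer(Enum):
    MEMBER = "member of the elementary set"
    NON_MEMBER = "member of its complement"
    BEYOND_COMPREHENSION = "outside the modelled universe"

def query(x, elementary_set, modelled_universe):
    """Membership query under a possibly ill-defined universal set.

    Temporal, knowledge and fuzzy uncertainty all concern whether x belongs
    to the elementary set of interest; comprehension uncertainty arises when
    x is not even covered by the universe the system was built to reason over.
    """
    if x not in modelled_universe:
        # The universal set itself is incomplete: the system's "complete"
        # knowledge falls short of universal knowledge.
        return Answer.BEYOND_COMPREHENSION
    return Answer.MEMBER if x in elementary_set else Answer.NON_MEMBER

universe = {"dead", "alive"}   # the states the designer anticipated
of_interest = {"alive"}
print(query("alive", of_interest, universe))       # Answer.MEMBER
print(query("superposed", of_interest, universe))  # Answer.BEYOND_COMPREHENSION

The design point of the sketch is that the third answer cannot be produced by refining the elementary set alone; it requires acknowledging that the modelled universe itself may be incomplete.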