4. Conclusion: “comprehending the incomprehensible” - the future of
AI systems design
In its current state, the design of artificially intelligent systems is preoccupied with solving the "how" problems and, as such, does not quite recognize the need to resolve comprehension uncertainty. In fact, the concept of comprehension uncertainty had not been formally posited prior to this work of ours, although there have been a few takes on the mathematics of H-O probabilities. Earlier researchers mainly found the concept of H-O probabilities superfluous because they did not view it in the context of formalizing comprehension uncertainty, as we have done in this article.
However, given that the exact emulation of human intelligence remains the Holy Grail for AI researchers, they will have to grapple with comprehension uncertainty at some point or another. The reason is simple: a hallmark of human intelligence is that it recognizes the limitations of the current stock of knowledge from which it draws. Any artificial system that ultimately seeks to emulate that intelligence must therefore also see the limitations in current domain knowledge and allow for the fact that this knowledge can evolve over time, so that the global optimum attained with the current stock of knowledge may not remain optimal at a future time. Once an artificially intelligent system is hardwired to recognize the time-dynamic aspect of the relevant event space within which it has to calculate the probabilities of certain outcomes and take a decision so as to maximize the expected value of the most desirable outcome, it will not terminate its search as soon as global optimality is reached in terms of the contents/contours of the current event space. It will instead go into a 'dormant' mode, continue to monitor the evolution of the event space, and 're-engage' in its search as soon as P{(U_{t-1} = U_t) | E} > 0 at any subsequent time point.
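The dormant/monitor/re-engage loop described above can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: the `DormantMonitorAgent` class, its exhaustive search over the event space, and the simple inequality test standing in for the monitored probability P{(U_{t-1} = U_t) | E} are all assumptions introduced here for concreteness.

```python
from dataclasses import dataclass


@dataclass
class DormantMonitorAgent:
    """Sketch of a searcher that treats its event space as time-dynamic.

    Rather than halting once it attains the global optimum over the
    current event space, it goes dormant and re-engages whenever it
    detects that the event space has evolved (a stand-in here for the
    probabilistic condition in the text).
    """
    event_space: list                # current stock of known outcomes (hypothetical)
    best: float = float("-inf")      # best objective value found so far
    dormant: bool = False            # True once the current optimum is attained

    def search(self, objective):
        # Exhaustive search over the *current* event space: a stand-in
        # for reaching global optimality in terms of the contents and
        # contours of the current stock of knowledge.
        self.best = max(objective(x) for x in self.event_space)
        self.dormant = True          # optimum found -> dormant, not terminated
        return self.best

    def observe(self, new_event_space, objective):
        # Monitoring step: if the event space has evolved between time
        # points, re-engage the search over the updated space; otherwise
        # stay dormant and keep the previous optimum.
        if new_event_space != self.event_space:
            self.event_space = new_event_space
            self.dormant = False
            return self.search(objective)
        return self.best
```

In this sketch the agent never "finishes": `search` leaves it dormant rather than terminated, and each call to `observe` is one monitoring step in which an evolved event space triggers a fresh search.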
With comprehension uncertainty formally hardwired into its core design, an artificially intelligent system can be trained to transcend simply answering the "how" and ultimately formulate the "why": first, why the current body of knowledge is an exhaustive source to draw from in finding the optimal solution to a particular problem; and second, why that body of knowledge may not continue to remain an exhaustive source for all future time. Only when it has been trained to formulate these "why" questions can we expect an artificially intelligent system to take that significant leap towards finally gaining parity with natural intelligence.