3.3 H-O probability implications for intelligent resolution of comprehension uncertainty
Although we do not mathematically compute H-O probabilities while making decisions (or, for that matter, even ordinary probabilities), human intelligence does enough 'background processing' of fringe information (mostly without our being aware of it) to 'see' a bigger picture of the likely scenarios. Returning to the example of crossing a busy road, we continuously process information (often unknowingly) from the environment in terms of a rapidly changing pertinent event space. As long as the pertinent event space is 'pre-populated' with likely forms of road hazards, an artificially intelligent system can be 'trained' to emulate human decision-making and cross the road. It is when the contents of the pertinent event space change dynamically that even the most advanced AI-based systems would be thrown off, given the current state of design of such systems. This is essentially what Bhattacharya, Wang and Xu (2010) identified as a 'gap' in the current state of design of intelligent systems. The current design paradigm is overwhelmingly concerned with the "how" rather than the "why", whereas resolution of comprehension uncertainty involves more of the "why". Rather than trying to answer "how to avoid being hit by a vehicle or some other hazard while crossing", AI designers ought to be focusing on "why are we vulnerable while crossing a busy road".
As soon as the focus of the design shifts to the "why", the link with comprehension uncertainty becomes a very natural extension. We are then simply asking why a particular event space is the pertinent one for the problem at hand. The natural answer is that, within a specified time window, it contains all the elementary events out of which one or a few are conducive to the desired outcome. The question then naturally progresses to what would happen outside that specified time window. If we pre-populate the pertinent event space and then assume that it holds good for all time, we do so at the cost of ignoring comprehension uncertainty, which can defeat the AI design. At this point it is perhaps useful to remind readers again that it is not the vagueness or imprecision associated with some contents of an event space that matters here (existing uncertainty-resolution methods such as rough sets, fuzzy logic etc. are adequate for dealing with those); it is the temporal instability of the event space itself that is the crux of the comprehension-uncertainty concept.
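One hedged way to formalise this distinction (our own notation, not drawn from Bhattacharya, Wang and Xu (2010)) is to index the pertinent event space by time and place a higher-order probability over the event space itself, separate from the ordinary probabilities defined over its contents:

```latex
% Ordinary (first-order) uncertainty: probabilities over events within
% a fixed pertinent event space \Omega_t at time t
P(A \mid \Omega_t), \qquad A \in \Omega_t .

% Comprehension uncertainty: a higher-order probability that the event
% space itself remains pertinent over the window [t, t+\Delta t]
P\bigl(\Omega_{t+\Delta t} = \Omega_t\bigr) < 1 ,

% so an unconditional assessment must mix over candidate event spaces
P(A) = \sum_{\Omega \in \mathcal{S}} P(A \mid \Omega)\, P(\Omega),

% where \mathcal{S} is the (possibly incompletely known) collection of
% candidate pertinent event spaces.
```

Rough sets and fuzzy logic operate on the first line, within a given \Omega_t; comprehension uncertainty lives in the second and third lines, where \Omega itself is uncertain.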
The mathematics of H-O probabilities then offers a plausible route towards the formal incorporation of comprehension uncertainty within artificially intelligent systems designed to replicate naturally intelligent decision-making. As naturally intelligent beings, humans are somehow capable of grasping the "limits to comprehension" that result from the gap between current knowledge and universal knowledge. If this were not the case, then 'research' as an intellectual endeavour would have ceased! In the current design paradigm the focus is on training AI models to 'search' for global optimality whereas, ideally, the focus ought to be on training such models to do 'research' rather than 'search'! Recognising and incorporating comprehension uncertainty in their learning framework would at least allow future AI models to 'grasp' the limits to comprehension, so as not to terminate invariably as soon as a 'globally optimal' decision point has been reached using the current domain knowledge.
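A minimal sketch of what 'research rather than search' might look like in code (entirely our own construction under the assumptions above, not an algorithm from the source): an optimiser that reserves a residual probability that its current event space of candidate options is incomplete, and therefore keeps probing for new options instead of terminating at the optimum of its current domain knowledge.

```python
import random

# Hedged sketch (hypothetical design): a decision loop that never fully
# commits to the current event space being complete. The parameter
# 'comprehension_residual' is the probability mass reserved for options
# outside current domain knowledge.

def research_loop(known_options, evaluate, discover, steps=100,
                  comprehension_residual=0.1):
    """Alternate between exploiting the best known option ('search')
    and probing for options outside the current event space ('research').

    known_options: initial list of candidate decisions
    evaluate: maps an option to a payoff
    discover: returns a newly comprehended option, or None
    """
    options = list(known_options)
    best = max(options, key=evaluate)
    for _ in range(steps):
        if random.random() < comprehension_residual:
            # 'Research': acknowledge that the event space may be
            # unstable and try to extend it, rather than terminating
            # at the currently known 'global' optimum.
            novel = discover()
            if novel is not None and novel not in options:
                options.append(novel)
        candidate = max(options, key=evaluate)
        if evaluate(candidate) > evaluate(best):
            best = candidate
    return best

# Usage with toy stand-ins: payoffs are arbitrary, discovery is random.
payoffs = {"a": 1.0, "b": 2.0, "c": 5.0}
result = research_loop(
    ["a", "b"],
    evaluate=lambda o: payoffs.get(o, 0.0),
    discover=lambda: random.choice(["c", None]),
)
print(result)  # often 'c': an option outside the initial event space
```

A pure 'search' system would return 'b' and halt; by keeping a standing allocation for extending its own event space, the loop can eventually comprehend and select 'c'. How that residual allocation should be sized and updated is precisely where a formal H-O probability treatment would enter.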