of the same cognitive mechanism? This was a research issue that all the aforementioned work seemed to hint at, without directly focusing on it. A previous publication (Prince, 2006) described an implemented agent model in which communication is a native property and messages can be exchanged as 'knowledge chunks', following theoretical specifications first presented in (Prince, 1996). The present research is broader: it aims at making artificial agents acquire knowledge through dialog, and above all at having them implement dialog materializations of their knowledge revision process. A feasibility study was presented in (Yousfi-Monod & Prince, 2005) and extended in (Yousfi-Monod & Prince, 2007) to include knowledge conflict management. However, that work tackled conflict only from the reasoning point of view. Yet knowledge gathering has its pitfalls: misunderstandings, and conflicts between captured knowledge and the agent's inner knowledge base, are common situations. Some authors (Beun & Eijk, 2003) have tackled this issue, and have also recognized the important 'repair role' devoted to interaction (Beun & Eijk, 2005). Therefore, misunderstandings and discussion, as external forms of an internal knowledge process and as dialog counterparts of knowledge revision assisted from outside, are thoroughly analyzed here. Their originality can be summarized in two points. First, a conflict in a knowledge base is 'uttered' and therefore triggers an external rather than an internal revision process: the agent is no longer 'alone' and knowledge revision is no longer a solitary action. Second, the other agent's response is a fresh contribution of new facts or knowledge and therefore cannot be anticipated. This creates a particularly interesting situation in which misunderstandings can pile up and discussion may open unanticipated reasoning tracks.
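To make these two points concrete, here is a minimal sketch in Python of knowledge revision externalized as a dialog move: a conflict between a captured fact and the inner knowledge base is uttered to the interlocutor, and the reply, which cannot be anticipated, is integrated as fresh knowledge. The names (Agent, receive, justify) and the literal-based knowledge base are illustrative assumptions, not the chapter's implemented model.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    kb: set = field(default_factory=set)  # literals; "-p" denotes the negation of "p"

    def negation(self, fact: str) -> str:
        return fact[1:] if fact.startswith("-") else "-" + fact

    def conflicts_with(self, fact: str) -> bool:
        # A captured fact conflicts when its negation is already believed.
        return self.negation(fact) in self.kb

    def receive(self, fact: str, interlocutor: "Agent") -> None:
        if not self.conflicts_with(fact):
            self.kb.add(fact)  # plain knowledge acquisition
            return
        # The conflict is 'uttered' instead of being resolved internally.
        print(f"{self.name}: '{fact}' contradicts what I believe. Why?")
        reply = interlocutor.justify(fact)
        print(f"{interlocutor.name}: because {reply}")
        # The reply is a fresh, unanticipated contribution; the agent revises
        # by retracting the old belief and adopting the fact and its support.
        self.kb.discard(self.negation(fact))
        self.kb.update({fact, reply})

    def justify(self, fact: str) -> str:
        # Toy justification: return any other belief held by the speaker.
        return next(iter(self.kb - {fact}), f"{fact} was observed")

if __name__ == "__main__":
    teacher = Agent("Teacher", kb={"penguin(tweety)", "-flies(tweety)"})
    learner = Agent("Learner", kb={"flies(tweety)"})
    learner.receive("-flies(tweety)", teacher)  # triggers an uttered conflict
    print("Learner's revised KB:", learner.kb)

In this toy run the Learner does not retract flies(tweety) on its own: it asks the Teacher, receives penguin(tweety) as support, and only then revises its knowledge base with both the corrected fact and the justification.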
This chapter attempts to provide a possible instantiation of these ideas, highlighting the deep interaction between the knowledge acquisition and revision process on the one hand, and the dialog strategy and model on the other.
Background
Several notions have to be explained and grounded before proceeding further. In this section, a survey of the basic properties of cognitive artificial agents is provided. Since learning is a typical knowledge acquisition and revision task, we focus here on learning related to communication and on the way it has been dealt with in the literature.
Cognitive Artificial Agents
By cognitive agents we mean entities able to acquire, store and process knowledge, and therefore capable of "understanding, inferencing, planning and learning" (Lycan, 1999). Notice that communication, and its basic tool, language, are not mentioned here, although they appear as such in cognitivists' works. Nelson (1996) specifically states that "language is a catalyst for cognitive change". A catalyst does not enter the reaction and is not modified by it. In this chapter, we will show that although 'language', as a tool, is not modified, dialog, as a language and communication process, receives feedback when knowledge is revised.
When cognitive agents are programs or robots, they are called artificial cognitive agents. In AI and knowledge representation terminology, this translates into systems possessing knowledge bases, which they make evolve either through environment observation (reactivity) or through derivation modes (inductive, deductive and abductive reasoning). Deduction is the natural mode of knowledge derivation in a propo-