The general architecture is then established, and part of the theory's implementation is discussed. The conclusion covers the achievements of this work and its potential improvements.
Introduction
Recent research in artificial intelligence (AI) focusing on intelligent software agents has acknowledged that communication has to be seen as an intrinsic cognitive process rather than a plain external data-exchange protocol. Communication, and more specifically dialog, is an active process that modifies the agent's internal state. It can be directly or indirectly related to a change in the agent's environment, as any action is when performed. This is why Speech Act Theory (Searle, 1969), originally developed in philosophy, has migrated toward computational linguistics and cognitive science, finally providing a proper frame for a new communication language between artificial agents (Smith & Cohen, 1996), especially within agent societies (Pedersen, 2002). However, even though communication has changed status, it has not been fully exploited by those who have promoted Speech Act-based enhancements to agent design. Communication has been examined as a system of constraints preceding action (Mataric, 1997), as a set of actions (mostly with performative communication, where any utterance is equivalent to an action (Cerri & Jonquet, 2003)), and as a set of heuristics for negotiation strategies (Parsons, Sierra, & Jennings, 1998; Wooldridge & Parsons, 2000). Its feedback on the agent's knowledge base, however, has seldom been considered a central issue. Some advances have been made in this direction: negotiation has been recognized as tied to a process of belief revision (Amgoud & Prade, 2003; Zhang, Foo, Meyer, & Kwok, 2004), thus acknowledging the role of communication as a part of knowledge processing in artificial agents, albeit mostly in a supporting role.
On the other hand, a complementary field of AI has been addressing communication issues: several strands of human-machine interaction research have fostered interesting models of 'intelligent' communication, i.e., an information exchange that involves actions related to knowledge acquisition and update. (Draper & Anderson, 1991) and (Baker, 1994) model dialogs as fundamental elements of human learning and try to import them into intelligent tutoring systems (ITS). (Asoh et al., 1996), (Cook, 2000), and (Ravenscroft & Pilkington, 2000), among several others, relate dialog to cognitive actions such as mapping, problem seeking, and investigation by design. All of these authors tend to emphasize the same point: dialog supports cognition in human activity, and thus might support it if modeled in an ITS. In AI, cognition is seen as the sum of belief and knowledge acquisition or revision, together with reasoning. What supports it in the human learning process could also support it in machine learning: the idea that a learning process can be triggered or driven by queries, which are one element of the basic query-answer pattern of dialog, has long been defended by (Angluin, 1987). Strangely, descriptions of cognition do not directly include communication as an intrinsic cognitive process, even though this point has been made in the more 'human' part of cognitive science (i.e., cognitive psychology), and even though, some twenty years ago, AI researchers emphasized the deep relationship between knowledge and its communicative substrate in well-known publications such as (Allen & Perrault, 1980) and (Cohen & Levesque, 1992).
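To make this point concrete, the following minimal Python sketch (not taken from the chapter; the Teacher/Learner classes and all names are hypothetical, chosen for illustration only) casts a membership-query learning loop in the spirit of (Angluin, 1987) as a query-answer dialog: each answer is incorporated into the learner's internal state, so the communicative exchange and the learning process coincide.

```python
# Illustrative sketch only: learning a hidden threshold through a query-answer
# dialog. The learner's belief (an interval bounding the threshold) is revised
# after every answer, so each dialog turn is also a learning step.

class Teacher:
    """Answers membership queries about a hidden concept: x >= threshold."""

    def __init__(self, threshold: float):
        self._threshold = threshold

    def answer(self, query: float) -> bool:
        # The 'speech act' of answering: an assertion about the hidden concept.
        return query >= self._threshold


class Learner:
    """Holds a belief about the threshold and refines it via queries."""

    def __init__(self):
        self.low, self.high = 0.0, 1.0  # current belief: threshold lies in [low, high]

    def next_query(self) -> float:
        return (self.low + self.high) / 2.0

    def incorporate(self, query: float, answer: bool) -> None:
        # Belief revision triggered by the dialog turn.
        if answer:
            self.high = query
        else:
            self.low = query


def dialog(teacher: Teacher, learner: Learner, turns: int = 20) -> float:
    """Run a query-answer dialog; the learner's knowledge improves each turn."""
    for _ in range(turns):
        q = learner.next_query()
        a = teacher.answer(q)
        learner.incorporate(q, a)
    return (learner.low + learner.high) / 2.0


if __name__ == "__main__":
    estimate = dialog(Teacher(threshold=0.37), Learner())
    print(f"learned threshold ~ {estimate:.4f}")  # converges toward 0.37
```

In this toy setting the communicative process and the learning process are literally the same loop: removing the dialog removes the learning, which is the duality the chapter argues for in a far more general form.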
In our opinion, a gap remains unfilled: when considering artificial agents, especially those qualified as cognitive, is it possible to find an equivalence between a communicative process and a learning process, to model and implement communication and learning as dual aspects