We have outlined:
- the criteria for when and why to adjust the autonomy of an agent (for example, when one believes that the agent is not doing (in time) what it has been delegated to do, and/or is working badly and making mistakes, and/or that there are unforeseen events, external dangers and obstacles that perhaps the agent is not able to deal with), as sketched in code below; and
- possible protocols, both of monitoring and inspection and of physical or communicative intervention, that are necessary for control and adjustment.
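A minimal sketch of how these criteria might be operationalized is given below; the names (TaskStatus, Adjustment, adjust_autonomy) and the particular thresholds are illustrative assumptions of ours, not part of the model itself.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Adjustment(Enum):
    """Possible run-time interventions open to the delegating agent."""
    NONE = auto()             # keep the current level of autonomy
    INSPECT = auto()          # monitor more closely / ask for a report
    REDUCE_AUTONOMY = auto()  # narrow the delegated task, add constraints
    TAKE_OVER = auto()        # physical or communicative intervention


@dataclass
class TaskStatus:
    """Hypothetical snapshot of a delegated task as seen by the delegator."""
    deadline_missed: bool       # not doing (in time) what was delegated
    working_badly: bool         # performing poorly, making mistakes
    unforeseen_obstacles: bool  # external dangers or events have appeared
    agent_can_cope: bool        # belief that the agent can handle them alone


def adjust_autonomy(status: TaskStatus) -> Adjustment:
    """Map the criteria above onto a run-time control decision."""
    if status.unforeseen_obstacles and not status.agent_can_cope:
        return Adjustment.TAKE_OVER
    if status.deadline_missed or status.working_badly:
        return Adjustment.REDUCE_AUTONOMY
    if status.unforeseen_obstacles:
        return Adjustment.INSPECT
    return Adjustment.NONE


# Example: a late task triggers a reduction of the agent's autonomy.
print(adjust_autonomy(TaskStatus(True, False, False, True)))
```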
A very important dimension of such an interaction has been neglected: the normative dimension of empowerment and autonomy (entitlement, permission, prohibition, etc.), which is related to a richer, institutional relation of delegation. This dimension, too, is a matter of run-time adjustment and must be included as a necessary component when modeling several forms of interaction and organization.
Another important issue for future work is the acceptable limit of the agent's initiative in helping. Would, for example, our personal assistant be too intrusive if it took care of our 'interests' and 'needs' beyond, and even against, our request (Hyper-critical help)? Would the user/client like such a level of autonomy, or would they prefer an obedient slave without initiative? We leave this question unanswered; it is enough to have characterized and delimited the complex framework of such an issue.
Finally, we leave as a topic for future work a rather important clarification for engineering: does the implementation of such a model necessarily require deliberative agents?
In fact, our framework for collaboration and adjustable autonomy is presented in terms of cognitive agents, i.e. agents who have propositional attitudes, reason about plans, solve problems, and even assume an 'intentional stance' by having a representation of the mind of the other agent. This can be exemplified by some kind of BDI agent, but it is in fact more general (it does not apply only to a specific kind of architecture). We present our framework from a cognitive perspective because we want to cover the higher levels of autonomy, 28 and also the interaction between a human user and a robot or a software agent, or between humans. However, the basic ontology and claims of the model could also be applied to non-cognitive, merely rule-based agents.
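To make the contrast with rule-based agents concrete, the following toy sketch shows what an explicit 'intentional stance' and true goal adoption could look like; the classes and method names (CognitiveAgent, ascribe_goal, adopt_goal) are our own illustrative assumptions rather than a prescribed architecture.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Goal:
    description: str


@dataclass
class CognitiveAgent:
    """Toy BDI-flavoured agent: it keeps an explicit representation of the
    other agents' goals (an 'intentional stance') and can decide to adopt one."""
    name: str
    intentions: list = field(default_factory=list)
    others_goals: dict = field(default_factory=dict)  # who (we believe) wants what

    def ascribe_goal(self, other: str, goal: Goal) -> None:
        """Represent the belief that `other` has `goal`."""
        self.others_goals.setdefault(other, []).append(goal)

    def adopt_goal(self, other: str, goal: Goal) -> bool:
        """True 'goal adoption': take on the other's goal as my own intention
        precisely because I believe it is the other's goal."""
        if goal in self.others_goals.get(other, []):
            self.intentions.append(goal)
            return True
        return False


# The user's goal is ascribed to the assistant, which then explicitly adopts it.
assistant = CognitiveAgent("assistant")
assistant.ascribe_goal("user", Goal("have the report printed"))
assistant.adopt_goal("user", Goal("have the report printed"))
```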
Obviously, a cognitive agent (say, a human) can delegate, in a weak or mild sense, to a merely rule-based entity. Strong delegation based on mutual understanding and agreement cannot be used, but it can be emulated. The delegated device could have interaction protocols and reactive rules such that, if the user (or another agent) asks it to do something and certain conditions hold, it will do that action. This is the procedural emulation of a true 'goal adoption'. Our notions could in fact simply be embedded by the designer in the rules and protocols of those agents, making their behavior correspond functionally to delegation or adoption, without the 'mental' (internal and explicit) goal of delegating or of helping. One could, for example, have fixed rules of over-help like the following ones:
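What follows is only an illustrative sketch of such condition-action rules; the device, its primitive actions and the rule contents (printing, refilling paper, ordering toner) are hypothetical examples of ours, not the framework's own.

```python
# A purely reactive device that emulates goal adoption: the over-help is
# hard-wired by the designer as condition-action rules, with no internal
# representation of the user's goals.

def print_document(doc):             # hypothetical primitive actions
    print(f"printing {doc}")

def refill_paper_tray():
    print("refilling paper tray")

def order_toner():
    print("ordering toner")

# Each rule: (condition on the request and the device state, extra actions).
OVER_HELP_RULES = [
    (lambda req, state: req == "print" and state["paper_low"],
     [refill_paper_tray]),
    (lambda req, state: req == "print" and state["toner_low"],
     [order_toner]),
]

def handle_request(req, doc, state):
    """Do what was asked, then fire any over-help rule whose condition holds."""
    if req == "print":
        print_document(doc)
    for condition, actions in OVER_HELP_RULES:
        if condition(req, state):
            for action in actions:
                action()

# The user only asked to print, but the device also refills the paper tray.
handle_request("print", "report.pdf", {"paper_low": True, "toner_low": False})
```

The device does more than it was asked to do, yet nowhere does it represent the user's goal: its behavior corresponds functionally to over-help only because the designer wired it in.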
28 In our view, to neglect or reduce the mental characterization of delegation (the allocation of tasks) and adoption (helping another agent to achieve its own goals) means, on the one hand, losing a set of potentially interesting kinds and levels of reliance and help and, on the other hand, failing to fully meet the needs and the nature of human interaction, which is strongly based on these categories of cooperation.