is no trust). In fact, in addition to the goal, it is also necessary that the trustor believes herself
to be (strongly or weakly) dependent on the trustee (see Figure 2.14).
On the basis of the goal, of her (potential) dependence beliefs,54 of her beliefs about
the trustee's attributes (internal trust), and of her beliefs about the context in which the trustee's
performance will take place, the trustor (potentially) arrives at the decision to trust or not
(Figure 2.15).
As explained in Section 2.2.1, all these possible beliefs are not simply external bases and
supports of X's trust in Y (reduced to the Willingness, Competence, and Dependence
beliefs, and to the Decision and Act); they are possible internal sub-components and forms
of trust, in a recursive trust structure. The frame looks quite complicated and complex but, in
fact, it is only a potential frame: not all these sub-components (for example, the beliefs about
X's morality, or fear of authority, or self-esteem) are necessarily already present or explicitly
represented.
Moreover, as we will see in detail in Chapter 3, a relevant role is played by the quantification
of the different elements: the weight of the beliefs, the value of the goal, the potential utilities
resulting from a delegation and so on (see Figure 2.16).
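To make the role of this quantification concrete, here is a minimal, purely illustrative sketch of how the ingredients could be combined (the actual quantitative model is developed in Chapter 3). The names competence, willingness, dependence, and context, the multiplicative combination, and the decision threshold are all assumptions introduced for illustration, not the book's formal definitions.

from dataclasses import dataclass


@dataclass
class TrustEvaluation:
    """Hypothetical quantification of the main trust ingredients.

    Each field is a subjective degree in [0, 1] held by the trustor X
    about the trustee Y with respect to a given goal/task.
    """
    competence: float   # strength of X's belief that Y is able to perform the task
    willingness: float  # strength of X's belief that Y will actually perform it
    dependence: float   # how much X believes she depends on Y for achieving the goal
    context: float      # favourability of the external conditions (external trust)

    def degree_of_trust(self) -> float:
        # Simple multiplicative combination: any weak ingredient drags the
        # overall degree of trust down (an assumption, not the book's model).
        return self.competence * self.willingness * self.context

    def decide_to_trust(self, goal_value: float, threshold: float = 0.5) -> bool:
        # The decision weighs the degree of trust by the value of the goal and
        # by the degree of dependence, and delegates only above a threshold.
        expected_benefit = self.degree_of_trust() * goal_value * self.dependence
        return expected_benefit >= threshold


# Example: X strongly believes in Y's competence, is less sure of Y's
# willingness, and depends heavily on Y for a valuable goal.
evaluation = TrustEvaluation(competence=0.9, willingness=0.6,
                             dependence=0.8, context=0.9)
print(evaluation.degree_of_trust())                # ~0.49
print(evaluation.decide_to_trust(goal_value=1.0))  # False with the default threshold

The point of the sketch is only that the decision to trust is not a yes/no matter of having the right beliefs; it depends on how strong those beliefs are, how valuable the goal is, and how dependent the trustor is on the trustee.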
54 In truth, the dependence belief already implies some beliefs about Y's skills or resources that are useful for X's goal.