As we shall discuss in a while, we agree with Hardin that there is a restricted notion of
social trust which is based on the expectation of adoption, not just on the prediction of a
favorable behavior of Y. When X trusts Y in a strict social sense and counts on him, she
expects that Y will adopt her goal and that this goal will prevail in case of conflict with other
active goals of his. That is, X expects from Y not only an adoptive goal, but also an adoptive
decision and intention. A simple regularity-based prediction, or an expectation grounded on
some social role or normative behavior of Y, is not enough (and we agree with Hardin on
this) for characterizing what he calls 'trust in a strong sense', which is the central nature of
trust (Hardin, 2002) and what we call genuine social trust.
However, to our mind, Hardin still fails to conceive a broad theory of goal-adoption, so that
his notion of 'encapsulated interests' provides only a restricted and reductive view of
it. Indeed, the theory of adoption is crucial, although not yet well understood, in sociology,
economics and game theory (Tummolini, 2006), and in cooperation theory (Castelfranchi,
1997), (Tuomela, 1988, 1993). The fundamental point is to realize that there are different
kinds of goal-adoption, depending on Y's motives (higher goals) for doing something 'for
X', that is, for spending resources in order to realize another agent's goal. Let us list these
different cases:
1. Adoption can be merely instrumental to Y's personal, non-social goals: completely selfish.
This is the case when we satisfy chickens' need for food only in order to later kill and eat them
to our best satisfaction; or when we enter a do ut des relation, like an economic exchange
in Adam Smith's view (Smith, 1776).
2. Adoption can be cooperative in a strict sense. X and Y are reciprocally dependent on each
other, but for one and the same goal, which constitutes their common goal. This is
very different from other social situations, e.g. exchange, where there are two different
private/personal goals. Here, instead, the agents care for the same result in the world, and
they need each other for achieving it. For this reason (being part of a necessarily common
plan), each of them favors the actions and goals of the other within that plan, since he
needs them and relies on them. In a sense this adoption may be considered a sub-case
of instrumental adoption, but it is definitely better to clearly distinguish (2) from (1). In
fact, in (1) a rational agent should try to cheat, to withhold his contribution from the other:
especially after having received Y's service or commodity, and assuming no iteration of
the interaction, X has no reason for doing her share, for giving Y what Y expects. On
the contrary, in strictly cooperative situations, based on real reciprocal dependence for the
same objective, cheating is self-defeating: without doing her share, X will not achieve her
own (and common) goal (Conte and Castelfranchi, 1995), (Castelfranchi, Cesta, Conte,
Miceli, 1993).
3. Finally, there is also non-instrumental, terminal, or altruistic adoption. The good of X, the
realization of X's needs, desires, and interests, is an end per se, i.e. it does not need to be
motivated by higher personal goals.
It is then just an empirical matter whether a behavior and cognitive structure as postulated in
(3) really exists in humans, or whether humans are always motivated by selfish motives (although
reduced to internal hidden rewards, like self-approval and esteem, or avoiding feelings of guilt).
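The game-theoretic contrast between cases (1) and (2) can be made concrete with a minimal payoff sketch. The function names and payoff numbers below are illustrative assumptions of ours, not part of the authors' model: they merely encode the claim that in a one-shot exchange defection dominates, whereas under reciprocal dependence for a common goal defection forfeits the cheater's own goal.

```python
# Toy payoff sketch (hypothetical numbers) contrasting one-shot exchange
# with strict cooperation on a single common goal.

def exchange_payoff(x_does_share: bool) -> int:
    """One-shot exchange: X has already received Y's commodity (value 3).
    Doing her share costs X 1; with no iteration, defecting dominates."""
    received = 3                               # Y's commodity, already in hand
    cost_of_share = 1 if x_does_share else 0   # X's own contribution
    return received - cost_of_share

def cooperation_payoff(x_does_share: bool) -> int:
    """Strict cooperation: the common goal (value 3) is reached only if
    X contributes, so cheating forfeits the goal and is self-defeating."""
    cost_of_share = 1 if x_does_share else 0
    goal_value = 3 if x_does_share else 0      # reciprocal dependence
    return goal_value - cost_of_share

# In exchange, cheating pays: 3 > 2.
assert exchange_payoff(False) > exchange_payoff(True)
# In strict cooperation, cheating is self-defeating: 2 > 0.
assert cooperation_payoff(True) > cooperation_payoff(False)
```

Under these (arbitrary) numbers the qualitative point survives any values with the same ordering: the exchange defector keeps the received commodity and saves her cost, while the cooperative defector loses the very outcome she wanted.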