global trust in the global event or process and its result, which is also affected by external factors like opportunities and interferences.
Trust may be said to consist of, or to imply (either implicitly or explicitly), the subjective probability of the successful performance of a given behavior, and it is on the basis of this subjective perception/evaluation of risk and opportunity that the agent decides whether or not to rely on, to bet on, Y. However, this probability index is based on, and derives from, those beliefs and evaluations. In other words, the global, final probability of the realization of the goal g, i.e. of the successful performance of α, should be decomposed into the probability of Y performing the action well (which derives from the probability of willingness, persistence, engagement, and competence: internal attribution) and the probability of having the appropriate conditions (opportunities and resources: external attribution) for the performance and for its success, and of not having interferences and adversities (external attribution).
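The decomposition above can be sketched as a small computation. This is only an illustrative reading of the text: the multiplicative combination of the factors, and all parameter names, are assumptions introduced here, not a formal model given by the authors.

```python
# Illustrative sketch: decomposing the global subjective probability of
# success into an internal component (attributed to Y) and an external
# component (attributed to circumstances). The multiplicative combination
# is an assumption made for this sketch.

def internal_trust(willingness, persistence, engagement, competence):
    """Probability that Y performs the action well (internal attribution)."""
    return willingness * persistence * engagement * competence

def external_trust(opportunities, no_interference):
    """Probability that conditions allow the action to succeed
    (external attribution: opportunities, absence of adversities)."""
    return opportunities * no_interference

def global_trust(p_internal, p_external):
    """Global, final subjective probability of realizing goal g
    through the successful performance of action alpha."""
    return p_internal * p_external

p_int = internal_trust(0.9, 0.95, 0.9, 0.85)
p_ext = external_trust(0.8, 0.9)
p_g = global_trust(p_int, p_ext)
```

Under this sketch, the same global value `p_g` can arise from very different internal/external splits, which is exactly what points (a) and (b) below turn on.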
Why is this decomposition important? Not only to cognitively ground such a probability (which after all is 'subjective', i.e. mentally elaborated) - and this cognitive embedding is fundamental for relying, influencing, persuading, etc. - but because:
a) the agent's trusting/delegating decision might be different, with the same global probability or risk, depending on its composition;
b) trust composition (internal vs. external) produces completely different intervention strategies: manipulating the external variables (circumstances, infrastructures) is completely different from manipulating the internal parameters.
Let's consider the first point (a). There might be different heuristics, or different personalities with a different propensity to delegate or not, in the case of weak internal trust (subjective trustworthiness) even with the same global risk. For example: 'I completely trust him, but he cannot succeed, it is too hard a task!', or 'the mission/task is not difficult, but I do not have enough trust in him'. The problem is that - given the same global expectation - one agent might decide to trust/rely in one case but not in the other, or vice versa! In those terms this is an irrational, psychological bias. But this bias might be adaptive, and perhaps useful for artificial agents. There could be logical and rational meta-considerations behind a decision even in these apparently indistinguishable situations. Two possible examples of such meta-considerations are:
- giving trust (and then delegation) increases the experience of an agent; therefore, comparing two different situations - one in which we attribute low trustworthiness to the agent and the other in which we attribute high trustworthiness to him, obviously with the same resulting probability - we have a criterion for deciding;
- the trustor can learn different things from the two possible situations; for example, with respect to the agents, or with respect to the environments.
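Point (a) can be made concrete with a small hypothetical decision rule. The threshold policy below (delegating only when internal trust is high enough, on top of a global-probability threshold) is an assumption introduced for illustration, not a rule stated in the text; it simply shows how two situations with the same global probability can yield opposite decisions.

```python
# Hypothetical illustration of point (a): identical global probability,
# different internal/external composition, different delegation decisions.
# The thresholds (0.4 global, 0.7 internal) are arbitrary assumptions.

def decide_to_delegate(p_internal, p_external, min_internal=0.7):
    """Delegate only if the global probability is acceptable AND the
    trustee himself is judged reliable enough (internal attribution)."""
    p_global = p_internal * p_external
    return p_global > 0.4 and p_internal >= min_internal

# 'I completely trust him, but the task is too hard':
case_high_internal = decide_to_delegate(p_internal=0.9, p_external=0.5)

# 'The task is not difficult, but I do not trust him enough':
case_low_internal = decide_to_delegate(p_internal=0.5, p_external=0.9)

# Both cases share the same global probability (0.45), yet only the
# first leads to delegation under this policy.
```

Whether such a policy is a bias or a rational meta-consideration depends, as the text notes, on what the trustor can learn or gain from each configuration.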
As for point (b), the strategies for establishing or increasing trust are very different depending on whether your diagnosis of the lack of trust rests on an external or an internal attribution. If there are adverse environmental or situational conditions, your intervention will consist in establishing protective conditions and guarantees, in preventing interferences and obstacles, and in establishing rules and infrastructures; whereas if you want to increase your trust in your trustee, you should work on his motivation, beliefs, and disposition towards you, or on his competence, self-confidence, etc.
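The two families of strategies in point (b) can be summarized as a simple dispatch on the diagnosed attribution. The mapping below only restates the strategies already listed in the text; the function and its labels are illustrative.

```python
# Sketch of point (b): intervention strategies keyed by the attribution
# (internal vs. external) of the diagnosed lack of trust. The strategy
# phrasings are taken from the discussion above.

def intervention_strategies(attribution):
    if attribution == "external":
        return ["establish protective conditions and guarantees",
                "prevent interferences and obstacles",
                "establish rules and infrastructures"]
    if attribution == "internal":
        return ["work on the trustee's motivation",
                "work on his beliefs and disposition towards you",
                "work on his competence and self-confidence"]
    raise ValueError("attribution must be 'internal' or 'external'")
```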