where 'p_i is satisfied from {f_i}' means that the trustee (from the point of view of the trustor) has the right features for satisfying all the core properties of the task. In particular, the needed features are over the minimal threshold (σ_j), so that the main properties of the task will also be over the minimal threshold (ρ_i):
(∀ f_j ∈ {f_i}, (f_j > σ_j)) and (apply({f_i}, τ) ⇒ (∀ p_i ∈ τ_C, p_i > ρ_i))    (6.21)
where the apply function defines the match between the agent's features and the task's properties. Note also that the different thresholds (σ_j and ρ_i) depend on the trustor.
We have to say that even if it is possible to establish an objective and general point of view about both the actual composition of the tasks (the set of their properties, including the actual set of core properties for each task) and the actual features of the agents, what is really important are the trustor's specific beliefs about these elements. In fact, it is on the basis of these beliefs that the trustor determines its trust. For this reason alone we have introduced as main functions those regarding the trustors' beliefs.
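The condition in (6.21) can be sketched in code. The following is a minimal illustration, not the authors' implementation; the dictionary-based representation and the one-to-one mapping from features to properties in the toy apply function are assumptions made here for concreteness:

```python
# Sketch of condition (6.21), with hypothetical names: every feature f_j
# of the trustee must exceed its threshold sigma_j, and applying the
# features to the task must put every core property p_i over rho_i.

def apply(features, task):
    """Toy match function: each task property is satisfied to the
    degree of its identically named supporting feature (an assumption
    of this sketch, not of the model itself)."""
    return {p: features.get(p, 0.0) for p in task}

def satisfies(features, sigma, task, rho):
    # First conjunct: every feature f_j is over its threshold sigma_j.
    features_ok = all(features[f] > sigma[f] for f in features)
    # Second conjunct: applying {f_i} to tau yields every property
    # p_i over its threshold rho_i.
    properties = apply(features, task)
    properties_ok = all(properties[p] > rho[p] for p in task)
    return features_ok and properties_ok

# A trustee with two features, and a task whose core properties are
# supported by those same features.
features = {"strength": 0.8, "precision": 0.7}
sigma = {"strength": 0.5, "precision": 0.5}
task = ["strength", "precision"]
rho = {"strength": 0.6, "precision": 0.6}

print(satisfies(features, sigma, task, rho))  # True under these values
```

Note that both thresholds (σ_j and ρ_i) are parameters of the check, mirroring the point above that they depend on the trustor.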
6.6.4 Generalizing to Different Tasks and Agents
Let us now introduce the reasoning-based trust generalization. Consider three agents: Ag_X, Ag_Y and Ag_Z (all included in AG) and two tasks τ and τ' (both included in T). Ag_X is a trustor, and Ag_Y and Ag_Z are potential trustees.
Where:
τ ≡ {p_1, ..., p_k} ∪ {p_{k+1}, ..., p_m} = τ_C ∪ τ_NC    (6.17bis)
and in general (p_j = the j-th property, with p_j ∈ τ_C ∪ τ_NC and w(p_j) its weight in τ_C ∪ τ_NC);
f_AgZ ≡ {f_Z1, ..., f_Zn}    (6.18bis)
The first case (case A) we consider is when Ag_X knows neither τ's properties nor Ag_Y's features, but he trusts Ag_Y on τ (this can happen for different reasons: for example, he was informed by others about Ag_Y's trustworthiness, or he simply knows the successful result without having witnessed the whole execution of the task, and so on). In more formal terms:
a1) Trust_AgX(Ag_Y, τ)
a2) ¬Bel_AgX(f_AgY ≡ {f_Y1, ..., f_Yn})
a3) ¬Bel_AgX(τ ≡ {p_1, ..., p_k} ∪ {p_{k+1}, ..., p_m})
In this case, which kind of trust generalization is possible? Can Ag_X believe that Ag_Y is trustworthy on a different (but in some way analogous) task τ' (generalization of the task), starting from the previous cognitive elements (a1, a2, a3)? Or can Ag_X believe that another, different (but in some way analogous) agent Ag_Z is trustworthy on the same task τ (generalization of the agent), again starting from the previous cognitive elements (a1, a2, a3)?
The problem is that the analogies (between τ and τ', and between Ag_Y and Ag_Z) are not available to Ag_X, because he does not know either the properties of τ and τ' or the features