proposed to simplify the process of finding the maximum value of the numerators and to reduce the number of optimizations that are required in order to measure the possibility of each trust rating τ. Due to lack of space we cannot elaborate further on this process; the detailed procedure is presented in [18].
3.5 Manipulation of the Possibility Distributions
An agent, say a_S, needs to acquire information about the degree of trustworthiness of an agent a_D unknown to it. For this purpose, it acquires information from its advisors, such as a, which are known to a_S and have already interacted with a_D. Each agent a is not necessarily truthful, for reasons of self-interest, and may therefore manipulate the possibility distribution it has built on a_D's trust before reporting it to a_S. The degree of manipulation of the information by agent a is based on its internal probability distribution of trust. More specifically, if the internal trust distributions of two agents a_1 and a_2 indicate that a_1's degree of trustworthiness is lower than a_2's, then the possibility distribution reported by a_1 is more prone to error than that reported by a_2. The two algorithms introduced in this section are examples of manipulation algorithms:
Algorithm I
for each τ ∈ T do
    τ' ← random trust rating drawn from T according to agent a's internal trust distribution
    error_τ = 1 − τ'
    Π'_{a→a_D}(τ) = Π_{a→a_D}(τ) + error_τ
end for
where Π_{a→a_D}(τ), τ ∈ T, is the possibility distribution built by a through its interactions with a_D, and Π'_{a→a_D}(τ), τ ∈ T, is the manipulated possibility distribution which is reported to a_S. In this algorithm, for each trust rating τ ∈ T a random trust value τ' is generated following the internal trust distribution of agent a. For highly trustworthy agents, the randomly generated value τ' is close to 1 and the resulting error (error_τ) is close to 0; the manipulation of the possibility value Π_{a→a_D}(τ) is therefore insignificant. For highly untrustworthy agents, on the other hand, the value of τ' is close to 0 and the derived error error_τ is close to 1. In such a case, the possibility value Π_{a→a_D}(τ) is considerably modified, causing a noticeable change in the original values.
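As an illustration, the following Python sketch implements Algorithm I under a few assumptions not fixed by the text: trust ratings in T are numbers in [0, 1], the advisor's internal trust distribution is given as a probability weight per rating, and the function name manipulate_distribution is ours, not the paper's.

import random

def manipulate_distribution(possibility, internal_trust):
    """Algorithm I (sketch): distort Π_{a→a_D} before reporting it to a_S.

    possibility    -- dict mapping each trust rating τ in T to Π_{a→a_D}(τ)
    internal_trust -- dict mapping each rating in T to its probability under
                      agent a's internal trust distribution (weights sum to 1)
    """
    ratings = list(internal_trust)
    weights = [internal_trust[r] for r in ratings]
    manipulated = {}
    for tau, value in possibility.items():
        # Draw τ' from T according to a's internal trust distribution.
        tau_prime = random.choices(ratings, weights=weights, k=1)[0]
        error = 1 - tau_prime  # error_τ = 1 − τ'
        manipulated[tau] = value + error
    return manipulated

A trustworthy advisor concentrates its internal distribution near 1, so most draws yield τ' ≈ 1 and error_τ ≈ 0, leaving the report almost intact; an untrustworthy advisor concentrates mass near 0 and inflates every possibility value, which is why the result must then be normalized as described next.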
After computing the distribution Π'_{a→a_D}(τ), τ ∈ T, it is normalized and then reported to a_S. The normalization guarantees that: (1) the possibility value of every trust rating τ in T lies in [0, 1], and (2) the possibility value of at least one trust rating in T equals 1.
Let Π'(τ) be a non-normalized possibility distribution. Either of the following formulas [7] generates a normalized possibility distribution Π(τ):

Π(τ) = Π'(τ) / h,        (1)
Π(τ) = Π'(τ) + 1 − h,    (2)

where h = max_{τ∈T} Π'(τ).
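A minimal sketch of the two normalization rules, assuming the same dict representation as above (the function names are again ours):

def normalize_ratio(dist):
    """Formula (1): Π(τ) = Π'(τ) / h, where h is the maximum possibility value."""
    h = max(dist.values())
    return {tau: v / h for tau, v in dist.items()}

def normalize_shift(dist):
    """Formula (2): Π(τ) = Π'(τ) + 1 − h, where h is the maximum possibility value."""
    h = max(dist.values())
    return {tau: v + 1 - h for tau, v in dist.items()}

Formula (1) rescales the distribution and preserves the ratios between possibility values, while formula (2) shifts it and preserves their differences; in both cases the largest value becomes exactly 1, satisfying condition (2) above.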
 