The previous definition of guilt-dependent utility is related to the definition of regret-dependent utility proposed in regret theory [18,19,15]. Specifically, similarly to Loomes & Sugden's regret theory, we assume that the computation of emotion-dependent utility consists in adding to player $i$'s personal utility the value $\delta_i(\mathit{Emotion}(i,s))$, which measures the intensity of player $i$'s current emotion.³ There are several possible instantiations of the function $\delta_i(\mathit{Guilt}(i,s))$. For example, it might be defined as follows:
\[
\delta_i(\mathit{Guilt}(i,s)) = c_i \times \mathit{Guilt}(i,s) \qquad (3)
\]
where $c_i \in \mathbb{R}^+ = \{x \in \mathbb{R} \mid x \geq 0\}$ is a constant measuring player $i$'s degree of guilt aversion.
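To illustrate, a minimal Python sketch of this linear instantiation is given below; the function name, the example numbers, and the sign convention for $\mathit{Guilt}(i,s)$ are assumptions made for illustration only, not part of the formal model.

```python
def emotion_dependent_utility(personal_utility, guilt_intensity, c_i):
    """Equation (3) plugged into the emotion-dependent utility:
    U_i(s) + delta_i(Guilt(i, s)), with delta_i(Guilt(i, s)) = c_i * Guilt(i, s).

    c_i >= 0 is player i's degree of guilt aversion; guilt_intensity stands for
    Guilt(i, s).  How guilt is signed (e.g. as a negative quantity) is an
    assumption left to the caller.
    """
    if c_i < 0:
        raise ValueError("c_i must be a non-negative guilt-aversion constant")
    return personal_utility + c_i * guilt_intensity

# Hypothetical numbers: guilt encoded as a negative intensity, so the more
# guilt-averse player (larger c_i) loses more utility in the same state.
print(emotion_dependent_utility(5.0, -2.0, c_i=0.5))  # 4.0
print(emotion_dependent_utility(5.0, -2.0, c_i=1.5))  # 2.0
```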
2.3   Grounding Moral Values on Personal Utilities
In the preceding definition of a normal form game with moral values, a player $i$'s utility function $U_i$ and ideality function $I_i$ are taken as independent. Harsanyi's theory of morality provides support for a utilitarian interpretation of moral motivation which allows us to reduce a player $i$'s ideality function $I_i$ to the utility functions of all players [17,16]. Specifically, Harsanyi argues that an agent's moral motivation coincides with the goal of maximizing the collective utility, represented by the weighted sum of the individual utilities.
Definition 5 (Normal form game with moral values based on Harsanyi's view). A normal form game with moral values $\Gamma^+ = (N, \{S_i\}_{i \in N}, \{U_i\}_{i \in N}, \{I_i\}_{i \in N})$ is based on Harsanyi's view of morality if and only if, for all $i \in N$:
\[
I_i(s) = \sum_{j \in N} k_{i,j} \times U_j(s) \qquad (4)
\]
for some $k_{i,1}, \ldots, k_{i,n} \in [0,1]$.
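A minimal sketch of how equation (4) can be evaluated is given below, assuming a hypothetical representation of the utility functions as Python callables and of the empathy weights $k_{i,j}$ as a dictionary; none of these names come from the formal definition.

```python
def ideality(i, s, utilities, empathy):
    """Ideality of alternative s for player i under Harsanyi's view,
    equation (4): I_i(s) = sum over j in N of k_{i,j} * U_j(s).

    utilities: dict mapping each player j to its utility function U_j
    empathy:   dict mapping (i, j) to the weight k_{i,j} in [0, 1]
    """
    return sum(empathy[(i, j)] * u_j(s) for j, u_j in utilities.items())
```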
The parameter $k_{i,j}$ in the previous equation can be conceived of as agent $i$'s degree of empathy towards agent $j$. This means that the higher the degree of empathy of agent $i$ towards agent $j$, the higher the influence of agent $j$'s personal utility on the degree of ideality of a given alternative for agent $i$. In certain situations, it is reasonable to suppose that an agent has a maximal degree of empathy towards all agents, i.e., $k_{i,j} = 1$ for all $i, j \in N$. Under this assumption, the previous equation can be simplified as follows:
\[
I_i(s) = \sum_{j \in N} U_j(s) \qquad (5)
\]
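For instance, with the maximal-empathy weights the sketch of equation (4) above reduces to the plain sum of equation (5); the two-player payoffs used here are purely hypothetical.

```python
# Hypothetical two-player game: s is a strategy profile such as ("C", "D").
utilities = {
    1: lambda s: 3.0 if s == ("C", "C") else 1.0,   # U_1 (assumed payoffs)
    2: lambda s: 3.0 if s == ("C", "C") else 4.0,   # U_2 (assumed payoffs)
}
# Maximal empathy: k_{i,j} = 1 for all i, j, so equation (4) collapses to (5).
uniform = {(i, j): 1.0 for i in utilities for j in utilities}

print(ideality(1, ("C", "C"), utilities, uniform))  # 6.0 = U_1 + U_2
print(ideality(1, ("C", "D"), utilities, uniform))  # 5.0 = U_1 + U_2
```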
An alternative to Harsanyi's utilitarian view of morality is Rawls' view [22]. In response to Harsanyi, Rawls proposed the maximin criterion of making the least happy agent as happy as possible: for all alternatives $s$ and $s'$, if the level of well-being in the worst-off position is strictly higher in $s$ than in $s'$, then $s$ is better than $s'$. According to this
³ On the grounds of empirical evidence, Loomes & Sugden also suppose that the function $\delta_i$ should be convex. To keep our model simpler, we do not make this assumption here.