valued outcomes or resources)” (Simpson, 2007). Most studies try to
increase trust by designing systems in a more anthropomorphic way, but
encounter contrary effects in doing so. For example, Nowak (2004) found
that the agent with the highest level of anthropomorphism in the study
was reported as being the least socially attractive, reliable, and credible,
whereas the agent with the lowest level of anthropomorphism was reported
as being highly socially attractive, reliable, and credible.
The uncanny valley effect, as reflected in these findings, describes
the complicated relation between realism/anthropomorphism and users'
acceptance of artificial figures such as avatars and interface agents
(Pollick, 2010): the assumption that an artificial figure becomes more
acceptable the more realistically/anthropomorphically it is designed
has to be rejected. In fact, acceptance does not increase linearly with
anthropomorphism; at a certain point of 'very realistic but not perfect',
it drops abruptly into unacceptability, and rises again only when the
representation of life is indistinguishable from reality, as in films. The
impressions of uncanniness and confusion experienced by our subjects
seemed to have been increased further by the system's disregard of the
need for distance and privacy (loss of anonymity). This was followed by
mistrust regarding a possible abuse of confidence, and the ascription of
intentional deception increased this feeling further. If these consequences
are followed further, it seems probable that users' cooperation breaks
down into reactance, a complex reaction serving as a defense against
experienced outer and inner constraints (Miron and Brehm, 2006), for
example when intentions to pressure, compel, or demand subservience
are ascribed to the system.
Several studies have shown that reactance is an observable phenomenon
in HCI and human-robot interaction (Lui et al., 2008), and it seems to
occur more readily when (animated) agents/avatars are involved. For
example, reactance (operationalized as feelings of anger and negative
cognitions) was highest when information was presented by an animated
robot whose moving mouth was accompanied by text shown in a speech
bubble, compared with the same information presented as a non-animated
image of the robot, or as text alone without an agent (Roubroeks et
al., 2011).
When users ascribed more to the system, and its anthropomorphization
thereby increased, impressions of uncanniness (uncanny valley) and
consequently mistrust seemed more probable, which could have led
to reactance. The users seemed to deal with such feelings of
foreignness, skepticism, etc. by adapting to the identified abilities
and aims of the system. The effect of user characteristics known
from personality psychology (age, sex, education, technophilia, stress
handling, interpersonal problems, and personality traits according to