When robots interact with real people, they need to be aware of the shared social environment and be capable of social interaction.
Another long-term experiment was conducted by Kanda et al. (2004). The study describes a two-week field trial in which two English-speaking interactive humanoid robots acted as peer English tutors for Japanese elementary school students. The study revealed that the robots failed to keep most of the children's interest after the first week, largely because the first impression created unreasonably high expectations in the children.
A longer study was carried out at Carnegie Mellon University using Valerie, a "roboceptionist" (Gockley et al. 2005). Students and university visitors interacted with the robot over a nine-month period. The results indicated that many visitors continued to interact with the robot daily but, after a certain period of time, only a few of them interacted for more than 30 seconds.
Some of the studies on long-term human-computer relationships are grounded in theories from human social psychology, such as the work of Bickmore and Picard (2005). They developed a social agent and evaluated it in a controlled experiment in which approximately 100 users were asked to interact daily with an exercise system. After four weeks of interaction, the agent's social behaviours had increased participants' perceptions of the quality of the working alliance (on measures such as liking, trust and respect), when compared with an otherwise identical agent without social behaviours. Moreover, participants who interacted with the social agent expressed a significantly higher desire to continue interacting with the system.
So how do we design for long-term interaction? To develop artificial agents that are capable of building long-term social relationships with users, we need to model the complex social dynamics present in human behaviour (Leite et al. 2010). For users to remain engaged for months, or even years, social agents need to be capable of long-term adaptiveness, associations, and memory (Fong et al. 2003). Moreover, if the interaction with a social agent remains enjoyable over long periods of time, users may eventually spend more time interacting with it. This is an important step towards designing artificial companions or, in our case, opponents that are capable of engaging users in the long term.
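As an illustrative sketch only (not drawn from the cited works), the following Python fragment shows one hypothetical way an agent could keep per-user long-term memory, form simple associations, and adapt its behaviour across sessions. All class, field and method names here are assumptions made for illustration.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Dict, List

    @dataclass
    class SessionRecord:
        """One interaction session with a user (hypothetical schema)."""
        timestamp: datetime
        topics: List[str]      # what was talked about or played
        enjoyment: float       # e.g. a post-session rating in [0, 1]

    @dataclass
    class UserMemory:
        """Long-term, per-user memory the agent can draw on in later sessions."""
        name: str
        sessions: List[SessionRecord] = field(default_factory=list)

        def record_session(self, topics: List[str], enjoyment: float) -> None:
            self.sessions.append(SessionRecord(datetime.now(), topics, enjoyment))

        def favourite_topics(self, top_n: int = 3) -> List[str]:
            """Associations: topics that appeared most often in past sessions."""
            counts: Dict[str, int] = {}
            for session in self.sessions:
                for topic in session.topics:
                    counts[topic] = counts.get(topic, 0) + 1
            return sorted(counts, key=counts.get, reverse=True)[:top_n]

        def should_vary_behaviour(self) -> bool:
            """Adaptiveness: if recent enjoyment is trending down, try something new."""
            recent = [s.enjoyment for s in self.sessions[-3:]]
            return len(recent) == 3 and recent[-1] < recent[0]

    # Example usage: greet a returning user with a reference to shared history.
    memory = UserMemory("Alice")
    memory.record_session(["chess opening", "small talk"], enjoyment=0.8)
    memory.record_session(["chess endgame"], enjoyment=0.6)
    favourites = memory.favourite_topics()
    if favourites:
        print(f"Welcome back, {memory.name}! Shall we revisit {favourites[0]}?")

A real system would of course need richer user models and adaptation strategies, but even this minimal structure captures the three ingredients highlighted above: memory of past encounters, associations built from them, and behaviour that changes in response.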
7.3 Towards Socially Present Board Game Opponents
Current artificial opponents lack social presence, and when human players perceive artificial opponents as not socially present, their enjoyment while interacting with them decreases. Johansson (2006) stated that "bots are blind and objective, while humans may decide to eliminate the bots first, just because they are bots". This observation shows that, over repeated interactions, humans attribute a very low sense of social presence to artificial opponents. To counter this kind of degradation in the interaction, in this section we present five guidelines for designing more socially present board game opponents.
Specifically, we argue that, to improve social presence, an artificial board game opponent should: