emotions in humans, robots will be able to influence our moods and our feelings as we become more and more susceptible to their overtures. In the near future, persuasive technologies will be commonplace, affecting many people in many ways.
It has already been shown by a group at Stanford that humans are susceptible to flattery from computers and that the effects of such flattery mirror those of flattery from humans. In an experiment involving a co-operative task with a computer, Brian Fogg, Clifford Nass and their team arranged for 41 subjects to receive one of three types of feedback from the computer: “sincere praise”, “flattery” (insincere praise) or “generic feedback” (i.e., placebo feedback). The subjects in the flattery condition reported more positive affect, better performance, more positive evaluations of the interaction and more positive regard for the computer, all in comparison with the placebo group, even though they knew that the flattery from the computer did not depend in any way on their performance. Subjects receiving sincere praise responded similarly to those in the flattery condition. The study concluded that flattery from a computer can indeed induce the same general effects as flattery from humans.
Flattery, in one form or another, lies at the heart of marketing and selling—persuading us to part with our money. Flattery is often employed, for example, in advertisements that aim to convince us that we will be more appealing to our partners or dates if we wear a particular brand of perfume, after-shave or designer-wear. Clearly, the capability of robots to persuade raises significant ethical issues about how persuasive technology should be applied. If you are in the Garden of Eden and a serpent persuades you to eat a fruit, and if in eating it you cause distress to some individual or even to the whole of humanity, whose fault is it, yours or the serpent's? Ethicists have struggled with such questions for thousands of years, and so has every persuader with a conscience.
Daniel Berdichevsky and Eric Neuenschwander have listed eight ethical principles of persuasive technology design, principles that subsume some of the ethics discussed earlier in this chapter and that add to those responsibilities imposed on robots and their designers by Asimov's Laws:
1. The intended outcome of any persuasive technology should never be one that would be deemed unethical if the persuasion were undertaken without the technology or if the outcome occurred independently of persuasion.