accelerometer and capacitance. Finally, it had twin color cameras that provided stereo depth perception. It had a blackboard architecture with a multitude of perception/action daemons, and a state machine that had seven emotions with a three-level “adrenalin” excitation.
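To make that architecture a little more concrete, here is a minimal sketch, under assumptions of my own, of a blackboard fed by perception daemons driving a seven-emotion state machine with three-level excitation. The emotion labels, sensor keys, and update rules are hypothetical placeholders, not the actual design of the robot described above.

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical labels: the text says seven emotions and three excitation
# levels but does not enumerate them, so these are illustrative only.
EMOTIONS = ["joy", "interest", "surprise", "fear", "anger", "sadness", "calm"]
EXCITATION = ["low", "medium", "high"]

@dataclass
class Blackboard:
    """Shared store that perception daemons write to and the state machine reads from."""
    percepts: Dict[str, float] = field(default_factory=dict)

    def post(self, key: str, value: float) -> None:
        self.percepts[key] = value

@dataclass
class EmotionStateMachine:
    """Chooses one of seven emotions and a three-level excitation from current percepts only."""
    emotion: str = "calm"
    excitation: str = "low"

    def update(self, bb: Blackboard) -> None:
        # "Zen bot" behavior: state is derived from the Now of the senses, no memory kept.
        arousal = bb.percepts.get("motion", 0.0) + bb.percepts.get("touch", 0.0)
        self.excitation = EXCITATION[min(2, int(arousal * 3))]
        self.emotion = "interest" if bb.percepts.get("face_detected", 0.0) > 0.5 else "calm"

# Daemons are simple callables run each tick against the blackboard.
def camera_daemon(bb: Blackboard) -> None:
    bb.post("face_detected", 1.0)   # stand-in for stereo-camera face detection

def touch_daemon(bb: Blackboard) -> None:
    bb.post("touch", 0.4)           # stand-in for the capacitance sensor

if __name__ == "__main__":
    bb = Blackboard()
    fsm = EmotionStateMachine()
    for daemon in (camera_daemon, touch_daemon):
        daemon(bb)
    fsm.update(bb)
    print(fsm.emotion, fsm.excitation)
```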
We then made another interesting discovery about how people perceived and
interacted with the robot as both moved through space and time. Our robot had
no memory; it was what we called a “Zen bot”—it always acted on the Now of its
senses. Yet people insisted that it remembered them, and would say “See? It remembers me! It acts differently with me than it does with you!”
And we realized that the last statement was true—because the person was the robot's environment—and the coupled system of person and robot always behaved differently from a different dyad—because the person felt differently.
—Rob Tow
in the same way—through facial expression. So, for example, mirroring the
facial expression of another person is a way of establishing empathy or con-
nection. In acting exercises, actors carry on entire “conversations” with the
use of facial expressions alone.
Of course, our voices, words, and gestures communicate emotion as well,
often refining what our faces are saying about us. Klaus Scherer has done canonical work in “The Expression of Emotion in Voice and Music” (1995):
Vocal communication of emotion is biologically adaptive for socially living species and has therefore evolved in a phylogenetically continuous manner. Human affect bursts or interjections can be considered close parallels to animal affect vocalizations. The development of speech, unique
to the human species, has relied on the voice as a carrier signal, and thus
emotion effects on the voice become audible during speech.
 