Sunday, August 20, 2017

The perception of emotion in artificial agents

Hortensius, R., Hekele, F., & Cross, E. S. (2017, August 1). The perception of emotion in artificial agents. Retrieved from PsyArXiv: osf.io/preprints/psyarxiv/ufz5w

Abstract: Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges to be addressed as we move ever closer to truly emotional artificial agents.

---
In the original Milgram study, participants were told that they would collaborate with the experimenter to investigate the effects of punishment on learning. To this end, each participant was instructed to administer electric shocks of increasing voltage to a learner, who was actually a confederate, whenever he made a mistake in the learning task. At 300V the learner kicked the wall in protest and stopped responding to the task, yet the average maximum shock administered was 312V, and 26 out of 40 participants were willing to deliver the maximum intensity of 450V to the learner. This paradigm has since been used to test whether, and to what extent, people will punish an artificial agent (Fig. 3). One replication of the classic Milgram experiment offers insight here [98]: it featured a physically present robot learner and produced even more pronounced effects than the original study, with every participant administering the highest shock of 450V to the robot despite its verbal protests and pained facial expression.

Another study used a female virtual learner in a similar setup: participants read words to the learner and could punish her with shocks when she made a mistake [99]. The virtual learner was either visible or partially hidden throughout the experiment, and this visibility influenced the number of shocks administered, with fewer shocks given when she was fully visible. Overall results were similar to the original Milgram experiment, with 85% of participants delivering the maximum voltage. Even though all participants were aware that the learner was a virtual agent, her visual and verbal expressions of pain in response to the shocks were sufficient to trigger discomfort and stress in participants.
