Publication | Closed Access
Sacrifice One For the Good of Many?
Citations: 231 | References: 27 | Year: 2015 | Venue: Unknown
Ethical Dilemma, Socially Assistive Robot, Value Theory, Psychology, Social Sciences, Human-robot Collaboration, Sacrifice One, Health Sciences, Behavioral Sciences, Human Agent Interaction, Altruism, Moral HRI, Moral Judgments, Human-robot Interaction, Moral Psychology, Prosocial Behavior, Moral Norms, Social Behavior, Robotics
Moral norms play an essential role in regulating human interaction. With the growing sophistication and proliferation of robots, it is important to understand how ordinary people apply moral norms to robot agents and make moral judgments about their behavior. We report the first comparison of people's moral judgments (of permissibility, wrongness, and blame) about human and robot agents. Two online experiments (total N = 316) found that robots, compared with human agents, were more strongly expected to take an action that sacrifices one person for the good of many (a "utilitarian" choice), and they were blamed more than their human counterparts when they did not make that choice. Though the utilitarian sacrifice was generally seen as permissible for human agents, they were blamed more for choosing this option than for doing nothing. These results provide a first step toward a new field of Moral HRI, which is well placed to help guide the design of social robots.