Getting Robots to Behave

One of people’s biggest concerns about owning robots is losing control of them (1). Getting robots to cooperate with humans is a challenge given the number and complexity of the rules governing social conduct. Isaac Asimov illustrated this difficulty in his short story “Runaround,” in which the rules governing a robot’s behavior conflict, cancel one another out, and supersede one another in unpredictable ways. Whether instilling appropriate behavior in a robot is the job of its designers or its owners, and whether these behaviors should be built in or learned, is up for debate. What is clear is that we need a better understanding of what guides moral behavior in humans, as well as of how to program such codes into robots so that they are both ethical and effective.

Advanced social robots may need built-in ethical guidelines for their behavior in order to avoid situations in which robots are used to manipulate or harm people (2). The first step is getting people to agree on what these guidelines are — for example, should robots merely avoid bumping into people, or should they politely evade them by saying “Excuse me”? (3). The next step is implementing these guidelines. Wendell Wallach and Colin Allen’s book Moral Machines: Teaching Robots Right from Wrong describes two approaches to robot morality: programming the robot to predict the likely consequences of its possible actions, or having the robot learn from experience and acquire moral capabilities from more general intelligence.
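
To make the first, consequence-based approach concrete, here is a minimal sketch in Python. It is only an illustration of the general idea, not an implementation Wallach and Allen propose; the candidate actions, probabilities, and utility weights are invented.

    # Toy "top-down" moral reasoner: score each candidate action by the
    # estimated probability and utility of its predicted consequences.
    # All names and numbers below are invented for illustration.
    CANDIDATE_ACTIONS = {
        "push_past_person": [("collision", 0.6, -10), ("task_done_quickly", 0.9, 3)],
        "say_excuse_me_and_wait": [("collision", 0.05, -10), ("task_delayed", 0.8, -1)],
    }

    def expected_value(consequences):
        # Sum of probability * utility over the action's predicted consequences.
        return sum(prob * utility for _, prob, utility in consequences)

    best_action = max(CANDIDATE_ACTIONS, key=lambda a: expected_value(CANDIDATE_ACTIONS[a]))
    print(best_action)  # "say_excuse_me_and_wait" under these invented numbers

The second, learning-based approach would replace the hand-written table with behavior acquired from experience, which is far harder to capture in a few lines.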

But as it stands, the social abilities of robots are very rudimentary (4). According to the 2013 Roadmap for U.S. Robotics, “medical and healthcare robots need to understand their user’s state and behavior to respond appropriately,” which requires a wealth of sensory information that current robots have trouble collecting, let alone integrating into something meaningful and using to guide their own behavior. As an illustration, voice recognition features on cell phones often have trouble understanding what the user is saying, and security cameras often fail to recognize human faces (3). The roadmap also notes the importance of empathy in healthcare, which makes socially assistive robotics (SAR) for mental health problematic. Joseph Weizenbaum, creator of the first automated psychotherapist in the 1960s, balked at the idea that his creation, Eliza, could replace a human therapist (5). Today’s automated therapist programs are more advanced, but like Eliza, they mostly echo what the user says and ask generic questions. Therapeutic robots have been more effective as interactive toys for autistic children, as long as they are viewed as toys and not, like any robotic invention, as replacements for human relationships.
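
To see how shallow this echoing technique can be, here is a minimal ELIZA-style responder sketched in Python. It is not Weizenbaum’s original program; the single pattern and the canned prompts are invented, but the mechanism, reflecting the user’s own words back and falling back on generic questions, is the one described above.

    import random
    import re

    # Map first-person words to second-person ones so the user's phrase
    # can be echoed back at them.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}
    GENERIC_PROMPTS = ["Why do you say that?", "How does that make you feel?", "Tell me more."]

    def reflect(phrase):
        return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

    def respond(statement):
        match = re.match(r"i feel (.*)", statement, re.IGNORECASE)
        if match:
            return "Why do you feel " + reflect(match.group(1)) + "?"
        return random.choice(GENERIC_PROMPTS)

    print(respond("I feel anxious about my job"))    # Why do you feel anxious about your job?
    print(respond("Nothing has gone right lately"))  # one of the generic prompts

There is no model of the user’s state here at all, which is precisely the gap the roadmap describes.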

-Suzannah Weiss

1 Ray, C., Mondada, F., & Siegwart, R. (2008). What do people expect from robots? IEEE/RSJ International Conference on Intelligent Robots and Systems, 3816–3821.

2 Scheutz, M. (2012). The inherent dangers of unidirectional emotional bonds between humans and social robots. In P. Lin (Ed.), Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.

3 Murphy, R. R., & Woods, D. D. (2009). Beyond Asimov: The three laws of responsible robotics. IEEE Intelligent Systems, 24(4), 14–20.

4 A Roadmap for U.S. Robotics: From Internet to Robotics, 2013 edition. March 20, 2013.

5 Schanze, J. (Director). (2011). Plug & Pray [Documentary]. B.P.I.

Should we model robots after humans?

Children get attached to their toys. They bring them on trips, care for them, ascribe human qualities to them, and even worry about them. As it turns out, adults do too, especially when these toys lend themselves to human empathy. An object triggers an empathetic response when it moves on its own, has eyes, and/or responds to stimuli in the environment. But it doesn’t need to have all these qualities. Two-thirds of Roomba owners have named their automated vacuum cleaner (1). If you’ve ever seen a Roomba, you know it basically just looks like a big disk with some buttons.

Some designers and manufacturers aim for realism in humanoid robots, even believing that these robots will be so realistic as to be sentient. David Hanson, known for extremely lifelike and seemingly emotional robots such as Jules, writes on his website that his robots have “the spark of soul—enabling our robots to think, feel, to build relationships with people […].” Cynthia Breazeal, a roboticist at MIT, said that after watching science fiction films as a child, “it was just kind of a given for me that robots would have emotions and feelings.” Perhaps because realism appeals to consumers in the way a highly exact painting does, the advantages of this approach are often assumed rather than examined.

Scientists who aim to make robots relatable often justify their approach by pointing to the benefits of people liking, and feeling cared for by, their robots. Breazeal, creator of cutest-robot-ever Leonardo, said she aimed to design a robot “that understands and interacts and treats people as people.” She believes that robots assigned tasks such as household chores and childcare should be able to socialize with and respond to humans, just as other humans would, so that people can interact with them more easily. The Roomba, she points out, does not recognize people, so it runs into them all the time.

Making machines that treat humans as humans, however, is a slightly different task from making machines that demand to be treated as humans themselves. Wendell Wallach and Colin Allen have been thinking about how to create “moral machines.” For them, this means endowing robots with the ability to read human emotions and understand human ethical codes. And sometimes it means expressing empathy for people. A semblance of empathy is especially important for therapeutic robots (2).

But while most people want well-behaved robots, some argue that we should not want deceptively human ones. Matthias Scheutz at Tufts is an outspoken critic of social robots. When robots are convincingly human, they may be not only endearing but also persuasive to humans. Humans may become excessively dependent on and devoted to their robot companions, to the point of feeling and behaving as if these robots were sentient. This gives robots great power over humans, which is especially dangerous if the intentions behind them are not honorable (1).

Science and science fiction alike have acknowledged the complications of human-robot relationships. In the 1991 novel He, She and It by Marge Piercy, a human falls in love with a robot who (spoiler alert) ultimately must self-destruct in battle to fulfill the commands of the people who created him. Renowned computer scientist Joseph Weizenbaum explained in the documentary Plug & Pray that he believed love between humans and robots would ultimately be illusory because a robot cannot understand what it is like to have the personal history of a human.

As such a future grows ever nearer, we will have to draw the line between developing robots with the social skills needed to carry out their intended tasks and creating a deceptive appearance of conscious thought.

-Suzannah Weiss

1 Scheutz, M. (2012). The inherent dangers of unidirectional emotional bonds between humans and social robots. In P. Lin (Ed.), Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.

2 A Roadmap for U.S. Robotics: From Internet to Robotics, 2013 edition. March 20, 2013. http://robotics-vo.us/sites/default/files/2013%20Robotics%20Roadmap-rs.pdf