Should we model robots after humans?

Children get attached to their toys. They bring them on trips, care for them, ascribe human qualities to them, and even worry about them. As it turns out, adults do too, especially when these toys lend themselves to human empathy. An object triggers an empathetic response when it moves on its own, has eyes, and/or responds to stimuli in the environment. But it doesn’t need to have all these qualities. Two-thirds of Roomba owners have named their automated vacuum cleaner (1). If you’ve ever seen a Roomba, you know it basically just looks like a big disk with some buttons.

Some designers and manufacturers aim for realism in humanoid robots, even believing that these robots will be so realistic as to be sentient. David Hanson, known for extremely lifelike and seemingly emotional robots such as Jules, writes on his website that his robots have “the spark of soul—enabling our robots to think, feel, to build relationships with people […].” Cynthia Breazeal, a robot engineer at MIT, said that after watching science fiction films as a child “it was just kind of a given for me that robots would have emotions and feelings.” Perhaps because realism intrigues consumers much as a painstakingly exact painting would, the advantage of this approach is often assumed rather than examined.

Scientists who aim to make robots relatable often justify their approach by pointing out the advantages of people liking and feeling cared for by their robots. Breazeal, creator of cutest-robot-ever Leonardo, said she aimed to design a robot “that understands and interacts and treats people as people.” She believes that robots put to tasks such as household chores and childcare should be able to socialize with and respond to humans just as other humans do, so that people can interact with them more easily. The Roomba, she points out, does not recognize people, so it runs into them all the time.

Making machines that treat humans as humans, however, is a slightly different task than making machines that demand to be treated as humans themselves. Wendell Wallach and Colin Allen have been thinking about how to create “moral machines.” For them, this means endowing robots with the ability to read human emotions and understand human ethical codes. Sometimes it also means having robots express empathy for people. A semblance of empathy is especially important for therapeutic robots (2).

But while most people want well-behaved robots, some argue that we should not want deceptively human ones. Matthias Scheutz at Tufts is an outspoken critic of social robots. When robots are convincingly human, they may be not only endearing but also persuasive. Humans may become excessively dependent on and devoted to their robot companions, to the point of feeling and behaving as if these robots are sentient. This gives robots great power over humans, which is especially dangerous if their intentions are not honorable (1).

Science and science fiction alike have acknowledged the complications of human-robot relationships. In the 1991 novel He, She and It by Marge Piercy, a human falls in love with a robot who (spoiler alert) ultimately must self-destruct in battle to fulfill the commands of the people who created him. Renowned computer scientist Joseph Weizenbaum explained in the documentary Plug & Pray that he believed love between humans and robots would ultimately be illusory because a robot cannot understand what it is like to have the personal history of a human.

As such a future grows ever nearer, we will have to draw the line between developing robots with the social skills to carry out the desired tasks and creating a deceptive appearance of conscious thought.

-Suzannah Weiss

1 Scheutz, Matthias. “The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots.” Robot ethics: the ethical and social implications of robotics. Ed. Lin, Patrick. Cambridge, Mass.: MIT Press, 2012. Print.

2 “A Roadmap for U.S. Robotics: From Internet to Robotics.” 2013 Edition. March 20, 2013. http://robotics-vo.us/sites/default/files/2013%20Robotics%20Roadmap-rs.pdf

Cyber-sentience: detecting “artificial” consciousness

“We already have impressive models and simulations of a couple dozen of the brain’s several hundred regions,” writes Ray Kurzweil in The Singularity is Near.1 But how accurate can these models get before they stop being models, and how advanced can artificial intelligence be before it is no longer artificial?

We are getting closer and closer to creating machines that do all the work of human brains — work that previously required thoughts and feelings. David Hanson’s robot Jules, for instance, mimics not only awareness but also self-awareness when he muses over his identity as a robot in one video. It’s hard not to feel sympathetic to his human-like struggles. Hanson’s website implies that he thinks his robots are or will be sentient: “The Character Engine A.I. software adds the spark of soul—enabling our robots to think, feel, to build relationships with people as they understand your speech, see your face, hold natural conversations, and evolve.”

Though there are robots that realistically portray such mental states, we usually view them as just that: portrayals. The problem is, we are unprepared to identify when these semblances of thoughts and feelings turn into actual thinking and feeling.

Our knee-jerk reaction is still that such robots aren’t at the level of humans. But who says you have to be human to be sentient? Many ascribe consciousness to simpler animals. And many machines have circuitry more sophisticated than the nervous systems of many animals. One might argue that animals are biological and thus not a valid comparison, but we don’t know that the stuff of consciousness is biological. In fact, we don’t know what the stuff of consciousness is at all, because there is no way of knowing which beings are conscious. We could ask them, but they could lie. We could probe their brains (or equivalent internal circuitry), but there is no way of knowing which objective data indicates subjective experience (though we can make reasonable presumptions based on which brain areas seem correlated with consciousness in humans; see my conclusion below).

Kurzweil suggests that we use the same guidelines we use for humans: We don’t know other people are conscious — called “the problem of other minds” in philosophy — but we give them the benefit of the doubt because of their apparent emotions and social abilities. Many give animals the benefit of the doubt too for the same reason, and because they would rather unnecessarily protect creatures that don’t suffer than harm ones that do. The American Society for the Prevention of Cruelty to Robots, started in 1999, offers a similar solution: “the question of self-awareness comes down to an individual leap of faith: ‘I am self-aware, and I see others behaving in similar ways, so I will assume that they are self-aware, also.’”

The loophole I see in these arguments, aside from the fact that attributions of consciousness are necessarily assumptions, is that humans are predisposed to ascribe sentience to beings with certain appearances and behaviors — specifically, those which are human-like. People generally ascribe minds to things that move on their own, have eyes, and react to stimuli.

Terry Bisson critiques biases in other-mind attribution in the short story “They’re Made Out of Meat,” in which (spoiler alert) conscious planetary gases debate whether humans could possibly think, given that they’re just “made out of meat.” The point is, anything could be conscious, but we assume that because we are, and this appears to come from our brains, all conscious beings must have (organic) brains. Even if the possibility of conscious hydrogen seems too far-fetched, the possibility of consciousness in robots that socialize with and express understanding of humans is far less outlandish.

On the flip side of our anthropocentric affinity for beings resembling us, people may easily attribute consciousness to things that aren’t really conscious, even when they know intellectually what sort of being they’re dealing with. For example, people report naming their Roombas and feeling guilty about making them do all the work.2 This can cause psychological trauma when people ascribe minds to machines destined for destruction, such as military drones and robots sent to attack IEDs.2 I remember my 7-year-old self lying in bed at night, worrying about the well-being of my Tamagotchi.

We need guidelines for the ascription of sentience that are more reliable than human instincts. Though in their infancy and incapable of certainty, elaborate theories of consciousness are emerging to help us decide when moral restrictions apply to other creatures. Physicist Roger Penrose has proposed criteria for consciousness that could in principle be met by organisms and computers, though, he argues, not by computers in their current state: they would need a different mode of causation, one that follows a different form of physics.

It’s very hard to think of current robots as conscious. But eventually (and possibly soon!), we will have to make a leap of faith in believing in robot minds, as we do with human and animal minds. Though consciousness is a subjective experience, I believe we can find reasonable, objective ways for trained professionals to make educated guesses. True, for all we know our pillows hurt when we punch them, but that is a philosophical question far broader than the one we need to answer in order to create conscious robots.

For now, this is the safest approach I can see: If sentience seems like a possibility, treat your machines with respect. This assumption prevents harm to something that could be sentient, keeps our consciences clean, and sets an example for the benevolent treatment of living things and property alike — even if we don’t know which we’re dealing with.

-Suzannah Weiss

1 Kurzweil, Ray. The Singularity is Near: When Humans Transcend Biology. New York: Viking, 2005.

2 Scheutz, Matthias. “The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots.” Robot ethics: the ethical and social implications of robotics. Ed. Lin, Patrick. Cambridge, Mass.: MIT Press, 2012. Print.