Getting Robots to Behave

One of people’s biggest concerns about owning robots is losing control of them (1). Getting robots to cooperate with humans is a challenge given the number and complexity of the rules governing social conduct. Isaac Asimov illustrated this difficulty in his short story “Runaround,” in which the rules governing a robot’s behavior conflict, cancel one another out, and supersede one another in unpredictable ways. Whether instilling appropriate behavior in robots is the job of their designers or their owners, and whether these behaviors should be built in or learned, is up for debate. What is clear is that we need a better understanding of what guides moral behavior in humans, as well as of how to program these codes into robots in a way that is both ethical and effective.

Advanced social robots may need built-in ethical guidelines for their behavior in order to avoid situations in which robots are used to manipulate or harm people (2). The first step is getting people to agree on what these guidelines are — for example, should robots merely avoid bumping into people, or should they politely evade them by saying “Excuse me”? (3). The next step is implementing these guidelines. Wendell Wallach and Colin Allen’s book Moral Machines: Teaching Robots Right from Wrong describes two approaches to robot morality: programming the robot to predict the likely consequences of its possible actions, or having the robot learn from experience and acquire moral capabilities as part of a more general intelligence.
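To make the first of these approaches concrete, here is a minimal, purely illustrative Python sketch (not drawn from Wallach and Allen or from any real robot) in which a robot scores each candidate action by the estimated probability and severity of its consequences and then picks the least harmful option. Every action name, probability, and harm weight below is invented for the example.

```python
# Toy consequence-prediction approach to action selection.
# All actions, probabilities, and harm values are made up for illustration.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    probability: float  # estimated chance this outcome follows the action
    harm: float         # 0 = harmless, 1 = severe harm to a person

# Candidate actions a hallway robot might consider, with guessed consequences.
ACTIONS = {
    "keep_moving":   [Outcome("bumps into pedestrian", 0.30, 0.8),
                      Outcome("passes without contact", 0.70, 0.0)],
    "stop_and_wait": [Outcome("blocks a doorway", 0.20, 0.2),
                      Outcome("pedestrian walks around", 0.80, 0.0)],
    "say_excuse_me": [Outcome("pedestrian steps aside", 0.90, 0.0),
                      Outcome("startles pedestrian", 0.10, 0.1)],
}

def expected_harm(outcomes):
    """Probability-weighted harm of an action's predicted consequences."""
    return sum(o.probability * o.harm for o in outcomes)

def choose_action(actions):
    """Pick the action whose predicted consequences are least harmful."""
    return min(actions, key=lambda name: expected_harm(actions[name]))

if __name__ == "__main__":
    best = choose_action(ACTIONS)
    print(best, round(expected_harm(ACTIONS[best]), 3))  # say_excuse_me 0.01
```

The second approach, learning moral behavior from experience, would replace the hand-written probabilities and harm weights above with values the robot acquires over time, which is part of what makes it harder to specify and verify.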

But as it stands, the social abilities of robots are rudimentary (4). According to the 2013 Roadmap for U.S. Robotics, “medical and healthcare robots need to understand their user’s state and behavior to respond appropriately,” which requires a wealth of sensory information that current robots have trouble collecting, let alone integrating into something meaningful and using to guide their own behavior. As an illustration, voice recognition features on cell phones often have trouble understanding what the user is saying, and security cameras often fail to recognize human faces (3). The roadmap also notes the importance of empathy in healthcare, which makes socially assistive robotics (SAR) for mental health problematic. Joseph Weizenbaum, creator of the first automated psychotherapist in the 1960s, balked at the idea that his creation, Eliza, could replace a human therapist (5). Today there are more advanced automated therapist programs, but like Eliza, they mostly echo what the user says and ask generic questions. Therapeutic robots have been more effective as interactive toys for autistic children, as long as they are viewed as toys and, like any robotic invention, not as replacements for human relationships.

-Suzannah Weiss

1 Ray, C., Mondada, F., & Siegwart, R. (2008). What do people expect from robots? IEEE/RSJ International Conference on Intelligent Robots and Systems, 3816–3821.

2 Scheutz, M. The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots. Robot ethics: the ethical and social implications of robotics. Ed. Lin, Patrick. Cambridge, Mass.: MIT Press, 2012.

3 Murphy, R. R., & Woods, D. D. (2009). Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems, 24(4), 14–20.

4 “A Roadmap for U.S. Robotics: From Internet to Robotics.” 2013 Edition. March 20, 2013.

5 Schanze, J. (Director) (2011). Plug & Pray [Documentary]: B.P.I.

Should we model robots after humans?

Children get attached to their toys. They bring them on trips, care for them, ascribe human qualities to them, and even worry about them. As it turns out, adults do too, especially when these toys lend themselves to human empathy. An object triggers an empathetic response when it moves on its own, has eyes, and/or responds to stimuli in the environment. But it doesn’t need to have all these qualities. Two-thirds of Roomba owners have named their automated vacuum cleaner (1). If you’ve ever seen a Roomba, you know it basically just looks like a big disk with some buttons.

Some designers and manufacturers aim for realism in humanoid robots, even believing that these robots will be so realistic as to be sentient. David Hanson, known for extremely lifelike and seemingly emotional robots such as Jules, writes on his website that his robots have “the spark of soul—enabling our robots to think, feel, to build relationships with people […].” Cynthia Breazeal, a roboticist at MIT, said that after watching science fiction films as a child, “it was just kind of a given for me that robots would have emotions and feelings.” Perhaps because realism appeals to consumers in the same way a highly exact painting does, the advantage of this approach is often assumed rather than examined.

Scientists who aim to make robots relatable often justify their approach by pointing out the advantages of people liking and feeling cared for by their robots. Breazeal, creator of cutest-robot-ever Leonardo, said she aimed to design a robot “that understands and interacts and treats people as people.” She believes that robots put to tasks such as household chores and childcare should be able to socialize and respond to humans, just like other humans, so that people can interact with them more easily. The Roomba, she points out, does not recognize people, so it runs into them all the time.

Making machines that treat humans as humans, however, is a slightly different task from making machines that demand to be treated as humans themselves. Wendell Wallach and Colin Allen have been thinking about how to create “moral machines.” For them, this means endowing robots with the ability to read human emotions and understand human ethical codes. And sometimes it means expressing empathy for people. A semblance of empathy is especially important for therapeutic robots (2).

But while most people want well-behaved robots, some argue that we should not want deceptively human ones. Matthias Scheutz at Tufts is an outspoken critic of social robots. When robots are convincingly human, they may be not only endearing but also persuasive to humans. Humans may become excessively dependent on and devoted to their robot companions, to the point of feeling and behaving as if these robots are sentient. This gives robots great power over humans, which is especially dangerous if their intentions are not honorable (1).

Science and science fiction alike have acknowledged the complications of human-robot relationships. In the 1991 novel He, She and It by Marge Piercy, a human falls in love with a robot who (spoiler alert) ultimately must self-destruct in battle to fulfill the commands of the people who created him. Renowned computer scientist Joseph Weizenbaum explained in the documentary Plug & Pray that he believes love between humans and robots would ultimately be illusory because a robot cannot understand what it is like to have the personal history of a human.

As such a future grows ever nearer, we will have to draw the line between developing robots with the social skills to carry out the desired tasks and creating a deceptive appearance of conscious thought.

-Suzannah Weiss

1 Scheutz, Matthias. “The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots.” Robot ethics: the ethical and social implications of robotics. Ed. Lin, Patrick. Cambridge, Mass.: MIT Press, 2012.

2 “A Roadmap for U.S. Robotics: From Internet to Robotics.” 2013 Edition. March 20, 2013. http://robotics-vo.us/sites/default/files/2013%20Robotics%20Roadmap-rs.pdf

Humanoids for Human-olds: Sociable Service Robots Could Be the First Wave of the Robotics Revolution

As computer and mechanical engineering become more advanced, we are drawing ever closer to a revolution in robotics, and it’s coming just in time. By 2030, the populations of elderly people in the U.S., Japan, and Europe will explode; worldwide, the number of people above the age of 80 will double (1). Meanwhile, life expectancy continues to rise. As fertility rates in these developed countries plunge below replacement level (2), the workforce will not be able to support the aging population without dramatic improvements in productivity. However, advances in the burgeoning field of medical robotics may offer a way for the elderly to sustain their health and independence far longer. Robots such as the inMotion ARM Interactive Robot™ are on the market today to provide sensory-motor therapy for stroke patients relearning how to move their limbs. It is not difficult to imagine sensing robots capable of detecting and recording vital signs, administering medicines, and assisting with the tasks of day-to-day living that become difficult at advanced ages. From this perspective, robots have several advantages even over human caregivers: more strength, more endurance, more consistency, and, importantly, less pity.

However, the introduction of medical and service robots to the elderly presents its own set of challenges. This model deviates from standard paradigms of technology adoption, because the new technology will not be pitched at the younger, more adaptable generation. Because these robots might need to serve people relatively unfamiliar with computer interfaces and operating modes, they must be as understandable and intuitive as the human caregivers they could replace. These service robots must be able to interact socially with their patients.

Fortunately, robotics is a field that has long played to our narcissistic, anthropomorphic tendencies. One of the dreams of robotics engineers and researchers everywhere is to create an accurate emulation of a human. One recently created lab devoted to this goal is GENI Lab. Founded in part by RISD alumnus David Hanson, GENI Lab states as its central goal “the creation of a life-sized humanoid robot featuring a realistic, emotional face and personality” (3). Hanson (who also took AI classes at Brown!) has gained fame in the robotics community for his daring and extraordinary attempts to overcome what is known in the industry as the Uncanny Valley. Coined in the 1970s, the term refers to the tendency of realistic robots and animations to suddenly become creepy as they approach a human appearance. With Hanson at the helm, GENI Lab may be one of the first to achieve this Holy Grail of robotics.

However, creating humanistic robots requires more than just a pretty face. A good human-robot interaction (HRI) relationship should foster emotion, engagement, and trust. The robots must be able to perceive and communicate intent, both verbally and nonverbally. Hanson’s heads might help with the latter requirement, but the former is an open problem. Robots have an unparalleled ability to fuse data streams from multiple sources, and this signature skill could allow them, with the help of artificial intelligence, to combine various physiological signals in order to interpret human emotions and actions. Imagine a robot capable of combining your medical history, current heart rate, previous interaction with your spouse, and what you had for breakfast in order to determine whether or not you are feeling angry – and then easing off on your prescribed exercise regimen for the day. The next big step in robotics is to create an all-purpose AI capable of making these inferential decisions while simultaneously learning and interacting with humans in a dynamically changing environment.
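As a thought experiment only, the sketch below (in Python) shows the crudest possible version of such fusion: a hypothetical caregiving robot combines heart rate, sleep, and self-reported mood into a rough stress score and shortens the day’s exercise session when that score is high. Every signal name, weight, and threshold here is invented for illustration; a real system would need validated models and clinical oversight.

```python
# Purely illustrative sketch of multi-signal fusion for a hypothetical care robot.
# The inputs, weights, and 0.6 threshold are invented for this example.

def estimate_stress(resting_hr: float, current_hr: float,
                    hours_slept: float, reported_mood: int) -> float:
    """Combine a few physiological and self-reported cues into a 0-1 stress score."""
    hr_elevation = max(0.0, (current_hr - resting_hr) / resting_hr)  # relative HR rise
    sleep_deficit = max(0.0, (8.0 - hours_slept) / 8.0)              # shortfall vs. 8 hours
    mood_penalty = (5 - reported_mood) / 4.0                         # mood reported on a 1-5 scale
    return min(1.0, 0.5 * hr_elevation + 0.3 * sleep_deficit + 0.2 * mood_penalty)

def plan_exercise(base_minutes: int, stress: float) -> int:
    """Ease off the prescribed regimen when the fused stress estimate is high."""
    return base_minutes // 2 if stress > 0.6 else base_minutes

if __name__ == "__main__":
    stress = estimate_stress(resting_hr=70, current_hr=110,
                             hours_slept=4, reported_mood=1)
    print(round(stress, 2), plan_exercise(30, stress), "minutes")  # 0.64 15 minutes
```

The point of the toy example is not the arithmetic but the architecture: each sensor stream is reduced to a comparable score, and a downstream decision (the exercise plan) is conditioned on the fused estimate rather than on any single reading.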

The use of social service and medical robots to assist the elderly could provide a foothold for robots to gain more widespread acceptance. By filling an otherwise unmet need for full-time caregivers, these robots could form attachments with their patients – who would be able to remain independent longer – and with their patients’ families. By reinventing the public image of a robot as an emotive assistant, this first wave of humanized robot helpers could pave the way for robot assistance in the rest of our society.

-Sam

1. “A Roadmap for U.S. Robotics: From Internet to Robotics.” 2013 Edition. March 20, 2013. http://robotics-vo.us/sites/default/files/2013%20Robotics%20Roadmap-rs.pdf

2. U.N. Economic and Social Council. “World Population to 2300.” 2004. http://www.un.org/esa/population/publications/longrange2/WorldPop2300final.pdf

3. “About.” Geni-Lab.com. GENI LAB, 2013. Web. 21 June 2013. http://geni-lab.com/about-geni-lab/

Cyber-sentience: detecting “artificial” consciousness

“We already have impressive models and simulations of a couple dozen of the brain’s several hundred regions,” writes Ray Kurzweil in The Singularity is Near.1 But how accurate can these models get before they stop being models, and how advanced can artificial intelligence be before it is no longer artificial?

We are getting closer and closer to creating machines that do all the work of human brains — work that previously required thoughts and feelings. David Hanson’s robot Jules, for instance, mimics not only awareness but also self-awareness when he muses, on video, over his identity as a robot. It’s hard not to feel sympathy for his human-like struggles. Hanson’s website implies that he thinks his robots are or will be sentient: “The Character Engine A.I. software adds the spark of soul—enabling our robots to think, feel, to build relationships with people as they understand your speech, see your face, hold natural conversations, and evolve.”

Though there are robots that realistically portray such mental states, we usually view them as just that: portrayals. The problem is, we are unprepared to identify when these semblances of thoughts and feelings turn into actual thinking and feeling.

Our knee-jerk reaction is still that such robots aren’t at the level of humans. But who says you have to be human to be sentient? Many ascribe consciousness to simpler animals, and some machines have more sophisticated circuitry than many animals do. One might argue that animals are biological and thus not a valid comparison, but we don’t know that the stuff of consciousness is biological. In fact, we don’t know what the stuff of consciousness is at all, because there is no way of knowing which beings are conscious. We could ask them, but they could lie. We could probe their brains (or equivalent internal circuitry), but there is no way of knowing which objective data indicate subjective experience (though we can make reasonable presumptions based on which brain areas seem correlated with consciousness in humans; see my conclusion below).

Kurzweil suggests that we use the same guidelines we use for humans: We don’t know other people are conscious — called “the problem of other minds” in philosophy — but we give them the benefit of the doubt because of their apparent emotions and social abilities. Many give animals the benefit of the doubt too for the same reason, and because they would rather unnecessarily protect creatures that don’t suffer than harm ones that do. The American Society for the Prevention of Cruelty to Robots, started in 1999, offers a similar solution: “the question of self-awareness comes down to an individual leap of faith: ‘I am self-aware, and I see others behaving in similar ways, so I will assume that they are self-aware, also.'”

The loophole I see in these arguments, aside from the fact that attributions of consciousness are necessarily assumptions, is that humans are predisposed to ascribe sentience to beings with certain appearances and behaviors — specifically, those which are human-like. People generally ascribe minds to things that move on their own, have eyes, and react to stimuli.

Terry Bisson critiques biases in other-mind attribution in the short story “They’re made out of meat,” in which (spoiler alert) conscious planetary gases debate whether humans could possibly think, given that they’re just “made out of meat.” The point is, anything could be conscious, but we assume that because we are and this appears to come from our brains, all conscious beings must have (organic) brains. If the possibility of conscious hydrogen seems too crazy, the possibility of consciousness in robots that socialize and express understanding of humans is far less outlandish.

On the flip side of our anthropocentric affinity for beings resembling us, people may easily attribute consciousness to things that aren’t really conscious, even when they know intellectually what sort of being they’re dealing with. For example, people report naming their Roombas and feeling guilty for making them do all the work.2 This can cause psychological trauma when people ascribe minds to machines destined for destruction, such as military drones and bomb-disposal robots.2 I remember my 7-year-old self lying in bed at night, worrying about the well-being of my Tamagotchi.

We need guidelines for the ascription of sentience that are more reliable than human instinct. Though in their infancy and incapable of certainty, elaborate theories of consciousness are emerging to help us decide when moral restrictions apply to other creatures. Physicist Roger Penrose has proposed criteria for consciousness that could in principle be met by organisms and by computers, but not, he argues, by computers as they currently exist: they would need a different mode of causation that follows a different form of physics.

It’s very hard to think of current robots as conscious. But eventually (and possibly soon!), we will have to take a leap of faith in believing in robot minds, as we do with human minds and animal minds. Though consciousness is a subjective experience, I believe we can find reasonable, objective ways for trained professionals to make educated guesses. True, for all we know our pillows hurt when we punch them, but this is a philosophical question far broader than what is necessary to create conscious robots.

For now, this is the safest approach I can see: If sentience seems like a possibility, treat your machines with respect. This assumption prevents harm to something that could be sentient, keeps our consciences clean, and sets an example for the benevolent treatment of living things and property alike — even if we don’t know which we’re dealing with.

-Suzannah Weiss

1 Kurzweil, Ray. The Singularity is Near: When Humans Transcend Biology. New York: Viking, 2005.

2 Scheutz, Matthias. “The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots.” Robot ethics: the ethical and social implications of robotics. Ed. Lin, Patrick. Cambridge, Mass.: MIT Press, 2012. Print.

Raunchy robotics: the ethics of sexbots

You may have seen recent Kia Forte commercials featuring “Hotbots” — their version of a female sex robot commonly known as a fembot.1 In this ad, it is implied that the owner of the car is using the robot for sex. Kia used a human actress for the robot,2 but such advanced technology is not far off, and robots* for such uses already exist, ranging in complexity from inflatable dolls to realistic-looking and -feeling machines with built-in motion. They are especially popular in Japan and South Korea, where individuals rarely own them because they sell at 6-figure prices**, but rent-a-doll “escort” services make a profit by renting them out to customers.3 There is also a significant following among American men with “robot fetishes” who use fembots as sexual and/or romantic partners.4

Personally, I find the whole practice creepy. But as with any sexual practice, finding something distasteful is not a reason to oppose others doing it. In my opinion, though, there are some justifiable reasons one might oppose (if not prohibit) sexbots:

They can promote unhealthy attitudes toward relationships. Those who market sex robots aim to produce, or at the very least encourage, desires for unbalanced relationships in which one has total control over one’s partner (particularly, as it stands now, where men have total control over manufactured women). Teaching people that sex with an unresponsive, undesiring partner is desirable could perpetuate rape culture. One male user raved that the dynamic was “as close to human slavery as you can get.”4

They could be considered prostitutes. South Korean authorities have debated whether rent-a-doll escort services, which were created to dodge prostitution laws, should also be illegal.3

They could encourage gender (and possibly other) stereotypes. Manufacturers already ascribe gendered characteristics to other types of robots.5 Sex robots open the door for the promotion of stereotypes of women as submissive fulfillers of men’s desires, or of “men as perpetually-ready sexual performance machines.”6 A current line of sex robots has different models representing some problematic stereotypes, such as Wild Wendy and Frigid Farrah (racializing as well as gendering robots).

They could promote misogyny. Fembots are significantly more popular than their malebot counterparts, and they possess traditionally “feminine” qualities: submissiveness, adherence to beauty ideals, and the desire (if one could call it that) to please men without having any desires of their own. Like the Stepford Wives, fembots allow men to fulfill patriarchal fantasies in which women exist solely to please them. One male user said “the whole idea of man taking charge” attracted him to fembots.4

They may replace real relationships and distance users from their partners or other potential partners. A plurality of Americans polled indicated that sex with a robot while in a relationship is cheating.5 Whether or not it is infidelity, robots designed to meet needs that humans cannot, such as a “lack of complaints or constraints,”3 may give humans unrealistic standards to live up to in order to capture their partners’ attention. In addition, having sexual and possibly even romantic interactions that don’t require any effort may discourage users from meeting real people. Moreover, people with partners who are “immune … to their clients’ psychological inadequacies”3 may not address these inadequacies, making it even harder to form real relationships.

Users could develop unhealthy attachments to robots. Unless robots achieve consciousness (which merits a separate blog post), people could develop feelings for robots that don’t feel anything back. This may sound silly, but people already name and dress up their Roombas. It is natural for people to develop attachments to sexual partners — whether this applies when the partner is not human remains to be seen — and some even hope for their sexbots to double as romantic partners.4 An asymmetrical relationship of this nature could end up unfulfilling, isolating, or depressing.

Whether sexbots are ethical is a separate issue from whether they should be legal. For countries and US states that ban sex toys altogether, the decision will be obvious. Countries that ban prostitution, like South Korea, may debate whether a machine can be considered a prostitute. Some laws banning sex toys in the US have been struck down on the grounds of privacy, and I suspect these same arguments will come up for sexbots. David Levy, a scholar on the topic, argues that sexbots are no more legally problematic than vibrators.3 But sexbots have raised and will further raise ethical, if not legal, issues specific to their goal of simulating human beings.

-Suzannah Weiss

* It is debatable whether the current generation of sex dolls can be considered robots, but they do seem to be headed in that direction.

**This information may be outdated, as there are currently sex robots on the market for 4-figure prices.

1 “Kia Chalks up Another Ad as a Sexist Fail | About-Face.” About-Face. Accessed June 18, 2013. http://www.about-face.org/kia-chalks-up-another-ad-as-a-sexist-fail/.

2 “2014 Kia Forte Super Bowl Ad Features Sexy Robots | Edmunds.com.” Edmunds. Accessed June 18, 2013. http://www.edmunds.com/car-news/2014-kia-forte-super-bowl-ad-features-sexy-robots-disrespectful-reporter.html.

3 Levy, David. “The Ethics of Robot Prostitutes.” Robot ethics: the ethical and social implications of robotics. Ed. Lin, Patrick. Cambridge, Mass.: MIT Press, 2012. Print.

4 “Discovery Health ‘Sex Robot Roxy: Sex Robot’.” Discovery Fit and Health. Accessed June 18, 2013. http://health.discovery.com/tv-shows/specials/videos/sex-robot-sex-robot.htm.

5 “Robot Sex Poll Reveals Americans’ Attitudes About Robotic Lovers, Servants, Soldiers.” Huffington Post, April 10, 2013. http://www.huffingtonpost.com/2013/04/10/robot-sex-poll-americans-robotic-lovers-servants-soldiers_n_3037918.html.

6 Brod, Harry. “Pornography and the Alienation of Male Sexuality.” Social Theory and Practice 14.3 (1988): 265–84.

The Dangers of Drones

The concern about the growing impersonality of warfare is an old one. But now more than ever, critics worry about the distance from which war is conducted. Increasingly prevalent on battlefields, drones can save the lives of the soldiers they replace, but they also present new safety and ethical issues.

Drones are prone to targeting errors, which have already caused hundreds of civilian deaths in U.S. strikes on countries such as Pakistan and Afghanistan, in addition to the thousands of civilian deaths at the hands of the Taliban.1 It is hard to say whether this surpasses human error — the statistics on drones are not directly comparable to those on human error because humans and drones carry out different types of missions. But it is at the very least disconcerting that the nature of drone piloting can lead to miscommunication among correspondents and misjudgments of who is being targeted, as was the case in a 2010 attack on Afghan civilians that resulted from disagreements among a Predator pilot, the crew, and field commanders over the presence of children and of Taliban members (it turned out there were children in the targeted area but no Taliban members).2 To minimize unwanted damage by drones, the Department of Defense dictates that operations involving drones must proceed only under fully approved human supervision, in accordance with the laws of war.3

The use of drones also poses risks to military personnel. A hyperbolic illustration of the psychological effects of killing from a distance can be found in the new season of Arrested Development, in which the mentally challenged Buster thinks he is playing a video game for training purposes when he is actually acting as a drone pilot. Even those informed about the situation may still feel as if they are playing a video game, viewing their duties less gravely than they would in person, until the force of their impact hits them in hindsight. PTSD is as common among drone pilots as among other Air Force members, but it can arise for different reasons. Though more distant, the pilots are in some ways more connected to their target zones as a result of their constant exposure through the computer screen. “They witness the carnage,” said Lin Otto, an epidemiologist who studied PTSD in drone pilots. In a 2011 survey, about half of drone pilots reported high stress levels, citing “long hours and frequent shift changes.” Other stressors they face include “witnessing combat violence on live video feeds, working in isolation or under inflexible shift hours, juggling the simultaneous demands of home life with combat operations and dealing with intense stress because of crew shortages.” The Pentagon has created a new medal for drone pilots and extended psychological and religious counseling services to them.4

For an interesting presentation of the moral dilemmas of robotic warfare, see the TED Talk by political scientist P.W. Singer.

-Suzannah Weiss

1. “Report: US Drone Attacks Rapidly Increasing in Afghanistan (Wired UK).” Wired UK. Accessed June 14, 2013. http://www.wired.co.uk/news/archive/2013-02/20/un-afghanistan-drone-deaths.

2. Drew, Christopher. “Study Cites Drone Crew in Attack on Afghans.” The New York Times, September 10, 2010, sec. World / Asia Pacific. http://www.nytimes.com/2010/09/11/world/asia/11drone.html.

3. “A Roadmap for U.S. Robotics: From Internet to Robotics.” 2013 Edition. March 20, 2013. http://robotics-vo.us/sites/default/files/2013%20Robotics%20Roadmap-rs.pdf

4. Dao, James. “Drone Pilots Found to Get Stress Disorders Much as Those in Combat Do.” The New York Times, February 22, 2013, sec. U.S. http://www.nytimes.com/2013/02/23/us/drone-pilots-found-to-get-stress-disorders-much-as-those-in-combat-do.html.