Could your robot hurt you? Perhaps, but not intentionally.

Because robots’ decision-making abilities are not at the level of humans’, liabilities come with industrial robots, automated vehicles, robotic caretakers, and other machines involved in life-or-death situations. Rather than worrying about robots rebelling and taking over the world, people should worry about robots malfunctioning or falling into the wrong hands.

There have been three reported incidents of industrial robots causing deaths in factories, the latter two of which involved human error at least in part. In 1979, a robot’s arm crushed a worker at a Ford Motor Company plant in Michigan while both were gathering supplies. The jury awarded the employee’s family $10 million, the state’s largest personal injury award at the time. Ford was blamed for the incident because of a lack of safety precautions, including an alarm that should have sounded when the robot approached.

In 1984, a die-cast operator was pinned between a hydraulic robot and a safety pole. The worker was blamed for entering a robot envelope, which was prohibited in training. The company subsequently installed a fence to keep unauthorized workers away from the robots. This incident led the National Institute for Occupational Safety and Health’s Division of Safety Research to make recommendations in “ergonomic design, training, and supervision” for industrial robots, including design measures for the robot envelope, safety precautions for workers and programmers, and instructions for training. Supervisors were advised to emphasize in training that workers should not assume that a robot will keep doing its current activity or stay still when stopped.

More recently, in 2009, a robot’s arm again crushed and killed a worker at a Golden State Foods bakery in California. According to the inspection detail, the case is not closed yet, but it appears that Golden State Foods has to pay several fines, over $200,000 total. The incident was chalked up partially to the worker’s lack of precaution. The inspection report reads: “At approximately 7:55 a.m. on July 21, 2009, Employee #1 was operating a robotic palletizer for Golden State Foods, Inc., a food processor and packager for fast food restaurants. She entered the caged robotic palletizer cell while the robotic palletizer was running. She had not deenergized the equipment. Her torso was crushed by the arms of the robotic palletizer as it attempted to pick up boxes on the roller conveyor. She was killed.”

The International Organization for Standardization (ISO) has published 10 standards for industrial robots and drafted two standards for personal care robots. The American National Standards Institute and the Robotic Industries Association have jointly developed detailed safety regulations for industrial robots, which were recently updated to incorporate the ISO’s guidelines.

A newer development for governments to respond to is the automated car. Self-driving cars are legal for road testing in California, Nevada, and Florida as long as a human sits behind the wheel. The only Google car accident so far occurred while a human, rather than the computer, was driving. When asked who would get ticketed if a Google car ran a red light, Google co-founder Sergey Brin responded, “self-driving cars do not run red lights.”

Currently, however, the United States National Highway Traffic Safety Administration (NHTSA) is working on a set of rules governing the use of self-driving cars. The recommendations include special training to obtain a license for an automated car and a requirement that those testing the vehicles report all accidents. The NHTSA proposal points out that most cars already have automated features, such as brake pulses during potential skids, automatic braking during potential collisions, and cruise control.

Another liability for robots is hacking. People can hack not only computers but also any machine that is part of a network — including cars with the features described above. To illustrate this possibility, computer scientists Charlie Miller and Chris Valasek hacked into a Ford Escape and caused it to crash even when the driver hit the brakes. Robot hacking carries a similar potential for physical damage.

Increasingly automated machines are also raising security questions (1). Some medical and domestic robots record personal information and behavioral patterns, which privacy laws do not yet address. If information is not kept between the machine and its user, the consequences for medical robots, automated vehicles, or smart houses could be dire. Apps such as Nest, which connect your phone to your house in order to control the temperature, could give a hacker information about your home — and if the app controlled a robot, much more.

Current hacking laws are difficult to apply to the 21st century, let alone beyond. The Computer Fraud and Abuse Act, passed in 1986, has been used to prosecute internet users for finding loopholes in websites, even when they never revealed the information they found.

When we ascribe blame for a crime, we usually ascribe it to an individual who has acted maliciously or carelessly. But those words don’t really apply to robots, which act based on their programming, intended or not. Adapting our laws to robots may require us to rethink agency, or at least to think more about who the agents are in these situations.

–Suzannah Weiss

1 A Roadmap for U.S. Robotics: From Internet to Robotics. 2013 Edition. March 20, 2013.

Who Will Drive the Revolution?


“I will remember that artificially intelligent machines are for the benefit of humanity and will strive to contribute to the human race through my creations.”

From Lee McCauley’s 2007 proposal for a Hippocratic oath for roboticists

The field of robot ethics often focuses on ethical problems with creating robots. But because certain populations are in need of the extra care, protection, or labor that robots provide, it can be unethical not to create robots. The US Congress’s Robotics Caucus Advisory Committee’s 2013 Roadmap for US Robotics documents how robots can improve and even save lives. At work, they can “assist people with dirty, dull, and dangerous tasks;” at home, they can offer “domestic support to improve quality of life;” and in warzones, “robots have already proven their value in removing first-responders and soldiers from immediate danger.” The US government’s National Robotics Initiative (NRI) hopes to encourage robotics in order to boost American manufacturing, space exploration, medical discoveries, and food safety.

Many roboticists share the vision of using robots to solve some major problems the world currently faces. For example, an area receiving particular attention among researchers is elderly care. The number of people over 80 worldwide is expected to double by 2030, leading to increased demands for living assistance. Smart homes and robot caretakers can help these people stay independent longer (1). Even in military use, robots are not necessarily killing machines. iRobot CEO Colin Angle said that his PackBot has saved the lives of EOD technicians by hunting IEDs in Afghanistan and Iraq. Furthermore, drones have proven applicable beyond the military: Association of Unmanned Vehicles International CEO Michael Toscano estimated in July 2013 that agriculture would overtake defense as the most common use for drones in the next 10 years, noting that this advancement will help feed an ever-growing population.

Those who fund and use robots can ensure that robots’ potential to do good for the world does not go to waste. An active approach may be necessary because research may otherwise be driven simply by potential profits.


“A profitable market for expensive new technology is not of itself sufficient moral defense for allowing widespread sales.” -Whitby (2012)

The first ethical question in the development of robots is what kinds of robots to make. Are we creating the kinds of robots that our society needs? The answer to this question depends on the motivations not only of researchers but also of their sponsors. The main sources for robotics funding are the government, businesses, and research institutions.

Government funding

The government has played a particularly significant role in robotics funding since 2011, with the advent of the NRI, whose mission is to fund research that facilitates human-robot interactions. Its 2011 and 2012 solicitations were subtitled “The realization of co-robots acting in direct support of individuals and groups” and stated that proposals would be selected by a peer-review process. The National Science Foundation listed the recipients of $30 million of the $50 million awarded for this initiative (the destination of the other $20 million is unclear). The plurality of the projects listed (12 out of 31) named medicine as an application for their robots, including aid in rehabilitation and surgery, followed by domestic care — either general service robots or those geared toward the elderly or incapacitated — and manufacturing, such as industrial robots for use in factories (2).

As part of the NRI, the Defense Advanced Research Projects Agency (DARPA)’s FY2012 solicitation for the Defense University Research Instrumentation Program (DURIP) encouraged proposals for robotics research equipment applicable to the military (Army Research Office). Of the 190 recipients of DURIP grants, which totaled about $55 million, 22 projects involved robots, autonomous machines or vehicles, or artificial intelligence. Many robotics-related projects were not directly or solely applicable to the military, including 3 grants for general human-robot interaction research and 7 for ocean surveillance and research. Other objects of study included the response of miniature air drones to wind and water-splitting catalysts for autonomous energy generation.

DARPA will account for nearly half of the government’s research and development funding under Obama’s FY2014 plan. Though this suggests that research applicable to the military may have priority, it does not necessarily mean that the robotics projects DARPA sponsors are primarily for killing. For example, the agency is currently sponsoring a $2 million “Robotics Challenge” in which robots compete in a series of tasks simulating natural and man-made disasters. The press release cites robots that defuse explosives as an example of the type of innovations DARPA is looking for. The Robotics Challenge has sponsored robots made by universities, companies, and government organizations such as NASA.

However, government funding of robotics has its critics, perhaps the most outspoken being Oklahoma Senator Tom Coburn. Coburn criticized a $1.2 million NSF grant to a laundry-folding robot at UC Berkeley and a $6000 grant for a “Robot Rodeo and Hoedown” at a computer science education symposium to introduce teachers to robotics (3). A researcher in the UC Berkeley group countered that the laundry-folding algorithm was a big step for robotics more broadly, and the organizers of the robot rodeo event explained that they were trying to get more students interested in computer science.

Venture capital funding

Venture capital is another major source of funding for robotics. In 2012, the 22 most-funded robotics companies accrued almost $200 million total from VC funds (4). As with the NRI funds, the plurality went toward medicine — about $84 million of the money awarded by VC funds went toward companies that create robots for surgery (e.g., MedRobotics and Mazor Robotics), bionics (e.g., iWalk), and hospital service (e.g., Aethon). Another $30 million went to Rethink Robotics for the manufacturing robot Baxter.

Some have criticized excessive funding for entertainment robots, such as a robotic toy car largely funded by Andreessen Horowitz. The car’s maker, the new start-up Anki, raised $50 million total in 2012 and 2013. Anki’s founders believe this investment is justified because their technology can lead to more advanced and affordable robots down the line.

Research institution funding

Universities themselves sometimes fund robotics research, but these funds are often funneled from the sources above — in the United States, particularly the government. The Robotics Institute at Carnegie Mellon, the first and largest robotics department at any US university, lists its biggest sponsors as the Department of Defense, DARPA, the National Aeronautics and Space Administration, the National Institutes of Health, and the NSF. The Georgia Tech Center for Robotics and Intelligent Machines and the MIT Media Lab get sponsorship from companies in the robotics industry in exchange for developing their products. This model appears to be the most common one in Japan, China, and South Korea, according to a WTEC report. Google and other companies have grants specifically for funding technological research, along with private foundations such as the Keck Foundation and the Alfred P. Sloan Foundation.

While grants are usually given for specific projects, some departments retain control over which projects their funds go toward. The Robotics Institute at Carnegie Mellon has a budget of $65 million per year, which it has used to fund 30 start-up companies in addition to supporting its own institution. Carnegie Mellon also, along with companies like Google and Yahoo and private donors, sponsors a foundation called TechBridgeWorld that has aided technological progress in developing areas of the world by funding innovations such as an “automated tutor” to improve literacy. This exemplifies how academics and researchers can use their knowledge of what people need and how to most efficiently meet those needs to influence production.


Because robotics companies, like any company, strive to make money, the purchase of robots also affects which robots get made. The International Federation of Robotics estimated that 2.5 million personal and domestic robots (e.g., those that do household chores or assist people with medical problems) and about 16,000 professional robots for use in various workplaces were sold worldwide in 2011. The professional robots consisted mostly of military (40%) and agricultural (31%) robots, but also included robots for other uses such as medicine (6%) and manufacturing (13%). The Robotic Industries Association estimated in July 2013 that 230,000 robots were in use in United States factories, most often in the automotive industry.

Fifty years ago, we couldn’t imagine computers as part of our everyday lives. Now, we can’t imagine our lives without them. Robots are the new computers, in that they are capable of revolutionizing our society and economy. Whether this revolution is for better or for worse is up to all the aforementioned players.

-Suzannah Weiss

1 Whitby, B. (2012). Do You Want a Robot Lover? The Ethics of Caring Technologies. Robot ethics: the ethical and social implications of robotics. Ed. Lin, Patrick. Cambridge, Mass.: MIT Press, 2012.
2 National Science Foundation (2012). Press Release 12-166: Nearly $50 Million in Research Funding Awarded by NSF-Led National Robotics Initiative to Develop Next-Generation Robotics. Retrieved from
3 Coburn, T., Oklahoma Senator (2011). The National Science Foundation: Under the Microscope.
4 Deyle, T. and Kandan, S. Venture Capital (VC) Funding for Robotics in 2012. Hizook: Robotics News for Academics and Professionals.


Getting Robots to Behave

One of people’s biggest concerns about owning robots is losing control of them (1). Getting robots to cooperate with humans is a challenge given the number and complexity of the rules governing social conduct. Isaac Asimov illustrated this difficulty in his short story “Runaround,” in which the rules governing a robot’s behavior conflict, cancel one another out, and supersede one another in unpredictable ways. Whether instilling appropriate behavior in robots is the job of their designers or their owners, and whether these behaviors should be intrinsic or learned, is up for debate. What is clear is that we need a better understanding of what guides moral behavior in humans, as well as of how to program these codes into robots to ensure ethicality and effectiveness.

Advanced social robots may need built-in ethical guidelines for their behavior in order to avoid situations in which robots are used to manipulate or harm people (2). The first step is getting people to agree on what these guidelines are — for example, should robots merely avoid bumping into people, or should they politely evade them by saying “Excuse me”? (3). The next step is implementing these guidelines. Wendell Wallach and Colin Allen’s book Moral Machines: Teaching Robots Right from Wrong describes two approaches to robot morality: programming the robot to predict the likelihood of various consequences of various actions, or having the robot learn from experience and acquire moral capabilities from more general intelligence.
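As a deliberately simplified illustration of the first approach, a robot could score each candidate action by the predicted likelihood and severity of its outcomes and pick the least harmful one. The sketch below is hypothetical: the actions, outcomes, probabilities, and harm values are invented for the example and are not drawn from Wallach and Allen’s work, which envisions far richer models than this.

```python
# Toy sketch of consequence-prediction ("top-down") robot morality.
# Every action name, probability, and harm value here is invented.

def expected_harm(action, outcome_model):
    """Sum probability-weighted harm over an action's predicted outcomes."""
    return sum(prob * harm for prob, harm in outcome_model[action])

def choose_action(actions, outcome_model):
    """Pick the action whose predicted consequences are least harmful."""
    return min(actions, key=lambda a: expected_harm(a, outcome_model))

# Hypothetical scenario: a service robot deciding how to get past a person.
# Each action maps to a list of (probability, harm) outcome pairs.
outcome_model = {
    "push_past":     [(0.30, 10.0), (0.70, 0.0)],  # 30% chance of a collision
    "wait":          [(1.00, 1.0)],                # certain small harm: delay
    "say_excuse_me": [(0.05, 1.0), (0.95, 0.0)],   # almost always harmless
}

print(choose_action(outcome_model.keys(), outcome_model))  # -> say_excuse_me
```

Even this toy version shows why the approach is hard in practice: someone still has to supply the outcome probabilities and decide how much harm a delay is worth compared to a collision.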

As it stands, though, the social abilities of robots are very rudimentary (4). According to the 2013 Roadmap for U.S. Robotics, “medical and healthcare robots need to understand their user’s state and behavior to respond appropriately,” which requires a multiplicity of sensory information that current robots have trouble collecting, let alone integrating into something meaningful and using to inform their own behavior. As an illustration, voice recognition features on cell phones often have trouble understanding what the user is saying, and security cameras often fail to recognize human faces (3). The roadmap also mentions the importance of empathy in healthcare, rendering socially assistive robotics (SAR) for mental health problematic. Joseph Weizenbaum, creator of the first automated psychotherapist in the 1960s, balked at the idea that his creation, Eliza, could replace a human therapist (5). Today’s automated therapist programs are more advanced, but like Eliza, they mostly echo what the user says and ask generic questions. Therapeutic robots have been more effective as interactive toys for autistic children, as long as these are viewed as toys and, like any robotic invention, not as replacements for human relationships.

-Suzannah Weiss

1 Ray, C., Mondada, F., & Siegwart, R. (2008) What do people expect from robots? IEEE/RSJ International Conference on Intelligent Robots and Systems, 3816–3821.

2 Scheutz, M. The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots. Robot ethics: the ethical and social implications of robotics. Ed. Lin, Patrick. Cambridge, Mass.: MIT Press, 2012.

3 Murphy, R. R., & Woods, D. D. (2009). Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems, 24(4), 14–20.

4 “A Roadmap for U.S. Robotics: From Internet to Robotics.” 2013 Edition. March 20, 2013.

5 Schanze, J. (Director) (2011). Plug & Pray [Documentary]: B.P.I.

Should we model robots after humans?

Children get attached to their toys. They bring them on trips, care for them, ascribe human qualities to them, and even worry about them. As it turns out, adults do too, especially when these toys lend themselves to human empathy. An object triggers an empathetic response when it moves on its own, has eyes, and/or responds to stimuli in the environment. But it doesn’t need to have all these qualities. Two-thirds of Roomba owners have named their automated vacuum cleaner (1). If you’ve ever seen a Roomba, you know it basically just looks like a big disk with some buttons.

Some designers and manufacturers aim for realism in humanoid robots, even believing that these robots will be so realistic as to be sentient. David Hanson, known for extremely lifelike and seemingly emotional robots such as Jules, writes on his website that his robots have “the spark of soul—enabling our robots to think, feel, to build relationships with people […].” Cynthia Breazeal, a robot engineer at MIT, said that after watching science fiction films as a child, “it was just kind of a given for me that robots would have emotions and feelings.” Perhaps because realism appeals to consumers the same way a very exact painting would, the advantage of this approach is often assumed rather than examined.

Scientists who aim to make robots relatable often justify their approach by pointing out the advantages of people liking and feeling cared for by their robots. Breazeal, creator of cutest-robot-ever Leonardo, said she aimed to design a robot “that understands and interacts and treats people as people.” She believes that robots put to tasks such as household chores and childcare should be able to socialize and respond to humans, just like other humans, so that people can interact with them more easily. The Roomba, she points out, does not recognize people, so it runs into them all the time.

Making machines that treat humans as humans, however, is a slightly different task than making machines that demand to be treated as humans themselves. Wendell Wallach and Colin Allen have been thinking about how to create “moral machines.” For them, this means bestowing robots with the ability to read human emotions and understand human ethical codes. And sometimes it means expressing empathy for people. A semblance of empathy is especially important for therapeutic robots (2).

But while most people want well-behaved robots, some argue that we should not want deceptively human ones. Matthias Scheutz at Tufts is an outspoken critic of social robots. When robots are convincingly human, they may be not only endearing but also persuasive to humans. Humans may become excessively dependent on and devoted to their robot companions, to the point of feeling and behaving as if these robots are sentient. This gives robots great power over humans, which is especially dangerous if their intentions are not honorable (1).

Science and science fiction alike have acknowledged the complications of human-robot relationships. In the 1991 novel He, She and It by Marge Piercy, a human falls in love with a robot who (spoiler alert) ultimately must self-destruct in battle to fulfill the commands of the people who created him. Renowned computer scientist Joseph Weizenbaum explained in the documentary Plug and Pray that he believes love between humans and robots would ultimately be illusory, because a robot cannot understand what it is like to have the personal history of a human.

As such a future grows ever nearer, we will have to draw the line between developing robots with the social skills to carry out the desired tasks and creating a deceptive appearance of conscious thought.

-Suzannah Weiss

1 Scheutz, Matthias. “The Inherent Dangers of Unidirectional Emotional Bonds between Humans and Social Robots.” Robot ethics: the ethical and social implications of robotics. Ed. Lin, Patrick. Cambridge, Mass.: MIT Press, 2012.

2 “A Roadmap for U.S. Robotics: From Internet to Robotics.” 2013 Edition. March 20, 2013.

Humanoids for Human-olds: Sociable Service Robots Could Be the First Wave of the Robotics Revolution

As computer and mechanical engineering become more advanced, we are drawing ever closer to a revolution in robotics, and it’s coming just in time. By 2030, the populations of elderly people in the U.S., Japan, and Europe will explode; worldwide, the number of people above the age of 80 will double (1). Meanwhile, life expectancy continues to rise. As fertility rates in these developed countries plunge below the replacement rate (2), the workforce will not be able to support the aging population without dramatic improvements in productivity. However, advances in the burgeoning field of medical robotics may offer a way for the elderly to sustain their health and independence indefinitely. Robots such as the inMotion ARM Interactive Robot™ are on the market today to provide sensory-motor therapy for stroke victims who are re-learning how to move their limbs. It is not difficult to imagine sensing robots capable of detecting and recording vital signs, administering medicines, and assisting with the tasks of day-to-day living that become difficult at advanced ages. From this perspective, robots have several advantages even over human caregivers: more strength, more endurance, more consistency, and, importantly, less pity.

However, the introduction of medical and service robots to the elderly presents its own set of challenges. This model deviates from standard paradigms of technology adoption, because the new technology will not be pitched at the younger, more adaptable generation. Because these robots might need to serve people relatively unfamiliar with computer interfaces and operating modes, they must be as understandable and intuitive as the human caregivers they could replace. These service robots must be able to interact socially with their patients.

Fortunately, the field of robotics is one that has long ensnared our narcissistic, anthropomorphic tendencies. One of the dreams of robotics engineers and researchers everywhere is to create an accurate emulation of a human. One recently created lab devoted to this goal is GENI Lab. Founded in part by RISD alumnus David Hanson, GENI Lab states as its central goal “the creation of a life-sized humanoid robot featuring a realistic, emotional face and personality” (3). Hanson (who also took AI classes at Brown!) has gained fame in the robotics community for his daring and extraordinary attempts to overcome what is known in the industry as the Uncanny Valley. Coined in the 1970s, the term refers to the tendency of realistic robots and animations to suddenly become creepy as they approach a human appearance. With Hanson at the head, GENI Lab may be one of the first to achieve this Holy Grail of robotics.

However, creating humanistic robots requires more than just a pretty face. A good human-robot interaction (HRI) relationship should foster emotion, engagement, and trust. The robots must be able to perceive and communicate intent, both verbally and nonverbally. Hanson’s heads might help with the latter requirement, but the former is an open problem. Robots have an unparalleled ability to fuse data streams from multiple sources, and this signature skill allows them, through the use of artificial intelligence, to combine various physiological data in order to interpret human emotions and actions. Imagine a robot capable of combining your medical history, current heart rate, previous interaction with your spouse, and what you had for breakfast in order to determine whether or not you are feeling angry – and then easing off on your prescribed exercise regimen for the day. The next big step in robotics is to create an all-purpose AI capable of making these inferential decisions while simultaneously learning and interacting with humans in a dynamically changing environment.
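To make the data-fusion idea concrete, here is a toy sketch. Everything in it is invented for illustration — the signal names, the weights, and the threshold; a real caregiving robot would learn such a model from physiological data rather than hard-code it.

```python
# Toy sketch of sensor fusion for emotion inference.
# Signals, weights, and threshold are all hypothetical.

def fuse_signals(readings, weights):
    """Combine normalized sensor readings (each 0..1) into one arousal score."""
    total_weight = sum(weights.values())
    weighted_sum = sum(weights[name] * value for name, value in readings.items())
    return weighted_sum / total_weight

def infer_state(readings, weights, threshold=0.6):
    """Classify the fused score, e.g. to decide whether to ease off an
    exercise regimen for the day."""
    return "agitated" if fuse_signals(readings, weights) > threshold else "calm"

# Hypothetical patient readings, each normalized to [0, 1].
readings = {"heart_rate": 0.9, "voice_stress": 0.7, "facial_tension": 0.8}
# Heart rate weighted more heavily than the other signals.
weights  = {"heart_rate": 2.0, "voice_stress": 1.0, "facial_tension": 1.0}

print(infer_state(readings, weights))  # -> agitated
```

The hard part, of course, is not the weighted average but everything upstream of it: turning raw video, audio, and biometric streams into reliable normalized readings in a changing environment.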

The use of social service and medical robots to assist the elderly could provide a foothold for robots to gain more widespread acceptance. By filling an otherwise unmet need for full-time caregivers, these robots could form attachments with their patients – who would be able to remain independent longer – and with their patients’ families. By reinventing the public image of a robot as an emotive assistant, this first wave of humanized robot helpers could pave the way for robot assistance in the rest of our society.



1. “A Roadmap for U.S. Robotics: From Internet to Robotics.” 2013 Edition. March 20, 2013.


2. U.N. Economic and Social Council. “World Population to 2300.” 2004


3. “About.” GENI LAB, 2013. Web. 21 June 2013.