Could your robot hurt you? Perhaps, but not intentionally.

Because robots’ decision-making abilities are not at the level of humans’, liabilities come with industrial robots, automated vehicles, robotic caretakers, and other roles that involve life-or-death situations. Rather than robots rebelling and taking over the world, people should worry about robots malfunctioning or falling into the wrong hands.

There have been three reported incidents of industrial robots causing deaths in factories, the latter two of which involved human error at least in part. In 1979, a robot’s arm crushed a worker at a Ford Motor Company plant in Michigan while both were retrieving parts. The jury awarded the employee’s family $10 million, at the time the state’s largest personal injury award ever. Ford was blamed for the incident because of a lack of safety precautions, including an alarm that should have sounded when the robot approached.

In 1984, a die-cast operator was pinned between a hydraulic robot and a safety pole. The worker was blamed for entering the robot’s work envelope, which his training prohibited. The company subsequently installed a fence to keep unauthorized workers away from the robots. This incident led the National Institute for Occupational Safety and Health’s Division of Safety Research to make recommendations in “ergonomic design, training, and supervision” for industrial robots, including design measures for the robot envelope, safety precautions for workers and programmers, and instructions for training. Supervisors were advised to emphasize in training that workers should not assume that a robot will keep doing its current activity or stay still when stopped.

More recently, in 2009, a robot’s arm again crushed and killed a worker at a Golden State Foods bakery in California. According to the inspection detail, the case is not yet closed, but Golden State Foods appears to face fines totaling over $200,000. The incident was attributed partially to the worker’s lack of precaution. The inspection report reads: “At approximately 7:55 a.m. on July 21, 2009, Employee #1 was operating a robotic palletizer for Golden State Foods, Inc., a food processor and packager for fast food restaurants. She entered the caged robotic palletizer cell while the robotic palletizer was running. She had not deenergized the equipment. Her torso was crushed by the arms of the robotic palletizer as it attempted to pick up boxes on the roller conveyor. She was killed.”

The International Organization for Standardization (ISO) has published 10 standards for industrial robots and drafted two standards for personal care robots. The American National Standards Institute (ANSI) and the Robotic Industries Association (RIA) have jointly developed detailed safety regulations for industrial robots, which were recently updated to incorporate the ISO’s guidelines.

A newer development for governments to respond to is the automated car. Self-driving cars are legal in California, Nevada, and Florida for testing on roads as long as there is a human behind the wheel. The only Google car accident so far occurred while a human, not the computer, was in control. When asked who would get ticketed if a Google car ran a red light, Google co-founder Sergey Brin responded, “Self-driving cars do not run red lights.”

Currently, however, the United States National Highway Traffic Safety Administration (NHTSA) is working on a set of rules governing the use of self-driving cars. The recommendations include special training to obtain a license to operate an automated car and a requirement that those testing the vehicles report all accidents. The NHTSA proposal points out that most cars already have automated features, such as brake pulses during potential skids, automatic braking during potential collisions, and cruise control.

Another liability for robots is hacking. People can hack into not only computers but also any machine that is part of a network, including cars with the features described above. To illustrate this possibility, computer scientists Charlie Miller and Chris Valasek hacked into a Ford Escape and caused it to crash even when the driver hit the brakes. Hacking a robot carries the same potential for physical damage.

Increasingly automated machines also raise security questions.[1] Some medical and domestic robots record personal information and behavioral patterns, which privacy laws do not yet address. If information is not kept between the machine and its user, the consequences for medical robots, automated vehicles, or smart houses could be dire. Apps such as Nest’s, which connect your phone to your house in order to control the temperature, could give a hacker information about your home; if the app were controlling a robot, it could give away much more.

Current hacking laws are difficult to apply to the 21st century, let alone beyond. The Computer Fraud and Abuse Act, passed in 1986, has been used to prosecute internet users for finding loopholes in websites, even when they never revealed the information they found.

When we ascribe blame for a crime, we usually ascribe it to an individual who has acted maliciously or carelessly. But those words don’t really apply to robots, which act based on their programming, whether as intended or not. Adapting our laws to robots may require us to rethink agency, or at least to think more carefully about who the agents are in these situations.

–Suzannah Weiss

[1] A Roadmap for U.S. Robotics: From Internet to Robotics, 2013 Edition. March 20, 2013.
