“Malle suspects that we might actually want our robots to make different decisions than the ones we’d want other humans to make . . . [in a life-or-death scenario] . . . we might actually expect a robot to flip the switch. The participants in Malle’s experiment blamed the robot more if it didn’t step in and intervene. And the more machine-looking the robot was, the more they blamed it . . .”

-Kristen Clark in IEEE Spectrum

“[My aim is] to construct robots that seamlessly use natural language to communicate with humans . . . In twenty years, every home will have a personal robot which can perform tasks like clearing the dinner table, doing laundry and preparing dinner . . .

But to achieve this aim, it is essential for robots to move beyond merely interacting with people and toward collaboration.”

-Stefanie Tellex in Wired

“We further developed our designs for telepresence robots by integrating smart phones and servo motors to help us imagine how the robot might interact with its environment.”

-Ian Gonsher in Make

“Tellex thinks the way robots will get faster and smoother at picking up unfamiliar objects is to give them programs that let them learn from experience.”

-Joe Palca, NPR

“[People are] blurring the line between AI technology of the kind that we’re familiar with, and a kind of super-intelligent, willful entity . . . The latter is pure fantasy and a significant distraction from attempts to develop technology to help.”

-Michael Littman in TechRadar

Malle points out that autonomous vehicles are just the tip of the robot-human interaction iceberg. While the international community has extensively discussed the ethics of autonomous weapons, artificial intelligence in health care and robots that care for the sick and elderly are proliferating — yet there has been little discussion about the moral decision-making of robots in homes and hospitals.

-Kate Allen, The Toronto Star

Other articles by HCRI faculty: