Featured Research

HCRI aims to facilitate collaboration among Brown researchers by providing access to robots, to robotics research and building spaces, and to specialized networks. Through seed grants, awards, and hackathons, it hopes to inspire new research that can be leveraged into proposals and corporate partnerships. Longer term, HCRI looks to develop opportunities for corporate partnerships and for the commercialization and licensing of robotics research performed at Brown.

A more detailed précis of some of the AI-focused featured research can be found in our HCRI AI Research Summary.

 


Moral Norms

This project rests on the assumption that Artificial Intelligent Systems (AIS) can become safe and beneficial contributors to human communities if they—like all beneficial human contributors—are able to represent, learn, and follow the norms of the community. We label this set of abilities norm competence, and we bring together social, cognitive, and computer sciences to advance research both on how such norm competence appears in the human mind and how it might be implemented in artificial minds.

Norms are not merely a subset of an agent’s goals but rather constrain the agent’s pursuit of goals. Were it not for the norms governing a particular context, an individual (human or artificial) would pursue a variety of maximally rewarding actions that might not be rewarding for other community members. Norm-guided action, in a sense, maximizes a societal value function. We argue that AI and robotics, too, should work to maximize societal value, and they can do so if norm competence lies at the foundation of AIS behavior.
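
As a rough illustration of this idea (a minimal sketch, not the project’s implementation), the Python snippet below treats the norms active in a context as penalties layered on top of an agent’s private reward, so that the chosen action maximizes a simple societal value rather than private reward alone; all names and numbers are hypothetical.

```python
# Minimal sketch: norm-constrained action selection. Norms are modeled as
# context-dependent penalties added to the agent's private reward, so the
# selected action maximizes a simple "societal value" rather than private reward.

from typing import Callable, Dict, List

def choose_action(
    actions: List[str],
    private_reward: Callable[[str], float],
    norms: Dict[str, Callable[[str], float]],  # norm name -> penalty function
    context: List[str],                        # names of norms active right now
) -> str:
    """Pick the action maximizing private reward minus active norm penalties."""
    def societal_value(action: str) -> float:
        penalty = sum(norms[name](action) for name in context if name in norms)
        return private_reward(action) - penalty
    return max(actions, key=societal_value)

# Hypothetical example: taking the last cookie is privately rewarding, but a
# sharing norm active at a shared meal penalizes it.
actions = ["take_last_cookie", "offer_cookie"]
reward = {"take_last_cookie": 1.0, "offer_cookie": 0.2}
norms = {"share_food": lambda a: 2.0 if a == "take_last_cookie" else 0.0}

print(choose_action(actions, reward.get, norms, context=["share_food"]))
# -> "offer_cookie": the norm overrides the privately optimal choice.
```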

We propose an ambitious program of experimental and computational work on three core elements of human and artificial norm systems: representation and activation of norms, learning of norms, and action implementation of norms. In all cases, we use novel experimental paradigms to study human norm representations, develop computational models of such representations, and begin to implement these models in artificial agents that thereby acquire norm competence.

Researchers: Bertram Malle, Michael Littman


AI Explainability

We take three approaches to the problem of creating explainable Artificial Intelligent Systems (AIS), and the approaches will increasingly intertwine to offer a systematic solution to the need for AIS to be intelligible to humans.

The first approach seeks to describe and explain the sequential behaviors of a reinforcement learning (RL) system in a Markov Decision Process (MDP). We use linear temporal logic as the task-independent language that describes these sequential behaviors, and we use counterfactuals to identify the critical features that guide the learning algorithm’s behavior, thereby helping explain its behavior.
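
The sketch below illustrates the counterfactual part of this approach under simplifying assumptions (a feature-based Q-function and a hand-specified set of alternative feature values); it is not the project’s implementation. A feature counts as critical to the agent’s choice if changing its value would change the greedy action.

```python
# Illustrative sketch: identify "critical" state features by counterfactual tests:
# a feature is critical if giving it a different value changes the greedy action.

from typing import Callable, Dict, List, Sequence

def critical_features(
    state: Dict[str, float],
    q_value: Callable[[Dict[str, float], str], float],   # Q(state, action)
    actions: Sequence[str],
    alternatives: Dict[str, List[float]],  # feature -> counterfactual values to try
) -> List[str]:
    """Return the features whose counterfactual change alters the chosen action."""
    def greedy(s: Dict[str, float]) -> str:
        return max(actions, key=lambda a: q_value(s, a))

    chosen = greedy(state)
    critical = []
    for feature, values in alternatives.items():
        for value in values:
            counterfactual = dict(state, **{feature: value})
            if greedy(counterfactual) != chosen:
                critical.append(feature)
                break
    return critical

# Hypothetical example: the agent waits because a hazard is ahead; only the hazard
# feature, not the battery level, flips its decision.
def q(s, a):
    return (1.0 - 2.0 * s["hazard_ahead"]) if a == "move_forward" else 0.0

state = {"hazard_ahead": 1.0, "battery": 0.8}
print(critical_features(state, q, ["move_forward", "wait"],
                        alternatives={"hazard_ahead": [0.0], "battery": [0.1]}))
# -> ["hazard_ahead"]
```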

The second approach helps a learning algorithm acquire explanations of its own behaviors by receiving models of such explanations from human teachers. The teachers draw the learner’s attention to important aspects of its input, dependencies among the input, and offer examples of what good explanations are.
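
One minimal way to picture this mechanism, purely as an assumption-laden sketch rather than the project’s method, is a learner whose update on each example is scaled by teacher-provided importance weights over input features, so that the aspects the teacher highlights are the ones the learner actually acquires and can later cite.

```python
# Illustrative sketch: a gradient-style update scaled by a teacher's "attention"
# over input features. Feature names and values here are hypothetical.

from typing import Dict

def attention_weighted_update(
    weights: Dict[str, float],
    features: Dict[str, float],
    error: float,                          # prediction error on this example
    teacher_attention: Dict[str, float],   # feature -> importance in [0, 1]
    lr: float = 0.1,
) -> Dict[str, float]:
    """Update each weight in proportion to how much the teacher says it matters."""
    return {
        name: weights.get(name, 0.0)
        + lr * error * value * teacher_attention.get(name, 0.0)
        for name, value in features.items()
    }

updated = attention_weighted_update(
    weights={},
    features={"tone_of_voice": 1.0, "shirt_color": 1.0},
    error=1.0,
    teacher_attention={"tone_of_voice": 1.0, "shirt_color": 0.0},
)
print(updated)  # only the feature the teacher marked as relevant receives credit
```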

The third approach builds on our longstanding work on how humans explain behavior—specifically, what conceptual framework underlies such explanations and what vocabulary appropriately expresses these explanations. There is good reason to believe that people will use this same framework and vocabulary for AIS that they experience as “agents” (e.g., social robots), but currently there is no evidence to support this assumption, and we do not know under what conditions people might use alternate frameworks. We therefore provide the first systematic study of how people explain robots’ and other artificial agents’ behaviors. In addition, if people do use this conceptual framework to explain AIS behavior, they will expect AIS to explain their own behaviors in very similar ways. And because our previous work has laid out in detail the elements of this framework, we can begin to implement it in AIS and thus shape their explanations in ways that are maximally intelligible to humans.

Researcher: Michael Littman


Social Robotics for the Elderly

By 2030, health care staffing available to assist people aged 80 years and older is projected to fall to half its current level. This trend creates wide-ranging psycho-social challenges in older adults, especially in those with cognitive deficits. To meet these challenges, we will develop affordable robotic technologies that extend the reach of personal and professional caregivers, providing more care to more people. We begin with a functional analysis of deficits in daily living and design distributed artificial intelligence tools to deliver tailored support for aging individuals with such deficits. Our team will develop these tools with expertise in the fundamental psychological and psychiatric processes involved and with insight into how humans respond to robotic technology.

Truly human-centered robots must interact naturally with people, support them in their life tasks, and adapt to their needs. A robot that adapts to the user’s specific needs can help overcome impairment, identify or monitor risks, and collaborate with family members and clinicians in supporting the person.

We will develop and engineer such human-centered technology with our industry partner, Hasbro, one of the world’s largest toy companies. In 2015, Hasbro successfully launched its first-generation Joy for All™ Companion Pet Cat, an animatronic agent that has basic sensors to respond with sounds and movement to a person’s motion and touch. Together we will devise next-generation agents that have advanced sensory, cognitive, and communicative capacities.

Equipped with new sensors and powerful processors, the device will handle several behavioral contingencies specific to the person’s basic living tasks (e.g., structuring the day, taking medication, connecting with other people). Novel machine-learning techniques will then identify meaningful new patterns in the user’s behavior. This human-centered system will fully involve the user: it will focus on tasks that are important to the user, make suggestions when the user has questions, and connect the user with significant others and caregivers.
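
A highly simplified sketch of the contingency-handling idea follows; the task names, thresholds, and prompts are illustrative assumptions, not the behavior of the deployed system.

```python
# Illustrative sketch: check a few daily-living contingencies against the current
# state of the user's day and emit tailored prompts. All rules are hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class DayState:
    hour: int                          # current hour of day, 0-23
    took_morning_medication: bool
    hours_since_social_contact: float

def pending_prompts(state: DayState) -> List[str]:
    """Return gentle prompts for any contingency that is currently unmet."""
    prompts = []
    if state.hour == 8:
        prompts.append("Here is today's plan: breakfast, a walk, and your 2 pm appointment.")
    if state.hour >= 9 and not state.took_morning_medication:
        prompts.append("Reminder: it's time for your morning medication.")
    if state.hours_since_social_contact > 6:
        prompts.append("Would you like to call a family member?")
    return prompts

print(pending_prompts(DayState(hour=10, took_morning_medication=False,
                               hours_since_social_contact=7.5)))
```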

The principle of involvement also guides the cognitive and behavioral evaluation of these advanced capacities. At the outset, participatory and survey research methods will identify target persons’ deficits and needs; experimental research will then evaluate prototypes with lab participants; and finally, the prototypes will be evaluated in the natural test beds of older adults’ daily living.

Researchers: Bertram Malle and Michael Littman


VR and AR Human-Robot Interaction

Robot user interfaces have been stuck in flat-screen teleoperation since the 1990s. New advances in low-cost Virtual Reality (VR) and Augmented Reality (AR) displays present opportunities to develop novel control and communication systems for robots, which can improve ease of use in human-robot collaboration.
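
As a minimal sketch of how such an interface can be wired together over ROS (the topic names and overall structure below are assumptions for illustration, not the lab’s ROS Reality code), a node can relay a tracked VR controller pose directly as an end-effector goal:

```python
#!/usr/bin/env python
# Minimal sketch: forward a tracked VR hand-controller pose to a robot's
# end-effector goal topic. Topic names are hypothetical.

import rospy
from geometry_msgs.msg import PoseStamped

def main():
    rospy.init_node("vr_teleop_relay")
    goal_pub = rospy.Publisher("/robot/end_effector_goal", PoseStamped, queue_size=1)

    def on_controller_pose(msg):
        # A real system would also transform coordinate frames, smooth the
        # signal, and apply workspace and velocity limits before commanding.
        goal_pub.publish(msg)

    rospy.Subscriber("/vr/right_controller_pose", PoseStamped, on_controller_pose)
    rospy.spin()

if __name__ == "__main__":
    main()
```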

Researcher: Stefanie Tellex


Million Object Challenge

Most robots can’t pick up most objects most of the time. Tellex aims to change that by using a fleet of Baxter robots to construct a corpus of manipulation experiences for one million real-world objects. Existing collections consist of photos taken by a human photographer and may contain many examples of objects, but typically only a single view of each individual object. To bridge this gap, Tellex is building software for an industrial robot, the Baxter, to automatically collect a database of object models.
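
The snippet below illustrates only the multi-view idea behind such a corpus, not the actual Baxter pipeline: camera viewpoints are sampled on a hemisphere around the object, and in the real system the arm would visit each viewpoint and store the captured frame together with its pose.

```python
# Illustrative sketch: sample camera viewpoints on a hemisphere around an object
# so that many views of each item can be captured (the real pipeline also moves
# the arm, records RGB-D frames, and registers them into an object model).

import math
from typing import List, Tuple

def hemisphere_viewpoints(
    center: Tuple[float, float, float],
    radius: float,
    n_azimuth: int = 8,
    n_elevation: int = 3,
) -> List[Tuple[float, float, float]]:
    """Camera positions that look at `center` from a hemisphere of given radius."""
    views = []
    for i in range(n_elevation):
        elevation = math.radians(20 + 25 * i)          # 20, 45, 70 degrees
        for j in range(n_azimuth):
            azimuth = 2 * math.pi * j / n_azimuth
            x = center[0] + radius * math.cos(elevation) * math.cos(azimuth)
            y = center[1] + radius * math.cos(elevation) * math.sin(azimuth)
            z = center[2] + radius * math.sin(elevation)
            views.append((x, y, z))
    return views

print(len(hemisphere_viewpoints((0.6, 0.0, 0.0), radius=0.35)))  # 24 viewpoints
```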

Researcher: Stefanie Tellex

 


Motion Planning in Milliseconds

The motion planning problem – how to move a robot from a start pose to a goal pose without colliding with any obstacles – is critical to designing robots that operate in real environments. Despite decades of research on this topic, however, software motion planners still take seconds to find a single plan; recent efforts with GPUs have improved on this, but they can still take hundreds of milliseconds (and hundreds of watts of power) per plan.

In collaboration with computer architect Professor Dan Sorin at Duke ECE and his students, we have recently developed a new approach that builds custom circuitry that exploits the massive parallelism present in the problem to achieve real-time motion planning. This approach was able to find motion plans for real robot problems in less than one millisecond, while consuming far less power than a GPU (or even a CPU).
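
A toy sketch of the parallelism being exploited is shown below: every roadmap edge’s collision check is independent of every other, which is what lets custom circuitry evaluate all of them at once. Here a 2D point robot, circular obstacles, and software threads stand in for the real robot geometry and hardware.

```python
# Toy sketch: collision-check all roadmap edges in parallel. Each check is
# independent, which is the parallelism the custom hardware exploits.

from concurrent.futures import ThreadPoolExecutor
from typing import List, Tuple

Point = Tuple[float, float]
Circle = Tuple[float, float, float]  # (cx, cy, radius)

def edge_collides(a: Point, b: Point, obstacles: List[Circle], steps: int = 20) -> bool:
    """Sample points along the segment a-b and test them against every obstacle."""
    for i in range(steps + 1):
        t = i / steps
        x = a[0] + t * (b[0] - a[0])
        y = a[1] + t * (b[1] - a[1])
        for cx, cy, r in obstacles:
            if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
                return True
    return False

edges = [((0, 0), (1, 1)), ((0, 0), (2, 0)), ((1, 1), (2, 2))]
obstacles = [(1.0, 0.0, 0.3)]

with ThreadPoolExecutor() as pool:
    collides = list(pool.map(lambda e: edge_collides(e[0], e[1], obstacles), edges))

free_edges = [e for e, hit in zip(edges, collides) if not hit]
print(free_edges)  # collision-free edges available to the planner's graph search
```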

Researcher: George Konidaris

 

 


Making Telepresence More Human

Telepresence robots have a reputation for being impersonal. Most have a small screen that embodies the presenter as a “talking head” incapable of full human gesture and interaction. Some people go so far as to dress telepresence robots to hide their frames. This research explores what happens if the robot’s screen is extended to nearly the size of a full human, or if telepresence is built into the form of a small table. How does the interaction with the robot change?

Researcher: Ian Gonsher