Large Screen Mobile Telepresence Robot (LSMTR)

In preparation for this year’s Designing Humanity Centered Robots class exhibition, we’re going to share a couple of robots from past classes. Today we have the Large Screen Mobile Telepresence Robot. Most telepresence robots are little more than “Skype on a stick” or a “laptop on wheels”. LSMTR is a window to another place. LSMTR makes interactive collaboration at a distance possible. Its large screens increase the field of view, allowing users to incorporate gesture and movement for a more immersive and embodied telepresence experience. We hope you enjoy this YouTube video demonstrating some of the robot’s capabilities.

HCRI Opens New Spaces

HCRI has opened a new space on the 8th floor of the Sciences Library. This space contains two new areas for robotics research on campus:

HRI Lab:

The Human-Robot Interaction (HRI) Lab is a new addition to the HCRI: a simulated smart living room environment that will be equipped with Kinect motion sensors, camera equipment, humanoid robots, and more. This room will be used for everything from testing toy robots for the elderly to developing next-generation ethical frameworks for robots.

The HCRI envisions a future in which robots in the home are a ubiquitous part of everyday life. As such, the HRI lab serves as a space to study near future human-robot interaction in household environments. We hope to use this space to better understand the types of interactions people would prefer with robots and improve the utility of robots in these types of environments.

IoT/Robotics Lab:

The new HCRI space also features the Robotics/Internet of Things (IoT) Lab. Here students can take advantage of a sandbox-style makerspace that gives the campus community a place to learn by creating. The lab features a variety of equipment for prototyping robots and IoT devices, and it joins the Brown Design Workshop and the Cogut Physical Media Lab as a campus robot-building space.

The build space in the IoT/Robotics Lab includes soldering stations, a 3D printer, a PCB CNC mill, embedded computers, basic electronics supplies, and everything else needed to take a project from ideation to prototype. HCRI has sponsored the supplies in the lab, giving students an area where they can easily build a low-cost prototype without having to worry about where to find the parts online.

HCRI Talk: Simone Giertz

Simone Giertz drew nearly two hundred people to the Granoff Center. If you missed her talk, you can find it here (Simone, after missing her train, appears at 22 min 30 sec if you want to fast-forward):

 

Known worldwide as the “Queen of Sh***y Robots”, Simone Giertz is an inventor and renowned comedic YouTuber. In 2015, Simone built her first robot, the toothbrush helmet, and has since built an audience following her many more crazy robotic inventions, including the automatic lipstick machine and the wake-up machine. She brings a humorous flair to her videos showcasing her inventions, including TV Shop parodies; but most importantly, Simone mashes science and humor to explore why it’s critical to build useless things.

Simone Giertz has also previously worked in MMA sports journalism and as an editor for Sweden’s official website, Sweden.se. Giertz employs deadpan humor to demonstrate mechanical robots of her own creation that automate everyday tasks; though they work from a purely mechanical standpoint, they often fall short of practical usefulness, for comic effect. Her creations have included an alarm clock that slaps the user, a lipstick applier, and a machine that shampoos the user’s hair. When building her robots, Giertz does not aim to make something useful, instead coming up with excessive solutions to potentially automatable situations. She has been featured in TIME, Make, CollegeHumor, the Huffington Post, and other media outlets.

The talk was co-sponsored by the Department of Modern Culture and Media.

Intriguing Article Published on Footnote: The Path To A Programmable World

We now live in a world permeated by computers. From phones to watches, home thermostats to coffee makers, and even ball-point pens, more and more of the gadgets we interact with on a daily basis are general-purpose computational devices in disguise. These “smart” devices differ from ordinary ones in that they are programmable and can therefore respond to users’ specific needs and demands.

For example, I recently bought the Jawbone Up 24, a rubber bracelet fitness monitor that tracks my daily movement. While the Jawbone is an interesting gadget on its own, it also works with a cross-device interface I can program. So now, every time the Up detects that I’ve met my daily goal for number of steps walked, a festive-looking lava lamp in my living room lights up. It’s a small event, but it’s my own personalized celebration to keep me motivated. It’s also an example of the sort of thing devices can do for you if you ask nicely.
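This kind of pairing is an instance of trigger-action (“if this, then that”) programming. A minimal sketch of the idea, in Python, is below; the device names, state dictionary, and rule structure are illustrative assumptions, not the actual Jawbone or lamp API.

```python
# A minimal trigger-action rule engine, sketching the step-goal/lava-lamp
# pairing described above. Device state arrives as a plain dict; each rule
# pairs a trigger predicate with an action to run when the predicate holds.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: Callable[[dict], bool]  # predicate over current device state
    action: Callable[[], None]       # side effect to perform when it fires

def run_rules(state: dict, rules: list) -> list:
    """Evaluate every rule against the state; run and collect those that fire."""
    fired = []
    for rule in rules:
        if rule.trigger(state):
            rule.action()
            fired.append(rule)
    return fired

# Hypothetical devices: a step counter reporting into `state`, and a lamp
# modeled as a mutable flag.
lamp = {"on": False}

rules = [
    Rule(
        trigger=lambda s: s.get("steps", 0) >= s.get("goal", 10000),
        action=lambda: lamp.update(on=True),
    )
]

run_rules({"steps": 10500, "goal": 10000}, rules)
print(lamp["on"])  # True: the goal was met, so the lamp rule fired
```

In a real cross-device service the trigger would be delivered as an event from one device’s cloud API and the action dispatched to another’s, but the rule structure is the same.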

For years, computer scientists have been envisioning a world without boundaries between cyberspace and the physical spaces people occupy, where programmable devices are integrated into larger systems, like smart houses, to make the user’s entire environment programmable. This joining of computers and objects is sometimes referred to as the Internet of Things or the Programmable World.

What will this programmable world look like? And how will we get there?

Read the full text of this intriguing article published on Footnote by Samuel Kortchmar and Michael Littman here.

Article Published on “Automation, Not Domination: How Robots Will Take Over Our World”

The first question people tend to ask when they find out you are a roboticist is, “When are robots going to take over the world and become our masters?” The answer to this question is a big “Never!”

Robots and artificial intelligence (AI) will, however, “take over” the world by invading and hopefully enhancing every aspect of people’s lives. More likely than sentient Skynet-style robot overlords are collections of robot applications that will help us increase productivity and improve our quality of life through human-robot collaboration.

Read more of this fascinating article published on Footnote by Alexandra Peseri and Chad Jenkins here.

Could your robot hurt you? Perhaps, but not intentionally.

Because robots’ decision-making abilities are not at the level of humans’, there are liabilities that come with industrial robots, automated vehicles, robotic caretakers, and other roles that involve life-or-death situations. Rather than robots rebelling and taking over the world, people should be worrying about robots malfunctioning or falling into the wrong hands.

There have been three reported incidents of industrial robots causing deaths in factories, the last two of which involved human error at least in part. In 1979, a robot’s arm crushed a worker at a Ford Motor plant in Michigan while both were gathering supplies. The jury awarded the employee’s family $10 million, the state’s largest personal injury award ever at the time. Ford was blamed for the incident because of a lack of safety precautions, including an alarm that should have sounded when the robot approached.

In 1984, a die-cast operator was pinned between a hydraulic robot and a safety pole. The worker was blamed for entering a robot envelope, which was prohibited in training. The company subsequently installed a fence to keep unauthorized workers away from the robots. This incident led the National Institute for Occupational Safety and Health’s Division of Safety Research to make recommendations on “ergonomic design, training, and supervision” for industrial robots, including design measures for the robot envelope, safety precautions for workers and programmers, and instructions for training. Supervisors were advised to emphasize in training that workers should not assume that a robot will keep doing its current activity or stay still when stopped.

More recently, in 2009, a robot’s arm again crushed and killed a worker at a Golden State Foods bakery in California. According to the inspection detail, the case is not closed yet, but it appears that Golden State Foods will have to pay several fines totaling over $200,000. The incident was chalked up partially to the worker’s lack of precaution. The inspection report reads: “At approximately 7:55 a.m. on July 21, 2009, Employee #1 was operating a robotic palletizer for Golden State Foods, Inc., a food processor and packager for fast food restaurants. She entered the caged robotic palletizer cell while the robotic palletizer was running. She had not deenergized the equipment. Her torso was crushed by the arms of the robotic palletizer as it attempted to pick up boxes on the roller conveyor. She was killed.”

The International Organization for Standardization (ISO) has published 10 standards for industrial robots and drafted two standards for personal care robots. The American National Standards Institute (ANSI) and the Robotic Industries Association (RIA) have jointly developed detailed safety regulations for industrial robots, which were recently updated to incorporate the ISO’s guidelines.

A newer development for governments to respond to is the automated car. Self-driving cars are legal to test on public roads in California, Nevada, and Florida as long as there is a human behind the wheel. The only Google car accident so far occurred while the car was under human, rather than computer, control. When asked who would get ticketed if a Google car ran a red light, co-creator Sergey Brin responded, “Self-driving cars do not run red lights.”

Currently, however, the United States National Highway Traffic Safety Administration (NHTSA) is working on a set of rules governing the use of self-driving cars. The recommendations include special training to obtain a license for an automatic car and a requirement that those testing the vehicles report all accidents. The NHTSA proposal points out that most cars already have automated features, such as brake pulses during potential skids, automatic braking during potential collisions, and cruise control.

Another liability for robots is hacking. People can hack into not only computers but also any machine that is part of a network, including cars with the features described above. To illustrate this possibility, computer scientists Charlie Miller and Chris Valasek hacked into a Ford Escape and caused it to crash even when the driver hit the brakes. Robot hacking carries a similar potential for physical damage.

Increasingly automated machines also raise security questions [1]. Some medical and domestic robots record personal information and behavioral patterns, which privacy laws do not yet address. If information is not kept between the machine and its user, the consequences for medical robots, automated vehicles, or smart houses could be dire. Products such as Nest, which connects your phone to your house in order to control the temperature, could give a hacker information about your home, and if the device being controlled were a robot, much more.

Current hacking laws are difficult to apply to the 21st century, let alone beyond. The Computer Fraud and Abuse Act, passed in 1986, has been used to prosecute internet users for finding loopholes in websites without even revealing the information they find.

When we ascribe blame for a crime, we usually ascribe it to an individual who has acted maliciously or carelessly. But those words don’t really apply to robots, which act based on their programming, intended or not. Adapting our laws to robots may require us to rethink agency, or at least to think more about who the agents are in these situations.

–Suzannah Weiss

[1] A Roadmap for U.S. Robotics: From Internet to Robotics, 2013 Edition, March 20, 2013.