Large Screen Mobile Telepresence Robot (LSMTR)

In preparation for this year’s Designing Humanity Centered Robots class exhibition, we’re going to share a couple of robots from past classes. Today we have the Large Screen Mobile Telepresence Robot. Most telepresence robots are little more than “Skype on a stick” or a “laptop on wheels”. LSMTR is a window to another place. LSMTR makes interactive collaboration at a distance possible. LSMTR’s large screens increase the field of view, allowing users to incorporate gesture and movement for a more immersive and embodied telepresence experience. We hope you enjoy this YouTube video demonstrating some of the robot’s capabilities.

HCRI Opens New Spaces


HCRI has opened a new space on the 8th floor of the Sciences Library. This space contains two new areas for robotics research on campus. These areas are as follows.

HRI Lab:

The Human-Robot Interaction (HRI) Lab is one of the new additions to the HCRI. The HRI Lab is a simulated smart living room environment that will be equipped with Kinect motion sensors, camera equipment, and humanoid robots, among other hardware. This room will be used for everything from testing toy robots for the elderly to developing next-generation ethical frameworks for robots.

The HCRI envisions a future in which robots in the home are a ubiquitous part of everyday life. As such, the HRI lab serves as a space to study near future human-robot interaction in household environments. We hope to use this space to better understand the types of interactions people would prefer with robots and improve the utility of robots in these types of environments.

IOT/Robotics Lab:

The new HCRI space also features the Robotics/Internet of Things (IOT) Lab. Here students can take advantage of a sandbox-style makerspace that gives the campus community a place to learn by creating. The lab features a variety of equipment that can be used to prototype robots and IOT devices. It joins the Brown Design Workshop and the Cogut Physical Media Lab as a campus robot-building space.

The build space in the IOT/Robotics Lab includes soldering stations, a 3D printer, a PCB CNC, embedded computers, basic electronics supplies, and everything needed to take a project from ideation to prototype. HCRI has sponsored the supplies in the lab, giving students an area where they can easily build a low-cost prototype without having to worry about where to find the parts online.

Intriguing Article Published on Footnote: The Path To A Programmable World

We now live in a world permeated by computers. From phones to watches, home thermostats to coffee makers, and even ball-point pens, more and more of the gadgets we interact with on a daily basis are general-purpose computational devices in disguise. These “smart” devices differ from ordinary ones in that they are programmable and can therefore respond to users’ specific needs and demands.

For example, I recently bought the Jawbone Up 24, a rubber bracelet fitness monitor that tracks my daily movement. While the Jawbone is an interesting gadget on its own, it also works with a cross-device interface I can program. So now, every time the Up detects that I’ve met my daily goal for number of steps walked, a festive-looking lava lamp in my living room lights up. It’s a small event, but it’s my own personalized celebration to keep me motivated. It’s also an example of the sort of thing devices can do for you if you ask nicely.
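The rule described above is an instance of trigger-action ("if this happens, do that") programming. A minimal sketch of the pattern is below; the event name, the lamp, and the `when`/`publish` helpers are hypothetical illustrations, not a real Jawbone or smart-lamp API:

```python
# A minimal sketch of trigger-action cross-device rules, as described
# above. Event names and the lamp are hypothetical; no real device API.

rules = []       # registered (event_name, action) pairs
lamp_on = False  # stand-in for the festive lava lamp

def when(event_name):
    """Register the decorated function to run whenever event_name fires."""
    def decorator(action):
        rules.append((event_name, action))
        return action
    return decorator

def publish(event_name, **data):
    """Called when a device reports an event (e.g. a step goal is met)."""
    for name, action in rules:
        if name == event_name:
            action(**data)

@when("daily_step_goal_met")
def celebrate(steps):
    global lamp_on
    lamp_on = True
    print(f"Lava lamp on! ({steps} steps today)")

# The fitness tracker reports that today's goal was reached:
publish("daily_step_goal_met", steps=10000)
```

Services like this one work by letting users wire arbitrary event sources to arbitrary actions, so the same rule engine can connect any two programmable devices.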

For years, computer scientists have been envisioning a world without boundaries between cyberspace and the physical spaces people occupy, where programmable devices are integrated into larger systems, like smart houses, to make the user’s entire environment programmable. This joining of computers and objects is sometimes referred to as the Internet of Things or the Programmable World.

What will this programmable world look like? And how will we get there?

Read the full text of this intriguing article published on Footnote by Samuel Kortchmar and Michael Littman here.

Article Published on “Automation, Not Domination: How Robots Will Take Over Our World”

The first question people tend to ask when they find out you are a roboticist is, “When are robots going to take over the world and become our masters?” The answer to this question is a big “Never!”

Robots and artificial intelligence (AI) will, however, “take over” the world by invading and hopefully enhancing every aspect of people’s lives. More likely than sentient Skynet-style robot overlords are collections of robot applications that will help us increase productivity and improve our quality of life through human-robot collaboration.

Read more of this fascinating article published on Footnote by Alexandra Peseri and Chad Jenkins here.

Could your robot hurt you? Perhaps, but not intentionally.

Because robots’ decision-making abilities are not at the level of humans’, there are liabilities that come with industrial robots, automated vehicles, caretakers, and other positions that involve life-or-death situations. More so than robots rebelling and taking over the world, people should be worrying about robots malfunctioning or falling into the hands of the wrong people.

There have been three reported incidents of industrial robots causing deaths in factories, the latter two of which involved human error at least in part. In 1979, a robot’s arm at a Ford Motor plant in Michigan crushed a worker while both were gathering supplies. The jury awarded the employee’s family $10 million, the state’s largest personal injury award ever at the time. Ford was blamed for the incident because of a lack of safety precautions, including an alarm that should have sounded when the robot approached.

In 1984, a die-cast operator was pinned between a hydraulic robot and a safety pole. The worker was blamed for entering a robot envelope, which was prohibited in training. The company subsequently installed a fence to keep unauthorized workers away from the robots. This incident led the National Institute for Occupational Safety and Health’s Division of Safety Research to make recommendations in “ergonomic design, training, and supervision” for industrial robots, including design measures for the robot envelope, safety precautions for workers and programmers, and instructions for training. Supervisors were advised to emphasize in training that workers should not assume that a robot will keep doing its current activity or stay still when stopped.

More recently, in 2009, a robot’s arm again crushed and killed a worker at a Golden State Foods bakery in California. According to the inspection detail, the case is not closed yet, but it appears that Golden State Foods has to pay several fines, over $200,000 total. The incident was chalked up partially to the worker’s lack of precaution. The inspection report reads: “At approximately 7:55 a.m. on July 21, 2009, Employee #1 was operating a robotic palletizer for Golden State Foods, Inc., a food processor and packager for fast food restaurants. She entered the caged robotic palletizer cell while the robotic palletizer was running. She had not deenergized the equipment. Her torso was crushed by the arms of the robotic palletizer as it attempted to pick up boxes on the roller conveyor. She was killed.”

The International Organization for Standardization (ISO) has published 10 standards for industrial robots and drafted two standards for personal care robots. The American National Standards Institute and the Robotic Industries Association have jointly developed detailed safety regulations for industrial robots, which were recently updated to incorporate the ISO’s guidelines.

A newer development for governments to respond to is the automated car. Self-driving cars are legal in California, Nevada, and Florida for testing on roads as long as there is a human behind the wheel. The only Google car accident so far occurred when the car was controlled by a human, as opposed to a computer. When asked who would get ticketed if a Google car ran a red light, Google co-founder Sergey Brin responded, “self-driving cars do not run red lights.”

Currently, however, the United States National Highway Traffic Safety Administration (NHTSA) is working on a set of rules governing the use of self-driving cars. The recommendations include special training to obtain a license for an automatic car and a requirement that those testing the vehicles report all accidents. The NHTSA proposal points out that most cars already have automated features, such as brake pulses during potential skids, automatic braking during potential collisions, and cruise control.

Another liability for robots is hacking. People can hack into not only computers, but also any machine that is part of a network, including cars with the features described above. To illustrate this possibility, computer scientists Charlie Miller and Chris Valasek hacked into a Ford Escape and caused it to crash even when the driver hit the brakes. Robot hacking similarly carries the potential for physical damage.

Increasingly automated machines are also bringing up security questions [1]. Some medical and domestic robots record personal information and behavioral patterns, which privacy laws do not yet address. If information is not kept between the machine and its user, the consequences for medical robots, automated vehicles, or smart houses could be dire. Apps such as Nest, which connect your phone to your house in order to control the temperature, could give a hacker information about your home — and if it were a robot the app was controlling, much more.

Current hacking laws are difficult to apply to the 21st century, let alone beyond. The Computer Fraud and Abuse Act, passed in 1986, has been used to prosecute internet users for finding loopholes in websites without even revealing the information they find.

When we ascribe blame for a crime, we usually ascribe it to an individual who has acted maliciously or carelessly. But those words don’t really apply to robots, which act based on their programming, intended or not. Adapting our laws to robots may require us to rethink agency, or at least to think more about who the agents are in these situations.

–Suzannah Weiss

[1] A Roadmap for U.S. Robotics: From Internet to Robotics. 2013 Edition. March 20, 2013.

The Google Driverless Car: “A Cool Thing that Matters” by Alexa Peseri

Abstract: Google is a company that prides itself on doing “cool things that matter.” One of the company’s most recent ventures is the development of a driverless car. This article explores the way in which the Google car operates, as well as the reasons for Google’s investment in such technology. Google’s vehicle is a robot-controlled Toyota Prius outfitted with several radar sensors, cameras, and laser range-finders to observe traffic. Refined software navigates routes by way of Google Maps. The car offers many potential safety benefits, has been thoroughly tested, and has performed well over more than 500,000 miles of road tests. If the car succeeds in more extensive tests, it will be feasible to reproduce and sell on the market. Although the driverless car has performed well thus far, more fine-tuning must still be done. Potential issues include accidental shut-off, flat tires, cyber security, and liability. However, these issues are likely outweighed by the possible benefits. If proven safe, the driverless car would provide vast benefits to the automobile industry, and to consumers, in the areas of safety and time management [1].


The Driverless Car: What is it, and how does it operate?

Picture a car driven with superhuman vision, instantaneous reflexes, and up-to-date knowledge of roads across the United States. You’re probably picturing something from The Bourne Identity or The Terminator. In fact, Google is developing a car that is almost exactly that. It’s called the Google Car and is driven by a robot. With radar sensors for eyes and ears, range finders for awareness of surrounding cars, and Google Maps at its fingertips, the Google Car may come to redefine the American automobile. It could enhance the safety of roadways, provide a new driving experience, and even allow the handicapped to travel independently. The car has been thoroughly tested and has performed well over more than 500,000 miles of road tests. If the last legal and technical hurdles can be overcome, the Google Car has immense potential to revolutionize travel on the roadways of the world [1].

The most common model of a Google car takes the form of a robotically controlled Toyota Prius. The Toyota Prius is a fully hybrid electric mid-size hatchback, which uses a combination of gasoline and electric battery for power, thus reducing carbon emissions and improving mileage. The year that Google began work on the robot car, the Toyota Prius was ranked #1 most fuel-efficient mid-size car [2]. It has continued to perform well, and has maintained its top rating in 2013 [3]. In addition to Toyota, Audi and Lexus have also embraced the “hands off” approach, as they have also partnered with Google engineers to make self-driving versions of the Audi TT and Lexus RX450h [4]. Google has outfitted these otherwise traditional vehicles with several radar sensors, cameras, and laser range-finders to observe traffic. Advanced software analyzes this data and makes decisions that control every aspect of the car, from steering and navigation to acceleration and braking [1].
Currently, a human “driver” must still be present in the vehicle, with the ability to enable a manual override at any time [1]. A blind person is capable of overriding the car in this way, assuming the fault condition is not detected automatically. If done manually, the driver has to use an emergency stop button, although systems may vary depending on the model of automobile [5]. Cars can also be overridden by external sources. A system similar to the type used by air traffic controllers could be adapted to monitor the safety of autonomous vehicles. Employees working in call centers would be able to anticipate issues that drivers may encounter in their future travels. For autonomous cars, such issues may include impending traffic jams, dead-end roads, or changes in route due to construction zones. External monitoring is advantageous in that it introduces a level of perception that the average human driver could never have. In this way, potentially unsafe situations can be avoided, and the safety of the “hands off” driver, as well as that of the other drivers on the road, is greatly enhanced [6].
The question arises: how comfortable will drivers of autonomous vehicles be, knowing that someone else can control their cars? Cruise control is a comparable advancement in technology. Drivers initially resented their loss of control following the advent of cruise control, but were eventually placated by its efficiency and safety benefits. More recently, researchers have been studying acceptance of adaptive cruise control, which goes a step beyond conventional cruise control in that the system adjusts speed as a function of the distance to the car in front of it. Studies have shown that fuel savings and safety considerations seem to be the major motivators to accept adaptive cruise control [7]. “Adaptive cruise control can be seen as a transition to the self-driving car,” said Bertram Malle, a professor of psychology in the Department of Cognitive, Linguistic, and Psychological Sciences at Brown University. Another example is that of the seat belt, which was not universally accepted when first released. After the public realized the safety benefits of seat belts, however, their use increased and was eventually made mandatory by law. Indeed, safety concerns can often prompt the general public to give up some sense of individual autonomy and, if extreme enough, lead to government intervention [6]. Perhaps the safety benefits of driverless cars will one day motivate the government to draft legislation that prohibits manual driving. Gary Marcus, a writer for The New Yorker, predicted, “Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car” [8].
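The core idea of adaptive cruise control can be sketched in a few lines: hold the driver's set speed when the road ahead is clear, and back off in proportion to how far the car has intruded into a safe following gap. The function, gains, and distances below are an illustrative toy model, not any manufacturer's actual control law:

```python
# Toy adaptive-cruise-control logic: hold the set speed when the gap to
# the lead car is comfortable, and reduce speed proportionally to how far
# inside the safe gap we are. All numbers here are illustrative.

def acc_target_speed(set_speed, gap_m, safe_gap_m=40.0, gain=0.5):
    """Return a target speed (m/s) given the current gap to the car ahead."""
    if gap_m >= safe_gap_m:
        return set_speed                 # open road: hold the set speed
    shortfall = safe_gap_m - gap_m       # how far inside the safe gap
    return max(0.0, set_speed - gain * shortfall)

print(acc_target_speed(30.0, 60.0))  # plenty of room: 30.0
print(acc_target_speed(30.0, 20.0))  # closing in: 20.0
print(acc_target_speed(30.0, 2.0))   # nearly tailgating: 11.0
```

Real systems close this loop continuously from radar range measurements, but the same proportional back-off is the conceptual step between "generic" cruise control and an autonomous vehicle.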

Inspiration for the Google Car:

The concept of a driverless car might seem like a novelty to many, but it has in fact existed for decades. A notable example is Mercedes-Benz’s robotic van, designed by Ernst Dickmanns and colleagues at the Bundeswehr University of Munich in Munich, Germany. In the early 1980s, Dickmanns’ team equipped a 5-ton Mercedes-Benz van with cameras and sensors, similar to the inputs in today’s driverless cars. These and other technologies made it possible for steering, acceleration, and braking to be controlled via computer commands. Mercedes’ robot car, “VaMoRs,” drove entirely on its own in 1986 and reached speeds up to 96 km/h, or roughly 60 mph, the following year [9]. Currently, Google, Bosch LLC, Continental, the Volkswagen Group, Volvo, Audi, Lexus, and others have made notable progress in the realm of autonomous vehicles [10].

Google began working on its first model of an autonomous vehicle after Sebastian Thrun, founder of Google’s Street View, won the Pentagon’s 2005 DARPA Grand Challenge with his driverless car, named Stanley [11]. The DARPA challenge is a federally sponsored competition for autonomous vehicles. Initially founded with the purpose of developing technology for the military, it has since expanded to include vehicles for commercial use. The team leaders of winning vehicles from recent DARPA challenges, as well as Anthony Levandowski, the man who built the first driverless motorcycle, are all working together on Google’s team [12].

Getting the Green Light:

Nevada was the first state to approve driverless cars on roadways, in June 2011, which allowed Google to begin testing [4]. Florida was the next to follow suit, doing so in April 2012 [13]. In September 2012, Governor Jerry Brown of California signed legislation that created safety and performance regulations to test and operate driverless cars on state roads and highways [1]. State laws such as these have allowed Google to test its car thoroughly, and it has performed flawlessly on over 500,000 miles of road tests. The development team has driven cars on winding and traffic-heavy roads, such as the Golden Gate Bridge and San Francisco’s famously meandering Lombard Street [4]. The only Google Car accident recorded thus far was a minor fender-bender in 2011, which occurred while a human was manually operating the car [14].
In order for the Google Car to drive across the country, it will need the permission of all the states, or blanket permission from the federal government. Considering that the DARPA challenge is federally sponsored, the car will likely be given federal permission to be sold and driven across state borders, contingent on its road-test performance. In 2012, National Highway Traffic Safety Administrator David Strickland said, “The development of automated vehicles is a worthy goal” [15]. He explained that the federal government is beginning to look into the safety regulations that would be needed for a country in which driverless cars dominated roadways [15].

Leave It to the Robots:

In 2012, Google’s co-founder Sergey Brin stated, “Self-driving cars will be far safer than human-driven cars” [1]. Considering that human error is the cause of over ninety percent of automobile accidents, he may be correct. In addition to the Google Car’s finely tuned sense of direction and quick reflexes, it offers an escape from several common human distractions, such as text messaging, tiredness, and drunkenness [1]. Not only is the car safer, but it is also a more accessible mode of transport for the handicapped, especially the blind. The car has already been tested with a blind driver inside. Regarding his experience, Steve Mahan, who is blind, exclaimed, “Where this would change my life is to give me the independence and the flexibility, to go to the places I both want to go and need to go, when I need to do those things” [16]. As mentioned above, when operated by its internal technology, the autonomous car has kept a flawless driving record. The spread of driverless cars could create a positive feedback loop: more such cars on the road would mean fewer accidents, validating their safety and increasing their popularity, which in turn would put even more reliable driverless cars on roadways. The overall result would be a ripple effect whereby all drivers, including those who manually operate their cars, experience safer roadways.

Troubleshooting and Potential for Malfunction:

For all its promise, Google’s robot car is not perfect. Some potential issues revolve around accidental shut-off, flat tires, cyber security, and liability. Google’s current focus is on improving sensors and hardware failure support. Sergey Brin drew a parallel between air travel and driving: the technology in both driverless cars and airplanes may experience system failure. Unlike airplanes, however, the robot car eliminates the possibility of human operational error [17]. Currently, though, airspace is a more tractable domain for autonomy because it is highly controlled and regulated, so the analogy between air and road travel is an imperfect fit. Both modes of travel face the potential of exceptional errors, but these errors become magnified on public roads. Much of air travel is already automated, with pilots and air traffic controllers assisting with actions such as landing and handling or preventing problems that might arise. In contrast, autonomy on public roads faces many computational problems regarding uncertainty in sensing and perception, predicting and reacting to what pedestrians and other cars might do, and the other issues mentioned above. Autonomous cars are therefore still far from achieving the one-in-ten-million chance of fatal harm that has been achieved by the top 39 airlines [18]. If a system similar to air traffic control could be created for driverless cars, many road and driving errors could be foreseen and prevented.

Google’s engineering team is especially focused on preparing the car for rough conditions. For example, snow makes it difficult for the cars to “see” the lane markers and other cues needed to maintain safe positioning on the road. The team is working to equip the car for snow-covered roadways, temporary construction signals, and other tricky situations, such as detours, that many drivers encounter [19].

Another major obstacle is that of temporary changes in routes due to construction and accidents. Such road changes may fail to be reflected in the car’s onboard “map,” causing the car to become lost [14]. As mentioned above, however, humans monitoring the driverless cars from afar would be able to foresee these road changes and notify the cars and their respective drivers. In that case, the human driver can then switch to manual control. This issue is also being addressed through a collective database. Google self-driving cars communicate with each other, which makes it only a matter of time before a change gets mapped and cars no longer get lost. The more Google cars there are on the road, the quicker the issue will be fixed. Some may argue that this level of computer communication is sensitive to human hackers. “I think any system is vulnerable, at some level,” said Odest Jenkins, a computer science professor at Brown University, whose research focuses on robot learning from demonstration, or robot LfD. Such hacking has not been demonstrated yet, and if driverless cars are released onto the consumer market, they will be armed with software, and backed by humans, working to prevent such devastating interference [6].

Additional challenges may manifest themselves in the form of construction zones, accident zones, and situations involving human traffic directors [1]. Driverless cars have no issue interpreting traffic signals, speed limits, and proximity to other cars, but they may have difficulty understanding a human directing traffic with hand signals. The cars may become confused when a human’s hand signals conflict with a traffic light or stop sign. To fix this, Google’s engineers are exploring ways to teach the car how to interpret a person’s hand signals, and when to obey those rather than traffic lights. Indeed, several robotics programs across the country are working to hone robots’ gesture and person recognition. One such program, unaffiliated with Google, exists at Brown University, where the Brown Robotics Group has partnered with the iRobot Corporation; researchers from both places are working on enhancing gesture and person recognition [5]. Various “robots,” in the form of cars as well as smaller-scale technologies, have been designed to recognize motions performed in different ways by many humans, but not with 100% accuracy. Hand signals may vary from person to person, and traffic cop to traffic cop, making this an essential obstacle to overcome [5].

Costly Cars:

While the Google car and others like it may have mastered steep roadways, they also come with steep price tags. Approximately $150,000 in equipment is needed for each Google Car. That figure includes a $70,000 LIDAR (Light Detection and Ranging) system. LIDAR is the beam technology that serves as the car’s “eyes” on the road [19]. Prior to the development of LIDAR technology, the highest-performing driverless cars were said to operate autonomously approximately ninety-nine percent of the time. Considering the many hours that an average individual drives in a year, the remaining one percent is actually significant. LIDAR’s hefty price tag thus comes with great innovation: it increases the autonomy of the vehicle from 99 percent up to 99.9 percent by providing high-definition, nearly three-dimensional information about the surrounding environment. The unit spins while various lasers are emitted to collect distance-sensing data [6]. Although very valuable, LIDAR systems make the car too expensive for most consumers, as well as for some of the companies trying to develop driverless cars. Several auto industry suppliers, as well as Chris Urmson, who has worked on the Google Car, have promised that reasonably priced LIDAR and other comparable technology systems are on their way [20]. LIDAR systems are currently being manufactured on the scale of hundreds and thousands, but if this is increased to hundreds of thousands and beyond, the cost will decrease through economies of scale [6]. Fortunately, the National Highway Traffic Safety Administration (NHTSA) values the potential safety benefits of driverless cars and has thus decided to invest in the accompanying technology. This will likely produce more cost-effective and, eventually, affordable systems, at least for the middle and upper socioeconomic classes [20].
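The jump from 99 to 99.9 percent autonomy sounds small, but it is a tenfold reduction in how often a human must intervene. A quick back-of-the-envelope calculation makes the point; the 250 hours of annual driving used here is an assumed figure for illustration, not from the article:

```python
# Back-of-the-envelope: minutes per year a human must take over, at a
# given autonomy fraction. The 250 hours/year of driving is an assumed
# figure used only to illustrate the 99% vs 99.9% comparison in the text.

ANNUAL_DRIVING_MINUTES = 250 * 60  # assumed ~250 hours behind the wheel

def unassisted_minutes(autonomy_fraction):
    """Minutes per year spent outside autonomous operation."""
    return ANNUAL_DRIVING_MINUTES * (1 - autonomy_fraction)

print(round(unassisted_minutes(0.99), 1))   # without LIDAR: 150.0 min/year
print(round(unassisted_minutes(0.999), 1))  # with LIDAR: 15.0 min/year
```

Under these assumptions, LIDAR cuts the human's required attention from roughly two and a half hours per year to a quarter of an hour, which is why the expensive sensor is considered worth its price.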

Although Google’s car has received the most publicity, several other companies have been working to build driverless cars. Some of these companies have reduced input costs by choosing to outfit their cars with fewer lasers. Velodyne, the company contracting with Google and several other driverless car projects, has developed an HDL-64E system with 64 lasers, and an HDL-32E system, with 32 lasers. The latter is less precise, but still far more perceptive than humans. It is less expensive than the HDL-64E, and has been used successfully by a number of companies [21]. Continental is one such company, which has outfitted a VW Passat with more affordable technology. However, Continental’s car has only been tested on 10,000 miles of road, as compared to Google’s nearly 500,000 miles [20].

The issue of liability is one of the critical and unresolved questions about autonomous cars. If insurance companies realize how much more prone to error human drivers are than robots, they are likely to increase premiums on manually driven vehicles. As a result, those without automated cars may have higher premiums than those who own them [14]. This could eventually create a problem of the haves and have-nots, in which members of the lower economic classes cannot afford the expensive autonomous cars and are thus unduly burdened with higher insurance premiums [5]. Premium adjustments such as this are consistent with other safety features that have lowered premiums, such as anti-lock braking system (ABS) brakes and air bags [22].

Robot Cars Displacing Human Driver Jobs:

If the technical, liability, and legal issues are all resolved, robot cars may soon take away chauffeurs’ jobs. Autonomous vehicle technology will eventually move beyond traditional SUVs and sedans to trucks, buses, and agricultural vehicles, reducing and possibly eliminating the need for drivers of public transportation, school buses, and other means of transportation. Professor Jenkins explained that the military is likely to be the first to take advantage of autonomous cars, followed by “smart public transportation,” and ultimately consumer vehicles. There will most likely be a gradual phasing out of the demand for humans employed as drivers in all sectors of automobile transportation [5]. While demand for human drivers will decrease after driverless cars permeate the world of public transportation, the need for manufacturers of autonomous vehicle components, as well as for people to monitor the cars and invoke overrides, will increase. Driverless cars will likely create more jobs than they destroy [6].

The Road Ahead:

The development of the Google Car is certainly in line with the company’s overall mission: “help solve really big problems using technology” by “doing cool things that matter.” Automobile safety is a significant issue in today’s world; in 2010, the National Highway Traffic Safety Administration recorded upwards of 5.4 million police-reported motor vehicle crashes in the United States [23]. As mentioned above, over ninety percent of crashes (4.86 million in 2010) are due to human error [1]. The Google Car has the potential to eliminate human error and drive us down a road of increased efficiency and security [12]. Google’s driverless car offers many potential safety benefits, has been thoroughly tested, and has performed well on lengthy road tests. Currently, the car can flawlessly pick up its driver and passengers, commute to work, and run errands, all via highways and neighborhood streets. Eventually, the technology will likely evolve to handle a greater variety of situations. Ultimately, the Google Car may serve as a fully automated and competent chauffeur [14]. Advancements to increase safety measures and hone person recognition abilities are under development. If realized, the driverless car is expected to permeate the market in a matter of decades, improving safety for all drivers.

References:

[1] Chea, T. California governor signs driverless cars bill [Internet]. The Associated Press; 2012 Sep 26 [cited 2013 Feb 2]. Available from: http://news.yahoo.com/california-governor-signs-driverless-cars-bill-225332278–finance.html

[2] United States Department of Energy. 2005 Compare Side by Side: Fuel Economy [Internet]. Environmental Protection Agency; 2005 [cited 2013 March 2]. Available from: http://www.fueleconomy.gov/feg/Find.do?action=sbs&id=20934

[3] United States Department of Energy. 2013 Most and Least Efficient Cars [Internet]. Environmental Protection Agency; 2013 April 11 [cited 2013 March 1]. Available from: http://www.fueleconomy.gov/feg/best/bestworstNF.shtml

[4] Newman, G. A Future Filled with Driverless Cars [Internet]. Insurance Journal; 2013 Feb 11 [cited 2013 Mar 2]. Available from: http://www.insurancejournal.com/magazines/features/2013/02/11/280151.htm

[5] Jenkins, O. 2013 Mar 26.

[6] Bertam, M., Jenkins, O., & Littman, M. Interviewed by Peseri, Alexandra. 2013 Apr 11.

[7] Use patterns among early adopters of adaptive cruise control [Internet]. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23156618

[8] Marcus, G. Moral Machines [Internet]. The New Yorker; 2012 Nov 27 [cited 2013 Apr 15]. Available from: http://www.newyorker.com/online/blogs/newsdesk/2012/11/google-driverless-car-morality.html

[9] Wikipedia. Ernst Dickmanns [Internet]. Wikipedia; 2013 Mar 13 [cited 2013 Apr 11]. Available from: http://en.wikipedia.org/wiki/Ernst_Dickmanns

[10] Mariacher, E. 3 driverless cars trends in 2012 [Internet]. Blogger; 2013 Jan 2 [cited 2013 Apr 13]. Available from: http://driverless-cars.blogspot.com/2013/01/3-driverless-cars-trends-in-2012.html#.UWoAenCrJ0o

[11] Haglage, A. Google, Audi, Toyota, and the Brave New World of Driverless Cars [Internet]. The Daily Beast; 2013 Jan 16 [cited 2013 Feb 22]. Available from: http://www.thedailybeast.com/articles/2013/01/16/google-audi-toyota-and-the-brave-new-world-of-driverless-cars.html

[12] Thrun, S. What We’re Driving At [Internet]. Google Official Blog; 2010 Oct 09 [cited 2012 Dec 19]. Available from: http://googleblog.blogspot.com/2010/10/what-were-driving-at.html

[13] Valdes, A. Florida embraces self-driving cars, as engineers and lawmakers prepare for the new technology [Internet]. News Channel 5: WPTV; 2012 May 7 [cited 2013 Mar 9]. Available from: http://www.wptv.com/dpp/news/state/florida-embraces-self-driving-cars-as-engineers-and-lawmakers-prepare-for-the-new-technology

[14] Blodget, H. Here Are Some Of The Problems Google Is Having With Its Self-Driving Cars [Internet]. Business Insider; 2013 Mar 3 [cited 2013 Mar 9]. Available from: http://www.businessinsider.com/google-self-driving-car-problems-2013-3#ixzz2N0lv9uEc

[15] DiCaro, M. As Gov’t Considers Regulations, Autonomous Car Boosters Show Off Plans [Internet]. Transportation Nation; 2012 Oct 23 [cited 2013 Mar 17]. Available from: http://transportationnation.org/2012/10/23/as-govt-considers-regulations-autonomous-cars-boosters-show-off-plans/

[16] Author Unlisted. Self-Driving Car Test: Steve Mahan [Internet]. Google Jobs; 2012 [cited 2013 Feb 10]. Available from: http://www.google.com/about/jobs/lifeatgoogle/self-driving-car-test-steve-mahan.html

[17] Tam, D. Google’s Sergey Brin: “You’ll ride in robot cars within 5 years” [Internet]. CNET; 2012 Sep 25 [cited 2013 Mar 1]. Available from: http://news.cnet.com/8301-11386_3-57520188-76/googles-sergey-brin-youll-ride-in-robot-cars-within-5-years/

[18] OAG Aviation. OAG Aviation & PlaneCrashInfo.com accident database, 20 years of data (1993–2012) [Internet]. OAG Aviation; 2013 [cited 2013 Apr 4]. Available from: http://planecrashinfo.com/cause.htm

[19] Urmson, C. The self-driving car logs more miles on new wheels [Internet]. Google Official Blog; 2012 Aug 07 [cited 2013 Mar 2]. Available from: http://googleblog.blogspot.co.uk/2012/08/the-self-driving-car-logs-more-miles-on.html

[20] Priddle, A. & Woodyard, C. Google discloses costs of its driverless car tests [Internet]. USA Today; 2012 Jun 14 [cited 2013 Mar 14]. Available from: http://content.usatoday.com/communities/driveon/post/2012/06/google-discloses-costs-of-its-driverless-car-tests/1#.UVo-03BWB0p

[21] San Francisco Gate. Velodyne’s LiDAR division doubles production capacity to meet demand [Internet]. Hearst Communications Inc.; 2013 Mar 11 [cited 2013 Apr 11]. Available from: http://www.sfgate.com/business/prweb/article/Velodyne-s-LiDAR-division-doubles-production-4344316.php

[22] Sochon, P. Best Practice Workplace Driving Safety Programs in NSW [Internet]. 65th Road Safety Congress; The Royal Society for the Prevention of Accidents (RoSPA); 2000 Mar 8 [cited 2013 May 7]. Available from: http://www.rospa.com/RoadSafety/conferences/congress2000/proceedings/sochon.pdf

[23] National Highway Traffic Safety Administration. Crashes [Internet]. 2010 [cited 2013 Apr 14]. Available from: http://www-fars.nhtsa.dot.gov/Main/DidYouKnow.aspx