External Research and Programs: Awards

Human Robot Interaction Awards

Microsoft Research announced the eight recipients of the Human-Robot Interaction (HRI) Awards. HRI is a large field with many active research projects at universities and other labs around the world. Our intention is to focus attention on the general paradigm shift from "robots as tools" to "social robots," and to consider HRI in the context of the many other computing devices deployed in the modern human environment, including computers, smartphones, and the World Wide Web.

Successful research in this area leads to results such as practical cognitive models of humans that can be used by those programming robots in this rich information-technology environment; tools that programmers can use to ensure safe interactions among humans, robots, and other computing devices; and software design patterns for adaptive human-robot interfaces in such an environment.

We have also made a large range of advanced software development technologies available for such research, including Microsoft Robotics Studio with its large library of robot services.

Human Robot Interaction Award Recipients

Snackbot: A Service Robot
Jodi Forlizzi, Sara Kiesler
Carnegie Mellon University, United States

Until now, the emerging field of human-robot interaction has been divided into two main research camps. One camp is interested in developing social robots that engage people in a natural way; most of the well-known robots of this type have humanoid features but are not mobile and do not accomplish tasks. The other camp is interested in autonomous, mobile robots that do work; these robots are generally mobile but not social. Our interest is in spanning these camps, that is, in mobile social robots that deliver services. In many applications, the Web and mobile phone connections will be critical in the delivery of these services. Our research project (Project on People and Robots, www.peopleandrobots.org) is developing a robot called Snackbot. Snackbot will roam the halls of two large office buildings at Carnegie Mellon University, selling (or in some cases, giving away) snacks and performing other services. Our project will link our current robot prototype seamlessly to Web, e-mail, instant messaging, and mobile services. We will deploy the robot in a field study to understand the uptake of robotic products and services.

Human-Robot-Human Interface for an Autonomous Vehicle in Challenging Environments
Ioannis Rekleitis, Gregory Dudek
McGill University, Canada

The proposed work addresses the problem of one or more operators interfacing with a robot (AQUA, www.aquarobot.net) that operates both on land and underwater. Many challenges arise because AQUA moves over a variety of terrains and provides only limited sensory feedback: video footage and the state of an IMU. Current operations require a skilled operator capable of guiding the robot in either walking or swimming mode. We propose to implement a user interface that exploits the strengths of Microsoft Robotics Studio (MSRS) to provide both a means of controlling the robot and a visualization tool for interpreting the visual feedback. This work would also extend a new method for communicating with AQUA when a direct link to a controlling console is not available; the method is based on cue cards presented to AQUA's vision sensor, instructing the vehicle to perform high-level actions. Whereas in land operations communication between an operator and a vehicle is easy to implement in a variety of ways (for example, wireless or wired links), underwater communication is far more restrictive in terms of cost, bulk, energy, and bandwidth.
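
To make the cue-card idea concrete, here is a minimal sketch of how detected cards could be mapped to high-level vehicle actions; the card IDs, action names, and confidence threshold are illustrative assumptions, not the actual AQUA command vocabulary.

```python
# Illustrative sketch of a cue-card command protocol for an underwater
# vehicle. Card IDs, action names, and the detector are hypothetical;
# the real AQUA system is not reproduced here.

from dataclasses import dataclass
from typing import Optional

# Hypothetical mapping from cue-card IDs to high-level actions.
CARD_ACTIONS: dict[int, str] = {
    1: "hover_in_place",
    2: "follow_diver",
    3: "ascend_to_surface",
    4: "record_video_30s",
}

@dataclass
class Detection:
    card_id: int
    confidence: float

def interpret_card(detection: Optional[Detection],
                   min_confidence: float = 0.8) -> Optional[str]:
    """Return the high-level action for a detected cue card, or None.

    A confidence threshold guards against spurious detections in murky
    water, where visual feedback is unreliable.
    """
    if detection is None or detection.confidence < min_confidence:
        return None
    return CARD_ACTIONS.get(detection.card_id)

# Example: the vision system (not shown) reports card 2 with high confidence.
action = interpret_card(Detection(card_id=2, confidence=0.93))
print(action)  # -> "follow_diver"
```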

Personal Digital Interfaces for Intelligent Wheelchairs
Nicholas Roy
Massachusetts Institute of Technology, United States

Many countries are experiencing a shortage of health care professionals. Robotic assistive technology is appealing as a way to alleviate the resulting strain on caregivers, since autonomous mobile robots can be readily used in a wide range of settings. One such example of mobile assistive technology is an intelligent wheelchair. Unfortunately, although robotic wheelchairs are common in computer science laboratories, they are far from ready for daily use in health care. A key challenge for these devices is the need for natural, intuitive interfaces that can be easily used by someone without special training. We propose to create a human-robot interaction system for an intelligent wheelchair based on a hand-held Windows Mobile PDA. The PDA will provide a remote microphone and speech processor, and will act as a remote display, providing a single, flexible point of interaction with the wheelchair.

Incorporating a remote device into the system creates new human-robot interaction challenges, because the spatial context of the interaction varies with the locations of the wheelchair, the hand-held device, and the resident. To address these challenges and provide natural, intuitive interaction with the wheelchair, we propose to develop new inference and planning algorithms, along with learning algorithms that are robust to the uncertainty that can result from interaction between a physically separated robot and human. We will demonstrate this research as part of an ongoing collaboration with a specialized care residence in Boston.
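
As a concrete illustration of inference under this kind of uncertainty, the following sketch performs a simple Bayesian belief update over a user's intended destination given noisy speech observations. The destinations, recognizer noise model, and probabilities are illustrative assumptions, not the project's actual algorithms.

```python
# Minimal sketch of Bayesian intent inference for a speech-driven
# wheelchair, assuming a small set of destinations and a noisy speech
# recognizer. All names and probabilities are illustrative.

DESTINATIONS = ["kitchen", "bedroom", "lounge"]

def update_belief(belief: dict[str, float],
                  observation: str,
                  p_correct: float = 0.8) -> dict[str, float]:
    """One Bayes-filter step: P(intent | obs) is proportional to
    P(obs | intent) * P(intent).

    The recognizer is modeled as reporting the true destination with
    probability p_correct, and any other destination uniformly otherwise.
    """
    n = len(belief)
    posterior = {}
    for dest, prior in belief.items():
        likelihood = p_correct if dest == observation else (1 - p_correct) / (n - 1)
        posterior[dest] = likelihood * prior
    total = sum(posterior.values())
    return {d: p / total for d, p in posterior.items()}

# Start from a uniform belief, then fold in two noisy utterances.
belief = {d: 1 / len(DESTINATIONS) for d in DESTINATIONS}
for heard in ["kitchen", "kitchen"]:
    belief = update_belief(belief, heard)
print(belief)  # belief mass concentrates on "kitchen"
```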

Human-Robot Interaction to Monitor Climate Change via Networked Robotic Observatories
Dezhen Song
Texas A&M University, United States
Ken Goldberg
University of California, Berkeley, United States

Global climate change and the carbon cycle are not well understood, and new tools are needed for improved monitoring of remote environments. Our goal is to design, implement, and evaluate a new Human-TeleRobot system that will engage people all over the world in systematically documenting climate change effects on wildlife and natural environments, and that will provide a test bed for the study of human-robot interaction. Currently, scientific study in situ requires vigilant observation of detailed changes over months or years. In remote and/or inhospitable locations, observation is an arduous, expensive, dangerous, and lonely experience for scientists. We propose a new type of human-robot system that will allow anyone with a browser to participate in viewing and collecting data via the Internet. The Human Robot Interface will combine telerobotic cameras and sensors with a competitive game in which "players" score points by taking photos and classifying the photos of others.
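
One way such a scoring game could work is sketched below: players earn points when their classification of a photo agrees with the emerging majority label. The point values and consensus rule here are illustrative assumptions, not the project's actual game design.

```python
# Illustrative scoring sketch for a photo-classification game: players
# who classify a photo in agreement with the majority label earn points.
# Point values and the consensus rule are assumptions.

from collections import Counter

def score_photo(classifications: dict[str, str],
                agree_points: int = 10) -> dict[str, int]:
    """Award points to each player who classified this photo.

    classifications maps player name -> label (e.g. "bear", "no animal").
    Players whose label matches the majority label earn agree_points.
    """
    if not classifications:
        return {}
    majority_label, _ = Counter(classifications.values()).most_common(1)[0]
    return {player: (agree_points if label == majority_label else 0)
            for player, label in classifications.items()}

votes = {"alice": "bear", "bob": "bear", "carol": "deer"}
print(score_photo(votes))  # {'alice': 10, 'bob': 10, 'carol': 0}
```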

FaceBots: Robots Utilizing and Publishing Social Information in Facebook
Nikolaos Mavridis, Tamer Rabie
United Arab Emirates University, United Arab Emirates

Although existing robotic systems are interesting to interact with in the short term, it has been shown that after some weeks of quasi-regular encounters, humans gradually lose interest, and meaningful longer-term human-robot relationships are not established. An underlying hypothesis driving the proposed project is that such relationships can be significantly enhanced if the human and the robot gradually create a pool of shared episodic memories that they can co-refer to, and if they are both embedded in a social web of other humans and robots whom they both know and encounter frequently. Thus, we propose to use Facebook, a highly successful online social networking resource for humans, to enhance longer-term human-robot relationships by helping to address these two prerequisites.

An existing robot equipped with face recognition, a simple dialog system, and a real-time Facebook connection will be deployed and will encounter humans in the environment of our lab. The robot will create a personal entry for itself on Facebook. Upon meeting a human it has not encountered before, it will ask for his or her name and search for the person on Facebook; the person's Facebook entries (age, home town, profession) will then serve as a starting point for simple dialogs. The robot's interactions with humans will be logged as events in the robot's own Facebook entry ('today I met with John in the afternoon'), and the robot's circle of acquaintances will also be kept there ('my friends are...'). In later encounters, the robot will use memories of past encounters with the human as points of conversation ('remember last Sunday when...'). Because the human and the robot are embedded in a social web, co-acquaintances will be exploited too: encounters with, and information about, mutual friends will be mentioned ('I saw Michael yesterday'). Finally, tagged photos of faces taken by the robot will be posted on Facebook, and existing human-tagged photos already on Facebook will be used as a wider training set for each human, improving the robot's face recognition. Last but not least, we expect that in the future the robot's circle of friends will include not only humans but also other robots.

The system is thus expected to achieve two significant novelties: arguably being the first robot that is truly embedded in a social web, and being the first robot that can purposefully exploit and create social information that is available online. Furthermore, it is expected to provide empirical support for our main driving hypothesis: that the formation of shared episodic memories within a social web can lead to more meaningful long-term human-robot relationships.
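
The shared-episodic-memory idea can be sketched as a small data model: the robot logs each encounter and later mines those episodes, together with mutual acquaintances, for conversation starters. Everything below (the classes, the phrasing, and the local store standing in for the real-time Facebook connection) is an illustrative assumption, not the project's implementation.

```python
# Minimal sketch of shared episodic memory for a social robot: log
# encounters, then seed dialog from past episodes and mutual friends.

from dataclasses import dataclass, field

@dataclass
class Episode:
    person: str
    date: str
    summary: str  # e.g. "talked about the demo"

@dataclass
class SocialMemory:
    episodes: list[Episode] = field(default_factory=list)
    friends_of: dict[str, set[str]] = field(default_factory=dict)

    def log_encounter(self, person: str, date: str, summary: str) -> None:
        self.episodes.append(Episode(person, date, summary))

    def conversation_starters(self, person: str,
                              robot: str = "FaceBot") -> list[str]:
        starters = []
        # Co-refer to shared past episodes ('remember last Sunday when...').
        for ep in self.episodes:
            if ep.person == person:
                starters.append(f"Remember {ep.date}, when we {ep.summary}?")
        # Exploit mutual acquaintances ('I saw Michael yesterday').
        mutual = (self.friends_of.get(person, set())
                  & self.friends_of.get(robot, set()))
        starters.extend(f"You and I both know {m}." for m in sorted(mutual))
        return starters

memory = SocialMemory()
memory.log_encounter("John", "last Sunday", "talked about the demo")
memory.friends_of = {"John": {"Michael"}, "FaceBot": {"Michael"}}
print(memory.conversation_starters("John"))
```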

Multi-Touch Human-Robot Interaction for Disaster Response
Holly Yanco
University of Massachusetts Lowell, United States

In 2005, the response to Hurricane Katrina exposed several technological gaps. At a time when satellite photography is becoming ubiquitous in our digital lives, it was surprising to find that many response groups were still using hand-drawn paper maps. Additionally, advanced technologies such as robot cameras could send video only to the operators at the site, not immediately to the command staff. These gaps were largely due to the lack of a common computing platform bringing all of this information to the command staff. Such a platform would need to support many different personnel from different backgrounds and areas of expertise. Our proposed research intends to bridge these technological gaps through the use of collaborative tabletop multi-touch displays such as the Microsoft Surface. We will develop an API between the multi-touch display and Microsoft Robotics Studio that will allow us to create a multi-robot interface for command staff to monitor and interact with all of the robots deployed at a disaster response.
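
A rough sketch of the shape such a display-to-robots interface might take: each robot publishes a status message, and the tabletop aggregates them into one shared view in which a touch dispatches the nearest robot. All message fields and the dispatch rule are illustrative assumptions, not the proposed API.

```python
# Illustrative sketch of a shared command table aggregating multi-robot
# status for a multi-touch display. All names and fields are hypothetical.

from dataclasses import dataclass

@dataclass
class RobotStatus:
    robot_id: str
    position: tuple[float, float]  # map coordinates
    battery: float                 # 0.0 - 1.0
    video_url: str                 # live feed shown on the shared display

class CommandTable:
    """Aggregates robot state for a shared multi-touch map."""

    def __init__(self) -> None:
        self.robots: dict[str, RobotStatus] = {}

    def on_status(self, status: RobotStatus) -> None:
        self.robots[status.robot_id] = status

    def on_touch(self, map_point: tuple[float, float]) -> str:
        """Interpret a touch as 'send the nearest robot here'."""
        nearest = min(
            self.robots.values(),
            key=lambda r: (r.position[0] - map_point[0]) ** 2
                        + (r.position[1] - map_point[1]) ** 2,
        )
        return f"goto {map_point} -> {nearest.robot_id}"

table = CommandTable()
table.on_status(RobotStatus("uav1", (2.0, 3.0), 0.8, "http://example/1"))
table.on_status(RobotStatus("ugv2", (9.0, 1.0), 0.6, "http://example/2"))
print(table.on_touch((8.0, 2.0)))  # dispatches ugv2, the nearest robot
```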

Survivor Buddy: A Web-Enabled Robot as a Social Medium for Trapped Victims
Robin Murphy, Jenny Burke
University of South Florida, United States
Clifford Nass
Stanford University, United States

We believe that an important use of social robots is assisting humans who will depend on a robot for long periods of time. One example is point-of-injury care, where a robot interacts, on behalf of rescuers and medical personnel, with a victim trapped in a car crash or an earthquake or pinned down by sniper fire. Other examples are shut-ins such as the elderly or the disabled. Our prior work suggests that the dependent person will treat the robot as a social medium; that is, the robot will be both a medium to the "outside" world and a local, independent entity devoted to the victim (for example, a buddy). One function of the medium is to provide two-way audio communication between the survivor and the emergency response personnel, but more interesting capabilities emerge by fully exploiting Web applications. For example, responders could play therapeutic music with a beat designed to regulate heartbeat or breathing. Consider also that two trapped Australian miners requested MP3 players with a Foo Fighters album while waiting for rescue. A Web-enabled robot with an LCD screen could permit the survivor to videoconference with responders (or family), watch live TV or movies, or listen to music. The idea is that a Web-enabled, multimedia robot allows (i) the survivor to take some control over the situation and find a soothing activity while waiting for extrication, and (ii) responders to support and influence the state of mind of the victim. This project relies on .NET and Microsoft Robotics Studio.

Prosody Recognition for Human-Robot Interaction
Brian Scassellati
Yale University, United States

Vocal prosody is the information contained in one's tone of voice that conveys affect; it is not what you say, but how you say it. Prosody is a critical aspect of human-human interaction and will allow naïve, untrained users to provide social learning feedback to a robot in human-robot interactions. Few technologies currently exist to support automatic affect recognition. Without the ability to recognize human affect from natural social behaviors, a robot or computer has no opportunity to modulate its behavior, except when explicitly programmed or controlled by a human programmer. In order to move beyond direct control of robots toward autonomous social interaction between humans and robots, robots must be able to construct models of human affect by indirect, social means. We propose to build a novel prosody recognition algorithm for release as a component for Microsoft Robotics Studio.
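
To indicate the kind of signal processing involved, the sketch below computes common utterance-level prosodic features: a crude pitch estimate from the autocorrelation peak of each frame, plus energy statistics. The feature set and parameters are generic assumptions, not the proposed recognition algorithm.

```python
# Illustrative prosodic feature extraction: per-frame pitch (via
# autocorrelation) and RMS energy, summarized over an utterance.

import numpy as np

def frame_energy(frame: np.ndarray) -> float:
    """Root-mean-square energy of one frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def frame_pitch(frame: np.ndarray, sr: int,
                fmin: float = 75.0, fmax: float = 400.0) -> float:
    """Crude pitch estimate in Hz from the autocorrelation peak."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def prosody_features(signal: np.ndarray, sr: int = 16000,
                     frame_len: int = 512) -> dict[str, float]:
    """Utterance-level summary: pitch and energy mean/variability."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, frame_len)]
    pitches = np.array([frame_pitch(f, sr) for f in frames])
    energies = np.array([frame_energy(f) for f in frames])
    return {
        "pitch_mean": float(pitches.mean()),
        "pitch_range": float(pitches.max() - pitches.min()),
        "energy_mean": float(energies.mean()),
        "energy_var": float(energies.var()),
    }

# Sanity check on a synthetic 200 Hz tone: pitch_mean should be near 200.
t = np.arange(16000) / 16000.0
print(prosody_features(np.sin(2 * np.pi * 200 * t)))
```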
