Our Research Projects

Interactive and Collaborative Robot-Assisted Emergency Evacuation

During an emergency, evacuees must make quick decisions, so they tend to rely on default decision-making that may put them at risk, such as exiting the way they entered, following a crowd, or sheltering in place. When a crowd attempts to exit through a single exit, choke points and crowd congestion may impede the safe flow of evacuees, potentially resulting in a stampede of people and the loss of human lives. Mobile robots have been increasingly deployed as assistants on city streets and in hotels, shopping centers and hospitals. The future ubiquity of these systems offers an unprecedented opportunity to revolutionize how people are evacuated from dangerous situations. In particular, when compared with traditional emergency infrastructure technologies such as fire alarms and smoke detectors, mobile robots can achieve better situation awareness and use this information to expedite evacuation and enhance safety. Additionally, mobile robots can be used in risky and life-threatening situations, such as chemical spills or active shooter scenarios, which present dangers to human first responders. This project will develop the first embodied multi-robot evacuation system where multiple mobile robots, originally tasked for different purposes, serve as emergency evacuation first responders leading people to safety. [Project Page]

Research Products

    • Effective Robot Evacuation Strategies in Emergencies [PDF]
    • The Future of Evacuation: Developing Robot First Responders [PDF]
    • Exploring the Effect of Explanations during Robot-Guided Emergency Evacuation [PDF]
    • Aiding Emergency Evacuations Using Obstacle-Aware Path Clearing [PDF]

Few-Shot Incremental Learning

For many applications, robots will need to be incrementally trained to recognize the specific objects needed for an application. Imagine, for example, a domestic robot tasked with locating and organizing household items. We would like the robot's non-expert owner to be able to teach it which items to organize, and we recognize that those items may change over time. Although it might be possible to train the system on an enormous corpus containing a vast number of objects, in the hope that every object the robot will one day be asked to organize appears in the dataset, this approach seems destined for failure. Ideally, the robot should be taught about important objects incrementally and, because people will demand quick results, from only a few examples. We seek to develop a practical system that allows a novice human to teach a robot different object categories incrementally using only a small set of visual examples provided by the person. We refer to this problem as Few-Shot Incremental Learning. [Project Page]

Research Products

    • Cognitively-Inspired Model for Incremental Learning Using a Few Examples [PDF] [Code]
    • Tell me what this is: Few-Shot Incremental Object Learning by a Robot [PDF] [Code]

RGB-D Indoor Scene Classification

Classifying images taken from indoor scenes is an important area of research. The development of an accurate indoor scene classifier has the potential to improve indoor localization and decision-making for domestic robots, offer new applications for wearable computer users, and generally result in better vision-based situation awareness, thus impacting a wide variety of applications. Yet high intra-class variance and low inter-class variance make indoor scene classification an extremely challenging task. To cope with this problem, we propose a clustering approach, inspired by the concept learning model of the hippocampus and the neocortex, to generate clusters and centroids for different scene categories. Test images depicting different scenes are classified by using their distance to the closest centroids (concepts). Modeling RGB-D scenes as centroids not only leads to state-of-the-art classification performance on benchmark datasets (SUN RGB-D and NYU Depth V2), but also offers a method for inspecting and interpreting the space of centroids. Inspection of the centroids generated by our approach on RGB-D datasets leads us to propose a method for merging conceptually similar categories, resulting in improved accuracy for all approaches. [Project Page]
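
As a rough illustration of the classification step only, the sketch below assigns a feature vector to the category of its nearest centroid, allowing each category to own several centroids (concepts). The feature vectors, centroid values, and function name are hypothetical placeholders; in the actual system, features come from a learned representation and the centroids are produced by the hippocampus-inspired clustering procedure.

```python
import numpy as np

def classify_by_centroid(feature, centroids):
    """Assign a scene feature vector to the category of the nearest centroid.

    centroids: dict mapping category name -> array of centroid vectors
               (each category may own several centroids, i.e. concepts).
    """
    best_category, best_dist = None, float("inf")
    for category, cents in centroids.items():
        # Distance to the closest centroid owned by this category.
        d = np.linalg.norm(cents - feature, axis=1).min()
        if d < best_dist:
            best_category, best_dist = category, d
    return best_category

# Toy 2-D example (real features would be high-dimensional).
centroids = {
    "kitchen": np.array([[0.0, 1.0], [0.2, 0.8]]),
    "bedroom": np.array([[1.0, 0.0]]),
}
print(classify_by_centroid(np.array([0.1, 0.9]), centroids))  # kitchen
```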

Research Products

    • Centroid Based Concept Learning for RGB-D Indoor Scene Classification [PDF] [Code]

Pedestrian Pattern Dataset

We present the Pedestrian Pattern Dataset for autonomous driving. The dataset was collected by repeatedly traversing the same three routes for one week, with traversals beginning at fixed timeslots throughout the day. The purpose of the dataset is to capture the patterns of social and pedestrian behavior along the traversed routes at different times and, eventually, to use this information to predict the risk associated with autonomously traveling along different routes. The dataset contains Full HD video and GPS data for each traversal. The Fast R-CNN pedestrian detection method is applied to the captured videos to count the number of pedestrians in each video frame and thereby assess the density of pedestrians along a route. By providing this large-scale dataset to researchers, we hope to accelerate not only research on estimating risk, both to the public and to the autonomous vehicle, but also research on long-term vision-based localization of mobile robots and the autonomous vehicles of the future. Here is the link to the Pedestrian Pattern Dataset on Dropbox: [Pedestrian_Pattern_Dataset]
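
The per-frame density measure can be sketched as a simple count of confident person detections. The tuple format below is a simplifying assumption (actual Fast R-CNN outputs depend on the framework used), and the function name and threshold are illustrative:

```python
def count_pedestrians(detections, score_threshold=0.5):
    """Count confident 'person' detections in one video frame.

    detections: list of (label, score, bbox) tuples, a stand-in for the
    output of an object detector such as Fast R-CNN. Averaging these
    counts over a traversal gives a pedestrian-density estimate.
    """
    return sum(1 for label, score, _ in detections
               if label == "person" and score >= score_threshold)

# Hypothetical detector output for a single frame.
frame = [("person", 0.92, (10, 20, 50, 120)),
         ("person", 0.41, (200, 30, 240, 130)),   # below threshold
         ("car", 0.88, (300, 40, 420, 160))]
print(count_pedestrians(frame))  # 1
```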

Research Products

    • “Pedestrian Pattern Dataset.” [PDF]

Velocity-Based Potential Field for Collision Avoidance of Autonomous Aerial Vehicles

(joint work with Jack Langelaan)

The projected increase in the number of uninhabited air vehicles (for civilian applications such as package delivery and emergency response) and the potential rise of personal air vehicles means that the airspace will become very crowded. Safely managing these aircraft will require scalable, safe methods for collision avoidance.

The primary focus of this work is to develop a method that is scalable to multiple vehicles and can be implemented in real time. State-of-the-art obstacle-avoidance techniques produce good results with static obstacles. However, as the number of vehicles and dynamic obstacles increases, the complexity of these algorithms grows, making them impractical for online implementation.

We propose a velocity-based approach, derived from potential fields, for obstacle avoidance. It uses the velocity of each aerial vehicle to generate a spherical safe zone around the vehicle. Other vehicles (which act as dynamic obstacles) are repelled by the safe zone. Provided the vehicles are able to follow the commanded velocities, they will safely reach their goal locations.
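
A minimal sketch of the idea, assuming a fixed-radius safe zone and illustrative gains: the commanded velocity is an attraction toward the goal plus a repulsion from any vehicle inside the safe zone. (In the actual method the safe zone is generated from the vehicle's velocity, and the parameter values here are placeholders, not those used in the papers.)

```python
import numpy as np

def commanded_velocity(pos, goal, others, safe_radius=5.0,
                       cruise_speed=1.0, repulse_gain=2.0):
    """Velocity command for one vehicle: goal attraction plus repulsion
    from every other vehicle inside the spherical safe zone.

    NOTE: safe_radius is fixed here for simplicity; the papers derive
    the safe zone from the vehicle's velocity. All gains are illustrative.
    """
    to_goal = goal - pos
    # Attractive term: cruise toward the goal at a nominal speed.
    v = cruise_speed * to_goal / (np.linalg.norm(to_goal) + 1e-9)
    for other in others:
        away = pos - other
        d = np.linalg.norm(away)
        if d < safe_radius:
            # Repulsion grows the deeper the intruder penetrates the zone.
            v += repulse_gain * (safe_radius - d) / safe_radius \
                 * away / (d + 1e-9)
    return v

# A vehicle heading toward a goal with one intruder off to the side:
pos, goal = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
v = commanded_velocity(pos, goal, [np.array([1.0, 1.0, 0.0])])
# The command gains a component pushing the vehicle away from the intruder.
```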

The method is first implemented in simulation, and its scalability is tested with around 50 vehicles at a time. The approach is then tested in an indoor environment with multiple UAVs avoiding one another. Results from both simulation and physical experiments are presented. Metrics such as separation distance, closest approach, and the number of collisions are used to demonstrate the feasibility of this approach.

Research Products

  • “Velocity based potential field method for collision avoidance of autonomous aerial vehicles.” [pdf]
  • “Human intuitable collision avoidance for autonomous and semi-autonomous aerial vehicles.” [pdf]

Understanding Human-Robot Trust

(joint work with Ayanna Howard)

The phenomenon of trust has been seriously explored by numerous researchers for decades. Moreover, the notion of trust is not limited to interpersonal interaction. Rather, trust underlies the interactions of employers with their employees, banks with their customers, and governments with their citizens. In many ways, trust is a precursor to a great deal of normal interpersonal interaction. For interactions involving humans and robots, an understanding of trust is particularly important. Because robots are embodied, their actions can have serious consequences for the humans around them. A great deal of research is currently focused on bringing robots out of labs and into people’s homes and workplaces. These robots will interact with humans, such as children and the elderly, who are unfamiliar with the limitations of a robot. It is therefore critical that human-robot interaction research explore the topic of trust.

In contrast to much of the prior work on trust, the research presented here focuses on trust in a robot during physically risky situations. Our prior work has demonstrated that results from situations involving monetary risk may not extend to situations involving physical risk. Our preferred application domain is emergency evacuation, but many of the results apply to a variety of other domains.

We are particularly interested in exploring overtrust: situations in which people place too much trust in robotic systems. Overtrust tends to cause people to misuse robotic systems, for instance by believing that the robot has knowledge or abilities that it does not. We are actively exploring methods to prevent overtrust. We are also actively investigating the impact of robot trust repair: situations in which a robot tries to repair trust by apologizing or promising to do better in the future. Our approach to trust is not limited to understanding when and why people trust robots. Rather, we are also actively developing computational frameworks that will allow a robot to evaluate whether and when it should trust humans and other robots.

This research has been supported by the Air Force Office of Sponsored Programs.

Research Products
  • “The Effect of Robot Performance on Human-Robot Trust in Time-Critical Situations” [pdf]
  • “Towards Robots that Trust: Human Subject Validation of the Situational Conditions for Trust” [pdf]
  • “Overtrust of Robots in Emergency Evacuation Scenarios” [pdf]
  • “Timing is Key For Robot Trust Repair” [pdf]
  • “Recognizing Situations that Demand Trust” [pdf]
  • “Investigating human-robot trust in emergency scenarios: methodological lessons learned” [pdf]
  • “When should a robot apologize? understanding how timing affects human-robot trust repair” [pdf]
Videos
  • Emergency Evacuation Experiment Overview [youtube]
  • Emergency Evacuation Experiment Longer version [youtube]

Past Projects

Robot Deception

Deception has a long and important history with respect to the study of intelligent systems. Primatologists note that the use of deception serves as an important potential indicator of theory of mind. From a roboticist’s perspective, the use and detection of deception is an important area of study, especially with respect to military domains. But what is deception? Bond and Robinson define deception as a false communication that tends to benefit the communicator. In this project we use both game theory and interdependence theory as tools for exploring the phenomenon of deception. More specifically, we use an interdependence-theory framework and game-theoretic notation to develop algorithms that allow a robot or artificial agent to recognize situations that warrant deception and to select the best deceptive strategy given knowledge of the mark (the individual being deceived). We use both simulation and experiments involving real robots to test the hypothesis that the effectiveness of a deceiver’s strategy is related to the amount of knowledge the deceiver has concerning the mark. Our methodological approach of moving from definition, to representation, to algorithm ensures the general applicability of our results to robots, agents, or possibly humans. Moreover, our exploration of the phenomenon of deception suggests methods by which deception can be reduced. This project also considers the ethical ramifications of creating robots capable of deception.

There has been a lot of international interest in this topic. Here are several translations:  Romanian translation (courtesy of azoft), Russian translation (courtesy of Coupofy), Danish translation (courtesy of Nastasya Zemina), Vietnamese translation (courtesy of Ngoc Thao Nguyen).

Research Products
  • “Lies and Deception: Robots that use Falsehood as a Social Strategy” [pdf]
  • “Acting Deceptively: Providing Robots with the Capacity for Deception” [pdf]
  • “Robot Deception: Recognizing when a Robot Should Deceive.” [pdf]

Using Stereotypes to Reason about Interaction

Psychologists note that humans regularly use categories to simplify and speed the process of person perception (Macrae & Bodenhausen, 2000). Macrae and Bodenhausen suggest that categorical thinking influences a human’s evaluations, impressions, and recollections of the target. The influence of categorical thinking on interpersonal expectations is commonly referred to as a stereotype. For better or for worse, stereotypes have a profound impact on interpersonal interaction (Bargh, Chen, & Burrows, 1996; Biernat & Kobrynowicz, 1997). Information processing models of human cognition suggest that the formation and use of stereotypes may be critical for the quick assessment of new interactive partners (Bodenhausen, Macrae, & Garst, 1998). From the perspective of a roboticist, the question then becomes: can the use of stereotypes similarly speed up the process of partner modeling for a robot? This question is potentially critical for robots operating in complex, dynamic social environments, such as search and rescue. In such environments, the robot may not have time to learn a model of its interactive partner through successive interactions. Rather, the robot will likely need to bootstrap its modeling of the partner with information from prior, similar partners. Stereotypes serve this purpose. The goal of this project is to explore the creation and use of stereotypes by a robot to bootstrap the process of learning about new interactive human partners. Moreover, we hope to learn about the type of information necessary for a robot to model a human partner and how stereotypes fail.


Research Products
  • “Robots that Stereotype: Creating and Using Categories of People for Human-Robot Interaction.” [pdf]
  • “Using Cluster-based Stereotyping to Foster Human-Robot Cooperation”  [pdf]
  • “The Impact of Stereotyping Errors on a Robot’s Social Development” [pdf]
  • “Using Stereotypes to Understand One’s Interactive Partner” [pdf]
Videos
  • Overview of stereotype learning and usage work during a demo [video]
  • Slides presenting a portion of this material [video]
  • A video depicting the robot learning the occupational stereotype of a firefighter [video]
  • A video depicting the robot learning the occupational stereotype of an EMT [video]
  • A video depicting the robot learning the occupational stereotype of a police officer [video]
  • A video demonstrating the robot’s use of stereotypes with observations of the person’s actions to predict their appearance [video]
  • A video demonstrating the robot’s use of stereotypes to determine which perceptual feature is most distinguishing [video]