Interactive and Collaborative Robot-Assisted Emergency Evacuation
Alan R. Wagner (Penn State), Minghui Zhu (Penn State), Hai Lin (Notre Dame)
Funded by the National Science Foundation
During an emergency, evacuees must make quick decisions, so they tend to rely on default decision-making that may put them at risk, such as exiting the way they entered, following a crowd, or sheltering in place. When a crowd attempts to exit through a single exit, choke points and crowd congestion may impede the safe flow of evacuees, potentially resulting in a stampede of people and the loss of human lives. Mobile robots have been increasingly deployed as assistants on city streets and in hotels, shopping centers and hospitals. The future ubiquity of these systems offers an unprecedented opportunity to revolutionize how people are evacuated from dangerous situations. In particular, when compared with traditional emergency infrastructure technologies such as fire alarms and smoke detectors, mobile robots can achieve better situation awareness and use this information to expedite evacuation and enhance safety. Additionally, mobile robots can be used in risky and life-threatening situations, such as chemical spills or active shooter scenarios, which present dangers to human first responders. This project will develop the first embodied multi-robot evacuation system where multiple mobile robots, originally tasked for different purposes, serve as emergency evacuation first responders leading people to safety.
This project is led by investigators at Penn State University and Notre Dame. The sections below summarize each investigator's work.
Effective Robot Evacuation Strategies in Emergencies
PSU: Mollik Nayyar, Alan R. Wagner
Recent efforts in human-robot interaction research have shed some light on the impact of human-robot interactions on human decisions during emergencies. It has been shown that the presence of crowds during emergencies can influence evacuees to follow the crowd to find an exit. Research has shown that robots can be effective in guiding humans during emergencies and can reduce this 'follow the crowd' behavior, potentially providing a life-saving benefit. These findings make robot-guided evacuation methodologies an important area to explore further. In this paper we propose techniques that can be used to design effective evacuation methods. We explore the different strategies that can be employed to help evacuees find an exit sooner and avoid over-crowding, increasing their chances of survival. We study two primary strategies: 1) a shepherding method and 2) a handoff method. Simulated experiments are performed to study the effectiveness of each strategy. The results show that the shepherding method is more effective in directing people to the exit.
The figure on the left shows the map of the simulated office space with the locations of the rooms and exit marked by a red arrow. The figure on the right shows a sample track recorded during the simulation.
Evacuation strategies
- Shepherding Method: When using the shepherding method, the robot physically moves through the environment to show the participant the exit. If the robot detects that the participant is not following, it will either stop or move closer to the participant in case the participant changes their mind and decides to follow. A minimal sketch of this follow-check logic is shown after this list.
- Handoff Method: When the robot uses the handoff method, three clones of the robot are spawned at waypoints along the path to the exit. Each robot is located at a different junction point leading to the exit. The first robot is in front of the meeting room, looking and pointing towards the next robot at the far end of the corridor on the participant's right (opposite to the direction of the crowd's exit).
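The sketch below illustrates the follow-check logic of the shepherding method. The distance thresholds and waypoint representation are assumptions introduced for illustration, not parameters from the study.

```python
import math

FOLLOW_RADIUS = 3.0     # assumed distance (m) within which the participant counts as following
APPROACH_STEP = 0.5     # assumed step (m) the robot takes back toward a lagging participant

def shepherd_step(robot_pos, participant_pos, waypoints):
    """Return the robot's next position given the participant's current position
    and the remaining waypoints on the path to the exit."""
    dist = math.dist(robot_pos, participant_pos)
    if dist <= FOLLOW_RADIUS and waypoints:
        return waypoints.pop(0)          # participant is following: advance toward the exit
    if dist > 2 * FOLLOW_RADIUS:
        # Participant has fallen far behind: move back toward them so the robot
        # stays visible in case they change their mind and decide to follow.
        dx, dy = participant_pos[0] - robot_pos[0], participant_pos[1] - robot_pos[1]
        scale = APPROACH_STEP / max(dist, 1e-9)
        return (robot_pos[0] + dx * scale, robot_pos[1] + dy * scale)
    return robot_pos                     # otherwise stop and wait for the participant
```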
Results
The results of the participant response to the robot evacuation strategies are presented below. The experiments were conducted with a crowd acting as a counter-stimulus. The results show that the shepherding robot is more effective in emergency scenarios than a handoff robot.
Results from the influence of evacuation strategy with crowds. The four conditions are the shepherding strategy with an efficient and an inefficient robot and the handoff strategy with an efficient and an inefficient robot. The error bars indicate a 95% confidence interval and the asterisks indicate the significance values after running a pair-wise chi-squared test: ∗p < 0.01, ∗∗p < 0.001.
Results from the evacuation experiment in the absence of a crowd. The results show participants that followed the robot versus those that did not. All four conditions are presented. The error bars indicate a 95% confidence interval and the asterisks indicate the significance values after running a pair-wise chi-squared test: ∗p < 0.01, ∗∗p < 0.001.
Density Feedback Control
Notre Dame: Tongjia Zheng, Hai Lin
We study the deployment of robotic swarms, in which we want to design velocity fields for a robotic swarm such that its (probability) density evolution satisfies certain spatial and temporal requirements. We use a family of stochastic differential equations (SDEs) to model individual kinematics and a mean-field partial differential equation (PDE) to describe the density evolution. These equations share the same coefficients, so any velocity field that we design for the PDE model can be easily implemented by individuals in a distributed manner. We use spatial-temporal logic (SpaTeL) to create reference densities and use density feedback control to generate velocity fields. As a top-down approach, this control strategy is user-friendly and convenient to analyze. It is closed-loop with exponentially fast convergence. The algorithm is independent of the agent population, so it is scalable. Each agent independently derives its own low-level controller to track the reference velocity command, so the approach is distributed. This control strategy requires the global density as a feedback signal and is robust to density estimation error. In the next section, we describe how to estimate this global density in a distributed manner.
We optimally deploy robots by using SpaTeL to create reference densities and density feedback control based on mean-field PDEs to generate velocity fields for individual robots.
Novelty: Top-down, provable, closed-loop, distributed, scalable, robust to density estimation error.
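To make the pipeline concrete, below is a minimal one-dimensional sketch of density feedback control under strong simplifying assumptions: agents follow an SDE whose drift is a sampled velocity field, the density is estimated with a plain histogram instead of the density filter, and the feedback law is a simple proportional rule chosen for illustration rather than the controller developed in this work.

```python
import numpy as np

# 1-D illustration: N agents follow dx = v(x) dt + sigma dW, and a proportional
# density-feedback rule (an assumption for this sketch, not the paper's controller)
# steers their empirical density toward a Gaussian reference density.
rng = np.random.default_rng(0)
N, sigma, dt, k = 500, 0.05, 0.01, 1.0
x = rng.uniform(0.0, 1.0, N)                     # agent states (samples of the density)
grid = np.linspace(0.0, 1.0, 101)
rho_ref = np.exp(-((grid - 0.7) ** 2) / (2 * 0.05 ** 2))
rho_ref /= rho_ref.sum() * (grid[1] - grid[0])   # normalize the reference density

for _ in range(2000):
    # Stand-in for the density filter: estimate the current density with a histogram.
    hist, edges = np.histogram(x, bins=50, range=(0.0, 1.0), density=True)
    rho_hat = np.interp(grid, 0.5 * (edges[:-1] + edges[1:]), hist)
    # Feedback velocity field: move mass from over- to under-populated regions
    # (the tracking error then decays roughly like a heat equation).
    v_field = -k * np.gradient(rho_hat - rho_ref, grid) / np.maximum(rho_hat, 1e-3)
    # Each agent samples the field at its own location and integrates its SDE.
    v = np.interp(x, grid, v_field)
    x = np.clip(x + v * dt + sigma * np.sqrt(dt) * rng.standard_normal(N), 0.0, 1.0)
```

Because every agent only evaluates the field at its own position, the same update can be carried out by each robot locally once an estimate of the density is available.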
Some example simulation videos are provided here. The descriptions for these videos are below.
Density Tracking
Density tracking: We use partial differential equations (PDEs) to model the dynamics of the swarm’s mean-field density, and use density feedback to design velocity fields that guide the swarm’s density to track a reference density sequence. In the animation, Fig (1,1) (first-row first-column) represents the agents’ states. Fig (1,3) is the reference density signal that we want to track. Fig (1,2) is the estimate of the real-time density, which is used to generate the feedback velocity field in Fig (2,1). We observe that by following this feedback velocity field, the agents’ density is able to track the reference density.
Density Filter
Density filter: We formulate the dynamic density estimation problem as a filtering problem for the PDE of the density, and present (distributed) density estimation algorithms that take advantage of the dynamics to gradually improve density estimation. In the animation, Fig (1,1) represents the agents' states, where the red circle is a randomly selected representative agent and the red dots represent the neighbors that communicate with it. Fig (1,2) is the evolution of the true density. Fig (1,3) is the estimate generated by kernel density estimation, which is used as a comparison. Fig (2,1) is the output of our centralized density filter, which uses all agents' states to estimate the real-time density. Fig (2,2) is the output of the local density filter of the representative agent, which uses only its own state and local communication to estimate the real-time density. We observe that the centralized density filter is able to quickly track the evolution of the true density, and the distributed filter is also able to recover the true density, although at a slower rate due to the need for information exchange among the agents.
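For reference, the kernel density estimate that the filters are compared against can be computed with off-the-shelf tools. The sketch below is only that static baseline, computed on hypothetical agent positions; it does not reproduce the PDE-based density filter itself.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Static KDE baseline on hypothetical 2-D agent positions (not the study's data).
rng = np.random.default_rng(1)
positions = rng.normal(loc=[0.3, 0.6], scale=0.1, size=(400, 2))

kde = gaussian_kde(positions.T)                  # gaussian_kde expects shape (dim, n_samples)
xx, yy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
print("peak estimated density:", density.max())
```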
Multi-Robot Optimal Motion Planning
PSU: Guoxiang Zhao, Minghui Zhu
We investigate a class of multi-robot closed-loop motion planning problems in which multiple robots aim to reach their respective goal regions as soon as possible. The robots are subject to complex dynamic constraints and need to avoid collisions with static obstacles and other robots. Pareto optimality is used as the solution notion, where no robot can reduce its own travelling time without extending others'. A numerical algorithm is proposed to identify the Pareto optimal solutions. It is shown that, under mild regularity conditions, the algorithm consistently approximates the epigraph of the minimal arrival time function. The proofs are based on set-valued numerical analysis and are the first to point out the promise of extending set-valued tools to multi-robot motion planning problems. Experiments on an indoor multi-robot platform and computer simulations of unicycle robots are conducted to demonstrate the anytime property of our algorithm, i.e., it quickly returns a feasible control policy that safely steers the robots to their goal regions, and it keeps improving policy optimality as more time is given.
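For intuition on the solution notion, the following sketch filters candidate joint policies by Pareto dominance of their arrival-time vectors. The candidate times are made-up toy numbers and the routine is illustrative only; it is not the numerical algorithm that approximates the epigraph of the minimal arrival time function.

```python
from typing import List, Tuple

def pareto_front(arrival_times: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Keep joint policies whose arrival-time vector is not dominated: no other
    policy is at least as fast for every robot and strictly faster for one."""
    front = []
    for cand in arrival_times:
        dominated = any(
            all(o <= c for o, c in zip(other, cand)) and any(o < c for o, c in zip(other, cand))
            for other in arrival_times
            if other is not cand
        )
        if not dominated:
            front.append(cand)
    return front

# Toy candidates: each tuple is (robot 1 travel time, robot 2 travel time).
print(pareto_front([(4.0, 9.0), (5.0, 6.0), (6.0, 6.5), (7.0, 5.0)]))
# -> [(4.0, 9.0), (5.0, 6.0), (7.0, 5.0)]; (6.0, 6.5) is dominated by (5.0, 6.0)
```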
The figure on the left shows the trajectories of the robots when they apply the control policies returned by the algorithm in 1.05 s. The figure on the right shows the inter-robot distances over time, indicating that no collisions occur throughout the robots' movement.
Exploring the Effect of Explanations during Robot-Guided Emergency Evacuation
PSU: Mollik Nayyar, Zach Zoloty, Ciera McFarland, Alan R. Wagner
Humans tend to overtrust robots during emergencies. Here we consider how a robot's explanations influence a person's decision to follow the robot's evacuation directions when those directions differ from the movement of the crowd. The experiments were conducted in a simulated emergency environment with an emergency guide robot and animated, human-looking non-player characters (NPCs). Our results show that explanations increase the tendency to follow the robot, even if these messages are uninformative. We also perform a preliminary study investigating different explanation designs for effective interventions, demonstrating that certain types of explanations can increase or decrease evacuation time. This paper contributes to our understanding of human compliance with robot instructions and of methods for examining that compliance through the use of explanations during high-risk, emergency situations.
Effect of Message Type
This experiment focused on the impact of different types of messages with increasing explainability. We hypothesized that as the explainability of the message increased, the percentage of participants that follow the robot's guidance would also increase. We also hypothesized that the use of explanations would result in an increase in evacuation time.
Message Types
- Excuse me, would you like to follow me?
- Excuse me, would you like to follow me because I am a robot?
- Excuse me, would you like to follow me because I am an emergency robot?
- Excuse me, would you like to follow me because I know the closest exit?
Results
The results of participant response to the different message conditions and the impact of the message conditions on evacuation time are presented below. The effect of the robot's message is studied during emergencies with a counter-stimulus of a crowd running in a different direction than the robot. The results depict a clear trend across message types: as the message's explainability increases, an increasing number of participants choose to follow the robot, supporting the first hypothesis. The number of participants that follow the robot increases significantly from the NoMsg condition (M = 11.86, SD = 4.2) to the EmgRobotMsg condition (M = 31.67, SD = 6.00) and the ExitMsg condition (M = 44.07, SD = 6.46), with χ²(2, 119) = 8.64, p = 0.013 and χ²(2, 118) = 15.88, p = 0.00035, respectively. The number of subjects following the robot also increases significantly between the FollowMeMsg condition (M = 22.03, SD = 5.39) and the ExitMsg condition (M = 44.07, SD = 6.46), χ²(2, 118) = 7.99, p = 0.018. Other pairwise comparisons were not significantly different.
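The pairwise comparisons above can be reproduced in spirit with a standard chi-squared test of independence on a 2x2 contingency table of follow versus did-not-follow counts per condition. The counts below are hypothetical placeholders, not the study's data; scipy's chi2_contingency is assumed to be available.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical follow / did-not-follow counts for two conditions (placeholder values).
table = np.array([
    [12, 47],   # e.g., a NoMsg-like condition
    [44, 15],   # e.g., an ExitMsg-like condition
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}, N={table.sum()}) = {chi2:.2f}, p = {p:.4f}")
```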
Results of the explanation experiment. The NoMsg condition is the baseline in which the robot displays no message. The four experimental conditions are FollowMeMsg, RobotMsg, EmgRobotMsg and ExitMsg. The message explainability increases with each condition. The error bars indicate a 95% confidence interval and the asterisks indicate the significance values after running a pair-wise chi-squared test: ∗p < 0.05, ∗∗p < 0.001.
Results of the effect of explanations on evacuation times. The NoMsg condition is the baseline in which the robot displays no message. The different messages did not significantly impact the time needed to evacuate; we did not record a significant difference in evacuation time between any message comparisons. The error bars indicate a 95% confidence interval and the asterisks indicate the significance values after running a pair-wise chi-squared test: ∗p < 0.05, ∗∗p < 0.001.
Effect of Message Length
This experiment studied the impact of different message lengths on participant behavior. We predicted that verbose explanations would increase evacuation time versus concise explanations, thus potentially offsetting the positive impact of additional information.
Message Types
- Verbose explanation: Excuse me, would you like to follow me? An emergency has occurred in another part of the building. People are quickly moving to exit the building. I know the location of the emergency taking place and can safely guide you to an exit. I have been taught all of the building's exits and can use my camera to figure out the closest unblocked exit.
- Concise explanation: Excuse me. Would you like to follow me, because I know the closest exit?
Results
The results of participant compliance versus message length and its impact on participant evacuation time are presented below. The verbose message does not result in significantly more people following the robot, (M = 38.60, SD = 6.44) versus (M = 42.37, SD = 6.43), χ²(1, 116) = 0.171, p = 0.678. This suggests that the additional information provided by the long message does not entice individuals to follow the robot. On the other hand, the verbose message condition does significantly increase the time to evacuate, t(29) = 2.04, p = 0.004. The verbose message increases the time to evacuate by 7.47 seconds.
Comparison of the verbose and concise message conditions. The error bars indicate a 95% confidence interval and the asterisks indicate the significance values after running a pair-wise chi-squared test: ∗p < 0.05, ∗∗p < 0.001.
Aiding Emergency Evacuations Using Obstacle-Aware Path Clearing
PSU: Mollik Nayyar, Alan R. Wagner
We seek to develop robots capable of helping people evacuate. Some evacuation environments, however, may have obstacles blocking the person's path to the closest exit. This paper therefore explores the possibility of creating robots that detect and move obstacles in order to open evacuation pathways. Experiments were conducted using a simulated autonomous robot in photorealistic indoor environments. The system uses computer vision algorithms to gather information from the environment. We show that the gathered information can be used to decide whether an obstacle can or needs to be moved in order to open a new evacuation pathway. We then use simple push manipulations to successfully remove obstacles from a path. We show that the system decreases evacuation time for evacuees in simulations of indoor environments.
Environments
The experiments were designed to study the impact of the robot's ability to clear obstacles from evacuation pathways on the evacuation time of simulated human evacuees. We designed three different environments using the Unity game engine: an office, a hospital and a school. Each environment had a different configuration and layout and was populated with objects relevant to that setting.
Office
An office environment was designed with two rooms, a long corridor, and a variety of obstacles. The rooms were obstructed by obstacles which, when removed, could offer shorter paths to the exit compared to the longer path along the corridor, as shown in the figure below. The environment was designed with only one exit, one unblocked path, and two blocked paths: the intermediate path (shown in yellow) and the shortest path (green line). The robot was tasked with removing the obstructions from both blocked paths. This was done to observe the effect of each of the two paths on evacuation time.
Hospital
The hospital environment contains four rooms with a long corridor around the rooms. The evacuees are spawned at the yellow dot shown in the figure below and move along the unblocked red (longer) path. The robot (shown as a white dot) begins close to the exit point (shown as a red dot). Compared to the office environment, the hospital has narrower corridors and a unique layout and set of obstacles.
School
As was the case with the hospital environment, the school environment also has narrow corridors and unique obstacles that block the characters' paths. The exit points and spawn points are shown in the figure below. A table is used as the obstacle blocking the shorter path.
The figure shows an overhead view of our office environment. The yellow dot and white dot indicate the human evacuees' and the robot's start positions, respectively. The red dot indicates the exit point. The red line is the longest path, the yellow line is the intermediate path, and the green line represents a shorter path available to evacuees once the blocking obstacle is removed. The yellow arrows show the obstacles to be moved.
The figure shows the overhead view of the hospital environment. The yellow dot indicates the evacuees' spawn point. The white dot indicates the robot start position. The red dot indicates the exit point. The red line is the longer path taken by the evacuees to the exit, whereas the green line represents the shorter path taken by the evacuees after the robot removes the blocking obstacle. The yellow arrow shows the obstacle to be moved.
The figure shows the overhead view of the school environment. The yellow dot indicates the human evacuees' spawn point. The white dot indicates the robot start position. The red dot indicates the exit point. The red line is the longer path taken by the human evacuees to the exit, whereas the green line represents the shorter path taken by the human evacuees after successful robot manipulation. The yellow arrow shows the obstacle to be moved.
Obstacle Awareness
To enable the robot to move obstacles, two skills are essential: first, the robot must be able to identify a target blockage and determine whether or not it is movable; second, it must be capable of moving the blockage to a more desirable location. We introduce the concept of obstacle awareness, in which the robot uses its sensors to identify objects and determine some of the key properties that influence movability. Here the movability of an object is defined as a true or false value indicating whether or not the object can be displaced by the robot by force. In this study, we limit our actions to non-prehensile push actions and assume that the robot is provided with some knowledge of the movability properties of common indoor objects such as tables and chairs.
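A minimal sketch of the movability decision is shown below. The object classes, mass values, and push-force limit are assumptions introduced for illustration, not parameters of the actual system.

```python
# Hypothetical movability lookup keyed by the detected object class.
ASSUMED_PROPERTIES = {
    "chair":   {"mass_kg": 7.0,  "fixed": False},
    "table":   {"mass_kg": 25.0, "fixed": False},
    "sofa":    {"mass_kg": 60.0, "fixed": False},
    "cabinet": {"mass_kg": 90.0, "fixed": True},   # assumed bolted to the wall
}
MAX_PUSHABLE_MASS_KG = 40.0                        # hypothetical robot push limit

def is_movable(detected_class: str) -> bool:
    """Return True if the detected object is assumed pushable by the robot."""
    props = ASSUMED_PROPERTIES.get(detected_class)
    if props is None:
        return False                               # unknown objects are treated as immovable
    return (not props["fixed"]) and props["mass_kg"] <= MAX_PUSHABLE_MASS_KG
```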
The figure shows a first-person view from the robot's camera perspective and the object classifications. As can be seen, the robot misclassifies a table as a sofa. The green box is the estimated bounding box of the object in pixel coordinates.
Planner
A grid-based path planning algorithm, A* with a diagonal-distance heuristic, was used to generate candidate paths. A* is first solved on the initial map without any information about the obstacles in the environment. As the robot moves through the environment and previously unseen obstacles are detected, the map is updated and the planner automatically accounts for the objects that need to be avoided or moved to obtain a cost-effective path. A* allows paths to be planned 'through' movable objects, which lets our system calculate the displacement required to move an object to unblock a path.
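The sketch below illustrates this planning idea on an occupancy grid: A* with a diagonal-distance (octile) heuristic that is allowed to plan 'through' cells marked movable at an assumed extra push cost. The cell labels and push cost are illustrative choices, not the system's actual parameters.

```python
import heapq
import math

FREE, MOVABLE, WALL = 0, 1, 2     # assumed occupancy-grid cell labels
PUSH_COST = 5.0                   # assumed extra cost for planning through a movable object

def diagonal_distance(a, b):
    """Octile (diagonal-distance) heuristic on an 8-connected grid."""
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return dx + dy + (math.sqrt(2) - 2) * min(dx, dy)

def a_star(grid, start, goal):
    """A* that may plan 'through' MOVABLE cells; returns (path, cells_to_push)."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(diagonal_distance(start, goal), 0.0, start, None)]
    came_from, g_score = {}, {start: 0.0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                      # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            path.reverse()
            return path, [c for c in path if grid[c[0]][c[1]] == MOVABLE]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nxt = (cur[0] + dr, cur[1] + dc)
                if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]] == WALL:
                    continue
                step = math.hypot(dr, dc) + (PUSH_COST if grid[nxt[0]][nxt[1]] == MOVABLE else 0.0)
                if g + step < g_score.get(nxt, float("inf")):
                    g_score[nxt] = g + step
                    heapq.heappush(open_set, (g + step + diagonal_distance(nxt, goal), g + step, nxt, cur))
    return None, None
```

Because movable cells only add cost rather than block the search, the returned list of pushed cells indicates which objects would have to be displaced to open the corresponding path.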
State Machine
The state machine breaks down robot behaviors into low-level action states. The global planner uses these states to describe high-level tasks for the robot. Here we define a behavior as the immediate action being performed by the robot, whereas a task comprises a set of actions combined to achieve a goal. The primitive action states implemented in the state machine are planning, moving, pushing, detection and idle.
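A minimal sketch of such a state machine is shown below. The event names and transitions are simplified assumptions for a 'navigate to goal' task, not the global planner's actual logic.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    PLANNING = auto()
    MOVING = auto()
    PUSHING = auto()
    DETECTION = auto()

def next_state(state: State, event: str) -> State:
    """Example transitions for a 'navigate to goal' task (assumed event names)."""
    transitions = {
        (State.IDLE, "goal_received"): State.PLANNING,
        (State.PLANNING, "path_found"): State.MOVING,
        (State.MOVING, "obstacle_detected"): State.DETECTION,
        (State.DETECTION, "obstacle_movable"): State.PUSHING,
        (State.DETECTION, "obstacle_fixed"): State.PLANNING,   # replan around it
        (State.PUSHING, "path_cleared"): State.MOVING,
        (State.MOVING, "goal_reached"): State.IDLE,
    }
    return transitions.get((state, event), state)
```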
The figure shows a high-level description of the action states that the global planner executes. This flow chart details the events in each of the primitive states. The logic and transitions between the states are controlled by the global planner. The states that Idle switches to are shown on the left side. The LiDAR scan runs concurrently with all other states and triggers a state change to 'Detect' when an event occurs. We have omitted the 'Detect' state for simplicity.
The figure shows a high-level description of the tasks that the global planner executes. As can be seen, the system switches between many states to complete a task. This flow chart details the events of the 'navigate to goal' task. The logic and transitions between the states are controlled by the global planner. Note that 'navigate to goal', 'navigate to subgoal' and 'move obstacle' are tasks that comprise their own flow of events.
Results
The experiments in this research were designed to study the impact of robot path-clearing operations on emergency evacuations. The dependent variable for this work was evacuation time. We also wished to study the impact of grid resolution on the success of the planner, where a trial was considered successful if the desired path was unblocked so that it became available for evacuation. The evacuees' speed ranged from 6 to 10 miles per hour. Evacuees move towards the exit as soon as they are spawned, and the evacuation time is measured from the moment they start to move towards the exit.
Fine Grid Resolution
Shown below are the results of the experiment in all three environments using a fine grid resolution.
The figure shows the results of the experiment conducted in the office environment with the fine grid resolution. The robot removed blockages from the intermediate path and the shortest path, respectively. The trend of decreasing evacuation times can be seen clearly. The upper dashed line, referred to as the 'ideal longest time', represents the average time taken by the evacuees along the longest path, and the lower dashed line, referred to as the 'ideal shortest time', represents the average time taken by the evacuees along the shortest path. These lines represent the ideal time taken by the evacuees if they start evacuating along the corresponding path without any deviations or obstructions.
Coarse Grid Resolution
Shown below are the results of the experiment in all three environments using a coarse grid resolution.
The figure shows the results of the experiment conducted in the office environment with the coarse grid resolution. The trend of decreasing evacuation times can be seen clearly. The upper dashed line, referred to as the 'ideal longest time', represents the average time taken by the evacuees along the longest path, and the lower dashed line, referred to as the 'ideal shortest time', represents the average time taken by the evacuees along the shortest path. These lines represent the ideal time taken by the evacuees if they start evacuating along the corresponding path without any deviations or obstructions.
Impact of Grid Resolution
It was observed that the resolution of the grid map had an impact on the ability of the algorithm to unblock the path. The resolution units are in meters. To further investigate this, the office environment was run multiple times with different resolution settings to measure the number of successes of the algorithm. The results are shown below.
The figure shows the impact of grid resolution on the performance of the algorithm. 20 trials were conducted per grid resolution.
Future Work
PSU: Mollik Nayyar, Alan R. Wagner
In-person user studies based on these simulation experiments are planned with the objective of validating the behavior of the participants when placed in an emergency situation in which a guidance robot is available. We have designed and built several guidance robots based on our prior work. Additionally, an example of a floor plan in which we intend to conduct these experiments is shown below.
Figure of robots designed for in-person evacuation experiments.
Plan of the office floor space that will be constructed for the experiments.
Publications
- Wagner, A. R. (2019). The Future of Evacuation: Developing Robot First Responders. In Living with Robots: Emerging Issues on the Psychological and Social Implications of Robotics. Elsevier. Richard Pak, Ewart de Visser, Ericka Rovira (Eds.). Peer-reviewed/refereed.
- Nayyar, M., & Wagner, A. R. (2019). Effective Robot Evacuation Strategies in Emergencies. In 28th IEEE International Symposium on Robot and Human Interactive Communication. (pp. 6). New Delhi, India [PDF]
- Nayyar, M., Zoloty, Z., McFarland, C., & Wagner, A. R. (2020). Exploring the Effect of Explanations during Robot-Guided Emergency Evacuation. The 12th International Conference on Social Robotics (ICSR 2020). Golden, Colorado, USA [PDF]
- Nayyar, M., & Wagner, A. R. (2021). Aiding Emergency Evacuations Using Obstacle-Aware Path Clearing. The 17th IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO 2021). Virtual Conference, Japan [PDF]