Ph.D. Dissertation Defense

Paul Robinette

Title: Developing Robot Behaviors that Impact Human-Robot Trust in Emergency Evacuations

Student:
Paul Robinette
Robotics Ph.D. Candidate
Electrical and Computer Engineering
Georgia Institute of Technology

Date: Wednesday November 4, 2015
Time: 1:00-3:00 PM EST

Place: TSRB 509

Committee:
Dr. Ayanna Howard (Advisor, School of Electrical and Computer Engineering, Georgia Institute of Technology)
Dr. Alan Wagner (Co-Advisor, Georgia Tech Research Institute)
Dr. Henrik Christensen (School of Interactive Computing, Georgia Institute of Technology)
Dr. Karen Feigh (School of Aerospace Engineering, Georgia Institute of Technology)
Dr. Andrea Thomaz (School of Interactive Computing, Georgia Institute of Technology)

Abstract:
High-risk, time-critical situations require humans to trust other agents, even agents they have never interacted with before. In the near future, robots will perform tasks to help people in such situations; thus, robots must understand why a person makes a trust decision in order to aid that person effectively. High casualty rates in several past emergency evacuations motivate our use of this scenario as an example of a high-risk, time-critical situation. Emergency guidance robots can be stored inside buildings and then activated to search for victims and guide evacuees to safety. In this dissertation, we determined the conditions under which evacuees would be likely to trust a robot in an emergency evacuation.

We began by examining reports of real-world evacuations and considering how guidance robots could best help. We performed two simulation studies of evacuations and learned that robots could be helpful as long as at least 30% of evacuees trusted their guidance instructions. We then developed several methods for a robot to communicate directional information to evacuees. After three rounds of evaluation using virtually, remotely, and physically present robots, we concluded that robots should communicate directional information by gesturing with two arms. Next, we studied the effect of situational risk and the robot's previous performance on a participant's decision to use the robot during an interaction. We found that higher-risk scenarios caused participants to align their self-reported trust with their decisions in a trust situation. We also discovered that trust in a robot drops after a single error when the interaction occurs in a virtual environment. From an exploratory study of trust repair, we learned that a robot can repair broken trust during the emergency by apologizing for its prior mistake or by giving additional information relevant to the situation; apologizing immediately after the error had no effect.

Robots have the potential to save lives in emergency scenarios, but they could have an equally disastrous effect if evacuees overtrust them. To explore this concept, we created a virtual office environment as well as a real-world simulation of an emergency evacuation. In both, participants interacted with a robot during a non-emergency phase to experience its behavior and then chose whether or not to follow the robot's instructions during an emergency phase. In the virtual environment, the emergency was communicated through text; in the real-world simulation, artificial smoke and fire alarms were used to increase the urgency of the situation. In the virtual environment, we confirmed our previous finding that prior robot behavior affected whether participants trusted the robot. To our surprise, all participants followed the robot in the real-world simulation of an emergency, even though half of them had observed the same robot perform poorly in a navigation guidance task just minutes before. We performed additional exploratory studies investigating different failure modes: even when the robot pointed to a dark room with no discernible exit, the majority of participants did not choose to exit the way they had entered.

The conclusions of this dissertation are based on the results of fifteen experiments with a total of 2,168 participants (2,071 participants in virtual or remote studies conducted over the internet and 97 participants in physical studies on campus). We found that most human evacuees will trust an emergency guidance robot that uses understandable information conveyance modalities and exhibits efficient guidance behavior in an evacuation scenario. In interactions with a virtual robot, this trust can be lost because of a single error made by the robot, but a similar effect was not found with real-world robots. This dissertation presents data indicating that victims in emergency situations may overtrust a robot, even when they have recently witnessed it malfunction. This work thus demonstrates concerns that are important to both the HRI and rescue robot communities.
