Date/Time: 12:00 PM - 2:00 PM, September 26, 2014
Location: TSRB 509
Title: Developing Robot Behaviors That Impact Human-Robot Trust in Emergency Evacuations
Paul Robinette
Robotics PhD Student
School of Electrical and Computer Engineering
College of Engineering
Georgia Institute of Technology
Committee:
Dr. Ayanna M. Howard (Advisor), School of Electrical and Computer Engineering, Georgia Tech
Dr. Alan R. Wagner (Co-Advisor), Aerospace, Transportation and Advanced Systems Laboratory, Georgia Tech Research Institute
Dr. Henrik I. Christensen, School of Interactive Computing, Georgia Tech
Dr. Karen M. Feigh, School of Aerospace Engineering, Georgia Tech
Dr. Andrea L. Thomaz, School of Interactive Computing, Georgia Tech
Abstract:
High-risk, time-critical situations require humans to trust other agents,
even agents they have never interacted with before. To aid humans
effectively in these situations, a robot must understand why a human makes
a trust decision. Currently, robots are prone to errors that negatively
impact their ability to be trusted. Even when a robot is performing
correctly, nearby humans may interpret its actions as strange and therefore
view the robot as less trustworthy.
We use emergency evacuations as our example of high-risk, time-critical
situations. We begin by examining actual evacuations and determining how
guidance robots can best help. We then develop and evaluate several methods
for a robot to communicate directional information to evacuees. Next, we
study the effect of situational risk and the robot's previous performance
on a participant's decision to use the robot in a future interaction.
Finally, we describe an algorithm to influence a participant's trust in the
robot after an initial interaction and propose two experiments to test the
algorithm. The first proposed experiment tests methods to maintain trust
after the robot has made a correct decision that seems incorrect to the
participant. The second proposed experiment tests methods to rebuild trust
after the robot has made an error.