Title: Planning in Constraint Space for Multi-body Manipulation Tasks
Date: March 27, 2015
Time: 1pm-3pm EST
Location: MiRC 102A
Committee:
Dr. Frank Dellaert, School of Interactive Computing, Georgia Tech
Dr. Aaron Bobick, School of Interactive Computing, Georgia Tech
Dr. Henrik Christensen, School of Interactive Computing, Georgia Tech
Dr. Magnus Egerstedt, School of Electrical and Computer Engineering, Georgia Tech
Dr. Tomás Lozano-Pérez, Department of Electrical Engineering and Computer Science, MIT
Dr. James Kuffner, Robotics Institute, CMU, Google
Robots are inherently limited by physical constraints on their link lengths, motor torques, battery power and structural rigidity. To thrive in situations that push these limits, such as search and rescue scenarios, intelligent agents can use the objects available in their environment as tools. Reasoning about arbitrary objects and how they can be placed together to create useful structures such as ramps, bridges or simple machines is critical for pushing beyond these physical limitations. Unfortunately, the solution space is combinatorial in the number of objects, and the combined configuration space of the chosen objects and the robot is high dimensional. To address these challenges, we propose using constraint satisfaction as a means to express the feasibility of candidate structures, and we adapt search algorithms from the classical planning literature to find sufficient designs.
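As a rough illustration of expressing structure feasibility through constraint satisfaction, the sketch below checks whether a board leaning on a box forms a climbable ramp. The function name, the geometry, and the robot's slope limit are all invented for the example; the actual system's constraints are far richer.

```python
import math

def ramp_feasible(board_len, box_height, max_climb_deg=30.0, tol=1e-6):
    """Hypothetical feasibility test for a board-on-box ramp.

    Implicit equality constraint: the board's upper end rests on the
    box top, so sin(angle) * board_len == box_height.
    Inequality constraint: the resulting slope must not exceed the
    robot's maximum climbable angle (an assumed limit here).
    """
    if board_len <= box_height:  # board too short to span the height
        return False
    angle = math.degrees(math.asin(box_height / board_len))
    return angle <= max_climb_deg + tol

# A planner can prune the combinatorial set of object pairings with
# such cheap checks before attempting a full motion plan.
candidates = [(1.0, 0.8), (2.0, 0.8), (3.0, 0.5)]
feasible = [c for c in candidates if ramp_feasible(*c)]
```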
The key idea is that the interactions between the components of a structure can be encoded as simple equality and inequality constraints on the configuration spaces of the respective objects. A classical planning search algorithm can then reason about which set of constraints to impose on the available objects, iteratively creating a structure that satisfies the task goals and the robot constraints. To demonstrate the effectiveness of this framework, we present both simulation and real robot results with static structures such as ramps, bridges and stairs, and quasi-static structures such as lever-fulcrum simple machines, using the humanoid robots Golem Hubo and Golem Krang. We propose to extend this work to exploit the dynamic properties of objects, enabling robots to achieve tasks more efficiently and to act beyond their limited workspaces.
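The iterative structure-building idea above can be sketched as a breadth-first search over partial structures, where extending a state means imposing a new "rests on" contact constraint. The objects, their dimensions, and the footprint rule are all made up for illustration; they stand in for the richer constraint sets the proposal describes.

```python
from collections import deque

def stack_search(objects, goal_height):
    """Breadth-first search over partial stacks (illustrative only).

    A state is an ordered tuple of object names already stacked.
    Imposing a new "on top of" constraint is allowed only when the new
    object's footprint fits on the current top (inequality constraint);
    the search stops once the stacked height reaches the goal.
    """
    queue = deque([()])
    while queue:
        stack = queue.popleft()
        height = sum(objects[n]["h"] for n in stack)
        if height >= goal_height:
            return stack
        for name, props in objects.items():
            if name in stack:
                continue
            if stack and props["w"] > objects[stack[-1]]["w"]:
                continue  # footprint too wide to rest on the current top
            queue.append(stack + (name,))
    return None  # no feasible structure from these objects

# Hypothetical objects: width (footprint) and height in meters.
objects = {
    "crate": {"w": 0.6, "h": 0.4},
    "box":   {"w": 0.4, "h": 0.3},
    "block": {"w": 0.2, "h": 0.1},
}
plan = stack_search(objects, goal_height=0.6)
```

Breadth-first expansion returns a shortest feasible constraint set first, mirroring how a classical planner prefers simpler structures.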