Task and Motion Planning (TAMP) frameworks show remarkable capability in scaling to long action sequences, many objects, and a variety of tasks. However, TAMP usually assumes perfect knowledge of the world, relies on simplified (kinematic) models, requires long computation times, and typically yields open-loop motion plans, all of which limit the robust and practical applicability of TAMP in the real world.
On the other end of the spectrum, reinforcement learning (RL) techniques have demonstrated, including in real-world experiments, the ability to solve manipulation problems with complex contact interactions in a robust and closed-loop fashion. The disadvantage of most of these approaches is that they work for a single goal only, require vast numbers of trials, and struggle to exhibit the long-horizon sequential planning behavior of classical TAMP frameworks.
The goal of this workshop is to investigate whether and how learning can address the challenges posed by TAMP problems, with the aim of developing novel methods that achieve both the generality of TAMP approaches and the complex interaction capabilities of RL policies.
To foster this discussion, we bring together experts from the fields of task and motion planning and reinforcement learning.
| Time | Session |
| --- | --- |
| 07:30 - 07:35 | Introduction |
| 07:35 - 07:55 | Weiwei Wan |
| 07:55 - 08:15 | Dieter Fox |
| 08:15 - 08:35 | Jeannette Bohg |
| 08:35 - 08:50 | Discussion I |
| 08:50 - 09:00 | Spotlight presentations I |
| 09:00 - 09:15 | Break |
| 09:15 - 09:35 | Sergey Levine |
| 09:35 - 09:55 | Russ Tedrake |
| 09:55 - 10:10 | Discussion II |
| 10:10 - 10:15 | Break |
| 10:15 - 10:35 | Tomas Lozano-Perez |
| 10:35 - 10:55 | Lydia Tapia |
| 10:55 - 11:10 | Discussion III |
| 11:10 - 11:20 | Spotlight presentations II |
| 11:20 - 11:30 | Discussion and wrap-up |