Learning (in) Task and Motion Planning

RSS 2020 Virtual Workshop

July 12, 2020

Recorded videos: YouTube channel


Overview

Task and Motion Planning (TAMP) frameworks show remarkable capabilities in scaling to long action sequences, many objects, and a variety of tasks. However, TAMP typically assumes perfect knowledge, relies on simplified (kinematic) models of the world, requires long computation times, and usually yields open-loop motion plans, all of which limit its robust and practical applicability in the real world.

On the other end of the spectrum, reinforcement learning (RL) techniques have demonstrated, also in real-world experiments, the ability to solve manipulation problems with complex contact interactions in a robust and closed-loop fashion. The disadvantage of most of these approaches is that they work for a single goal only, require large numbers of trials, and struggle to exhibit the long-horizon sequential planning behavior of classical TAMP frameworks.

The goal of this workshop is to investigate whether and how learning can address the challenges posed by TAMP problems, with the aim of developing novel methods that achieve both the generality of TAMP approaches and the complex interaction capabilities of RL policies.

To discuss this, we bring together experts from the fields of

  • TAMP
  • Meta/multi-goal reinforcement learning
  • Feedback motion planning (with special interest in contact/force-based planning)
  • Perception for planning
as well as researchers who work at the boundaries between these fields.

This workshop continues the series of past RSS workshops on Task and Motion Planning (2016, 2017, 2018, 2019), with this year's edition focusing on learning.


Invited Talks

  • If, what and how to learn in Task and Motion Planning
    Jeannette Bohg, Stanford. Talk
  • Connecting TAMP to the Real World
    Chris Paxton, NVIDIA. Talk
  • Long-Horizon Control with Learned Models
    Sergey Levine, UC Berkeley. Talk
  • Incremental Learning for TAMP
    Tomas Lozano-Perez, Massachusetts Institute of Technology. Talk
  • Deep Planning Solutions
    Lydia Tapia, University of New Mexico. Talk
  • Output Feedback Motion Planning
    Russ Tedrake, Massachusetts Institute of Technology. Talk
  • From human demonstration to automatic planning for robotic manipulation and assembly
    Weiwei Wan, Osaka University. Talk


Accepted Contributions

Playlist


Organizers