Marc Toussaint


Uni Stuttgart
Universitätsstraße 38
70569 Stuttgart, Germany
tel: +49 711 685 88376
room: 2.225; lab tel: 88262

Carola Stahl
tel: +49 711 685 88385
fax: +49 711 685 88250

Projects for MSc students
Please see this list of MSc projects I am interested in supervising. Students should have attended some of our courses (e.g., Robotics, Machine Learning, Maths).
Max Planck Fellow
From Nov 1st I am a Max Planck Fellow with the MPI for Intelligent Systems.
RSS'18 best paper
M. Toussaint, K. R. Allen, K. A. Smith, and J. B. Tenenbaum: Differentiable Physics and Stable Modes for Tool-Use and Manipulation Planning. In Proc. of Robotics: Science and Systems (R:SS 2018), 2018. [Accompanying Video] [Source Code].
some video lectures
since 11/18 Max Planck Fellow with the MPI for Intelligent Systems
08/17-07/18 Visiting Scholar at CSAIL, MIT (LIS group)
04/17-07/17 Lead of the ML-Robotics lab at Amazon, Berlin
since 12/12 Full Prof. at University of Stuttgart; head of the Machine Learning and Robotics Lab.
10/10-11/12 Prof. (W1) at the Department of Math and Computer Science, FU Berlin; head of the Machine Learning and Robotics Lab at FU Berlin
3/07-10/10 head of the Machine Learning and Robotics group (Emmy Noether Programme) at the IDA lab (Klaus-Robert Müller), TU Berlin.
8/06-2/07 guest scientist at the Honda Research Institute, Offenbach.
6/04-6/06 post doc at the Machine Learning group (Chris Williams) and the Statistical Machine Learning and Motor Control group (Sethu Vijayakumar), University of Edinburgh.
4/00-5/04 PhD student (& brief post doc) at the Adaptive Systems group, Institut für Neuroinformatik (Werner von Seelen), Ruhr-Universität Bochum.
6/98-3/00 student at the Cologne gravity group (Friedrich W. Hehl), Institute for Theoretical Physics, U Cologne.
current research interests
  • Our research focuses on the combination of decision theory and machine learning, motivated by applications in robotics. The goal is learning systems that can reason about their own state of knowledge (e.g., in a Bayesian way) and decide which actions might yield the most informative future data, helping them learn even better and eventually solve problems. We address this in the form of Reinforcement Learning, Planning and Active Learning in probabilistic relational domains. Further, a growing focus of our lab is real-world robotic systems and joint symbolic and geometric planning, including trajectory optimization and optimal control methods.
  • Research in the intersections of modern AI (probabilistic reasoning, learning & planning), robotics and machine learning
  • Probabilistic approaches to planning, on symbolic (relational) as well as motion & control level
  • (Constrained) Optimization methods for robotics, reinforcement learning and machine learning in general
  • Active learning, experimental design and UCB/UCT-type methods for autonomous (e.g., robot) exploration of complex domains
  • General machine learning: learning representations, Bayesian networks & graphical models
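
As a minimal illustration of the UCB-type exploration rules mentioned above, the following is a generic UCB1 arm-selection sketch. The function name and structure are my own illustration, not code from the lab; it simply balances an arm's estimated value against an exploration bonus that shrinks as the arm is tried more often:

```python
import math

def ucb1_select(counts, values, c=math.sqrt(2)):
    """Pick the arm maximizing mean value + c * sqrt(ln(t) / n) (UCB1 rule).

    counts[i] -- how often arm i has been tried
    values[i] -- current estimate of arm i's mean reward
    """
    # Try each arm once before applying the confidence bound.
    for i, n in enumerate(counts):
        if n == 0:
            return i
    t = sum(counts)  # total number of trials so far
    scores = [values[i] + c * math.sqrt(math.log(t) / counts[i])
              for i in range(len(counts))]
    return max(range(len(scores)), key=lambda i: scores[i])
```

The same optimism-under-uncertainty principle, applied recursively at the nodes of a search tree, gives UCT-style planning.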