Deep Reinforcement Learning (SS 18)

Instructor: Daniel Hennes
Secretary:  Carola Stahl
Sessions: Tuesday, 15:45 – 17:15 (0.447)
Office hours: by appointment
Communication: Announcements, such as changes to the schedule, will be communicated here and via the mailing list.


Schedule

Slides and references to additional material will be posted as the course proceeds.


Description

In this seminar, we will discuss how reinforcement learning can be combined with deep learning. Reinforcement learning is a general-purpose framework for artificial intelligence. The key idea is learning optimal behavior through interaction with the environment: a reinforcement learning agent improves over time through trial and error. Scaling reinforcement learning requires powerful representations, as many complex real-world domains feature high-dimensional state and observation spaces as well as continuous action spaces. Deep learning is the state of the art for many machine learning tasks such as image classification, speech recognition, and language translation. It provides powerful function approximation and representation learning: deep neural networks learn compact, low-dimensional representations (features) from data.
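
As a toy illustration of this combination (not part of the course material), the sketch below uses a small feed-forward network as a Q-function approximator that maps an observation to one value per discrete action. It assumes PyTorch; the input, output, and hidden sizes are illustrative placeholders.

    # Minimal sketch (illustrative only): a neural network as Q-function
    # approximator, i.e. deep learning providing function approximation for
    # reinforcement learning. Assumes PyTorch; all sizes are placeholders.
    import torch
    import torch.nn as nn

    class QNetwork(nn.Module):
        def __init__(self, obs_dim=4, n_actions=2, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, n_actions),  # one Q-value per action
            )

        def forward(self, obs):
            return self.net(obs)

    q = QNetwork()
    obs = torch.randn(1, 4)               # dummy observation (batch of one)
    greedy_action = q(obs).argmax(dim=1)  # act greedily w.r.t. estimated Q-values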

Topics

We will discuss current trends and methods in deep reinforcement learning, for instance:
(Crossed-out topics are already taken!)

Organization

The first lecture(s) will provide a recap of the fundamentals and an overview of recent topics. In the subsequent weeks, students will present papers in the field of deep reinforcement learning.

The seminar also includes a final project that is based on a recent publication and demonstrates the approach/technique in simulation (e.g., using OpenAI Gym). A short report and a final presentation are due at the end of the semester.
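
For orientation, the snippet below shows a bare-bones simulation loop in OpenAI Gym, with a random policy standing in for the method reproduced in the project. It assumes the classic Gym API as available in 2018 (reset() returning an observation, step() returning a 4-tuple); the environment name and policy are placeholders.

    # Minimal sketch of an episode rollout in OpenAI Gym with a random policy;
    # a project would replace env.action_space.sample() with the learned policy.
    # Assumes the classic (2018-era) Gym API.
    import gym

    env = gym.make("CartPole-v1")
    obs = env.reset()
    done, episode_return = False, 0.0
    while not done:
        action = env.action_space.sample()          # placeholder: random action
        obs, reward, done, info = env.step(action)  # advance the simulation
        episode_return += reward
    print("episode return:", episode_return)
    env.close()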

Prerequisites

We assume familiarity with reinforcement learning and machine learning, in particular:

  • Reinforcement Learning
    • Definition of MDPs
    • Policy and value iteration
    • Q-learning / SARSA (a tabular Q-learning update is sketched after this list)
  • Machine Learning
    • Classification and regression
    • Fitting of linear and non-linear models
    • Loss functions
    • Stochastic gradient descent
    • Training/test error, overfitting
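
As a rough pointer to the expected level of the reinforcement-learning background, the sketch below performs a single tabular Q-learning update on a toy transition; the table size, step size, discount factor, and transition values are all illustrative.

    # Minimal sketch of one tabular Q-learning update (illustrative values):
    #   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    import numpy as np

    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))
    alpha, gamma = 0.1, 0.99         # step size and discount factor

    s, a, r, s_next = 0, 1, 1.0, 2   # one observed transition (s, a, r, s')
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])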

Relevant textbooks