Autonomous Learning workshop @ NIPS 2014

The workshop Autonomously Learning Robots will be held on December 12th, 2014, at the Montreal Convention and Exhibition Centre as part of the Neural Information Processing Systems (NIPS) conference.

The workshop is supported by the German Research Foundation through the Priority Programme Autonomous Learning. More information is available on the workshop’s website.

Abstract:

To autonomously assist human beings, future robots have to autonomously learn a rich set of complex behaviors. So far, the role of machine learning in robotics has largely been limited to solving pre-specified sub-problems, in many cases with off-the-shelf machine learning methods. The problems tackled are mostly homogeneous, e.g., learning a single type of movement is sufficient to solve the task, and do not reflect the complexities involved in solving real-world tasks.

In this workshop, we want to bring together people from the fields of robotics, reinforcement learning, active learning, representation learning and motor control. The goal of this multi-disciplinary workshop is to develop new ideas to increase the autonomy of current robot learning algorithms and to make their usage more practical for real-world applications. In this context, the questions we intend to tackle include:

More Autonomous Reinforcement Learning
– How can we automatically tune hyper-parameters of reinforcement learning algorithms such as learning and exploration rates?
– Can we find reinforcement learning algorithms that are less sensitive to the settings of their hyper-parameters and can therefore be used for a multitude of tasks with the same parameter values?
– How can we efficiently generalize learned skills to new situations?
– Can we transfer the success of deep learning methods to robot learning?
– How do we learn on several levels of abstraction, and how do we identify useful abstractions?
– How can we identify useful elemental behaviours that can be used for a multitude of tasks?
– How do we use RL on the raw sensory input without a hand-coded representation of the state?
– Can we learn forward models of the robot and its environment from high dimensional sensory data? How can these forward models be used effectively for model-based reinforcement learning?
– Can we autonomously decide when to learn value functions and when to use direct policy search?
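To make the first two questions above concrete, the sketch below shows a minimal tabular Q-learning agent on a toy one-dimensional corridor. The environment, state count, and all parameter values are illustrative assumptions, not part of the workshop material; the point is simply where the learning rate (alpha) and exploration rate (epsilon) enter the algorithm, since these are exactly the hyper-parameters the questions refer to.

```python
import random

N_STATES = 5          # states 0..4; reaching state 4 gives reward 1 (toy setup)
ACTIONS = [-1, +1]    # move left or right along the corridor

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def q_learning(alpha=0.1, epsilon=0.1, gamma=0.95, episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon controls how often the agent explores at random
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[state][0] >= q[state][1] else 1
            next_state, reward, done = step(state, ACTIONS[a])
            target = reward + (0.0 if done else gamma * max(q[next_state]))
            # alpha controls how strongly each sample overwrites the old estimate
            q[state][a] += alpha * (target - q[state][a])
            state = next_state
    return q

q = q_learning()
# The greedy policy should move right (action index 1) in every non-terminal state.
policy = [0 if q[s][0] >= q[s][1] else 1 for s in range(N_STATES - 1)]
print(policy)
```

Even on this toy problem, a poorly chosen alpha or epsilon slows or destabilizes learning, which is why automating their choice, or removing the sensitivity altogether, is posed as an open problem above.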

Autonomous Exploration and Active Learning
– How can we autonomously explore the state space of the robot without the risk of breaking the robot?
– Can we use strategies for intrinsic motivation, such as artificial curiosity or empowerment, to autonomously acquire a rich set of behaviours that can be re-used in future learning?
– How can we measure the competence of the agent as well as our certainty in this competence?
– Can we use active learning to improve the quality of learned forward models and to probe the environment to gain more information about its state?
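One simple instantiation of the intrinsic-motivation idea mentioned above is a count-based novelty bonus: rarely visited states receive a larger intrinsic reward, pushing the agent toward unexplored regions. This is only a sketch under strong assumptions (discrete, hashable states; a hypothetical trade-off weight beta); curiosity on a real robot would need richer novelty measures over continuous sensory streams.

```python
import math
from collections import defaultdict

class CountBasedCuriosity:
    """Intrinsic reward proportional to 1/sqrt(visit count) of a state."""

    def __init__(self, beta=1.0):
        self.beta = beta              # hypothetical weight on the novelty bonus
        self.counts = defaultdict(int)

    def intrinsic_reward(self, state):
        self.counts[state] += 1
        # novel states (low count) yield a large bonus; familiar ones a small one
        return self.beta / math.sqrt(self.counts[state])

bonus = CountBasedCuriosity()
first = bonus.intrinsic_reward("corner")   # novel state: bonus 1.0
again = bonus.intrinsic_reward("corner")   # already seen: bonus shrinks to ~0.707
print(first, again)
```

Such a bonus would simply be added to the external task reward during exploration; how to do this safely on physical hardware is precisely the open question raised above.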

Autonomous Learning from Instructions
– Can we combine learning from demonstrations, inverse reinforcement learning and preference learning to make more effective use of human instructions?
– How can we decide when to request new instructions from a human expert?
– How can we scale inverse reinforcement learning and preference learning to high dimensional continuous spaces?
– Can we use demonstrations and human preferences to identify relevant features from the high dimensional sensory input of the robot?
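To illustrate what preference learning means here, the following is a minimal sketch of a Bradley-Terry-style model, one standard formulation (not necessarily the one any workshop contribution uses): the probability that trajectory A is preferred over B is sigmoid(R(A) - R(B)), with R a linear reward over hypothetical trajectory features. Gradient ascent on the preference log-likelihood recovers reward weights consistent with the human's choices.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_reward(preferences, n_features, lr=0.5, iters=200):
    """preferences: list of (features_of_preferred, features_of_rejected) pairs."""
    w = [0.0] * n_features
    for _ in range(iters):
        for fa, fb in preferences:
            diff = [a - b for a, b in zip(fa, fb)]
            # probability the model assigns to the observed preference
            p = sigmoid(sum(wi * di for wi, di in zip(w, diff)))
            # gradient ascent on the log-likelihood of "A preferred over B"
            for i in range(n_features):
                w[i] += lr * (1.0 - p) * diff[i]
    return w

# Toy data: the human consistently prefers trajectories with a higher first feature.
prefs = [([1.0, 0.0], [0.0, 1.0]), ([0.8, 0.2], [0.1, 0.9])]
w = fit_reward(prefs, 2)
print(w[0] > w[1])  # the learned reward favours the first feature
```

Scaling this kind of pairwise model from two hand-picked features to the high-dimensional continuous spaces of real robots is exactly the difficulty the questions above point at.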

Autonomous Feature Extraction
– Can we use feature extraction techniques such as deep learning to find a general-purpose feature representation that can be used for a multitude of tasks?
– Can recent advances for kernel based methods be scaled to reinforcement learning and policy search in high dimensional spaces?
– What are good priors to simplify the feature extraction problem?
– What are good features to represent the policy, the value function or the reward function? Can we find algorithms that extract features specialized for these representations?