UPCOMING TALKS:
|Juan Carlos Saborío
@ Tue, Dec 17, room 2.013, 15:00-16:00
TITLE: “Relevance-based online planning in complex POMDPs”
Planning in AI is the process of deliberating about the results of actions, in order to produce purposeful behavior in goal-driven agents. The challenge increases in the presence of uncertainty, due to non-deterministic action outcomes and incomplete or noisy information. The decision-making and information gathering processes underlying these problems are commonly represented as Partially Observable Markov Decision Processes (POMDPs), a rich modeling framework that becomes intractable very quickly, particularly in realistic scenarios with many possible interactions. Based on the intuitive notion of “relevance”, we propose an approach to POMDP planning that exploits contextual information in 1) action selection, through a PBRS bias and a Monte-Carlo rollout policy, and 2) dimensionality reduction, by introducing a feature-value function and selectively focusing on useful opportunities. We also designed a series of POMDPs that capture many of the challenges of high-level task-planning in robots, and show that relevance-based planning allows agents to plan faster and obtain larger rewards, without the need for extensive heuristics or state factoring.
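As a rough illustration of the two ingredients named in the abstract, the sketch below combines a random Monte-Carlo rollout policy with a potential-based reward shaping (PBRS) bias. This is a generic sketch under my own assumptions, not the speaker's algorithm; the names (`shaped_rollout`, `select_action`) and the interface of the `step` and `potential` functions are hypothetical.

```python
import random

def shaped_rollout(state, step, potential, actions, depth=10, gamma=0.95):
    """Monte-Carlo rollout whose immediate rewards are augmented with the
    PBRS term F(s, s') = gamma * phi(s') - phi(s)."""
    total, discount = 0.0, 1.0
    for _ in range(depth):
        action = random.choice(actions)          # random rollout policy
        next_state, reward, done = step(state, action)
        shaping = gamma * potential(next_state) - potential(state)
        total += discount * (reward + shaping)
        discount *= gamma
        state = next_state
        if done:
            break
    return total

def select_action(state, step, potential, actions, n_rollouts=50, gamma=0.95):
    """Pick the action whose successor yields the best average shaped
    rollout return."""
    best_a, best_v = None, float("-inf")
    for a in actions:
        value = 0.0
        for _ in range(n_rollouts):
            s2, r, done = step(state, a)
            value += r + (0.0 if done else
                          gamma * shaped_rollout(s2, step, potential, actions))
        value /= n_rollouts
        if value > best_v:
            best_a, best_v = a, value
    return best_a
```

Because PBRS telescopes along the trajectory, the shaping term biases rollouts toward high-potential regions without changing the optimal policy.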
|Hyosang Lee, Ph.D.
@ Tue, Dec 03, room 0.018, 15:00-16:00
TITLE: “A Scalable Robotic Skin Inspired by Biological Hyperacuity”
Robots would benefit from a soft, skin-like covering that detects both the location and strength of contacts. However, implementing such a system is challenging because deploying a large number of sensing elements requires robust, reliable, and easy-to-manufacture electrical connections across a compliant substrate. From a practical point of view, fabrication simplicity must often be compromised to achieve high spatial resolution. Interestingly, biological systems seem to resolve this dilemma at the perception stage: biological skins exhibit tactile hyperacuity, enabled by the overlapping receptive fields of mechanoreceptors. This presentation introduces a scalable tactile sensor design inspired by this biological feature. The tactile sensor injects electrical current into a pair of electrodes and measures the corresponding electrical potentials formed around the current pathway, which can be considered a receptive field. As a demonstration, a fabric-based tactile sensor with only 24 electrodes over an area of 200 mm x 200 mm is developed. The sensor localizes point contact with an error of 8.13 mm, while its minimum two-point discrimination distance is nearly 35 mm. This performance is comparable to that of the skin on the human stomach. Although the proposed approach is not yet comparable to biological skins in many respects, it shows remarkable potential as a whole-body tactile skin.
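The abstract does not specify the reconstruction algorithm, but the hyperacuity idea it appeals to, localizing a stimulus more finely than the sensor spacing by combining overlapping receptive fields, can be shown with a toy response-weighted centroid. The Gaussian receptive-field model, all names, and all parameter values below are illustrative assumptions, not the speaker's design.

```python
import math

def sensor_response(sensor_pos, contact_pos, sigma=20.0):
    """Gaussian receptive field: contacts closer to the sensing site
    excite it more strongly."""
    d2 = ((sensor_pos[0] - contact_pos[0]) ** 2 +
          (sensor_pos[1] - contact_pos[1]) ** 2)
    return math.exp(-d2 / (2.0 * sigma ** 2))

def localize(sensors, responses):
    """Sub-pitch localization: response-weighted centroid of the
    sensing sites, exploiting that several receptive fields overlap."""
    w = sum(responses)
    x = sum(r * s[0] for s, r in zip(sensors, responses)) / w
    y = sum(r * s[1] for s, r in zip(sensors, responses)) / w
    return (x, y)
```

With sensing sites on a 50 mm grid, a contact between sites is still localized to within a few millimeters, which is the qualitative effect the abstract describes.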
@ Tue, Jul 9, room 2.013, 15:00-16:00
TITLE: “Long-term motion trajectory prediction: methods and challenges”
With growing numbers of intelligent systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting the future positions of dynamic agents, and planning with such predictions in mind, are important tasks for intelligent vehicles, service robots and advanced visual surveillance systems. What makes this task challenging is that human motion is influenced by a large variety of factors, including the person’s intention; the presence, attributes and actions of other surrounding agents; and the geometry and semantics of the environment. In this talk, I will present our current results on surveying, analyzing and addressing the human motion prediction problem. The first part of the talk summarizes a comprehensive analysis of the literature, in which we categorize existing methods, propose a new taxonomy, discuss limitations of state-of-the-art approaches and outline the open challenges. In the second part of the talk, I will present our method for predicting the long-term motion of people in public spaces, which combines an MDP-based, globally optimal stochastic motion model with local interaction handling using Group Social Forces. Finally, I will outline some of our ongoing research.
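The MDP model and the group extension are not detailed in the abstract, but the social-force idea used for local interaction handling can be sketched in a few lines: each agent relaxes toward its preferred velocity while being repelled exponentially by nearby agents. This is a minimal textbook-style illustration under my own assumptions; the function name and all parameter values are hypothetical.

```python
import math

def social_force_step(pos, vel, goal, others, dt=0.1,
                      desired_speed=1.3, tau=0.5, a=2.0, b=0.3):
    """One Euler step of a simplified social-force model."""
    # attraction: relax the current velocity toward the goal direction
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(gx, gy) or 1e-9
    fx = (desired_speed * gx / dist - vel[0]) / tau
    fy = (desired_speed * gy / dist - vel[1]) / tau
    # repulsion: each neighbour pushes along the separation vector,
    # with magnitude decaying exponentially in the distance
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy) or 1e-9
        mag = a * math.exp(-d / b)
        fx += mag * dx / d
        fy += mag * dy / d
    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel
```

Iterating this step yields short-term trajectories that bend around nearby pedestrians, which is the local interaction behavior the talk combines with a global motion model.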
@ Wed, Jun 19, room 2.013, 10:30-11:30
TITLE: Human Intelligence Assisted Robot Learning
Embedding learning ability in robotic systems is one of the long sought-after objectives of artificial intelligence research. Despite the recent advancements in hardware, large-scale machine learning algorithms and theoretical understanding of deep learning, it is still quite unrealistic to deploy an end-to-end learning agent in the wild, hoping it could learn everything from scratch. Instead, we identify the importance of imposing strong human knowledge on capable robotic systems. We verify our theories through analyses of sample efficiency and a robotic system that combines learning and planning. The new approaches integrate human-designed prior knowledge seamlessly with statistical machine learning methods to achieve state-of-the-art performance on complex long-horizon robot manipulation tasks.
|Andrea Del Prete
@ Tue, May 28, room 2.013, 15:00-16:00
TITLE: Motion control for legged robots: robustness, viability and hardware design
Webpage: https://andreadelprete.github.io/
Abstract:
In the last 10 years, optimization-based control (e.g., trajectory optimization) has become the most common technique for motion control of legged robots. Despite their remarkable ability to account for the constraints of these systems, current optimization-based techniques are still brittle and require a lot of tuning to work on real robots. I believe that to overcome these issues we should look at robustness, viability, and hardware design. In this presentation I will present my recent work on these subjects and discuss what I plan to do in the coming years.
@ Tue, May 14, room 2.013, 15:00-16:00
TITLE: Learning-based Control
Webpage: http://ics.is.mpg.de
Abstract:
Modern technology allows us to collect, process, and share more data than ever before. This data revolution opens up new ways to combine feedback control with machine learning and data technology, and thus lay an algorithmic foundation for future intelligent systems acting autonomously in the physical world. Starting from a discussion of the special challenges in learning-based control, I will present some of our recent research in this area. In particular, I plan to talk about (i) event-triggered learning as a new concept to systematically decide “when to learn” in resource-limited settings, (ii) learning of approximate model predictive controllers with guarantees, and (iii) novel methods for fast and reliable control over multi-hop wireless networks. Whenever possible, I will illustrate the developed theory through experimental results on hardware.
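The abstract only names event-triggered learning; the sketch below illustrates the general idea under my own assumptions (the function name, the moving-average trigger, and the threshold rule are hypothetical): relearn only when recent prediction errors indicate the current model is no longer accurate, so learning happens on demand in resource-limited settings.

```python
def event_triggered_learning(stream, predict, refit, threshold, window=20):
    """Monitor prediction errors on a data stream; trigger model
    (re)learning only when the recent average error exceeds a threshold.
    Returns the number of times learning was triggered."""
    errors, triggers = [], 0
    for x, y in stream:
        errors.append(abs(predict(x) - y))
        recent = errors[-window:]
        if len(recent) == window and sum(recent) / window > threshold:
            refit()            # relearn the model from fresh data
            errors.clear()     # restart error monitoring
            triggers += 1
    return triggers
```

If the process is stationary the trigger never fires and no learning resources are spent; after a change in the process, exactly one relearning event restores accuracy.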
@ Tue, Apr 23, room 2.013, 15:00-16:00
TITLE: “Probabilistic Modeling for Sequential Decision Making under Uncertainty Problems”
Webpage: https://vienngo.github.io/index.html
Abstract:
In this talk, I will present my recent research results on planning and learning under uncertainty for sequential decision-making problems. In particular, I will show how to tackle both the curse of dimensionality and the curse of history. To address these issues in planning under uncertainty, my research resorts to three principled techniques that i) realize and integrate temporal abstraction to scale up planning, ii) use Monte-Carlo simulations for complex and intractable computations, and iii) exploit model-based trajectory optimization to deal with smooth dynamics and differential constraints in (non-linear) dynamical systems. In the second part of the talk, I will present my recent results on model-based learning. In particular, I will describe how to achieve better data efficiency and generalization by integrating future predictions and using Bayesian optimization.
@ Tue, Apr 16, room 2.013, 15:00-16:00
TITLE: “Safe Active Learning”
Webpage: https://www.ias.informatik.tu-darmstadt.de/Member/DuyNguyen-Tuong
Abstract:
In this study, we consider the problem of actively learning time-series models while taking given safety constraints into account. For time-series modeling we employ a Gaussian process with a nonlinear exogenous input structure. The proposed approach generates data appropriate for time series model learning, i.e. input and output trajectories, by dynamically exploring the input space. The approach parameterizes the input trajectory as consecutive trajectory sections, which are determined stepwise given safety requirements and past observations. The results show the effectiveness of our approach in a realistic, industrial setting.
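The talk's method uses a Gaussian process with a nonlinear exogenous input structure; the toy sketch below only captures the selection logic of safe active learning, substituting distance-to-data for GP predictive variance. All names, the novelty proxy, and the probabilistic safety threshold are my assumptions, not the authors' algorithm.

```python
def safe_active_selection(candidates, observed, safe_prob, alpha=0.95):
    """Pick the most informative candidate input that is predicted safe.

    Informativeness is approximated by the distance to previously
    observed inputs (a stand-in for GP predictive variance); a candidate
    qualifies only if its estimated probability of satisfying the safety
    constraint is at least alpha."""
    def novelty(x):
        return min((abs(x - o) for o in observed), default=float("inf"))
    safe = [x for x in candidates if safe_prob(x) >= alpha]
    if not safe:
        return None      # no candidate is confidently safe: stay put
    return max(safe, key=novelty)
```

The key structural point mirrors the abstract: exploration is driven toward unvisited regions of the input space, but only within the subset certified safe by the model.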
@ Tue, Apr 09, room 2.013, 15:00-16:00
TITLE: “A Sober Look at Neural Network Initializations”
Webpage: https://www.isa.uni-stuttgart.de/institut/team/Steinwart-00002/
Abstract:
Initializing the weights and the biases is a key part of the training process of a neural network. Unlike the subsequent optimization phase, however, the initialization phase has gained only limited attention in the literature. In the first part of the talk, I will discuss some consequences of commonly used initialization strategies for vanilla DNNs with ReLU activations. Based on these insights I will then introduce an alternative initialization strategy, and finally, I will present some large scale experiments assessing the quality of the new initialization strategy.
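As background for the commonly used strategies the talk examines, here is a standard ReLU-aware scheme (He/Kaiming initialization) in a minimal form; the talk's own alternative strategy is not described in the abstract, and the function name below is illustrative.

```python
import math
import random

def he_init(fan_in, fan_out, rng=random):
    """He/Kaiming initialization for a ReLU layer.

    Weights are drawn from N(0, 2/fan_in): since a ReLU zeroes half of a
    zero-mean input's variance, the factor 2 keeps the forward-activation
    variance roughly constant from layer to layer."""
    std = math.sqrt(2.0 / fan_in)
    return [[rng.gauss(0.0, std) for _ in range(fan_in)]
            for _ in range(fan_out)]
```

Biases are commonly initialized to zero under this scheme; one theme of the talk is what consequences such default choices have for vanilla ReLU DNNs.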
@ Wed, April 3, room 2.013, 13:00-14:00
TITLE: “Neural Network Implementations of the Probabilistic Inference Tasks Planning, MPC and Online Adaptation”
Webpage: https://rob.ai-lab.science
Abstract:
The challenges in controlling anthropomorphic robots, understanding human motor control, and in brain-machine interfaces are currently converging. Modern anthropomorphic robots with their compliant actuators and various types of sensors (e.g., depth and vision cameras, tactile fingertips, full-body skin, proprioception) have reached the perceptuomotor complexity faced in human motor control and learning. While outstanding robotic and prosthetic devices exist, current algorithms for autonomous systems and robot learning methods have not yet reached the required autonomy and performance needed to enter daily life.
In my talk, I discuss state-of-the-art probabilistic inference implementations in neural models that can be used to plan and predict complex motions of humans and robots. The models can handle partially observable or missing data and are robust to sensor noise, which is demonstrated in challenging robotic planning studies. In robot motion planning experiments, I demonstrate how motion trajectories can be trained through reinforcement learning and how state transition models can be learned through kinesthetic teaching. The model can adapt in real time, within a few seconds, to obstacles or changes in the dynamical system through intrinsic-motivation learning, and it enables powerful model-predictive control implementations in real robots.
@ Tue, Mar 12, room V38.03, 14:00-15:00
TITLE: “Animal locomotion biomechanics and neuromuscular control – how can legged robots help gaining insights into nature”
Webpage: https://www.is.mpg.de/person/sprowitz
Abstract:
The underlying principles of animal legged locomotion are only sparsely understood. Interdisciplinary research indicates the existence of common mechanical and neurocontrol mechanisms in very different animals. Biomechanical examples are found in the morphological design of mammalian legs, such as leg segmentation ratios, pantographic leg structures, multiarticulate muscle-tendon units, and compliant muscle-tendon structures. Neuromuscular control blueprints include, for example, pattern generators responsible for gait rhythm generation.
We assume that such blueprints could have evolved to counter performance and mechanical limitations of biological tissue and to simplify and manage ‘online’ locomotion control in animals.
We implement biomechanical and control blueprints in legged robots. For example, the Cheetah-cub robot is the first quadruped robot between 0.5 kg and 30 kg to reach a dynamic speed of Froude number 1.3 while freely trotting under feed-forward control. We apply bioinspired robot and controller designs to produce rich and biomechanically relevant locomotion data. These robot experiments help us analyze robotic and biological legged systems, also by comparing data. We will discuss examples from biology and biomechanics, in combination with bioinspired robot and controller designs, indicating the existence of dynamic legged locomotion modes which can rely on feed-forward control patterns.
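For readers unfamiliar with the dimensionless speed mentioned above: under Alexander's common convention, the Froude number is Fr = v^2 / (g * l) for speed v and leg (hip) height l, though some authors use the square root of this quantity. A small helper makes the scaling explicit; the function name and example values are illustrative, and the Cheetah-cub's actual leg length is not given in the abstract.

```python
def froude_number(speed, leg_length, g=9.81):
    """Dimensionless speed for comparing locomotion across scales:
    Fr = v^2 / (g * l), with v in m/s and l in m."""
    return speed ** 2 / (g * leg_length)
```

Because Fr is dimensionless, a small robot and a large animal moving at the same Froude number are dynamically similar, which is what makes it a fair speed metric for a 1 kg-scale robot.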
@ Tue, Feb 26, room 2.013, 16:00-17:00
TITLE: “Gaits and Natural Dynamics in Robotic Legged Locomotion”
Webpage: https://www.inm.uni-stuttgart.de/en/research_robotics_and_locomotion/index.html
Abstract:
In my research, I seek to systematically exploit mechanical dynamics to make future robots faster, more efficient, and more agile than today’s kinematically controlled systems. Drawing inspiration from biology and biomechanics, I design and control robots whose motion emerges in great part passively from the interaction of inertia, gravity, and elastic oscillations. Energy is stored and returned periodically in springs and other dynamic elements, and continuous motion is merely initiated and shaped through the active actuator inputs. In this context, I am particularly interested in questions of gait selection. Should a legged robot use different gaits at different desired speeds? If so, what constitutes these gaits, what causes their existence, and how do they relate to gaits observed in biology?
I study these questions in conceptual models, in hardware implementations, and through biomechanical experiments. In the long term, my research will allow the development of systems that reach and even exceed the agility of humans and animals. It will enable us to build autonomous robots that can run as fast as a cheetah, with the endurance of a husky, while mastering the same terrain as a mountain goat. And it will provide us with novel designs for prosthetics, orthotics, and active exoskeletons that help restore the locomotion skills of the disabled and can be used as training and rehabilitation devices for the injured.
@ Thu, Jan 17, room 2.013, 14:00-15:00
TITLE: “Direct Methods for Vision-Based Perception, Control and Learning”
Webpage: https://www.is.mpg.de/person/jstueckler
Abstract:
Intelligent robots require physical scene understanding in order to purposefully act in their environment. In this talk, I will present our recent work on direct and self-supervised learning approaches that achieve 3D simultaneous localization and mapping, semantics and monocular depth estimation. I will discuss the importance of embodiment in physical agents for visual perception and outline current research of my group towards learning vision-based control and planning in the physical world.
Organization: Jim Mainprice & Marc Toussaint
Machine Learning & Robotics Lab, U Stuttgart