Projects

Phase 2 Projects

Autonomous and efficiently scalable deep learning


Project leaders: Jörg Lücke (Oldenburg)

Researchers: Abdul-Saboor Sheikh (Berlin), Dennis Forster (Oldenburg)

Administration:

Associates:

Summary:

The nervous systems of humans and animals are equipped with sophisticated neural circuits for the processing of sensory information. Sound waveforms or light are received by biological receptors and are translated into electro-chemical neural signals. Successive stages of neural processing extract increasingly high-level information from low-level sensory signals. Machine Learning research seeks to replicate the far-reaching functional capabilities of such biological information processing.

In recent years, processing and learning across different processing stages has been a major research focus in the field. Starting with observed data as input (e.g., sound or image data), each processing stage produces an output that the next stage receives as an input, until the last stage produces a meaningful high-level output. Usually this output is a classification signal for the given input. The transfer of information through the processing stages is parameterized, and these parameters are modified to optimize the performance of the system in a given task. Algorithms for parameter optimization are referred to as learning algorithms, and their application to several stages of processing is referred to as Deep Learning.
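The staged processing described above can be sketched in a few lines of code. This is a minimal illustrative example, not the project's actual model: each stage is a parameterized map whose output feeds the next stage, and the final stage emits a classification signal. All layer sizes and the random weight initialization here are hypothetical.

```python
# Minimal sketch of stacked processing stages (illustrative only).
import math
import random

random.seed(0)

def stage(weights, biases, x):
    """One parameterized processing stage: affine map followed by tanh."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def softmax(z):
    """Final stage: turn scores into a classification signal (probabilities)."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [v / total for v in exps]

def forward(stages, x):
    """Pass the observed data through all stages in sequence."""
    for weights, biases in stages:
        x = stage(weights, biases, x)
    return softmax(x)

# Hypothetical architecture: 4-dim input, two hidden stages, 3-class output.
# A learning algorithm would adjust these free parameters to optimize
# performance on a given task; here they are only randomly initialized.
sizes = [4, 8, 8, 3]
stages = [([[random.gauss(0, 0.5) for _ in range(n_in)] for _ in range(n_out)],
           [0.0] * n_out)
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

probs = forward(stages, [0.2, -0.1, 0.4, 0.0])
print(probs)  # three class probabilities summing to 1
```

The point of the sketch is the data flow, not the specific nonlinearity: each stage's output is the next stage's input, and only the final output is task-meaningful.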

Deep Learning approaches have attracted much attention in the Machine Learning community and beyond because of their many successes in different domains. In computer hearing, for instance, deep learning now represents the state of the art in tasks such as speech recognition (Hinton et al., 2012) or speech segregation (Hinton et al., 2006). Similarly, and perhaps even more prominently, deep learning has become the standard and state-of-the-art method for tasks such as image classification. The successful applicability to different domains demonstrates the approach’s generality, which is further underlined by successful applications to general pattern recognition tasks such as hand-written digit or character recognition.

The reduction of free parameters, i.e., the increase of autonomy, is not only an important goal towards more neurally plausible learning but is also crucial from the perspective of functional performance and computational complexity. As overfitting effects increase with the size of Deep Learning architectures, autonomy is also one of the key points to address for the efficient scalability of Deep Learning approaches.
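To make the scaling concern concrete, the number of free parameters in a fully connected architecture can be counted directly. The layer widths below are hypothetical examples, chosen only to show how quickly the parameter count, and with it the risk of overfitting, grows with depth and width.

```python
# Illustrative free-parameter count for fully connected stacks
# (hypothetical layer widths, not the project's architectures).
def num_free_parameters(layer_sizes):
    """Weights plus biases for each pair of adjacent layers."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

small = num_free_parameters([784, 100, 10])         # shallow and narrow
large = num_free_parameters([784, 1000, 1000, 10])  # deeper and wider

print(small)  # 79510
print(large)  # 1796010
```

Even this modest widening and deepening multiplies the free-parameter count by more than twenty, which illustrates why reducing free parameters matters for efficient scalability.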

Phase 1 Projects

Autonomous and efficiently scalable deep learning


Project leaders: Jörg Lücke (Oldenburg)

Researchers: Abdul-Saboor Sheikh (Berlin), Dennis Forster (Oldenburg)

Administration:

Associates:

Summary:

The nervous systems of humans and animals are equipped with sophisticated neural circuits for the processing of sensory information. Sound waveforms or light are received by biological receptors and are translated into electro-chemical neural signals. Different stages of neural processing are able to extract increasing high-level information from low-level sensory signals. Machine Learning research seeks to replicate the far-reaching functional capabilities of such biological information processing.

In recent years, processing and learning across different processing stages has been a major research focus in the fields. Starting with observed data as input (e.g., sound or image data) each processing stage produces an output that the next stage receives as an input, until the last stage produces a meaningful high-level output. Usually this output is a classification signal for the given input. The transfer of information through the processing stages is parametrized and these parameters are modified to optimize the performance of the system in a given task. Algorithms for parameter optimization are referred to as learning algorithms and their application to several stages of processing is referred to as Deep Learning.

Deep Learning approaches have attracted much attention in the Machine Learning community and beyond because of their many successes in different domains. In computer hearing, for instance, deep learning now represents the state-of-the-art in tasks such as speech recognition (Hinton et al., 2012) or speech segregation (Hinton et al., 2006). Similarly, and probably still more saliently, deep learning became the standard and state-of-the-art method for tasks such as image classification. The successful applicability to different domains demonstrates the approach’s generality, which is further underlined by successful applications to general pattern recognition tasks such as hand-written digit or character recognition.

The reduction of free parameters, i.e. the increase of autonomy, is not only an important goal towards more neurally plausible learning but also crucial from the perspective
of functional performance and computational complexity. As over fitting effects increase
with the size of Deep Learning architectures, autonomy is also one of the key points to address for efficient scalability of Deep Learning approaches.