Michael J. Black, Max Planck Institute for Intelligent Systems, Tübingen, Germany

Talk title: 
Modernizing Muybridge: From 3D Models of the Body to Decoding the Brain
Abstract:
In the late 1800s a revolution started. Photography allowed the capture and study of human and animal motion. At the same time, electrical signals were recorded from the surfaces of living brains. Today, modern computer vision and neuroscience are coming together to reveal clues as to how the brain controls the complex movements of our bodies. I will review recent work on video-based estimation of human body shape and motion that uses statistical models of 3D body shape learned from thousands of laser range scans of the human body.
I will also describe how markerless motion capture is leading to a new understanding of the neural control of natural movement. Building on these insights, it is now possible to restore or improve lost function in people with central nervous system injury by directly coupling brains with computers. I will summarize our recent work on developing brain-machine interfaces that allow paralyzed individuals to control movement in the world with only their thoughts.
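
As a concrete sketch of the kind of statistical shape model referred to above (the abstract does not specify the formulation; this is a generic PCA-style construction), 3D body shape learned from registered laser scans is commonly expressed as a low-dimensional linear space over mesh vertices:

\[
V(\beta) = \bar{T} + \sum_{k=1}^{K} \beta_k S_k ,
\]

where \(\bar{T}\) is a mean template mesh, the \(S_k\) are principal directions of shape variation estimated from the scan corpus, and the coefficients \(\beta\) describing an individual's shape are fit to image or video evidence during estimation.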

Daniel Braun, Max Planck Institute for Biological Cybernetics, Tübingen, Germany

Talk title: 
Risk-sensitivity in motor control
Abstract:
Recent advances in theoretical neuroscience suggest that motor control can be considered as a continuous decision-making process in which uncertainty plays a key role. Decision-makers can be risk-sensitive with respect to this uncertainty in that they may not only consider the average payoff of an outcome, but also consider the variability of the payoffs. Although such risk-sensitivity is a well-established phenomenon in psychology and economics, it has been much less studied in motor control. In fact, leading theories of motor control, such as optimal feedback control, assume that motor behaviors can be explained as the optimization of a given expected payoff or cost. Here we discuss evidence that humans exhibit risk-sensitivity in their motor behaviors, thereby demonstrating sensitivity to the variability of “motor costs.” Furthermore, we discuss how risk-sensitivity can be incorporated into optimal feedback control models of motor control. We conclude that risk-sensitivity is an important concept in understanding individual motor behavior under uncertainty. 
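
One standard way to formalize the risk-sensitivity described above (the abstract does not commit to a particular formulation; this is the classical exponential-of-cost criterion from risk-sensitive control) is

\[
J_\theta(\pi) \;=\; \tfrac{1}{\theta}\,\log \mathbb{E}_\pi\!\left[e^{\theta C}\right] \;\approx\; \mathbb{E}_\pi[C] \;+\; \tfrac{\theta}{2}\,\mathrm{Var}_\pi[C],
\]

where \(C\) is the accumulated motor cost and \(\theta\) indexes the risk attitude: \(\theta>0\) penalizes variability (risk-averse), \(\theta<0\) rewards it (risk-seeking), and \(\theta\to 0\) recovers the expected-cost objective of standard optimal feedback control. The second-order expansion makes explicit how the variability of "motor costs" enters alongside their mean.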

Opher Donchin, Ben-Gurion University of the Negev, Be’er Sheva, Israel

Talk title:
The cerebellum and models of control
Abstract:
I will explore the origins of the "canonical model" of cerebellar function, with a specific focus on the idea that the cerebellum plays a role in internal modeling. I will explore how this idea can influence our understanding of the way the cerebellum contributes to our ability to perform controlled behaviors. I will specifically address two questions. First, how can we use what we know about the anatomy of neural connections to make sense of conflicting evidence for a cerebellar role in forward and inverse modeling? Second, how can we use our ideas of cerebellar function and its role in control loops to model a simple motor system such as the compensatory eye movement (CEM) system? In the second part, I will present a model of the CEM system that successfully reproduces a large quantity of behavioral and electrophysiological data. My conclusion from this exercise will be that good models are ones that we don't take too seriously. If we view the model and the data with the proper skepticism, then we can leverage the "canonical model" of cerebellar function to provide great insight into the real meaning of the constraints imposed by anatomy and data.
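
For readers who want the forward/inverse distinction pinned down, the two candidate cerebellar computations can be written schematically (notation mine, not from the talk) as

\[
\text{forward model:}\quad \hat{x}_{t+1} = f(x_t, u_t), \qquad
\text{inverse model:}\quad u_t = g(x_t, x^{*}_{t+1}),
\]

i.e., a forward model predicts the sensory consequences of a motor command \(u_t\) given the current state estimate \(x_t\), while an inverse model produces the command required to drive the plant toward a desired state \(x^{*}_{t+1}\). The two roles make different predictions about what cerebellar output should correlate with.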

Dominik Endres, Eberhard Karls University of Tübingen, Tübingen, Germany

Talk title: 
Exploring Semantic Structure of the Neural Code with Formal Concept Analysis
Abstract:
Unravelling the neural basis of perception hinges on an understanding of the neural code. In the sensory system, the neural code defines what pattern of neural activity corresponds to a represented information item, e.g. a visual stimulus. Neural decoding is the attempt to reconstruct the stimulus from the observed pattern of activation.
We propose a novel application of Formal Concept Analysis (FCA) to neural decoding: instead of just trying to figure out which stimulus was presented, we demonstrate how to explore the semantic relationships in the neural representation of large sets of stimuli. FCA provides a way of displaying and interpreting such relationships via concept lattices. We analyse neurophysiological data from high-level visual cortical area STSa, using a Bayesian approach to construct the formal context needed by FCA. 
Prominent features of the resulting concept lattices are discussed, including hierarchical face representation and indications of a product-of-experts code in real neurons. The robustness of these features is illustrated by studying the effects of conceptual scaling: increasing the resolution of the feature (or attribute, in FCA terms) computed from the neural response could distort the semantic relationships. We find that the main semantic components (i.e. ordering relationships between stimuli) are preserved under scaling.
Furthermore, we apply FCA to fMRI BOLD signals recorded from a human subject while viewing realistic images (animals, tools, vehicles, etc.). While the achievable resolution is lower than in neurophysiological data, FCA discovers basic semantic relationships here, too.
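
To make the FCA terminology concrete, here is a minimal, self-contained sketch (toy data invented for illustration, not the STSa recordings): a formal context relates objects (stimuli) to binary attributes (e.g., thresholded neural responses), and a formal concept is a pair of an object set and an attribute set that are each other's images under the derivation operators.

    from itertools import combinations

    # Toy formal context: stimuli (objects) x binarized unit responses (attributes).
    context = {
        "face_A": {"unit1", "unit2"},
        "face_B": {"unit1", "unit3"},
        "object": {"unit3"},
    }
    attributes = set().union(*context.values())

    def extent(B):
        """Objects possessing all attributes in B."""
        return {g for g, attrs in context.items() if B <= attrs}

    def intent(A):
        """Attributes shared by all objects in A."""
        return set.intersection(*(context[g] for g in A)) if A else set(attributes)

    # Enumerate formal concepts (A, B) with A = extent(B) and B = intent(A) by
    # closing every attribute subset. Exponential, but fine for toy contexts.
    concepts = set()
    for r in range(len(attributes) + 1):
        for B in combinations(sorted(attributes), r):
            A = extent(set(B))
            concepts.add((frozenset(A), frozenset(intent(A))))

    for A, B in sorted(concepts, key=lambda c: len(c[0])):
        print(sorted(A), "<->", sorted(B))

The set of all such concepts, ordered by inclusion of their extents, is the concept lattice discussed in the abstract.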

Adrienne Fairhall, University of Washington, Washington, USA - Bernstein lecture

Talk title: 
Optimal timescales of adaptation
Abstract:
Neural systems can adjust their gain to better encode the statistics of the environment. A range of processes over several timescales contributes to this adjustment. Under certain conditions, the ability of a neural system to adjust its coding strategy to a stimulus's time-varying variance can be implemented at the level of single neurons. Developing cortical neurons attain these conditions over the course of their first week. The idea that neurons track time-varying statistics also implies that the timescales of adaptation may be limited by the time required for inference. In the retina, optimal inference can reproduce the phenomenology of adaptive dynamics and makes new predictions for experiments.
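
One way to make the "time required for inference" concrete (a textbook calculation, not necessarily the one used in the talk): for zero-mean Gaussian inputs with slowly varying variance, the posterior after \(t\) samples is

\[
p(\sigma^2 \mid x_{1:t}) \;\propto\; p(\sigma^2)\,(\sigma^2)^{-t/2}\exp\!\Big(-\tfrac{1}{2\sigma^2}\sum_{i=1}^{t} x_i^2\Big),
\]

whose relative width shrinks like \(1/\sqrt{t}\). An adaptation mechanism that tracks stimulus variance therefore cannot reliably commit to a new gain faster than this posterior sharpens, linking optimal inference to the observed timescales of adaptation.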

Peter Földiak, University of St. Andrews, St. Andrews, UK

Talk title: 
Explicit coding and categories
Abstract:
A code, such as the neural code, is a mapping of items to codewords. The neural codewords, i.e. the neural activity patterns, have interesting properties, such as density/sparseness and the breadth of tuning of the individual neurons. These 'internal' properties have important implications in themselves, such as for the storage capacity of an associative network receiving such input. However, the 'semantic' aspects of the code, which concern the connection between codewords and the things in the world to which they refer, are at least as important. Such semantic aspects include the explicitness of the code (i.e. whether neurons divide the world into meaningful subsets), selectivity, invariance, and categorisation. An important goal of sensory processing is to form meaningful categories, and such categories should be related to the semantic properties of the neural code itself. I will discuss a simple hypothesis about the way in which this could be achieved by the overlaps of the codewords alone, mapping an arbitrary semantic net into a code and vice versa. Models of visual invariance learning can also be interpreted in this framework.
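
A toy sketch of the overlap hypothesis (codewords invented for illustration): if the set of neurons asserting one item strictly contains the set asserting another, the first item is a more specific instance of the second, so subset structure among codewords alone encodes an IS-A hierarchy.

    # Items map to sets of active neurons (hypothetical explicit codewords).
    code = {
        "animal": {"n1"},
        "bird":   {"n1", "n2"},
        "robin":  {"n1", "n2", "n3"},
        "tool":   {"n4"},
    }

    # Every neuron asserting "bird" also fires for "robin", so codeword
    # overlap alone yields: robin IS-A bird IS-A animal.
    for x, cx in code.items():
        for y, cy in code.items():
            if x != y and cy < cx:  # strict subset of active neurons
                print(f"{x} IS-A {y}")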

Moritz Grosse-Wentrup, Max Planck Institute for Intelligent Systems, Tübingen, Germany

Talk title:
What are the Neurophysiological Causes of Performance Variations in Brain-Computer Interfacing?
Abstract:
When a subject operates a non-invasive brain-computer interface (BCI), the system correctly infers the subject's intention in some trials, yet fails to make the right decision in other trials. As the algorithm used to decode brain signals is typically fixed, the reason for this variation in performance has to be sought in the subject's brain states. In this talk, I argue that distributed gamma-range oscillations play a major role in determining BCI performance. In particular, I present empirical evidence that gamma-range oscillations modulate the sensorimotor rhythm [1] and may be used to predict BCI performance on a trial-to-trial basis [2]. I further present preliminary evidence that feedback of fronto-parietal gamma-range oscillations may be used to induce a state of mind beneficial for operating a BCI [3].
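
A minimal sketch of the kind of trial-wise analysis behind [2] (sampling rate, band limits, and data here are illustrative placeholders, not the paper's):

    import numpy as np
    from scipy.signal import welch

    fs = 500.0                                   # sampling rate in Hz (assumed)
    trials = np.random.randn(200, int(2 * fs))   # 200 trials x 2 s EEG (fake data)
    correct = np.random.rand(200) > 0.3          # per-trial BCI outcome (fake data)

    # Per-trial log gamma-band power via Welch's method.
    f, pxx = welch(trials, fs=fs, nperseg=256, axis=-1)
    band = (f >= 55) & (f <= 85)                 # gamma band (assumed limits)
    gamma_power = np.log(pxx[:, band].mean(axis=-1))

    # Point-biserial correlation between gamma power and trial outcome.
    r = np.corrcoef(gamma_power, correct.astype(float))[0, 1]
    print(f"trial-wise gamma power vs. outcome: r = {r:.3f}")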

References:
1. Grosse-Wentrup, M., B. Schölkopf and J. Hill. Causal Influence of Gamma Oscillations on the Sensorimotor Rhythm. NeuroImage 56(2), pp. 837-842, 2011.
2. Grosse-Wentrup, M., Fronto-Parietal Gamma-Oscillations are a Cause of Performance Variation in Brain-Computer Interfacing. Proceedings of the 5th International IEEE EMBS Conference on Neural Engineering (NER 2011), pp. 384-387, 2011.
3. Grosse-Wentrup, M. Neuro-Feedback of Fronto-Parietal Gamma-Oscillations. 5th International BCI Conference, Graz, Austria, 2011.

Konrad Körding, Northwestern University, Chicago, Illinois, USA

Talk title: 
Generalization of Uncertainty
Abstract:
A wide range of studies has demonstrated the ability of the nervous system to take into account the uncertainty associated with state and feedback in line with the predictions of Bayesian statistics. However, we never find ourselves in exactly the same situation twice, making it necessary to generalize uncertainty from past experiences to the current situation. We extended movement generalization experiments to ask how uncertainty is generalized. For typically studied situations we find that the results are well predicted by models that spatially generalize probability distributions. However, we can construct experimental situations where the generalization becomes truly surprising, revealing evidence for supervised learning mechanisms underlying Bayesian computations.
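
One way to write down "spatial generalization of probability distributions" (a schematic kernel model of my own, not necessarily the one tested): the distribution of errors \(\epsilon\) expected at a novel location \(x\) is a similarity-weighted blend of the distributions experienced at visited locations \(x_i\),

\[
\hat{p}_x(\epsilon) \;\propto\; \sum_i k(x, x_i)\, p_{x_i}(\epsilon), \qquad k(x,x') = e^{-\|x-x'\|^2/2\lambda^2},
\]

so that the width \(\lambda\) of the generalization kernel determines how far learned uncertainty transfers. The surprising cases mentioned above are those where behavior departs from any such purely spatial blending.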

Peter Latham, University College London, London, UK

Talk title:
Probabilistic inference of odors from the activity of odorant receptor neurons
Abstract:
Inferring what odors are in the air is a hard problem, for at least two reasons: the number of odorant receptor neurons (the first neurons in the olfactory pathway) is smaller than the number of possible odors, and there can be more than one odor at any time. Consequently, even if there is a simple mapping from odors to odorant receptor neuron activity, that mapping cannot be uniquely inverted. Presumably, the brain solves this problem by computing the probability that any particular odor is present. We present a biologically plausible model of how the olfactory system might do this, and discuss how it maps onto the architecture of the olfactory bulb and cortex.
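
Schematically (assuming a linear odor-to-receptor mapping for concreteness; the talk's model may differ in detail), the computation is a posterior over concentrations \(c \ge 0\) of the possible odors given receptor activity \(r\):

\[
p(c \mid r) \;\propto\; p(c)\,\prod_j \mathcal{N}\!\big(r_j;\,(Wc)_j,\,\sigma^2\big), \qquad
p(c) = \prod_i \big[(1-\pi)\,\delta(c_i) + \pi\, f(c_i)\big],
\]

where \(W\) maps odor concentrations to receptor activations and the spike-and-slab prior encodes that only a few of the many possible odors are present at once. Because there are fewer receptor types than odors, \(W\) is wide and the likelihood alone cannot be inverted; the sparse prior is what makes the inference well posed.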

Jörg Lücke, Frankfurt Institute for Advanced Studies, Goethe Universität Frankfurt, Germany

Talk title: 
Non-Linear Components in Sensory Data and the Requirement for Bayesian Inference and Learning
Abstract:
In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data are better modeled by non-linear forms of component superposition. An example for visual data is occlusion, but non-linearities are also well known for other sensory stimuli, e.g., spectrogram data as processed by the auditory cortex.
To study the effect of non-linearities for the example of low-level visual encoding, we contrast a standard linear sparse coding model with a strongly non-linear version (maximal causes analysis, MCA). In applications to image patches, the V1 response properties predicted by the two models are compared, and their consistency with neurophysiological data is discussed.
Inference and learning in non-linear models are challenging because of the inherent multi-modality of the resulting posterior representations. Non-linear models therefore provide direct evidence for the necessity of Bayesian inference already at the sensory level. Furthermore, for sensory coding, posterior representations have to be encoded efficiently over very high-dimensional latent spaces. We show how such representations can be obtained by combining fast feed-forward processing with recurrent Bayesian inference. While both mechanisms have served as alternative models of neural processing, we demonstrate that their combination achieves the efficiency and accuracy required for non-linear stimulus encoding.
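
The linear/non-linear contrast can be stated in one line (the standard formulation of MCA; notation illustrative): for hidden causes \(s_h\) with dictionary elements \(W_{dh}\), linear sparse coding assumes

\[
\mathbb{E}[y_d \mid s] = \sum_h W_{dh}\, s_h,
\qquad\text{whereas MCA assumes}\qquad
\mathbb{E}[y_d \mid s] = \max_h W_{dh}\, s_h,
\]

which matches occlusion: each observed pixel reflects the single dominant cause in front of it, not the sum of all causes behind it.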

Laurence Maloney, New York University, New York, USA

Talk title: 
Mixing memory and desire: decision-theoretic approaches to modeling perception and action
Abstract:
Bayesian decision theory (BDT) is a method for computing optimal decision rules. It is the mathematical framework for modeling economic decision making under risk, and it is also an appropriate framework for modeling how organisms compensate for their motor uncertainty in planning movement. I'll first describe recent experiments that explore how human subjects plan movements in tasks where good performance requires that the subject take into account his own temporal motor uncertainty. Subjects' performance in these experiments was typically close to the performance that would maximize expected gain as predicted by BDT. In contrast, performance in analogous tasks involving the planning of eye movements was far from optimal.
These tasks are mathematically equivalent to decision making under risk, and subjects in economic decision-making experiments typically fail to maximize expected gain. In particular, they show characteristic distortions of probability information, exaggerating small probabilities. I'll describe experiments that allow direct comparison of decision making under risk and planning of movement in equivalent tasks. We find that probability information is distorted in both decision making and movement planning, but the patterns of distortion are very different in the two kinds of tasks. I'll discuss the implications of these differences for modeling how the nervous system compensates for uncertainty in perception, action, and cognition.
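
The optimality benchmark used throughout is maximization of expected gain: for a movement strategy \(s\) with possible outcomes \(R_i\) worth \(G_i\),

\[
\mathrm{EG}(s) = \sum_i G_i\, P(R_i \mid s),
\]

and the probability distortions found in both domains are commonly summarized by a weighting function \(w(p)\) replacing \(p\), e.g. the one-parameter form \(w(p) = p^\gamma / \big(p^\gamma + (1-p)^\gamma\big)^{1/\gamma}\) (a standard choice in the decision-making literature, not necessarily the parameterization used in these studies), which for \(\gamma < 1\) overweights small probabilities and underweights large ones.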

References
1. Dean, M., Wu, S.-W. & Maloney, L. T. (2007), Trading off speed and accuracy in rapid, goal directed movements, Journal of Vision, 7(5):10, 1-12.
2. Maloney, L. T. & Zhang, H. (2010), Decision-theoretic models of visual perception and action. Vision Research, 50, 2362-2374.
3. Morvan, C. & Maloney, L. T. (2011), Human visual search does not maximize the post-saccadic probability of identifying targets. PLoS Computational Biology, in press, 12/2/2011.
4. Trommershäuser, J., Maloney, L. T. & Landy, M. S. (2008), Decision making, movement planning and statistical decision theory. Trends in Cognitive Sciences, 12(8), 291-297.
5. Wu, S.-W., Delgado, M. & Maloney, L. T. (2009), Economic decision-making compared to an equivalent motor task, Proceedings of the National Academy of Sciences, USA, 106(15), 6088-6093.
6. Wu, S.-W., Delgado, M. R. & Maloney, L. T. (2011), The neural correlates of probability in decision making under risk and in an equivalent motor task. Journal of Neuroscience, 31, 8822-8831.
7. Zhang, H., Morvan, C. & Maloney, L. T. (2010), Gambling in the visual periphery: a conjoint-measurement analysis of human ability to judge visual uncertainty. PLoS Computational Biology, 6(12): e1001023, 1-10.

Uta Noppeney, Max Planck Institute for Biological Cybernetics, Tübingen, Germany

Talk title: 
Multisensory integration: From human behavior to neural systems
Abstract:
To interact effectively with our environment, the human brain integrates information from multiple senses. While multisensory integration was traditionally assumed to be deferred until later processing stages in higher-order association cortices, more recent studies have revealed multisensory integration even in putatively unisensory cortical areas. Given this multitude of multisensory integration sites, characterizing their functional similarities and differences is of critical importance. Combining functional imaging (fMRI), effective connectivity analyses and psychophysics in humans, our studies highlight three main aspects. First, the locus of multisensory integration depends on the type of information being integrated and the specific relationship between the auditory and visual signals. Second, in terms of functional brain architecture, effective connectivity analyses suggest that audiovisual interactions in low-level sensory areas are mediated by multiple mechanisms, including feedforward thalamocortical connections, direct connections between sensory areas, and top-down influences from higher-order association areas. Third, the regional response profile and activation patterns depend on the relative reliability of the unisensory signals. Paralleling behavioural indices of multisensory integration, multivariate pattern analyses revealed that multisensory integration increased the discriminability, and hence the reliability, of multisensory representations already at the primary cortical level. From the macroscopic perspective of regional BOLD signals, our data provide further evidence for ‘Bayesian-ish’ integration of signals from multiple senses.
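
The benchmark behind phrases like "relative reliability of the unisensory signals" is the standard maximum-likelihood integration rule for two Gaussian estimates (stated here for reference; the studies above test against this kind of model):

\[
\hat{S}_{AV} = w_A \hat{S}_A + w_V \hat{S}_V, \qquad w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \qquad
\sigma_{AV}^2 = \frac{\sigma_A^2 \sigma_V^2}{\sigma_A^2 + \sigma_V^2},
\]

so the fused estimate weights each sense by its reliability and is never less reliable than the better single sense.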

Jan Peters, TU Darmstadt, Darmstadt, Germany

Talk title: 
Towards Motor Skill Learning for Robotics
Abstract:
Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive science. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework for learning motor skills in robotics that is based on the principles behind many analytical robotics approaches. It involves representing motor skills by parameterized motor primitive policies, which act as building blocks of movement generation, and a learned task execution module that transforms these movements into motor commands. We discuss learning on three levels of abstraction: learning of accurate control is needed to execute movements, learning of motor primitives is needed to acquire simple movements, and learning of the task-dependent "hyperparameters" of these motor primitives allows complex tasks to be learned. We discuss task-appropriate approaches to imitation learning, model learning and reinforcement learning for robots with many degrees of freedom. Empirical evaluations on several robot systems illustrate the effectiveness of this approach and its applicability to learning control on an anthropomorphic robot arm.
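
The abstract does not name a specific primitive representation; a widely used choice in this line of work is the dynamical movement primitive (DMP), sketched here for orientation:

\[
\tau \dot{z} = \alpha_z\big(\beta_z (g - y) - z\big) + f(x; w), \qquad \tau \dot{y} = z,
\]

a stable point attractor that pulls the state \(y\) toward the goal \(g\), shaped by a learned forcing term \(f\) with parameters \(w\) and a phase variable \(x\). In this scheme, imitation learning fits \(w\) to a demonstration, reinforcement learning then refines \(w\), and the "hyperparameters" mentioned above correspond to quantities such as the goal \(g\) and the duration \(\tau\).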

Josh Tenenbaum, Massachusetts Institute of Technology, Massachusetts, USA

Talk title: 
Bayesian inference, scene understanding, and common sense
Abstract:
Over the last two decades we have made great progress in understanding learning, perception and other aspects of intelligence as statistical inference on a grand scale.  Statistical models underlie successful AI technologies that achieve human-like performance in limited domains, such as face detection, pedestrian detection, IBM's Watson system for playing Jeopardy!, or Apple's new Siri voice interface for the iPhone. Yet none of these AI technologies achieves anything like human "common sense", and the computational basis of people's common sense remains elusive.  Why? 
I will argue that statistical approaches to intelligence have paid insufficient attention to how human beings -- adults, children and even young infants -- parse the world in terms of a basic core of common-sense concepts: physical objects, intentional agents, and their interactions.  Abstract knowledge of how objects move, an "intuitive mechanics", and how agents plan and act, an "intuitive psychology", are crucial for how we understand visual scenes and talk about our experience in the world.  I will show how these abstract systems of knowledge, or intuitive theories, can be formalized as probabilistic programs: probabilistic generative models built from compositional systems of stochastic functions that can capture complexly unfolding causal processes.  These models can be conditioned on data (partial program outputs) to support approximate Bayesian inferences about program inputs, unobserved or latent variables, or future outputs. Relatively simple probabilistic programs can describe intuitive mechanics and intuitive psychology, giving surprisingly precise accounts of human judgments about dynamic scenes.  I will close with a challenge for researchers interested in the neural mechanisms of Bayesian inference: how to implement, approximate or emulate these models in brain-like circuits? 
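
To illustrate what conditioning a probabilistic program on partial outputs means in the simplest possible terms, here is a toy one-dimensional sketch (the scenario and numbers are invented, not from the talk): a generative program samples latent causes, produces an observable, and is conditioned by rejection sampling.

    import random

    def scene():
        mass = random.choice([1.0, 2.0, 4.0])          # latent: object mass
        push = random.gauss(3.0, 1.0)                  # latent: applied force
        distance = push / mass + random.gauss(0, 0.1)  # observable output
        return mass, distance

    # Condition on a partial output (observed distance near 1.0) by rejection
    # sampling, then read off the posterior over the latent mass.
    counts, n = {}, 0
    while n < 2000:
        mass, distance = scene()
        if abs(distance - 1.0) < 0.1:                  # approximate observation
            counts[mass] = counts.get(mass, 0) + 1
            n += 1
    print({m: c / n for m, c in sorted(counts.items())})

Even this crude inference concentrates the posterior on masses consistent with the short travel distance; the probabilistic-program formulation scales this pattern of reasoning up to rich dynamic scenes.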

Sponsored by the Federal Ministry of Education and Research