The Bernstein Center for Computational Neuroscience was founded in 2010 with generous funding provided by the German Federal Ministry of Education and Research through the Bernstein Initiative. The goal of this initial phase of the center was to establish lasting structures in computational neuroscience in Tübingen and to build a lively community bridging the gap between experimental neuroscience and machine learning.
The projects of this first phase aimed to uncover the neural mechanisms of perceptual inference. Our perception is not simply a copy of the sensory stimuli we receive, but an abstract interpretation of the world. This interpretation is enabled by complex processing mechanisms that combine the information in the sensory stimuli with specific knowledge about the physical properties of the world. An impressive illustration of this are Magic Eye pictures: from these abstract 2D patterns the brain reconstructs a third dimension, which we experience as depth perception. Such artificial examples demonstrate the inferences the brain performs continuously in everyday life, without us ever noticing. For example, we recognize objects and their properties independently of lighting conditions or arrangement. To perform such inferences, relevant and persistent structures and patterns must be extracted from very complex data. That our brain does this apparently effortlessly is all the more remarkable given that no computer algorithm comes close to matching this capacity.
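The combination of sensory evidence with prior knowledge described above is commonly formalized as Bayesian inference. The following is a minimal sketch of that idea with purely illustrative numbers (the interpretations, priors, and likelihoods are hypothetical, not measured data):

```python
# Toy sketch of perceptual inference as Bayesian computation (illustrative
# numbers only): an ideal observer combines the likelihood of the sensory
# stimulus under each candidate interpretation with prior knowledge about
# the world; the posterior picks out the more plausible interpretation.

def posterior(priors, likelihoods):
    """Posterior probability of each interpretation via Bayes' rule."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    evidence = sum(joint)  # normalizing constant
    return [j / evidence for j in joint]

# An ambiguous shaded image is almost equally consistent with a "convex"
# and a "concave" surface, but light usually comes from above, so prior
# knowledge favours the convex interpretation.
post = posterior(priors=[0.8, 0.2], likelihoods=[0.5, 0.6])
print(post)  # the prior tips the balance toward the first interpretation
```

In this sketch the ambiguity is resolved not by the stimulus alone but by the prior, mirroring how the brain settles on a unique percept for an ambiguous image.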
The selection of which information is relevant for perceptual inference begins at the early stages of the sensory periphery. Many perceptual impairments such as low vision originate from a dysfunction of the peripheral stages in the sensory pathways. Therefore, identifying the coding principles at the periphery is of great importance both for understanding perceptual inference and for clinical applications. Furthermore, the periphery is less affected by feedback from other brain regions, which makes it more suitable for developing accurate quantitative models of neural response properties and computations. The central research questions are:
- How do different retinal ganglion cell types collectively encode visual information?
- To what extent is retinal processing adapted to statistical regularities of natural images?
- How does neural processing in the retina interact with tuning mechanisms controlling the optics of the eye?
- Which features of retinal ganglion cell population responses are critical for visual perception?
- How can we use the obtained insights for clinical applications?
The cortex is thought to be critical for integrating inputs from all primary sensory areas and for carrying out important computations underlying perceptual inference. Although there is extensive knowledge about how single cortical neurons operate in isolation, we do not know how to place this knowledge back into the context of an interconnected, operating system. Moreover, we do not know which algorithms are suitable to describe how the sensory input has been processed up to this stage. Here, we will work towards identifying these algorithms. To this end, we will investigate early cortical processing of perceptually relevant stimuli in awake, behaving animals at the level of populations rather than single cells, making use of new technologies that allow simultaneous recordings from neuronal populations. As cortical representations are likely to share generic principles across sensory modalities, we will furthermore compare coding properties in primary visual, somatosensory, and auditory cortex. The central research questions are:
- Which interactions among cortical cells are critical for stimulus encoding and which are critical for triggering behaviour?
- To what extent can we predict an animal's choice from primary sensory cortex activity on a single-trial basis?
- How stable is the neuronal representation of a stimulus from one trial to the next?
- How is prior knowledge about the natural sensory input stored and recalled in cortical representations?
To study perceptual inference mechanisms, we start from the functional level and ask how inferences concerning particular features such as colour, shape, or motion are computed by neural populations. How these features are encoded in the activity of single cells has been extensively studied in multiple brain areas, but it remains unclear how they are extracted and what computations across populations of neurons are required to do so. The central research questions are:
- Can we identify neural mechanisms for specific types of prior knowledge and their combination with the sensory input?
- How are different perceptual interpretations of an image represented and selected by the brain?
- By what mechanisms can population activity resolve ambiguities to arrive at unique percepts and actions?
- How are the responses of lower-level cells pooled to make higher-level inferences and predictions about the world?
Higher-level perceptual inferences result from the combination of various kinds of information that may also originate from different sensory modalities. Here, we investigate the principles underlying the organization of higher-level neural representations and study how neural populations serve to integrate information from multiple sources. The central research questions are:
- Is prior knowledge encoded in an amodal or modality-specific fashion? What changes when moving from basic multi-modal cue integration tasks to spatial cognition or the representation of numerosity?
- How is knowledge about priors acquired and generalised to novel conditions?
- What are the relevant time-scales for learning priors?
- How is information from sources with different reliability integrated?
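The last question above has a standard quantitative reference point: for independent Gaussian cues, the statistically optimal strategy weights each source by its reliability (inverse variance). A minimal sketch, with illustrative numbers rather than experimental data:

```python
# Minimal sketch of reliability-weighted cue integration (the standard
# Bayesian cue-combination model; all numbers below are hypothetical).
# Each cue provides a Gaussian estimate of the same quantity; the optimal
# combined estimate weights each cue by its inverse variance.

def integrate_cues(estimates, variances):
    """Combine independent Gaussian cues by inverse-variance weighting."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    variance = 1.0 / total  # combined estimate beats every single cue
    return mean, variance

# Example: a reliable visual cue and a noisier haptic cue about object size.
mean, var = integrate_cues(estimates=[10.0, 12.0], variances=[1.0, 4.0])
print(mean, var)  # the combined mean lies closer to the more reliable cue
```

Whether, and on what time-scales, neural populations actually implement such reliability weighting is exactly the kind of question the projects described here address.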