The encoding of information in the nervous system has no perfect solution and involves tradeoffs between different benefits. I will discuss these tradeoffs and their solutions in the retinal neural code, in the context of global motion produced by eye movements. It has long been known that distant peripheral motion has strong effects on retinal responses, but the functional benefits of these effects have not been fully studied. I will show that distant peripheral motion generates a common synchronization signal that allows neurons to switch between different representations of the scene, including an energy-conserving mode and a high-throughput mode that approaches maximal information transmission. Switching between complementary representations of the same scene enables single units to convey diverse information and avoids the drawbacks inherent to each neural code.
Opponent processing is one of the oldest and best established principles in sensory neuroscience, but there are still surprises to be found in this area. Color opponency is one of the best established phenomena in perception, but I found a reliable way to make it break down by retinally stabilizing equiluminant red/green or blue/yellow bipartite fields. The border perceptually melts away and the colors flow and mix into one another, creating forbidden colors in a variety of multistable percepts (Scientific American, 2010). Making the colors equiluminant is crucial; if the luminances of the retinally stabilized colors are not properly equated, subjects instead see multistable color switching or hallucinatory colored textures. The results can be understood if color opponency is softwired, like a winner-take-all network, with interactions that are disabled under the same conditions that disable perceptual binding (TINS, 2004). In addition to disabling a perceptual opponency, it is also possible to uncover hidden opponencies in spatial vision. Flicker-induced hallucinations are normally chaotic, but we found ways to bias and stabilize them. Interestingly, this unveils a geometric opponency: concentric circular geometries bias photopic hallucinations toward illusory fan shapes and vice versa, and similarly for clockwise and counter-clockwise spirals (PNAS, 2007; Psychological Bulletin, 2012). These phenomena obey a variety of familiar perceptual principles. Forbidden colors and biased hallucinations are examples of ordinary neural mechanisms stimulated in extraordinary ways.
In addition to visual information from thalamus, neurons in primary visual cortex (V1) receive inputs from other V1 neurons, as well as from higher cortical areas. This "non-classical" input to V1 neurons, which can be inferred in part from the local field potential, can modulate the "classical" feed-forward responses of V1 neurons to visual stimuli. Using multielectrode recordings in awake primates, we can characterize this modulation in a variety of stimulus contexts. Because this network activity is by definition shared, it can serve to coordinate single-neuron responses across a given region of cortex. Such network modulation plays a clear role during natural viewing, where saccadic eye movements result in stereotyped network activity. Thus these network influences on V1 neuron activity, which likely represent both coordinated processing within V1 and top-down influences, play a fundamental role in natural visual processing.
The vestibular system is vital for maintaining an accurate representation of our motion and orientation as we move through the world. As one moves (or is moved) toward a new place in the environment, signals from the vestibular sensors encode head direction and velocity and are relayed through the vestibular system to premotor areas and higher-order centers. It is generally assumed that the vestibular system provides a veridical representation of sensor output for the control of reflexes vital for maintaining stable gaze and balance, for the perception of self-motion and orientation, and for the generation of spatial-memory processes.
This lecture will consider recent progress towards understanding how the brain encodes and processes the self-motion signals transduced by the vestibular otoliths and semicircular canals during everyday life. First, recent findings challenge the traditional notion that the vestibular system uses a linear rate code to transmit information. Instead, nonlinear integration of afferent input extends the coding range of central vestibular neurons and enables them to better extract the high-frequency features of self-motion when these are embedded within low-frequency motion during natural movements. Next, under natural conditions, behavioral context governs how vestibular information is encoded at the first central stage of processing. Not only is vestibular (self-motion) processing inherently multimodal, but the manner in which multiple inputs are combined is adjusted to meet the needs of the current behavioral goal. Finally, consideration is given to the mechanisms that underlie these computations, and to the functional significance of the information that is ultimately encoded.
Perception is dependent on context, but whether and how sensory areas encode the context is debated. We used a bistable auditory stimulus - a tritone pair - to investigate the trace left by a preceding bias sequence, which reliably switches the tritone pair's perception between an ascending and descending step in pitch.
We find the bias sequence to induce localized adaptation in neural recordings from the auditory cortex of ferrets. Human MEG recordings show that this adaptation is present and sustained over several seconds under behavioral conditions as well. Sustained adaptation thus appears to encode a memory-like trace of the stimulus history. Using a neural population decoder we show that a classical pitch-difference estimator cannot account for the percept, since the local adaptation leads to an opposite prediction. Instead, we propose a decoder based on a differential adaptation of local pitch-direction selective cells, which correctly predicts the percept.
These results suggest that the stimulus context may be encoded by sustaining the adapted, negative afterimage of the preceding stimulus, and that this mechanism may generate global pitch-direction judgements from local pitch-direction selectivity.
Understanding how the brain processes sensory information in real time to generate meaningful behaviors is one of the outstanding contemporary challenges of neuroscience. Visually guided collision avoidance behaviors are nearly universal in animals endowed with spatial vision and offer a favorable opportunity to address this question. This talk will summarize the current understanding of their generation at the level of neural networks, single neurons and their ion channels. The focus will be on a model system that has proven particularly suitable for this purpose, the locust brain, but I will also relate the results from this preparation to studies carried out in a wide range of other species.
We study why the whiskers of land mammals are approximately conical by considering a tapered whisker in contact with an object. We convert the quasi-static Euler-Bernoulli equation into a boundary-value problem and analyze it using dynamical systems theory. The problem has two solutions, one stable and one unstable, that coalesce in a saddle-node bifurcation. Beyond the bifurcation, the whisker slips off. For realistic parameters, slip-off does not occur for cylindrical hairs. We suggest that slip-off events encode the radial distances of objects far from the whisker base. Experimental results show that conical whiskers can sweep past textures in a series of stick-slip events, whereas cylindrical hairs get stuck.
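The saddle-node picture described above can be illustrated with a toy one-dimensional system (a sketch only, not the actual Euler-Bernoulli boundary-value problem; the parameter mu is a stand-in for the quantity driving the whisker past the bifurcation):

```python
import numpy as np

def equilibria(mu):
    """Equilibria of the toy system dx/dt = mu - x**2.
    For mu > 0 there are two fixed points (one stable, one unstable);
    they coalesce at mu = 0 and vanish for mu < 0 (the 'slip-off')."""
    if mu < 0:
        return []                 # no contact solution: the whisker slips off
    root = np.sqrt(mu)
    return [root, -root]          # +root is stable, -root is unstable

for mu in [1.0, 0.25, 0.0, -0.5]:
    print(mu, equilibria(mu))
```

The two branches merging as mu passes through zero mirror how the stable and unstable whisker shapes coalesce and disappear at the slip-off point.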
The posterior parietal cortex (PPC) has an important role in many cognitive behaviors; however, the neuronal circuit dynamics underlying PPC function are not well understood. We have studied circuit activity dynamics in the PPC of mice during navigation-based choice tasks using a combination of a virtual reality system and two-photon microscopy. We find that during working memory tasks the PPC activity dynamics are best characterized as choice-specific sequences of neuronal activation, rather than long-lived stable states, implemented using anatomically intermingled microcircuits. I will also discuss on-going work to test if sequence-based circuit dynamics may underlie computations necessary for decision-making.
Tactile object recognition depends on the integration of cutaneous inputs from the skin with proprioceptive inputs from the skin and muscles, and on the attentional state of the animal. While the cutaneous inputs provide information about the spatial form, texture and motion of stimulus patterns on the skin, the proprioceptive inputs provide information about where these inputs are located in three-dimensional space and about whether the hand or object is moving. Object recognition is then based on matching the inputs from each of the contact points where the skin touches the object with previously stored representations of objects. There are two parts to this talk. In the first, we will discuss how cutaneous features are processed in peripheral afferents and in neurons of primary and secondary somatosensory cortex, and show how the cutaneous inputs are modified by hand conformation. In the second part, we will discuss the mechanisms of feature selection by attention. Specifically, we will show that when animals attend to a specific feature of a stimulus, such as the orientation of a bar, neurons with similar tuning functions show increased firing rates, and the degree of spike synchrony between neurons increases.
Low-frequency neocortical rhythms are among the most prominent activity measured in human brain imaging signals such as electro- and magneto-encephalography (EEG/MEG). Elucidating the role that these dynamics play in perception, cognition and action is a key challenge of modern neuroscience. We have recently combined human brain imaging, computational neural modeling, and electrophysiological recordings in rodents to explore the functional relevance and mechanistic underpinnings of a rhythm in primary somatosensory cortex (SI) containing Alpha (7-14 Hz) and Beta (15-29 Hz) components. In this talk, I will review our findings showing that this rhythm impacts tactile detection, changes with healthy aging and practice, and is modulated by attention. Constrained by the human imaging data, our biophysically principled computational modeling has led to a novel prediction about the origin of this rhythm: that it emerges from the combination of two stochastic ~10 Hz thalamic drives to the granular/infragranular and supragranular cortical layers. Relative Alpha/Beta expression depends on the strength of, and delay between, the thalamic drives. The model accurately reproduces numerous key features of the human rhythm and proposes a specific mechanistic link between the Beta component of the rhythm and sensory perception. Further, initial electrophysiological recordings in rodents support our hypotheses and suggest a role for non-lemniscal pallidal thalamus in coordinating Beta rhythmicity, with relevance to understanding disrupted Beta in Parkinson's disease.
Populations of neurons jointly drive behavior; understanding how population activity is coordinated is therefore a key challenge. Novel recording techniques allow many cells to be recorded simultaneously, revealing the joint activity of neuronal populations during sensory, motor, and cognitive tasks. This has prompted widespread measurement of pairwise correlations. However, the magnitude, the interpretation, and the underlying neural mechanisms of such correlations are being vigorously debated. I will start by reviewing our current understanding of the biological mechanisms that control the correlation between the spiking activity of cortical neurons. In particular, I will discuss potential pitfalls in simple mechanistic explanations of modulations in the coherence of network activity.
In the second part of the talk I will discuss the role of correlations in neural coding. I will first examine the role of coupling between the neurons of the Vertical System (VS) in the lobula plate of the fly. These 20 non-spiking neurons code for the azimuth of the axis of rotation of the fly during flight. The electrical coupling between the cells is relatively large, and the activity of VS cells is strongly correlated. I will discuss the potential role this coupling plays in the processing of optical flow information. I will end with a comment on the impact of noise correlation in models used in psychophysics.
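As a minimal illustration of the quantity under debate, pairwise spike-count ("noise") correlations can be estimated from repeated trials. This sketch uses synthetic counts driven by a shared gain fluctuation; all numbers are hypothetical and chosen only to make the shared-input contribution visible:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 5000
shared = rng.normal(0.0, 1.0, n_trials)            # common input fluctuation
counts_a = 10 + 2.0 * shared + rng.normal(0, 1, n_trials)   # neuron A counts
counts_b = 12 + 2.0 * shared + rng.normal(0, 1, n_trials)   # neuron B counts

# Correlation of trial-to-trial fluctuations around the mean response
r_sc = np.corrcoef(counts_a, counts_b)[0, 1]
print(f"noise correlation r_sc = {r_sc:.2f}")
```

With these parameters the shared variance is 4 of each neuron's total variance of 5, so the estimate should land near 0.8; the same estimator applied to real simultaneous recordings is what the debates about magnitude and mechanism concern.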
How do we know where objects are relative to our body? How do we use touch information to plan the next motor act? I will discuss experimental results that address these and related issues in active sensation, using the rodent vibrissa sensorimotor system as a model.
Spiking activity in cortex is coordinated on a range of spatial and temporal scales. Numerous studies have shown that external events and internal states can alter this coordination, and suggested that this affects encoding by neuronal populations. Much less explored is how coordinated activity influences the relaying of signals between cortical areas and the computations they perform. To tackle this issue, we recorded simultaneously from populations of neurons in the superficial layers of primary visual cortex (V1) of macaque monkeys, and from their downstream targets in the middle layers of V2. We find that spiking activity in V2 neurons is associated with a brief increase in V1 spiking correlations. Stimulus manipulations that enhance brief timescale V1 synchrony lead to stronger coupling between these networks. Our results suggest that the coordination of spiking activity within a cortical area influences its coupling with downstream areas.
Neuro-sensory systems encode their functionality in persistent spatio-temporal patterns of neuronal activity, or so-called neural codes. Networks of neurons in the antennal lobe (AL) of moths form non-local neural codes that compete dynamically with each other through lateral inhibition, producing a robust signal-processing unit that increases the signal-to-noise ratio and enhances the contrast between neural codes. More broadly, many high-dimensional complex systems exhibit dynamics that evolve on a slow manifold and/or a low-dimensional attractor. We therefore propose a data-driven modeling strategy that encodes/decodes the dynamical evolution using compressive (sparse) sensing (CS) in conjunction with machine learning (ML) strategies for constructing the observed low-dimensional manifolds. The integration of ML and CS techniques also provides an ideal basis for applying control algorithms to the underlying dynamical systems, suggesting how robust flight control, for instance, can be accomplished.
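The sparse-sensing idea can be sketched with a toy recovery problem. Orthogonal matching pursuit stands in here for the CS machinery; the dimensions and sparsity level are made up for illustration and are not the moth data:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with y ~ A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # column most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef         # re-fit on the current support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 60, 120, 3                 # 60 measurements of a 3-sparse, 120-dim signal
A = rng.normal(size=(n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[[5, 40, 99]] = [2.0, -1.5, 1.0]
x_hat = omp(A, A @ x_true, k)
print(np.linalg.norm(x_hat - x_true))   # should be near zero: exact sparse recovery
```

The point is only that far fewer measurements than ambient dimensions suffice when the underlying state is sparse, which is what makes CS attractive for encoding low-dimensional neural dynamics.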
The size and complexity of neural data are increasing at a dramatic pace due to rapid advances in experimental technologies. As a result, data analysis techniques are shifting their focus from single units to neural populations. We use projection methods, such as Principal Component Analysis (PCA) and Multiple Discriminant Analysis (MDA), to facilitate the understanding and monitoring of the dynamics of neural populations recorded in the hippocampus and olfactory bulb. For the hippocampal data, we examine the representation of startle episodes in order to differentiate between the somatosensory and memory components of hippocampal representations. For the olfactory data, we focus on how the dynamics of odor responses in the olfactory receptor neurons of awake rats are shaped by the temporal features of active odor sniffing. Our analyses indicate that the dynamics of neural representations depend nonlinearly on odor identity and concentration, as well as on the breathing rhythms of the rats. These results include work done with graduate students Jun Xia and Jie Zhang.
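A minimal sketch of the projection idea, using synthetic population data rather than the hippocampal or olfactory recordings themselves (the two hidden factors and noise level are invented for illustration):

```python
import numpy as np

# Toy population data: trials x neurons matrix of firing rates generated
# from two hidden population factors plus a little independent noise.
rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 2))              # 2 hidden factors per trial
loading = rng.normal(size=(2, 30))              # how 30 neurons mix them
rates = latent @ loading + 0.1 * rng.normal(size=(200, 30))

# PCA via eigendecomposition of the covariance of mean-centered responses
X = rates - rates.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(X.T))
order = np.argsort(evals)[::-1]                 # sort components by variance
projection = X @ evecs[:, order[:2]]            # trials projected onto top 2 PCs

var_explained = evals[order[:2]].sum() / evals.sum()
print(f"variance captured by 2 PCs: {var_explained:.3f}")
```

Because the synthetic data truly live on a two-dimensional manifold, two components capture nearly all the variance; with real recordings the same projection lets one monitor population trajectories trial by trial.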
Barlow's "efficient coding hypothesis" asserts that neurons should maximize the information they convey about stimuli. This idea has provided a guiding theoretical framework for the study of coding in neural systems, and has sparked a great many studies of decorrelation and efficiency in early sensory areas. A more recent theory, the "Bayesian brain hypothesis", asserts that neural responses encode posterior distributions in order to support Bayesian inference.
However, these two theories have not yet been formally connected. In this talk, I will introduce a Bayesian theory of efficient coding, which has Barlow's framework as a special case. I will argue that there is nothing privileged about information-maximizing codes: they are ideal when one wishes to minimize entropy, but they can be substantially suboptimal in other cases. Moreover, codes optimized for information transfer may differ strongly from codes optimized for other loss functions. Bayesian efficient coding substantially enlarges the family of normatively optimal codes and provides a general framework for understanding the principles of sensory encoding. I will derive Bayesian efficient codes for a few simple examples and show an application to neural data.
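In rough terms (the symbols here are generic, not necessarily the talk's notation), the framework chooses an encoder to minimize an expected posterior loss under a resource constraint, rather than to maximize mutual information:

```latex
% Encoder p(r|x) chosen under a resource constraint C:
\min_{p(r|x)} \; \mathbb{E}_{x,r}\!\left[ L\big(x,\, p(x|r)\big) \right]
\quad \text{s.t.} \quad \mathbb{E}[C(r)] \le c .

% Barlow's infomax is recovered as the special case in which the loss is
% the posterior entropy, since minimizing it maximizes mutual information:
L\big(x,\, p(x|r)\big) = -\log p(x|r)
\;\Rightarrow\;
\mathbb{E}\big[-\log p(x|r)\big] = H(X\,|\,R) = H(X) - I(X;R).
```

Other choices of loss L (e.g. squared error on a point estimate) generally yield different optimal codes, which is the sense in which infomax is not privileged.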
The tonotopic organization of neurons in auditory cortex is postulated to be the substrate for representing acoustic information. In this scheme, the characteristic frequency (CF) of each neuron changes systematically along one cortical axis, so that the location of active neurons indicates the frequency of a tone. However, there is accumulating evidence from experiments and simulations that is inconsistent with the main predictions of this simple place code. In vivo recordings demonstrate that the gradient in CF is weak, such that adjacent neurons may have very different preferred frequencies. Moreover, even a moderate-intensity tone is likely to activate a large number of neurons, making it difficult to determine frequency using a place code. Here, we use mathematical analyses to derive bijective (one-to-one, onto, and hence invertible) maps from the acoustic space to cortical space. We show that populations of neurons, rather than individual neurons, best represent frequency in cortex. We further show that classical columnar organization increases discontinuities in the maps.
The olfactory bulb exhibits substantial turnover of its dominant interneuron population, even in adult animals. When neurogenesis is suppressed, the animals' capacity for perceptual learning is impaired. We have developed a simple network model in which the connectivity adapts to the odor environment through the experimentally observed dependence of interneuron survival on activity. Due to the reciprocity of the connections between the principal neurons and the interneurons, this restructuring allows the network to reduce the correlation between the representations of similar stimuli.
Neuronal oscillations, reflecting synchronous, rhythmic fluctuations of neuron ensembles between high- and low-excitability states, dominate ambient activity in the sensory pathways. Because excitability determines the probability that neurons will respond to input, a top-down process like attention can use oscillations as "instruments" to amplify or suppress the brain's representation of external events. That is, by tuning the frequency and phase of its rhythms to those of behaviorally and/or cognitively relevant event streams, the brain can use its rhythms to parse event streams and to form internal representations of them. In doing this, the brain is making temporal predictions. I will discuss findings from parallel experiments in humans and non-human primates that outline specific structural and functional components of this temporal prediction mechanism. I will also discuss its possible generalization across temporal scales. Finally, I will discuss motor system contributions to sensory systems' dynamics.
In this talk I will describe a set of computational tools for characterizing the responses of high-level sensory neurons. The goal is to describe, as simply as possible, how the responses of these neurons signal the appearance of conjunctions of different features in the environment. The focus will be on computational methods designed to work with stimuli derived from the natural sensory environment. Some of the new methods that I will discuss characterize neural feature selectivity while assuming that the neural responses exhibit a certain type of invariance, such as position invariance for visual neurons. Other methods do not require an assumption of invariance, and can instead determine the type of invariance by analyzing the relationships between the multiple stimulus features that affect the neural responses. I will discuss the relative advantages and limitations of these computational tools and illustrate their performance using model neurons as well as recordings from the visual system.
Sensory systems exist to extract relevant information from our sensory environment. One such piece of relevant information concerns the arrival of novel, salient sensory features, whose presence we may simply want to detect. Another concerns the discrimination of fine details of the sensory scene. It has long been posited that these two processes place competing demands on our sensory systems, but this has not been explored in detail. I will present recent work related to these issues, and to the role that neuronal synchrony may play in information coding, using examples from both the visual and somatosensory pathways.
Sensory systems must extract important information from background noise in a constantly changing environment. For example, blinking or moving objects in a static background are more likely to attract attention. The odor landscape, like the visual world, is also highly cluttered and noisy. Odor information is parsed by sensory neurons expressing different odorant receptors into patterns of glomerular activity. How does olfactory information about one object become more behaviorally salient than other odors in the environment? How does odor context influence the saliency of certain olfactory features? How do internal physiological states such as hunger and satiety modulate olfactory circuits to generate flexible behavioral responses? I will present our recent unpublished data from the fruit fly Drosophila supporting the idea that the mushroom body, a higher olfactory center, is important for perceptual saliency. Food odors and conspecific social cues, represented by separate glomeruli in the antennal lobe, are integrated in the mushroom body to enhance behavioral attraction to food. Hunger-dependent neuropeptide signaling modulates neural activity in the mushroom body to control the intensity of foraging behavior in Drosophila.
I will report on recent work which proposes that the network dynamics of the mammalian visual cortex are neither homogeneous nor synchronous but highly structured, and strongly shaped by temporally localized barrages of excitatory and inhibitory firing that we call "multiple-firing events" (MFEs).
Our proposal is based on careful study of a network of spiking neurons built to reflect the coarse physiology of a small patch of layer 2/3 of V1.
When appropriately benchmarked this network is capable of reproducing the qualitative features of a range of phenomena observed in the real visual cortex, including orientation tuning, spontaneous background patterns, surround suppression and gamma-band oscillations. Detailed investigation into the relevant regimes reveals causal relationships among dynamical events driven by a strong competition between the excitatory and inhibitory populations. Testable predictions are proposed; challenges for mathematical neuroscience will also be discussed. This is joint work with Aaditya Rangan.
Olfactory circuits in both insects and rodents are wired according to a pattern of convergence in which axons from olfactory sensory neurons expressing a single receptor type (SRT) project into the same glomerulus. This permits the representation of individual olfactory receptors in a topographical map in the brain (Mombaerts et al., 1996; Ressler et al., 1994; Vassar et al., 1994). The functional advantage offered by an SRT convergence pattern is an open question. We analyze a simple mathematical model based on the anatomy of the olfactory bulb. The model contains two interconnected layers of glomeruli and mitral cells. We use the model to qualitatively explore the impact on odor coding and discrimination of changing the projection pattern from SRT to multiple-receptor-type (MRT) input patterns. We predict that for odors activating similar patterns, the MRT network cannot separate response patterns as well as the SRT network. This prediction is in good qualitative agreement with experimental findings. Our predictions also indicate that the sparseness of input patterns and the overall network size are key parameters in determining how well the SRT network discriminates odors. Future work will focus on verifying these additional predictions through biological experiments and on investigating how more realistic network models influence the odor discrimination abilities of the SRT case.
Mombaerts P, Wang F, Dulac C, Chao SK, Nemes A, Mendelsohn M, Edmonson J, Axel R: Visualizing an olfactory sensory map. Cell 1996, 87: 675-686.
Ressler KJ, Sullivan SL, Buck LB: Information coding in the olfactory system: evidence for a stereotyped and highly organized epitope map in the olfactory bulb. Cell 1994, 79: 1245-1255.
Vassar R, Chao SK, Sitcheran R, Nunez JM, Vosshall LB, Axel R: Topographic organization of sensory projections to the olfactory bulb. Cell 1994, 79: 981-991.
At the earliest stages of processing, linear self-motion information is encoded by the otolith afferents of the vestibular system. The resting discharge regularity of vestibular afferents has long been known to span a wide range, suggesting an important role in sensory coding. Yet to date, how this regularity alters the coding of translational motion is not understood. We recorded from single otolith afferents in macaque monkeys during linear motion along each afferent's preferred directional axis over a wide range of frequencies (up to 16 Hz) corresponding to physiologically relevant stimulation. We used signal detection theory to directly measure neuronal thresholds and found that values for single afferents were substantially higher than those observed for human perception, even when a Kaiser filter was used to provide an estimate of firing rate. Surprisingly, we further found that neuronal thresholds were independent of both stimulus frequency and resting discharge regularity. This was because increases in trial-to-trial variability were matched by increases in sensitivity: a coding strategy that markedly differs from that used by canal afferents to encode rotations. Finally, using Fisher information, we show that pooling the activities of multiple otolith afferents gives rise to neural thresholds that are comparable with those measured for perception. Taken together, our results strongly suggest that higher-order structures integrate inputs across afferent populations to provide our sense of linear motion, and provide unexpected insight into the influence of variability on sensory encoding.
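The pooling argument can be sketched with toy numbers (the single-afferent threshold below is hypothetical, and the afferents are treated as independent, which is a simplifying assumption): a discrimination threshold scales as one over the square root of Fisher information, and information from independent afferents adds, so the pooled threshold shrinks by the square root of the population size.

```python
import numpy as np

single_threshold = 20.0                  # hypothetical single-afferent threshold
info_single = 1.0 / single_threshold**2  # Fisher information implied by it

for n_afferents in [1, 10, 100]:
    pooled_info = n_afferents * info_single      # information adds if independent
    pooled_threshold = 1.0 / np.sqrt(pooled_info)
    print(n_afferents, round(pooled_threshold, 2))
```

Under this sqrt(N) scaling, a population of a hundred afferents brings a 20-unit single-neuron threshold down to 2 units, illustrating how perception-level thresholds can emerge from much less sensitive single afferents.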
V1 neuron responses to stimuli in their receptive field (RF) are suppressed by stimuli in the RF surround. This suppression is orientation specific, being strongest for iso-oriented stimuli in the RF and surround, and weakest for orthogonal stimuli. We previously suggested that the surround consists of two regions: a near surround generated by geniculocortical and intra-V1 horizontal connections, and a far surround generated by inter-areal feedback [1].
Here we have compared the orientation tuning of near and far surround suppression across V1 layers. We recorded single units (n=106) in parafoveal V1 of 3 anesthetized macaques in response to an optimally oriented center grating fitted to the cell’s RF, surrounded by an annular grating of changing orientation confined to the cell’s near or far surround. The difference in suppression index at iso- vs. ortho-orientations, circular variance and orientation bandwidth were used as metrics to calculate the orientation tuning of suppression.
We find that through all V1 layers, far-surround suppression shows broader orientation tuning than near-surround suppression, and this is because non-optimal stimulus orientations exert stronger suppression in the far than in the near surround. Near-surround suppression is more sharply tuned in layers 3B, 4B and 4Cα. Far-surround suppression is more sharply tuned in layer 4B and poorly tuned in the layers below 4B. These results suggest different orientation specificities of the circuits underlying near- and far-surround suppression. The sharpness of tuning of near-surround suppression across V1 laminae correlates with the laminar location of patterned, orientation-specific horizontal connections, suggesting these connections are the underlying substrate. The orientation organization of V2-to-V1 feedback connections is controversial, and is unknown for feedback arising from other areas. However, the broader tuning of far-surround suppression suggests that feedback circuits are more broadly orientation-tuned than horizontal circuits. Moreover, the sharper tuning of far-surround suppression in layer 4B suggests that feedback connections to this layer are more orientation-specific than feedback to other V1 layers [2,3].
The different tuning of near and far surround suppression may reflect the statistical bias observed in the distribution of orientations in natural images [4]. Sharply orientation-tuned near-surround suppression may serve to detect small orientation differences in nearby edges, while broadly tuned far-surround suppression may serve to direct saccades/attention to salient distant locations.
[1] Angelucci & Bressloff. Contribution of feedforward, lateral and feedback connections to the classical receptive field and extra-classical receptive field surround of V1 neurons. Prog. Brain Res. (2006), 154: 93-121.
[2] Shushruth S et al. Different orientation-tuning of near and far surround suppression in macaque primary visual cortex mirrors their tuning in human perception. Journal of Neuroscience (2013), 33: 106-119.
[3] Bijanzadeh M et al. Laminar differences in orientation tuning of near and far surround suppression in the macaque primary visual cortex. Proceedings of the Society for Neuroscience (2012).
[4] Geisler et al. Edge co-occurrence in natural images predicts contour grouping performance. Vision Research (2001), 41: 711-724.
Many studies have underscored the potential impact of correlations on the collective coding performed by a population of neurons. In particular, [Averbeck et al 2006] showed that having negative noise correlations when signal correlations are positive (and vice versa) can improve coding compared to independently shuffled cases. We mathematically proved this result when coding is quantified by linear Fisher information. Going beyond the case of near-independent noise, we further ask what the optimal pairwise noise correlation structure is when the tuning, or mean responses to stimuli, of the neurons is fixed. We show that the optimal noise correlation structure must lie on the boundary of the allowed region in the parameter space of pairwise correlations. Intriguingly, a special case of such an optimal solution demonstrates "noise-cancelling" coding, achieving the same estimation error as if the neurons had no noise in their responses. Importantly, we do not assume any homogeneity or symmetry of the system, offering insights into the active debates about heterogeneity and correlations [Ecker et al 2011].
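For reference, the coding metric used above is the linear Fisher information, written here in generic notation (tuning curves f(s), noise covariance Sigma; this is the standard definition, not a result specific to this work):

```latex
% Linear Fisher information for a population with tuning f(s)
% and stimulus-dependent noise covariance \Sigma(s):
I_F(s) \;=\; f'(s)^{\top}\, \Sigma(s)^{-1}\, f'(s),

% so the best achievable variance of a locally unbiased linear estimator is
\operatorname{Var}(\hat{s}) \;\ge\; \frac{1}{I_F(s)} .
```

Optimizing the off-diagonal entries of Sigma with f(s) held fixed is the problem described above; the "noise-cancelling" case is one where 1/I_F matches the noiseless limit.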
If one presents the same stimulus repeatedly to an animal, the responses of its sensory neurons will typically vary from trial to trial. How does this variability affect sensory coding? Some recent and highly influential work by Tkacik and colleagues demonstrates that, when neural responses are highly variable, the optimal encoding takes place when the neuronal network reinforces any correlations present in the stimulus: this confers error-correcting ability on the neural code. Conversely, when neural responses are highly reliable, the optimal network opposes any stimulus correlations, reminiscent of the old redundancy reduction arguments. It is not immediately obvious that these conclusions should depend on the source of neuronal variability - are the neurons noisy functions of their synaptic inputs, or are the neurons deterministic functions of noisy synaptic inputs? - although the physiological literature indicates that the latter is true. In our presentation, we will discuss why the Tkacik model (and GLM models with exponential link functions) correspond to intrinsically noisy neurons, not deterministic neurons responding to noisy inputs. We will then demonstrate that, when one considers the physiologically realistic case of noisy inputs to deterministic neurons, one reaches different conclusions about the optimal population code. Counterintuitively, the optimal neural population code depends on the source of response variability.