Information is believed to be represented by excitation patterns of populations of neurons in the brain. Neurons fire stochastically, depending on inputs from the outside and mutual interactions within the population. The present talk addresses some mathematical aspects underlying the scheme of population coding.
A wide variety of visual tasks and psychophysical phenomena depend on the identification of a previously captured pattern which appears in part of the current retinal image transformed by translation, orientation, scaling and perspectivity. Real-time performance of biological circuits precludes a serial search of any sort, and to date all attempts to conceive robust solutions based on "invariances" have fallen short. By exploiting a simple ordering property of superpositions, a class of simple, elegant circuits can concurrently discover a correct memory match and a correct composition of transformations to parts of an input image in the midst of clutter or distractors. Termed map-seeking circuits, they have isomorphic biological, analog electronic and algorithmic implementations, and are capable of real-time performance in any of those realizations. Various recognition and shape-from-viewpoint-displacement tasks are demonstrated. As a general-purpose forward/inverse transformation solver, the map-seeking circuit may be applied to other biological computational problems. Application to limb inverse kinematics is demonstrated.
Building a direct, artificial connection between the brain and the world requires answers to the following questions:
This is joint work with John Donoghue, Elie Bienenstock, Yun Gao, Mijail Serruya, and Wei Wu.
Web page: http://www.cs.brown.edu/people/black/
Donoghue Lab home page: http://donoghue.neuro.brown.edu/
Overview of neural prosthetics project: http://www.cs.brown.edu/people/black/Papers/capriOverviewDraft.pdf
Kalman filter decoding paper: http://www.cs.brown.edu/people/black/Papers/nips02draft.pdf
Particle filtering paper: http://www.cs.brown.edu/people/black/Papers/NIPS14.pdf
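As a rough illustration of the approach in the Kalman filter decoding paper above, the sketch below runs a standard Kalman filter over a sequence of neural feature vectors to estimate hand kinematics; the matrices A, H, Q, R here are illustrative placeholders, not the fitted parameters of that work:

```python
import numpy as np

def kalman_decode(obs, A, H, Q, R, x0, P0):
    """Causally estimate hidden kinematic state x_t from neural
    feature vectors z_t under the linear-Gaussian model
    x_t = A x_{t-1} + w_t,  z_t = H x_t + v_t."""
    x, P = x0.astype(float), P0.astype(float)
    estimates = []
    for z in obs:
        # predict from the state dynamics
        x = A @ x
        P = A @ P @ A.T + Q
        # update: blend prediction with the new observation
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

On a toy one-dimensional random walk with noisy observations, the filtered estimate tracks the true trajectory with lower error than the raw observations do.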
I will present a brief discussion of a method for decoding recorded neural activity, both spikes and local field potentials, and report results of its performance on a comprehensive collection of recordings from area LIP of macaque monkeys performing a memory saccade task. Special emphasis will be given to comparing decodes from spikes and LFPs. Finally, I will present a real-time implementation of the method.
Recent theoretical advances allow for the determination of the information rate inherent in the spike trains of nerve cells. However, up to now, the dependence of the information rate on stimulus parameters has not been studied systematically in any neuron. Here, I investigate the information carried by the spike trains of H1, a motion-sensitive visual interneuron of the blowfly (Calliphora vicina), using a moving grating as a stimulus. One might expect that, up to a certain limit, the richer the stimulus entropy, the larger the information rate. This, however, is not the case: increasing either the dynamic range of the stimulus or the maximum velocity has little or no influence on the information rate. In contrast, the information rate increases steeply when the size or the contrast of the stimulus is enlarged. It appears that, regardless of the stimulus entropy, the neuron covers the stimulus with its whole response repertoire, with the information rate being limited by the noise of the stimulus and the neural hardware.
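Information rates of this kind are typically obtained by a direct method: the word entropy of the whole response minus the mean noise entropy across repeated trials. A toy sketch of such an estimator (the word length, bin size, and data below are illustrative, not those of the H1 experiments):

```python
import numpy as np
from collections import Counter

def entropy_bits(samples):
    """Shannon entropy (bits) of a list of discrete symbols."""
    counts = np.array(list(Counter(samples).values()), float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def info_rate(trials, word_len, dt):
    """Direct-method estimate (total minus noise word entropy) of the
    information rate, in bits/s, from repeated binary spike trains."""
    # total entropy: distribution of words over all trials and times
    all_words = [tuple(tr[i:i + word_len])
                 for tr in trials
                 for i in range(0, len(tr) - word_len + 1, word_len)]
    h_total = entropy_bits(all_words) / word_len        # bits per bin
    # noise entropy: word distribution across trials at each fixed time
    h_noise = np.mean([
        entropy_bits([tuple(tr[i:i + word_len]) for tr in trials])
        for i in range(0, len(trials[0]) - word_len + 1, word_len)
    ]) / word_len
    return (h_total - h_noise) / dt                     # bits per second
```

A perfectly reproducible, temporally structured response yields a positive rate, while a constant response carries no information.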
Neural codes are highly adaptive and context dependent. Some results will be reviewed indicating the functional aspects of adaptive coding in sensory systems. Information theory can help in providing a quantitative understanding of these aspects. From a mechanistic point of view, maintaining an adaptive code requires both space and time flexibility of neural responses. Experiments will be described on random networks, indicating that some features of sensory adaptation arise from neural network structure with no anatomy.
When we consider the representation of information in sensory systems, it is important to understand how the different aspects of patterns of neural activity affect the transmission of information to the next layer of processing. We will discuss the role that temporal patterns play in evoking a response in the postsynaptic cell. In particular, we will outline conditions where the arrival time of a spike plays a significant role in a postsynaptic cell that is selective for certain frequencies.
A major challenge in studying sensory processing is to understand the meaning of the neural messages encoded in the spiking activity of neurons. In the visual cortex, the majority of neurons have nonlinear response properties, making it difficult to characterize their stimulus-response relationships. I will discuss two nonlinear methods to analyze the input-response relationship of these cortical neurons: training of artificial neural networks with the back-propagation algorithm and the second-order Wiener Kernel analysis. Both methods can capture much of the input-response transformation in the classical receptive fields of the cortical complex cells.
Cortical neurons are usually thought to operate in a highly unreliable manner: A neuron can signal the same stimulus with a variable number of action potentials. Here we describe a novel mode in which each neuron generates exactly 0 or 1 action potentials, but not more, in response to a stimulus. We used cell-attached recording, which ensured single-unit isolation, to record responses in rat auditory cortex to brief tone pips. Surprisingly, the majority of neurons exhibited binary behavior; several dramatic examples consisted of exactly one spike on 100% of trials, with no trial-to-trial variability in spike count. Many neurons were tuned to stimulus frequency. These binary units allow for some forms of dendritic computation that are not possible with conventional Poisson coding units, and are consistent with a model of cortical processing in which synchronous packets of spikes propagate stably from one neuronal population to the next.
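The binary mode described above is easiest to see in the trial-to-trial spike-count statistics. A minimal sketch contrasting a perfectly reliable one-spike unit with a Poisson unit of the same mean count (simulated data, not the recordings):

```python
import numpy as np

def fano_factor(counts):
    """Trial-to-trial spike-count variance divided by the mean;
    equals 1 for a Poisson unit, 0 for a perfectly binary one."""
    counts = np.asarray(counts, float)
    return float(counts.var() / counts.mean())

rng = np.random.default_rng(1)
binary_unit = np.ones(500, dtype=int)        # exactly one spike per trial
poisson_unit = rng.poisson(1.0, size=500)    # same mean count, Poisson
```

The binary unit's Fano factor is exactly zero, while the Poisson unit's sits near one.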
We present an analytical approach through which the relevant stimulus space and the corresponding neural symbols of a neuron or neural ensemble can be discovered simultaneously and quantitatively, making few assumptions about the nature of the code or relevant features. The basis for this approach is to conceptualize a neural coding scheme as a collection of stimulus-response classes akin to a dictionary or 'codebook', with each class corresponding to a spike pattern 'codeword' and its corresponding stimulus feature in the codebook. The neural codebook is derived by quantizing the neural responses into a small reproduction set, and optimizing the quantization to minimize an information-based distortion function. This approach uses tools from Rate Distortion Theory for the analysis of neural coding schemes. Its success prompted us to consider the general framework of signal quantization with minimal distortion as a model for the functioning of early sensory processing. Evidence from behavioural and neuroanatomical data suggested that symmetries in the sensory environment need to be taken into account as well. We suggest two approaches - implicit and explicit - which can incorporate the symmetries in the quantization model.
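As a toy version of deriving a codebook by optimized quantization, the sketch below groups responses into classes by greedy agglomerative merging, at each step choosing the merge that loses the least stimulus information; this is a simplification for illustration, not the annealing-based optimization of the information distortion function used by the authors:

```python
import numpy as np

def mi(p):
    """Mutual information (bits) of a joint distribution p[x, y]."""
    px = p.sum(1, keepdims=True)
    py = p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

def quantize_responses(p_xy, n_classes):
    """Greedy quantization of responses (columns of p_xy) into
    n_classes, repeatedly merging the pair of classes whose merger
    best preserves I(stimulus; class)."""
    classes = [[j] for j in range(p_xy.shape[1])]
    def joint(cls):
        # joint distribution over (stimulus, class)
        return np.stack([p_xy[:, c].sum(1) for c in cls], axis=1)
    while len(classes) > n_classes:
        best = None
        for a in range(len(classes)):
            for b in range(a + 1, len(classes)):
                merged = [c for i, c in enumerate(classes) if i not in (a, b)]
                merged.append(classes[a] + classes[b])
                loss = -mi(joint(merged))   # smaller = less info lost
                if best is None or loss < best[0]:
                    best = (loss, a, b)
        _, a, b = best
        classes = [c for i, c in enumerate(classes)
                   if i not in (a, b)] + [classes[a] + classes[b]]
    return classes
```

On a toy codebook where responses 0 and 1 signal one stimulus and responses 2 and 3 another, the quantizer recovers exactly those two classes.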
Charles Anderson and I have recently proposed a unified framework for generating large-scale neurally plausible models that relies on integrating recent advances in neural coding with modern control theory (in our book 'Neural Engineering'). I will briefly describe this framework, including our approach to unifying population and temporal coding for scalar, vector, and function representation.
Over the last 15 years, a range of insights into visual coding have developed out of a deeper understanding of the statistics of the natural environment. The structure arising from correlations in pixel values as well as the sparse edge-related structure of natural scenes have helped to provide an account of the processing of information along the visual pathway from retinae to cortex. However, the statistical dependencies in natural images occur at all levels of analysis. One cannot assume that any method would be capable of finding descriptions where the units of description are independent; independent components are simply impossible with most natural environments. How, then, does one handle redundancy when independence is either not possible or impractical given the number of neurons? One insight may come from the lateral connections between oriented neurons in primary visual cortex. Here, we find conditions where small collections of neurons, rather than single neurons, appear to represent the redundant structure (e.g., the continuity of edges). Do these modes of representation provide insight into higher levels of representing redundancy? This talk will probe some of the possible limits of what we can learn by understanding the redundancy of the natural world.
A recent method introduced by Dimitrov, Miller, and Tishby et al. uses information distortion to find an approximation of the neural coding scheme. One way to numerically find an optimum of the information distortion function is an annealing approach. We describe symmetry properties of the information distortion function and the numerical methods used to evaluate this function effectively.
Given a large-scale model of interacting cells, such as the one in the turtle visual cortex, three questions arise. How do these cells sustain a propagating wave of activity? What do these waves encode in terms of parameters of the visual space? Finally, is it possible to replicate the waves using dynamic models? These questions form the basic framework of my presentation.
Common to most correlation analysis techniques for neuronal spiking activity are assumptions of stationarity with respect to various parameters. However, experimental data may fail to be compatible with these assumptions. This failure can lead to falsely assigned significant outcomes. Here we study the effect of non-stationarity of spike rate across trials in a model-based approach. Using a two rate-state model where rates are drawn independently for trials and neurons, we show in detail that non-stationarity across trials induces apparent co-variation of spike rates, identified as the generator of false positives. This finding has specific implications for the 'shuffle predictor'. Within the framework developed for our model, we can discuss the co-variation of spike rates and the mechanism by which the shuffle predictor leads to misinterpretation of the data. Corrections for the influence of non-stationarity across trials, through improvements of the predictor, are presented.
1. Gruen, Riehle, & Diesmann (in press). Biol. Cybern.
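The false-positive mechanism described above can be simulated directly. In the sketch below, two conditionally independent neurons share a high/low rate state that varies across trials, and the raw within-trial coincidence count exceeds the shuffle predictor even though there is no genuine spike correlation (the rates and trial counts are illustrative, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_bins = 200, 100
# each trial is randomly a high-rate or low-rate state for BOTH neurons
state = rng.random(n_trials) < 0.5
p = np.where(state, 0.3, 0.05)                 # spike prob per bin, per trial
s1 = rng.random((n_trials, n_bins)) < p[:, None]
s2 = rng.random((n_trials, n_bins)) < p[:, None]   # conditionally independent

raw = (s1 & s2).sum(1).mean()                  # mean coincidences per trial
shuffle = np.mean([(s1[i] & s2[j]).sum()       # shuffle predictor: non-
                   for i in range(n_trials)    # simultaneous trial pairs
                   for j in range(n_trials) if i != j])
excess = raw - shuffle   # > 0: apparent synchrony from non-stationarity alone
```

Because the shared rate state makes spike counts covary across trials, the shuffle predictor underestimates the chance coincidence level, producing spurious "excess" synchrony.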
[work with Barry Richmond (LN-NIMH), Pauline Ruffiot (Nordita and Univ Joseph Fourier, Grenoble) Cristina Ursta (Nordita, Niels Bohr Institute, and West Univ, Timisoara), Gustaf Sterner (KTH Stockholm), Mandana Ahmadi (Ahvaz Univ, Iran) and Alexander Lerchner (DTU)]
The observed spike count distributions of V1 neurons are non-Poissonian: The variance generally exceeds the mean, and the variance-vs-mean relation is well-fit by a power law with an exponent greater than 1. In this work we find that the spike statistics of neurons in a model network with dynamically balanced excitation and inhibition show the same features. Our model, intended to represent a generic cortical column, comprises randomly connected excitatory and inhibitory leaky integrate-and-fire neurons driven by excitatory input from a large population of neurons external to the model. We take this input to vary in time like typical thalamic input to cortex. The synaptic strengths are chosen to produce asynchronous irregular firing at rates up to 200 Hz, depending on the strength of the input. Random variability among neurons in both firing thresholds and the strengths of external input currents is also included. The high degree of connectivity permits a mean-field description in which all input currents, both external and recurrent, can be treated as Gaussian noise, the mean and autocorrelation function of which are calculated self-consistently from the firing statistics of single model neurons.
I will report on two problems under current study: (1) Balanced networks with conductance-based synapses. Here the firing statistics are controlled by the synaptic dynamics. (2) A balanced net model for a visual cortical hypercolumn. The firing statistics vary systematically with orientation: The Fano factor is largest at orientations away from the optimal one.
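The super-Poisson power law mentioned above can be reproduced with a simple doubly stochastic surrogate: Poisson counts whose rate fluctuates multiplicatively yield a variance-vs-mean relation with log-log slope between 1 and 2 (the gamma fluctuation model here is an illustrative stand-in, not the balanced-network mechanism itself):

```python
import numpy as np

rng = np.random.default_rng(3)
means, variances = [], []
for rate in [1, 2, 5, 10, 20, 50]:
    # multiplicative rate fluctuation with mean 1, variance 0.25
    g = rng.gamma(shape=4.0, scale=0.25, size=5000)
    counts = rng.poisson(rate * g)
    means.append(counts.mean())
    variances.append(counts.var())

# fit variance = a * mean^slope on a log-log scale
slope, intercept = np.polyfit(np.log(means), np.log(variances), 1)
```

Analytically the variance is mean + 0.25 * mean^2, so the fitted exponent lies above 1, as in the V1 data described above.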
Determining whether or not neural populations work in concert to code information has defied conventional analysis. New techniques using information theory principles seem to hold the best promise. Using them requires defining a baseline performance against which to judge population coding. Using an information processing theoretic approach, we show that the conventional baseline is misleading. We show that stimulus-induced dependence alone is sufficient to encode information perfectly, and we propose that this standard should serve as the baseline. When using this baseline, we show that cooperative populations, which exhibit both stimulus- and connection-induced dependence, can only perform better than the baseline for relatively small population sizes.
My main aim in this presentation is to motivate the use of probability models in the statistical analysis of neuronal data. Probability models offer efficiency, flexibility, and the ability to make formal statistical inferences. I illustrate by considering estimation of instantaneous firing rate, variation in firing rate across many neurons, decoding for movement prediction, within-trial firing rate (non-Poisson spiking), and correlated spiking across pairs of neurons.
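As the simplest example of the probability-model viewpoint, a Gamma prior on a constant firing rate combined with a Poisson likelihood yields a closed-form posterior; this toy case (not taken from the talk) shows the kind of formal inference such models afford:

```python
import numpy as np

def rate_posterior(spike_counts, bin_width, a0, b0):
    """Gamma(a0, b0) prior x Poisson likelihood -> Gamma posterior.
    Returns posterior mean and standard deviation of the rate (Hz)."""
    n = int(np.sum(spike_counts))        # total spikes observed
    T = len(spike_counts) * bin_width    # total observation time (s)
    a, b = a0 + n, b0 + T                # conjugate Gamma update
    return a / b, np.sqrt(a) / b

rng = np.random.default_rng(4)
counts = rng.poisson(20 * 0.01, size=1000)   # a 20 Hz neuron in 10 ms bins
mean, sd = rate_posterior(counts, 0.01, a0=1.0, b0=0.1)
```

The posterior mean recovers the true rate, and the posterior standard deviation quantifies the estimation uncertainty directly, rather than leaving it implicit.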
Correlations among action potentials, both within spike trains from single neurons and across spike trains from multiple neurons, are ubiquitous. They are observed in many species, from the common house fly to the primate. The role of these correlations is unclear and has long been the subject of debate. Do correlations carry extra information -- information that can't be extracted from the uncorrelated responses -- or don't they? Part of the reason this question has been hard to answer is that it's not clear how to separate correlated from uncorrelated responses. Here we sidestep this issue, and instead rephrase the question as follows: Is it possible to extract all the information from a set of neuronal responses without any knowledge of the correlational structure? If the answer is "yes", then correlations are not important; otherwise, they are. This provides us with a rigorous method for assessing the role of correlations. We provide several examples to clarify the method, and then compare it to other approaches.
The receptive fields of simple cells in the primary visual cortex have been modeled in terms of Gabor wavelets and derived theoretically from efficient coding principles. In this talk, first, I will report findings of a neurophysiological experiment demonstrating that signals with a naturalistic power spectrum provide not only a more efficient but also a more accurate means for identifying the kernels (receptive fields) of V1 neurons. The reason is that the neurons have been tuned to function best in the regime of natural stimuli rather than in other regimes. Second, I will report findings from another experiment showing that different stages of the neural responses in V1 are actually coding different aspects of the visual scenes. While the early stage of the responses to a static image reflects the filtering properties of the neurons, the later stage of the response reflects the outcome of perceptual inference, which is in turn influenced by top-down feedback of the prior statistical experience of the animals in their environment.
Some articles related to this talk can be found at http://www.cnbc.cmu.edu/~tai.
Recent work has shown that neurons are often optimized towards certain statistical properties of an animal's natural environment. Usually, these conclusions have been drawn by a combined analysis of natural stimuli and neural response properties. Alternatively, the stimulus statistics that a given system "expects" might be extracted directly from the system in online experiments. We demonstrate the feasibility of this idea in electrophysiological experiments on locust auditory receptor neurons. Using a recently developed algorithm (Phys. Rev. Lett. 88:228104), we adapt the parameters of an initial stimulus ensemble so as to maximize the mutual information between stimulus and neural response. We show that the concept of optimality cannot be treated in isolation but rather depends on further assumptions about the system. Here, we present the optimal stimulus ensemble for the case of a rate code and a spike timing code. [joint work with Tim Gollisch, Olga Kolesnikova, and Andreas V. M. Herz]
The images that fall upon our retinae contain certain statistical regularities over space and time. In this talk I will discuss a method for modeling this structure based upon sparse coding in time. When adapted to time-varying natural images, the model develops a set of space-time receptive fields similar to those of simple-cells in the primary visual cortex. A continuous image sequence is thus re-represented in terms of a set of punctate, spike-like events in time. The suggestion is that *both* the receptive fields of V1 neurons and the spiking nature of neural activity go hand in hand---i.e., they are part of a coordinated strategy for producing sparse representations of sensory data.
It is well-known that the firing rate of neurons in primary motor cortex (MI) is correlated with hand position and velocity. To our knowledge, all previous models of this tuning a) are linear in position or velocity, b) are "static" in the sense that the temporal dynamics of the encoding process are not modelled independently of general behavioral state, and/or c) do not incorporate the effects of interneuronal interactions on a given cell's firing rate. Here we introduce a simple model for MI tuning that does not suffer from any of these three limitations, and show that this model explains the firing rate of MI cells better than any previous model. Our two main results are that 1) the firing rate of most MI cells is in fact a nonlinear function of the dynamic hand position signal (not just of position or velocity), and 2) the state of the MI neural network, as measured by simultaneous recording of multiple isolated units, has a significant effect on the firing rate of MI cells, in that one can better predict the firing rate of a given cell after observing the network state and the hand position signal together, rather than the hand position signal alone.
In this talk I will briefly present a new method to quantify the impact of correlated firing on the information transmitted by neuronal populations. This new method considers in an exact way the effects of high-order spike train statistics, with no approximation involved, and it generalizes our previous work, which was valid only for short time windows and small populations [2,3]. The new technique permits quantifying the information that would be transmitted if each cell conveyed fully independent information, separately from the information available in the presence of synergy-redundancy effects. Synergy-redundancy effects are shown to arise from three possible contributions: a redundant contribution due to similarities in the mean response profiles of different cells; a stimulus-independent correlational contribution that reflects interactions between the distribution of rates of individual cells and the average level of cross-correlation; and a synergistic, stimulus-dependent correlational contribution quantifying the information content of changes of correlations with the stimulus. The latter stimulus-dependent correlational term is shown to be equal to the measure recently proposed by Nirenberg et al. to quantify the information lost by decoders that ignore correlations [1,5]. I will finally present applications of this method to data simultaneously recorded from somatosensory and visual cortices, and demonstrate that our formalism can be used in experimental situations to provide precise constraints on the role of correlations in encoding and decoding.
There has been considerable interest in Bayesian networks and probabilistic "graphical models" in the artificial intelligence community in recent years. Simultaneously, a large number of human psychophysical results have been successfully explained using Bayesian and other probabilistic models. A central question that is yet to be resolved is how such models can be implemented neurally. In this talk, I will show how a network architecture commonly used to model the cerebral cortex can implement probabilistic (Bayesian) inference for an arbitrary Markov model. The suggested approach is illustrated using a visual motion detection task. The simulation results show that the model network exhibits direction selectivity and correctly computes the posterior probabilities for motion direction. When used to solve the well-known random dots motion discrimination task, the model generates responses that mimic the activities of evidence-accumulating neurons in cortical areas LIP and FEF. In addition, the model predicts reaction time distributions that are similar to those obtained in human psychophysical experiments that manipulate the prior probabilities of targets and task urgency.
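The core computation such a network would implement is recursive Bayesian filtering on a Markov model. A minimal sketch with a hypothetical two-state (e.g., leftward/rightward motion) model, not the cortical architecture itself:

```python
import numpy as np

def forward_filter(obs_lik, trans, prior):
    """Recursive Bayesian (forward) filtering for a hidden Markov model.
    obs_lik[t, s] = p(observation_t | state s); trans[i, j] = p(j | i).
    Returns the posterior over states after each observation."""
    post = prior.astype(float)
    history = []
    for lik in obs_lik:
        post = lik * (trans.T @ post)   # predict, then weight by evidence
        post /= post.sum()              # normalize to a probability
        history.append(post.copy())
    return np.array(history)
```

With sticky transitions and observations consistently favoring one motion direction, the posterior for that direction accumulates toward certainty, mimicking evidence-accumulating responses.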
In the brain, spike trains are generated in time, and presumably also interpreted as they unfold. Recent work suggests that in several areas of the monkey brain, individual spike times carry information because they reflect underlying rate variation. Constructing a model based on this stochastic structure allows us to apply order statistics to decode spike trains instant by instant, as spikes arrive or do not. Order statistics are time-consuming to compute in the general case. We demonstrate that data from neurons in V1 are well-fit by a mixture of Poisson processes; in this special case, our computations are substantially faster. In these data, spike timing contributed information beyond that available from spike count throughout the trial. At the end of the trial, a decoder based on the mixture of Poissons model correctly decoded about three times as many trials as expected by chance, compared to about twice as many as expected by chance using spike count only. If our model perfectly described the spike trains, and enough data were available to estimate model parameters, then our Bayesian decoder would be optimal. For 4/5 of the sets of stimulus-elicited responses, the observed spike trains were consistent with the mixture of Poissons model. Most of the error in estimating stimulus probabilities is due to not having enough data to specify the parameters of the model rather than to misspecification of the model itself.
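As a simplified stand-in for the mixture-of-Poissons machinery described above, the sketch below decodes from spike times under an inhomogeneous Poisson model; the two hypothetical stimuli share the same mean rate, so a count-only decoder is at chance while the timing-sensitive decoder is not:

```python
import numpy as np

def loglik_inhom_poisson(spike_times, rate_fn, T, dt=1e-4):
    """Log-likelihood of a spike train under an inhomogeneous Poisson
    process with rate function rate_fn (Hz) on [0, T]."""
    grid = np.arange(0.0, T, dt)
    integral = np.sum(rate_fn(grid)) * dt    # expected spike count
    return float(np.sum(np.log(rate_fn(np.asarray(spike_times)))) - integral)

def decode_map(spike_times, rate_fns, T):
    """MAP stimulus under a uniform prior; sensitive to spike timing."""
    return int(np.argmax([loglik_inhom_poisson(spike_times, f, T)
                          for f in rate_fns]))

# two stimuli with identical mean rates but opposite temporal profiles
early = lambda t: np.where(np.asarray(t) < 0.5, 40.0, 5.0)
late  = lambda t: np.where(np.asarray(t) < 0.5, 5.0, 40.0)
```

A train whose spikes cluster in the first half of the trial is correctly assigned to the early-rate stimulus, even though its total count matches both hypotheses.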
We elucidate a new approach for analyzing neuronal systems using non-classical (non-equilibrium) information theory. Specifically, we consider the irreversible processes behind neural information transmission, using the tools of non-equilibrium, non-classical thermodynamics. This contrasts with other current studies in neuroscience, which use the Shannon model of information theory, based on the classical equilibrium thermodynamic model of Boltzmann. Based on this new approach, we are delineating an experimental and theoretical infrastructure aimed at elucidating the control, communication and computation processes in neural systems. Using the Nyquist theorem and a generalized temperature concept (the Nyquist temperature), we compute the non-equilibrium entropy production and an equivalent neurodynamic temperature during neural information processing. A trans-information/temperature plot implies a zero neurodynamic temperature (at 0 N, degrees Nyquist), as an informational counterpart of the third law of thermodynamics (at 0 K). Multi-unit electrophysiological data derived from the cricket cercal sensory system are used to test, refine and generalize this new framework. A model of this simple sensory-motor system is being developed within this new framework. This novel approach may be of general utility to neuroscientists interested in determining the neural basis of computation.
The elemental symbol manipulated by cortical neurons is the spike, or action potential. Spikes are not independent, however, and interactions between them - whether spikes from a different cell, or from the same cell but at a different time - may affect the way in which information is coded. We have developed procedures for separating out the contribution of interactions to the Shannon information content of the spike trains. In this talk I will discuss the application of information theory to a number of experiments which have led to insights about how interactions between spikes affect the neural coding of visual information. The first experiment concerns how information quantities change over the course of development of the visual system. The second concerns the effect of correlations in the spiking activity of pairs of suitably related V1 neurons - do these correlations result in synergistic or redundant pooling of information across cells? In the third experiment we examine the dynamic responses of cells in an extrastriate visual area (MT), looking for synergistic and redundant interactions between spikes. In all of these cases, we see that the spike trains cannot be approximated by Poisson processes - the amount of information represented depends upon correlations between the spikes.
The talk will begin by describing the new National Brain Research Centre (NBRC), an institute which has been set up in India, the first of its kind there for the development of neuroscience research. Next I shall talk briefly about the modulation spectrum, which analyses the joint spectral and temporal characteristics of sound. I shall indicate how the modulation spectrum is set up and the interesting characteristics of natural sounds it has revealed to us.
How does our brain generate the tremendous diversity of human precognitive and cognitive phenomena and how does it deal with the combinatorial complexity of our environment? Distributed representations of different aspects of our environment - for example orientation, color, motion, shape and spatial relationships in the visual modality, context-dependent working memory, conflict monitoring and many others - can help to avoid the combinatorial explosion, but they raise the binding problem. What else can we gain from a distributed representation in global brain activity patterns?
I want to briefly discuss the hypothesis that human-like cognitive processes might arise as emergent phenomena from the recurrent dynamics by which different aspects of a large-scale distributed code affect each other and mutually guide each other's local dynamics so as to form a final coherent brain state. Following the "biased competition hypothesis", the state of each brain area is influenced by a bottom-up component (driven by the stimulus) and a top-down component (driven by the states of all other areas): the mutual bias, mediated by inter-areal pyramidal cell axons, adjusts the different aspects established by the local competition in the different areas so as to best match each other's and the environment's states. Neurodynamical multi-areal models [2-4] based on biased competition can unify apparently serial and parallel processing in object- and feature-based visual attention.
A striking feature of the activity of cortical neurons is that the spike trains are irregular, and responses to repeated presentations of the same stimulus can be quite variable. Moreover, neighboring neurons have generally similar response properties, but their variability is largely independent. It is often assumed that an inhomogeneous Poisson process is an adequate description of cortical response variability. Were this the case, then individual spikes, at best, serve as estimators of a firing rate, and decoding of a population of similar neurons is optimally done by a population average.
We test these ideas with two kinds of experiments carried out on clusters of neurons in the primary visual cortex of the macaque monkey. In the first set of experiments, we record responses to repeated presentations of pseudorandom (m-sequence) patterns, which allows for both an analysis of average response properties and a direct test of the inhomogeneous Poisson hypothesis. In the second set of experiments, we record responses to more traditional visual stimuli (spatial grating patterns), and analyze these responses via a metric-space approach. The latter approach provides a means to formalize and test a wide variety of coding hypotheses, especially as they relate to temporal representation of information.
The experiments are complementary, and converge on the conclusion that spike trains are more than estimators of a firing rate, and that the detailed pattern of neural activity within individual spike trains and across neurons cannot be ignored.
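A standard example of the metric-space approach mentioned above is the Victor-Purpura spike-train distance, computed by dynamic programming; the parameter q (cost per second of shifting a spike) sets the temporal precision being probed. A minimal sketch:

```python
import numpy as np

def vp_distance(a, b, q):
    """Victor-Purpura metric: minimal cost of transforming spike
    train a into train b, with cost 1 per spike insertion/deletion
    and q * |dt| per spike shift (edit-distance dynamic program)."""
    n, m = len(a), len(b)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)   # delete all spikes of a
    D[0, :] = np.arange(m + 1)   # insert all spikes of b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1,                      # delete
                          D[i, j - 1] + 1,                      # insert
                          D[i - 1, j - 1] + q * abs(a[i - 1] - b[j - 1]))
    return float(D[n, m])
```

For small q the metric behaves like a spike-count comparison; for large q, shifting becomes more expensive than deleting and reinserting, so precise spike timing dominates the distance.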