Workshop 3: Neural Coding

(February 10, 2003 - February 14, 2003)

Organizers


Emery Brown
Department of Anesthesia, Massachusetts General Hospital
John Miller
Center for Computational Biology, Montana State University

How is information about the external world, and about an animal's internal state, represented within its nervous system? Although a great deal is known about the stimulus/response properties of nerve cells in a variety of systems, we are in many cases far from a detailed understanding of the correspondence between neural activity patterns and the information those patterns represent. We will not be able to understand the operation of any nervous system rigorously until we decipher the neural code, i.e., the system of symbols used to represent and convey information within that system. A sound, rigorous understanding of neural coding will also be essential for developing sophisticated models of nerve cells and systems. What aspects of neural ensemble activity patterns should be measured experimentally and incorporated into models?

There is probably no such thing as THE neural code, universal across all animals or even across different subsystems within a single animal, in the sense that there exists a universal genetic code. However, general principles of neural encoding are starting to emerge. Much recent work in this area involves the application of sophisticated statistical approaches to the analysis of neural spike train data, and applied mathematicians have made substantial contributions to this area of research. Numerous approaches to the estimation of information-theoretic quantities from spike trains have been proposed and applied in a variety of systems. However, many of these approaches rest on very different sets of assumptions. Significant differences have emerged in the interpretations of these information-theoretic analyses, and it is unclear how much of the disagreement can be attributed to differences in what is actually being measured, to biases or hidden assumptions in the methodologies, or to real differences in the biological coding schemes. The whole field is ripe for a rigorous examination, comparison and normalization of the different approaches. Neuroscience would benefit greatly from an increased involvement of mathematicians and statisticians in extending the analytical framework, and from their direct involvement in designing and interpreting the experiments.

The workshop has three general aims:

  • to inspire collaborative interactions between experimentalists, mathematicians and statisticians in the development of more powerful algorithms for the analysis of neural encoding, with a strong focus on refining current hypotheses for ensemble spike train coding;
  • to establish a sound, rigorous basis for examining the differences in findings within and across preparations;
  • to consider the very challenging problems associated with extending information-theoretic analysis to networks.

 

Examples of organizing questions to be considered in this workshop are as follows:

  1. What is a channel, in the Shannon sense, within the neural processing architecture? Are single nerve cells the elemental computational units, or are larger-scale neural ensembles? The answer may depend on the level of analysis (e.g., whether the system is being studied with respect to operations carried out within single cells, all the way up to systems consisting of millions of neurons distributed across several brain areas).
  2. What is the nature and quantity of information represented at each processing stage of a neural subsystem? What is being represented? (i.e., what is the relevant stimulus world for the system under study?)
  3. What is the code with which that information is represented, transmitted and operated upon across those channels? A variety of encoding schemes have been proposed, ranging from simple linear rate codes to complex nonlinear ensemble codes. What are rigorous criteria for identifying linear and non-linear codes? For static vs. dynamic temporal codes? What algorithms should we develop and apply to identify these different schemes?
  4. Are nerve cells and networks noisy or deterministic? What are the principal sources of noise, from the biophysical level of macromolecules and ion channels to the dynamics of large networks of synaptically interconnected cells? To what extent must stochastic behavior be incorporated into neural models, and at what phenomenological level, in order to ensure the validity of those models?

Accepted Speakers

Shun-ichi Amari
Riken Brain Science Institute
Charles Anderson
Department of Anatomy & Neurobiology, Washington University School of Medicine
David Arathorn
Montana State University
Michael Black
Department of Computer Science, Brown University
Hemant Bokil
Lucent Technologies Bell Laboratories
Naama Brenner
Department of Chemical Engineering, Technion - Israel Institute of Technology
Emery Brown
Department of Anesthesia, Massachusetts General Hospital
Sharon Crook
Department of Mathematics, University of Maine
Yang Dan
University of California, Berkeley
Mike DeWeese
Cold Spring Harbor Lab
Alexander Dimitrov
Center for Computational Biology, Montana State University
Chris Eliasmith
Department of Philosophy, University of Waterloo
David Field
Psychology Department, Cornell University
Peter Foldiak
Psychological Laboratory, University of St. Andrews
Bijoy Ghosh
Department of Systems Science & Mathematics, Washington University
Sonja Gruen
Neurobiology - Inst for Biology, Freie Universitat Berlin
John Hertz
Nordic Institute for Theoretical Atomic Physics (NORDITA)
Don Johnson
Department of Electrical & Computer Engineering, Rice University
Robert Kass
Department of Statistics, Carnegie-Mellon University
Peter Latham
Department of Neurobiology, University of California, Los Angeles
Tai Sing Lee
Computer Science/Ctr for Neural Basis of Cognition, Carnegie-Mellon University
Christian Machens
Cold Spring Harbor Laboratory
John Miller
Center for Computational Biology, Montana State University
Bruno Olshausen
Center for Neuroscience, University of California, Davis
Liam Paninski
Center for Neural Science, New York University
Stefano Panzeri
Department of Optometry & Neuroscience, University of Manchester Institute of Science and Technology (UMIST)
Rajesh Rao
Computer Science and Engineering, University of Washington
Barry Richmond
Laboratory of Neuropsychology, National Institute of Mental Health
Simon Schultz
Center for Neural Science, New York University
Tatyana Sharpee
Department of Physiology, University of California, San Diego
Nandini Singh
National Brain Research Centre, India
Martin Stetter
Corporate Technology, Siemens AG
Jonathan Victor
Department of Neurology & Neuroscience, Cornell University
Monday, February 10, 2003
Time Session
09:15 AM - 10:00 AM
Emery Brown - Neuroscience Data: Dynamic and Multivariate


10:15 AM - 11:00 AM
Jonathan Victor - Representation of Visual Information by Cortical Neurons: Are spikes merely estimators of a firing rate?

A striking feature of the activity of cortical neurons is that their spike trains are irregular, and responses to repeated presentations of the same stimulus can be quite variable. Moreover, neighboring neurons have generally similar response properties, but their variability is largely independent. It is often assumed that an inhomogeneous Poisson process is an adequate description of cortical response variability. Were this the case, individual spikes would, at best, serve as estimators of a firing rate, and decoding of a population of similar neurons would optimally be done by a population average.


We test these ideas with two kinds of experiments carried out on clusters of neurons in the primary visual cortex of the macaque monkey. In the first set of experiments, we record responses to repeated presentations of pseudorandom (m-sequence) patterns, which allows for both an analysis of average response properties and a direct test of the inhomogeneous Poisson hypothesis. In the second set of experiments, we record responses to more traditional visual stimuli (spatial grating patterns), and analyze these responses via a metric-space approach. The latter approach provides a means to formalize and test a wide variety of coding hypotheses, especially as they relate to temporal representation of information.


The experiments are complementary, and converge on the conclusion that spike trains are more than estimators of a firing rate, and that the detailed pattern of neural activity within individual spike trains and across neurons cannot be ignored.
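
A minimal sketch of the metric-space idea, for readers who want to experiment: the Victor-Purpura distance (references 5-6 below) is an edit distance in which deleting or inserting a spike costs 1 and shifting a spike by dt costs q*|dt|, computed by dynamic programming. The spike times and cost parameter here are arbitrary.

    import numpy as np

    def victor_purpura(t1, t2, q):
        """Victor-Purpura distance between two sorted spike-time arrays.

        Deleting or inserting a spike costs 1; shifting a spike by dt
        costs q * |dt| (q, in 1/seconds, sets the timescale of precision).
        """
        n, m = len(t1), len(t2)
        D = np.zeros((n + 1, m + 1))
        D[:, 0] = np.arange(n + 1)          # delete all spikes of t1
        D[0, :] = np.arange(m + 1)          # insert all spikes of t2
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i, j] = min(D[i - 1, j] + 1,
                              D[i, j - 1] + 1,
                              D[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
        return D[n, m]

    # q -> 0 reduces to a spike-count comparison; large q makes precise
    # spike timing dominate the distance.
    print(victor_purpura([0.10, 0.25, 0.40], [0.12, 0.42], q=10.0))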



  1. Lab web page: http://www-users.med.cornell.edu/~jdvicto/labonweb.html

  2. Background material on the spike metric method: http://www-users.med.cornell.edu/~jdvicto/metricdf.html

  3. Review article on temporal aspects of early visual processing, Victor, J.D. (1999). Temporal aspects of neural coding in the retina and lateral geniculate: a review. Network, 10, R1-66. http://www-users.med.cornell.edu/~jdvicto/vict99r.html

  4. Selected publications on neural coding (additional related publications at http://www-users.med.cornell.edu/~jdvicto/jdvpubsc.html)

  5. Victor, J.D., & Purpura, K. (1996). Nature and precision of temporal coding in visual cortex: a metric-space analysis. J. Neurophysiol., 76, 1310-1326. http://www-users.med.cornell.edu/~jdvicto/vipu96.html

  6. Victor, J.D., & Purpura, K.P. (1997). Metric-space analysis of spike trains: theory, algorithms, and application. Network, 8, 127-164. http://www-users.med.cornell.edu/~jdvicto/vipu97.html

  7. Reich, D.S., Mechler, F., & Victor, J.D. (2001). Independent and redundant information in nearby cortical neurons. Science, 294, 2566-2568.

  8. Mechler, F., Reich, D. S., & Victor, J.D. (2002). Detection and discrimination of relative spatial phase by V1 neurons. J. Neurosci., 22, 6129-6157. http://www-users.med.cornell.edu/~jdvicto/merevi02.html

  9. Victor, J.D. (in press). Binless strategies for estimation of information from neural data. Phys. Rev. E. http://www-users.med.cornell.edu/~jdvicto/vict03.html

11:30 AM - 12:15 PM
Don Johnson - Information Processing Performance Limits of Neural Populations

Determining whether or not neural populations work in concert to encode information has defied conventional analysis. New techniques based on information theory seem to hold the best promise. Using them requires defining a baseline performance against which to judge population coding. Using an information-processing-theoretic approach, we show that the conventional baseline is misleading. We show that stimulus-induced dependence alone is sufficient to encode information perfectly, and we propose that this standard should serve as the baseline. When using this baseline, we show that cooperative populations, which exhibit both stimulus- and connection-induced dependence, can only perform better than the baseline for relatively small population sizes.

02:00 PM - 02:45 PM
Robert Kass - Statistical Modeling of Temporal Evolution in Neuronal Activity

My main aim in this presentation is to motivate the use of probability models in the statistical analysis of neuronal data. Probability models offer efficiency, flexibility, and the ability to make formal statistical inferences. I illustrate by considering estimation of instantaneous firing rate, variation in firing rate across many neurons, decoding for movement prediction, within-trial firing rate (non-Poisson spiking), and correlated spiking across pairs of neurons.
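
As a toy version of the first of those illustrations (estimating an instantaneous firing rate), with an invented rate function and kernel bandwidth: simulate an inhomogeneous Poisson spike train by thinning, then smooth the spikes with a Gaussian kernel.

    import numpy as np

    rng = np.random.default_rng(0)
    T, rmax = 2.0, 40.0
    rate = lambda t: 20 + 15 * np.sin(2 * np.pi * t)     # "true" rate in Hz

    # Inhomogeneous Poisson spikes by thinning a rate-rmax homogeneous process.
    cand = rng.uniform(0, T, rng.poisson(rmax * T))
    spikes = np.sort(cand[rng.uniform(0, rmax, cand.size) < rate(cand)])

    def rate_hat(t, spikes, bw=0.1):
        """Gaussian-kernel estimate of the instantaneous rate (Hz)."""
        k = np.exp(-0.5 * ((t - spikes[:, None]) / bw) ** 2)
        return k.sum(axis=0) / (bw * np.sqrt(2 * np.pi))

    t = np.linspace(0.2, T - 0.2, 7)          # avoid edge bias in the sketch
    print(np.round(rate(t), 1))               # true rate at a few times
    print(np.round(rate_hat(t, spikes), 1))   # kernel estimate at those times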

Tuesday, February 11, 2003
Time Session
08:45 AM - 09:30 AM
David Field - Visual Coding, Redundancy, and the Statistics of the Natural World

Over the last 15 years, a range of insights into visual coding have developed out of a deeper understanding of the statistics of the natural environment. The structure arising from correlations in pixel values, as well as the sparse edge-related structure of natural scenes, has helped to provide an account of the processing of information along the visual pathway from retinae to cortex. However, the statistical dependencies in natural images occur at all levels of analysis. One cannot assume that any method would be capable of finding descriptions in which the units of description are independent: truly independent components are simply not attainable for most natural environments. How, then, does one handle redundancy when independence is either not possible or impractical given the number of neurons? One insight may come from the lateral connections between oriented neurons in primary visual cortex. Here, we find conditions where small collections of neurons, rather than single neurons, appear to represent the redundant structure (e.g., the continuity of edges). Do these modes of representation provide insights into higher levels of representing redundancy? This talk will probe some of the possible limits of what we can learn by understanding the redundancy of the natural world.



  1. Field, D. J. (1987). Relations between the statistics of natural images and the response profiles of cortical cells. Journal of the Optical Society of America A, 4, 2379-2394.

  2. Field, D. (1994). What is the goal of sensory coding? Neural Computation, 6, 559-601.

  3. Olshausen, B.A., & Field, D.J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381, 607-609.

  4. Hess, R. F., & Field, D. J. (1999). Integration of contours: New insights. Trends in Cognitive Sciences, 3, 480-486.

09:45 AM - 10:30 AM
Yang Dan - Analysis of Visual Coding with Nonlinear Methods

A major challenge in studying sensory processing is to understand the meaning of the neural messages encoded in the spiking activity of neurons. In the visual cortex, the majority of neurons have nonlinear response properties, making it difficult to characterize their stimulus-response relationships. I will discuss two nonlinear methods for analyzing the input-response relationship of these cortical neurons: training of artificial neural networks with the back-propagation algorithm, and second-order Wiener kernel analysis. Both methods can capture much of the input-response transformation in the classical receptive fields of cortical complex cells.
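
For orientation, here is how the second of those methods is commonly carried out for a Gaussian white-noise input (the Lee-Schetzen cross-correlation formulas). The quadratic "complex-cell-like" model neuron is invented for the demonstration, and real experiments face far worse noise and data limits.

    import numpy as np

    rng = np.random.default_rng(1)
    N, L, sigma = 100_000, 20, 1.0
    x = rng.normal(0, sigma, N)                      # white-noise stimulus

    # Invented complex-cell-like model: squared output of a linear filter.
    h = np.exp(-np.arange(L) / 5.0) * np.sin(np.arange(L) / 2.0)
    y = np.convolve(x, h)[:N] ** 2 + rng.normal(0, 0.1, N)
    y0 = y - y.mean()

    # First-order kernel k1[u] = E[y0(t) x(t-u)] / sigma^2 (near zero here,
    # since this model's response is an even function of the stimulus).
    k1 = np.array([np.mean(y0[u:] * x[:N - u]) for u in range(L)]) / sigma**2

    # Second-order kernel k2[u,v] = E[y0(t) x(t-u) x(t-v)] / (2 sigma^4).
    def k2(u, v):
        s = max(u, v)
        return np.mean(y0[s:] * x[s - u:N - u] * x[s - v:N - v]) / (2 * sigma**4)

    # For this model, k2(u, v) should recover h[u] * h[v].
    print([round(float(k2(2, v)), 3) for v in range(5)])
    print(np.round(h[2] * h[:5], 3))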

11:00 AM - 11:45 AM
Alexander Dimitrov - Analysis and modeling of sensory systems with Rate Distortion Theory

We present an analytical approach through which the relevant stimulus space and the corresponding neural symbols of a neuron or neural ensemble can be discovered simultaneously and quantitatively, making few assumptions about the nature of the code or relevant features. The basis for this approach is to conceptualize a neural coding scheme as a collection of stimulus-response classes akin to a dictionary or 'codebook', with each class corresponding to a spike pattern 'codeword' and its corresponding stimulus feature in the codebook. The neural codebook is derived by quantizing the neural responses into a small reproduction set, and optimizing the quantization to minimize an information-based distortion function. This approach uses tools from Rate Distortion Theory for the analysis of neural coding schemes. Its success prompted us to consider the general framework of signal quantization with minimal distortion as a model for the functioning of early sensory processing. Evidence from behavioural and neuroanatomical data suggested that symmetries in the sensory environment need to be taken into account as well. We suggest two approaches - implicit and explicit - which can incorporate the symmetries in the quantization model.
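
A crude stand-in for the quantization step (the actual method uses an annealing procedure on an information-based distortion function; here a greedy agglomerative merge plays that role, and the joint stimulus-response table is invented): quantize responses Y into a few classes Z so as to keep the mutual information I(X; Z) with the stimulus X as high as possible.

    import numpy as np

    def mi(p):
        """Mutual information (bits) of a joint table p(x, z)."""
        px, pz = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
        nz = p > 0
        return (p[nz] * np.log2(p[nz] / (px @ pz)[nz])).sum()

    rng = np.random.default_rng(8)
    p_xy = rng.dirichlet(np.ones(48)).reshape(6, 8)   # invented joint p(x, y)

    # Start with each response y as its own class; greedily merge the pair
    # of classes whose merger loses the least information about x.
    classes = [[y] for y in range(p_xy.shape[1])]
    while len(classes) > 3:                           # target: 3 codewords
        best = None
        for i in range(len(classes)):
            for j in range(i + 1, len(classes)):
                trial = [c for k, c in enumerate(classes) if k not in (i, j)]
                trial.append(classes[i] + classes[j])
                p_xz = np.stack([p_xy[:, c].sum(1) for c in trial], axis=1)
                if best is None or mi(p_xz) > best[0]:
                    best = (mi(p_xz), trial)
        classes = best[1]
    print(classes)   # response classes ("codewords") of the learned codebook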

02:00 PM - 02:45 PM
Naama Brenner - Adaptive Neural Codes: Function and Mechanism

Neural codes are highly adaptive and context dependent. Results will be reviewed that indicate the functional role of adaptive coding in sensory systems; information theory can help provide a quantitative understanding of these aspects. From a mechanistic point of view, maintaining an adaptive code requires flexibility of neural responses in both space and time. Experiments on random networks will be described, indicating that some features of sensory adaptation arise from generic neural network structure, without specialized anatomy.
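
As a toy numerical illustration of the adaptive-rescaling idea in reference 1 below: if the gain of a fixed saturating nonlinearity is continuously rescaled by the recent stimulus standard deviation, the response distribution stays matched to the input statistics when the stimulus variance switches. The window length and nonlinearity are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(2)

    # Stimulus whose standard deviation switches context halfway through.
    s = np.concatenate([rng.normal(0, 1.0, 5000), rng.normal(0, 4.0, 5000)])

    def local_sd(s, win=500):
        """Running estimate of the recent stimulus standard deviation."""
        out = np.empty_like(s)
        for i in range(len(s)):
            out[i] = s[max(0, i - win):i + 1].std() or 1.0
        return out

    g = lambda u: 1 / (1 + np.exp(-u))       # fixed saturating nonlinearity

    fixed = g(s)                  # no adaptation: high-variance half saturates
    adapted = g(s / local_sd(s))  # gain rescaled by local standard deviation

    for name, r in [("fixed", fixed), ("adapted", adapted)]:
        print(name, np.round([r[:5000].std(), r[5000:].std()], 3))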


References:



  1. Brenner, N., Bialek, W., & de Ruyter van Steveninck, R. (2000). Adaptive rescaling maximizes information transmission. Neuron, 26(3), 695-702.

  2. Fairhall, A., Lewen, G., Bialek, W., & de Ruyter van Steveninck, R. (2001). Efficiency and ambiguity in an adaptive neural code. Nature (London, U. K.), 412(6849), 787-792.

Wednesday, February 12, 2003
Time Session
08:45 AM - 09:30 AM
Bruno Olshausen - Sparse Coding of Time-varying Natural Images

The images that fall upon our retinae contain certain statistical regularities over space and time. In this talk I will discuss a method for modeling this structure based upon sparse coding in time. When adapted to time-varying natural images, the model develops a set of space-time receptive fields similar to those of simple cells in the primary visual cortex. A continuous image sequence is thus re-represented in terms of a set of punctate, spike-like events in time. The suggestion is that *both* the receptive fields of V1 neurons and the spiking nature of neural activity go hand in hand---i.e., they are part of a coordinated strategy for producing sparse representations of sensory data.
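
For concreteness, a minimal static sparse-coding sketch in the spirit of reference 2 below (the talk extends this idea to space-time kernels and spike-like events): infer coefficients by an L1-penalized least-squares step (ISTA), then nudge the dictionary. The random "patches" are a stand-in for whitened natural-image patches, and all sizes and step sizes are arbitrary.

    import numpy as np

    rng = np.random.default_rng(3)
    D, K, lam = 64, 128, 0.1     # patch dim (8x8), basis count, sparsity weight

    def infer(I, Phi, n_steps=100, lr=0.1):
        """Sparse coefficients a minimizing 0.5*||I - Phi a||^2 + lam*|a|_1."""
        a = np.zeros(Phi.shape[1])
        for _ in range(n_steps):
            a = a + lr * Phi.T @ (I - Phi @ a)                    # gradient step
            a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0)  # soft threshold
        return a

    Phi = rng.normal(size=(D, K))
    Phi /= np.linalg.norm(Phi, axis=0)

    for _ in range(200):                   # dictionary learning loop
        I = rng.normal(size=D)             # stand-in for a whitened image patch
        a = infer(I, Phi)
        Phi += 0.01 * np.outer(I - Phi @ a, a)   # Hebbian-like update
        Phi /= np.linalg.norm(Phi, axis=0)       # renormalize basis functions
    print(int((infer(I, Phi) != 0).sum()), "active coefficients")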



  1. Olshausen, B.A. (in press). Principles of image representation in visual cortex. In The Visual Neurosciences, L.M. Chalupa, J.S. Werner, eds. MIT Press. (Currently available at ftp://redwood.ucdavis.edu/pub/papers/visn-preprint.pdf)

  2. Olshausen, B.A., & Field, D.J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381, 607-609.

  3. Dong, D.W., & Atick, J.J. (1995). Temporal decorrelation: a theory of lagged and nonlagged responses in the lateral geniculate nucleus. Network: Computation in Neural Systems, 6, 159-178.

  4. Simoncelli, E.P., & Olshausen, B.A. (2001). Natural image statistics and neural representation. Annual Reviews of Neuroscience, 24, 1193- 1215.

  5. van Hateren, J.H., & Ruderman, D.L. (1998). Independent component analysis of natural image sequences yields spatio-temporal filters similar to simple cells in primary visual cortex. Proc.R.Soc.Lond. B, 265, 2315-2320.

09:45 AM - 10:30 AM
Chris Eliasmith - Neural Engineering

Charles Anderson and I have recently proposed a unified framework for generating large-scale neurally plausible models that relies on integrating recent advances in neural coding with modern control theory (in our book 'Neural Engineering'). I will briefly describe this framework, including our approach to unifying population and temporal coding for scalar, vector, and function representation.
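
A minimal sketch of the representational core of that framework, under simplifying assumptions (rectified-linear rate neurons, a scalar variable): encode with heterogeneous tuning curves, then obtain linear decoders by least squares.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 50                                     # neurons encoding x in [-1, 1]
    enc = rng.choice([-1.0, 1.0], n)           # preferred directions (encoders)
    gain = rng.uniform(0.5, 2.0, n)
    bias = rng.uniform(-1.0, 1.0, n)

    def rates(x):
        """Tuning curves a_i(x) = max(0, gain_i * enc_i * x + bias_i)."""
        return np.maximum(gain * enc * np.asarray(x)[..., None] + bias, 0)

    xs = np.linspace(-1, 1, 200)
    A = rates(xs)                              # (200, n) activity matrix
    d = np.linalg.lstsq(A, xs, rcond=None)[0]  # least-squares linear decoders

    print(round(float(rates(0.3) @ d), 3))     # decoded estimate of x = 0.3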

11:00 AM - 11:45 AM
David Arathorn - Map-Seeking Circuits in Visual Cognition

A wide variety of visual tasks and psychophysical phenomena depend on the identification of a previously captured pattern which appears in part of the current retinal image transformed by translation, orientation, scaling and perspectivity. Real-time performance of biological circuits precludes a serial search of any sort, and to date all attempts to conceive robust solutions based on "invariances" have fallen short. By exploiting a simple ordering property of superpositions, a class of simple, elegant circuits can concurrently discover a correct memory match and a correct composition of transformations to parts of an input image in the midst of clutter or distractors. Termed map-seeking circuits, they have isomorphic biological, analog electronic and algorithmic implementations, and are capable of real-time performance in any of those realizations. Various recognition and shape-from-viewpoint-displacement tasks are demonstrated. As a general-purpose forward/inverse transformation solver, the map-seeking circuit may be applied to other biological computational problems; application to limb inverse kinematics is demonstrated.



  1. Arathorn, D.W. (2002). Map-Seeking Circuits in Visual Cognition. Stanford University Press.

  2. Arathorn, D.W. (2001). Recognition under transformation using superposition ordering property. IEE Electronics Letters, 37(3), 164.

02:00 PM - 02:45 PM
Rajesh Rao - Probabilistic Computation in Recurrent Cortical Circuits

There has been considerable interest in Bayesian networks and probabilistic "graphical models" in the artificial intelligence community in recent years. Simultaneously, a large number of human psychophysical results have been successfully explained using Bayesian and other probabilistic models. A central question that is yet to be resolved is how such models can be implemented neurally. In this talk, I will show how a network architecture commonly used to model the cerebral cortex can implement probabilistic (Bayesian) inference for an arbitrary Markov model. The suggested approach is illustrated using a visual motion detection task. The simulation results show that the model network exhibits direction selectivity and correctly computes the posterior probabilities for motion direction. When used to solve the well-known random-dots motion discrimination task, the model generates responses that mimic the activities of evidence-accumulating neurons in cortical areas LIP and FEF. In addition, the model predicts reaction time distributions that are similar to those obtained in human psychophysical experiments that manipulate the prior probabilities of targets and task urgency.
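
The computation at the heart of this proposal can be stated compactly: recursive Bayesian filtering over a discrete Markov model (predict with the transition matrix, weight by the likelihood, renormalize). A sketch with an invented two-state "motion direction" model follows; in the talk's architecture, the recurrent network is argued to carry out the equivalent of this update in its dynamics.

    import numpy as np

    T = np.array([[0.9, 0.1],      # invented transitions P(x_t = j | x_{t-1} = i)
                  [0.1, 0.9]])
    L = np.array([[0.6, 0.4],      # invented likelihoods P(y | x), y in {0, 1}
                  [0.4, 0.6]])

    def filter_posterior(obs, prior=(0.5, 0.5)):
        """P(x_t | y_1..t) by predict -> weight -> renormalize."""
        p = np.asarray(prior, float)
        for y in obs:
            p = L[:, y] * (T.T @ p)   # prediction times likelihood
            p /= p.sum()
        return p

    print(np.round(filter_posterior([1, 1, 0, 1, 1, 1]), 3))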



  1. Rao, R.P.N., Olshausen, B.A., & Lewicki, M.S. (Eds.). (2002). Probabilistic models of the brain: Perception and neural function. Cambridge: MIT Press.

  2. Pouget, A., Dayan, P., & Zemel, R.S. (2000). Information processing with population codes. Nature Reviews Neuroscience, 1, 125-132.

  3. Anderson, C. H., & Van Essen, D. C. (1994). Neurobiological computational systems. In Zurada, J. M., Marks II, R. J., & Robinson, C. J., eds., Computational Intelligence: Imitating Life. New York, NY: IEEE Press, pp. 213-222.

  4. Rao, R.P.N. (1999). An optimal estimation approach to visual perception and learning. Vision Research, 39(11), 1963-1989.

Thursday, February 13, 2003
Time Session
08:45 AM - 09:30 AM
Simon Schultz - Spikes and the Coding of Visual Information in the Cortex

The elemental symbol manipulated by cortical neurons is the spike, or action potential. Spikes are not independent, however, and interactions between them - whether spikes from a different cell, or from the same cell at a different time - may affect the way in which information is coded. We have developed procedures for separating out the contribution of interactions to the Shannon information content of spike trains. In this talk I will discuss the application of information theory to a number of experiments which have led to insights about how interactions between spikes affect the neural coding of visual information. The first experiment concerns how information quantities change over the course of development of the visual system. The second concerns the effect of correlations in the spiking activity of pairs of suitably related V1 neurons - do these correlations result in synergistic or redundant pooling of information across cells? In the third experiment we examine the dynamic responses of cells in an extrastriate visual area (MT), looking for synergistic and redundant interactions between spikes. In all of these cases, we see that the spike trains cannot be approximated by Poisson processes - the amount of information represented depends upon correlations between the spikes.



  1. Panzeri, S., & Schultz, S.R. (2001). A unified approach to the study of temporal, correlational and rate coding. Neural Computation, 13(6), 1311-1349.

  2. Schultz, S.R., & Panzeri, S. (2001). Temporal correlations and neural spike train entropy. Physical Review Letters, 86(25), 5823-5826.

  3. Rust, N.C., Schultz, S.R., & Movshon, J.A. (in press). A reciprocal relationship between reliability and responsiveness in macaque striate cortex neurons during development. J. Neurosci.

09:45 AM - 10:30 AM
Alexander Borst - Noise, not Stimulus Entropy, Determines Neural Information Rate

Recent theoretical advances allow for the determination of the information rate inherent in the spike trains of nerve cells. However, up to now, the dependence of the information rate on stimulus parameters has not been studied systematically in any neuron. Here, I investigate the information carried by the spike trains of H1, a motion-sensitive visual interneuron of the blowfly (Calliphora vicina), using a moving grating as a stimulus. One might expect that, up to a certain limit, the richer the stimulus entropy, the larger the information rate. This, however, is not the case: increasing either the dynamic range of the stimulus or the maximum velocity has little or no influence on the information rate. In contrast, the information rate increases steeply when the size or the contrast of the stimulus is enlarged. It appears that, regardless of the stimulus entropy, the neuron covers the stimulus with its whole response repertoire, with the information rate being limited by the noise of the stimulus and of the neural hardware.
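
Information rates of this kind are often obtained with the "direct" word-entropy method of Strong et al. (1998); whether that is the exact estimator used here is not stated, but the idea is easy to sketch. Bin size, word length, and the synthetic spike trains below are placeholders, and the sketch ignores the sampling-bias corrections a real analysis needs.

    import numpy as np
    from collections import Counter

    def entropy(words):
        """Plug-in entropy (bits) of a collection of binary words."""
        counts = np.array(list(Counter(map(tuple, words)).values()), float)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    def info_rate(trials, word_len=8, dt=0.002):
        """Direct-method estimate: (H_total - H_noise) / word duration.

        trials: (n_trials, n_bins) binary spikes to repeats of one stimulus."""
        n_trials, n_bins = trials.shape
        usable = n_bins - n_bins % word_len
        words = trials[:, :usable].reshape(n_trials, -1, word_len)
        H_total = entropy(words.reshape(-1, word_len))        # all words pooled
        H_noise = np.mean([entropy(words[:, t])               # across repeats,
                           for t in range(words.shape[1])])   # per time slot
        return (H_total - H_noise) / (word_len * dt)          # bits per second

    rng = np.random.default_rng(5)
    prob = rng.uniform(0, 0.5, 500)               # synthetic time-varying spike prob.
    trials = (rng.uniform(size=(50, 500)) < prob).astype(int)
    print(round(float(info_rate(trials)), 1), "bits/s")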

11:00 AM - 11:45 AM
Peter Latham - Decoding Spike Trains: Are correlations important?

Correlations among action potentials, both within spike trains from single neurons and across spike trains from multiple neurons, are ubiquitous. They are observed in many species, from the common house fly to the primate. The role of these correlations is unclear and has long been the subject of debate. Do correlations carry extra information -- information that can't be extracted from the uncorrelated responses -- or don't they? Part of the reason this question has been hard to answer is that it's not clear how to separate correlated from uncorrelated responses. Here we sidestep this issue, and instead rephrase the question as follows: Is it possible to extract all the information from a set of neuronal responses without any knowledge of the correlational structure? If the answer is "yes", then correlations are not important; otherwise, they are. This provides us with a rigorous method for assessing the role of correlations. We provide several examples to clarify the method, and then compare it to other approaches.
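
A toy version of that test, in the spirit of reference 1 below: build the stimulus posterior from the full response model and from an independence-assuming model, and measure the cost of ignoring correlations as Delta-I = sum_{s,r} p(s,r) log2[ p(s|r) / p_ind(s|r) ]. The two-neuron joint response tables are invented.

    import numpy as np

    # Invented p(r1, r2 | s) for two binary neurons (rows r1, cols r2).
    p_rs = {0: np.array([[0.40, 0.10],
                         [0.10, 0.40]]),   # correlated responses under s = 0
            1: np.array([[0.25, 0.25],
                         [0.25, 0.25]])}   # independent responses under s = 1
    p_s = [0.5, 0.5]

    def p_ind(s):
        """Independence-assuming model: product of the marginals of p(r|s)."""
        return np.outer(p_rs[s].sum(1), p_rs[s].sum(0))

    def posterior(model):
        joint = np.array([model(s) * p_s[s] for s in (0, 1)])  # (s, r1, r2)
        return joint / joint.sum(0)

    post_true = posterior(lambda s: p_rs[s])
    post_ind = posterior(p_ind)

    p_sr = np.array([p_rs[s] * p_s[s] for s in (0, 1)])        # p(s, r)
    dI = np.sum(p_sr * np.log2(post_true / post_ind))
    print(round(float(dI), 4), "bits lost by ignoring correlations")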



  1. Nirenberg, S., Carcieri, S.M., Jacobs, A.L., & Latham, P.E. (2001). Retinal ganglion cells act largely as independent encoders. Nature, 411, 698-701.

  2. Dan, Y., Alonso, J.M., Usrey, W.M., & Reid, R.C. (1998). Coding of visual information by precisely correlated spikes in the lateral geniculate nucleus. Nat Neurosci., 1, 501-7.

  3. Oram, M.W., Hatsopoulos, N.G., Richmond, B.J., & Donoghue, J.P. (2001). Excess synchrony in motor cortical neurons provides redundant direction information with that from coarse temporal measures. J Neurophysiol., 86, 1700-16.

  4. Pola, G., Thiele, A., Hoffmann, K.P., & Panzeri, S. (in press). An exact method to quantify the information transmitted by different methods of correlational coding. Network.

02:00 PM - 02:45 PM
Tai Sing Lee - Neural Adaptation to Environmental Statistics

The receptive fields of simple cells in the primary visual cortex have been modeled in terms of Gabor wavelets, and derived theoretically from efficient coding principles. In this talk, I will first report findings of a neurophysiological experiment demonstrating that signals with a naturalistic power spectrum provide not only a more efficient but also a more accurate means for identifying the kernels (receptive fields) of V1 neurons. The reason is that the neurons have been tuned to function best in the regime of natural stimuli rather than in other regimes. Second, I will report findings from another experiment showing that different stages of the neural responses in V1 actually code different aspects of the visual scene. While the early stage of the responses to a static image reflects the filtering properties of the neurons, the later stage of the response reflects the outcome of perceptual inference, which is in turn influenced by top-down feedback of the animals' prior statistical experience of their environment.


Some articles related to this talk can be found at http://www.cnbc.cmu.edu/~tai.

Friday, February 14, 2003
Time Session
09:45 AM - 10:30 AM
Shun-ichi Amari - Mathematical Aspects of Population Coding

Information is believed to be represented by excitation patterns of populations of neurons in the brain. Neurons fire stochastically, depending on inputs from the outside and mutual interactions within the population. The present talk addresses some mathematical aspects underlying the scheme of population coding.



  1. Orthogonal decomposition of a firing pattern into firing rates, pairwise correlations and higher-order interactions of neural firing in a population.

  2. Synfiring and higher-order interactions in a population of neurons.

  3. Fisher information and encoding/decoding accuracy in a neural field (see the sketch after this list).

  4. Algebraic singularities when multiple targets are presented in a neural field, and their resolution by synfiring.
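
As one concrete entry point to item 3 above (a sketch only, with an invented population of independent Poisson neurons and Gaussian tuning curves): the Fisher information J(s) = sum_i f_i'(s)^2 / f_i(s) of the population sets, via the Cramer-Rao bound, the best accuracy any unbiased decoder can achieve.

    import numpy as np

    # Invented population: independent Poisson neurons, Gaussian tuning.
    centers = np.linspace(-5, 5, 40)
    f = lambda s: 5 + 50 * np.exp(-0.5 * (s - centers) ** 2)             # Hz
    fp = lambda s: 50 * np.exp(-0.5 * (s - centers) ** 2) * (centers - s)

    def fisher(s, T=0.5):
        """Fisher information of the spike counts in a T-second window."""
        return T * np.sum(fp(s) ** 2 / f(s))

    # Cramer-Rao: no unbiased estimator beats variance 1/J(s).
    print(round(1 / np.sqrt(fisher(0.0)), 4), "lower bound on rms decoding error")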


References:



  1. Nakahara, H., & Amari, S. (2002). Information-geometric measure for neural spikes. Neural Computation, 14, 2269-2316.

  2. Wu, S., Nakahara, H., & Amari, S. (2001). Population coding with correlation and an unfaithful model. Neural Computation, 13, 775-797.

  3. Wu, S., Amari, S., & Nakahara, H. (2002). Population coding and decoding in a neural field: a computational study. Neural Computation, 14, 999-1026.

  4. Amari, S., Nakahara, H., Wu, S., & Sakai, Y. (2003). Synchronous firing and higher-order interactions in a neuron pool. Neural Computation, 15 (to appear).

11:00 AM - 11:45 AM
Barry Richmond - Decoding Spike Trains Instant-by-instant

In the brain, spike trains are generated in time, and presumably also interpreted as they unfold. Recent work suggests that in several areas of the monkey brain, individual spike times carry information because they reflect underlying rate variation. Constructing a model based on this stochastic structure allows us to apply order statistics to decode spike trains instant by instant, as spikes arrive or do not. Order statistics are time-consuming to compute in the general case. We demonstrate that data from neurons in V1 are well-fit by a mixture of Poisson processes; in this special case, our computations are substantially faster. In these data, spike timing contributed information beyond that available from spike count throughout the trial. At the end of the trial, a decoder based on the mixture of Poissons model correctly decoded about three times as many trials as expected by chance, compared to about twice as many as expected by chance using spike count only. If our model perfectly described the spike trains, and enough data were available to estimate model parameters, then our Bayesian decoder would be optimal. For 4/5 of the sets of stimulus-elicited responses, the observed spike trains were consistent with the mixture of Poissons model. Most of the error in estimating stimulus probabilities is due to not having enough data to specify the parameters of the model rather than to misspecification of the model itself.
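
The order-statistics machinery itself is beyond a short sketch, but the instant-by-instant flavor of the decoder can be conveyed with a simpler stand-in: an inhomogeneous-Poisson model whose stimulus posterior is updated after every small time bin. The rates and stimuli below are invented, and a mixture-of-Poissons likelihood would replace the plain Poisson term in the actual scheme.

    import numpy as np

    rng = np.random.default_rng(6)
    dt, n_bins = 0.005, 200                        # 5 ms bins, 1 s trial
    t = np.arange(n_bins) * dt
    rates = np.stack([20 + 0 * t,                  # invented rate profiles (Hz)
                      40 * t,                      # for three candidate stimuli
                      40 * (1 - t)])

    def decode_online(spike_counts, rates, dt):
        """Posterior over stimuli, updated bin by bin (Poisson likelihood)."""
        logp = np.zeros(len(rates))                # flat prior over stimuli
        posts = []
        for b, k in enumerate(spike_counts):       # k = spikes in bin b
            lam = rates[:, b] * dt
            logp += k * np.log(lam + 1e-12) - lam  # drop the k!-term (constant)
            p = np.exp(logp - logp.max())
            posts.append(p / p.sum())
        return np.array(posts)

    spikes = rng.poisson(rates[2] * dt)            # simulate stimulus 2
    post = decode_online(spikes, rates, dt)
    print(np.round(post[[10, 100, -1]], 2))        # posterior sharpens over time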

02:00 PM - 02:45 PM
Michael Black - Connecting Brains with Machines: The Neural Control of 2D Cursor Movement

Building a direct, artificial connection between the brain and the world requires answers to the following questions:



  1. What "signals" can we measure from the brain? From what regions? With what technology?

  2. How is information represented (or encoded) in the brain?

  3. What algorithms can we use to infer (or decode) the internal "state" of the brain?

  4. How can we build practical interfaces that exploit the available technology?


This talk will summarize our work on developing neural prostheses and will provide preliminary answers to the above questions, with a focus on the problem of modeling and decoding motor cortical activity. Recent work has shown that simple linear models can be used to approximate the firing rates of a population of cells in primary motor cortex as a function of the position, velocity, and acceleration of the hand. In particular, I will describe a real-time Kalman filter for inferring hand motion from the firing rates of a population of cells recorded with a chronically implanted microelectrode array. I will also describe non-linear generalizations of this model, including Generalized Linear Models (GLM) and Generalized Additive Models (GAM). Non-linear decoding is achieved using a recursive Bayesian estimator known as the "particle filter". I will illustrate these ideas by showing recent results with direct neural control of smooth 2D cursor motion.
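
A minimal sketch of the kind of Kalman decoder described: the state is 2D position and velocity, and the observations are binned firing rates assumed linear in the state. All dimensions and noise covariances are invented; in the real system the matrices A, H, Q, R are fit to training data, and the observation vector comes from the microelectrode array.

    import numpy as np

    rng = np.random.default_rng(7)
    dt, n_cells = 0.05, 20

    # State x = [px, py, vx, vy]: constant-velocity dynamics x_t = A x_{t-1} + w.
    A = np.eye(4); A[0, 2] = A[1, 3] = dt
    Q = 1e-3 * np.eye(4)                       # invented process noise
    H = rng.normal(0, 1, (n_cells, 4))         # invented rate model z = H x + v
    R = 0.5 * np.eye(n_cells)                  # invented observation noise

    def kalman_step(x, P, z):
        """One predict/update cycle given a new vector of firing rates z."""
        x, P = A @ x, A @ P @ A.T + Q                       # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
        x = x + K @ (z - H @ x)                             # update
        P = (np.eye(4) - K @ H) @ P
        return x, P

    x, P = np.zeros(4), np.eye(4)
    for _ in range(100):                       # fake rate vectors; real use:
        z = rng.normal(0, 1, n_cells)          # binned spike rates per 50 ms
        x, P = kalman_step(x, P, z)
    print(np.round(x[:2], 3))                  # decoded 2D cursor position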


This is joint work with John Donoghue, Elie Bienenstock, Yun Gao, Mijail Serruya, and Wei Wu.


Web page: http://www.cs.brown.edu/people/black/


Donoghue Lab home page: http://donoghue.neuro.brown.edu/


Overview of neural prosthetics project: http://www.cs.brown.edu/people/black/Papers/capriOverviewDraft.pdf


Kalman filter decoding paper: http://www.cs.brown.edu/people/black/Papers/nips02draft.pdf


Particle filtering paper: http://www.cs.brown.edu/people/black/Papers/NIPS14.pdf

Name Email Affiliation
Afghan, Muhammad afghan@helios.phy.ohiou.edu Physics & Astronomy, Ohio University
Amari, Shun-ichi amari@brain.riken.go.jp Riken Brain Science Institute
Anderson, Charles cha@shifter.wustl.edu Department of Anatomy & Neurobiology, Washington University School of Medicine
Arathorn, David dwa@giclab.com Montana State University
Black, Michael black@cs.brown.edu Department of Computer Science, Brown University
Bokil, Hemant bokil@bell-labs.com Lucent Technologies Bell Laboratories
Borisyuk, Alla borisyuk@mbi.osu.edu Mathematical Biosciences Institute, The Ohio State University
Borst, Alexander borst@neuro.mpg.de Systems & Computational Neurobiology, Max Planck Institute of Neurobiology
Brenner, Naama nbrenner@tx.technion.ac.il Department of Chemical Engineering, Technion - Israel Institute of Technology
Brown, Emery brown@neurostat.mgh.harvard.edu Department of Anesthesia, Massachusetts General Hospital
Chi, Zhiyi chi@galton.uchicago.edu Department of Statistics, University of Chicago
Cowen, Carl cowen@mbi.osu.edu Department of Mathematics, The Ohio State University
Craciun, Gheorghe craciun@math.wisc.edu Mathematical Biosciences Institute, The Ohio State University
Crook, Sharon crook@math.umaine.edu Department of Mathematics, University of Maine
Dan, Yang ydan@uclink4.berkeley.edu University of California, Berkeley
Danthi, Sanjay danthi.1@osu.edu Mathematical Biosciences Institute, The Ohio State University
DeWeese, Mike deweese@cshl.org Cold Spring Harbor Lab
DiCaprio, Ralph rdicaprio1@ohiou.edu Department of Biological Sciences, Ohio University
Dimitrov, Alexander alex@cns.montana.edu Center for Computational Biology, Montana State University
Dougherty, Daniel dpdoughe@mbi.osu.edu Mathematical Biosciences Institute, The Ohio State University
Eliasmith, Chris eliasmith@uwaterloo.ca Department of Philosophy, University of Waterloo
Field, David djf3@cornell.edu Psychology Department, Cornell University
Foldiak, Peter peter.foldiak@st-andrews.ac.uk Psychological Laboratory, University of St. Andrews
Gedeon, Tomas tgedeon@gmail.com Department of Mathematical Sciences, Montana State University
Ghosh, Bijoy ghosh@netra.wustl.edu Department of Systems Science & Mathematics, Washington University
Gruen, Sonja gruen@neurobiologie.fu-berlin.de Neurobiology - Inst for Biology, Freie Universitat Berlin
Hayot, Fernand hayot@mps.ohio-state.edu Department of Physics, The Ohio State University
Hertz, John hertz@nordita.dk Nordic Institute for Theoretical Atomic Physics (NORDITA)
Johnson, Don dhj@rice.edu Department of Electrical & Computer Engineering, Rice University
Kass, Robert kass@stat.cmu.edu Department of Statistics, Carnegie-Mellon University
Latham, Peter pel@ucla.edu Department of Neurobiology, University of California, Los Angeles
Lee, Tai Sing Computer Science/Ctr for Neural Basis of Cognition, Carnegie-Mellon University
Machens, Christian machens@cshl.edu Cold Spring Harbor Laboratory
Miller, John jm@incaireland.org Center for Computational Biology, Montana State University
Nirenberg, Sheila sheilan@ucla.edu Department of Neurobiology, University of California, Los Angeles
Olshausen, Bruno baolshausen@ucdavis.edu Center for Neuroscience, University of California, Davis
Paninski, Liam lmp228@nyu.edu Center for Neural Science, New York University
Panzeri, Stefano s.panzeri@umist.ac.uk Department of Optometry & Neuroscience, University of Manchester Institute of Science and Technology (UMIST)
Rao, Rajesh rao@cs.washington.edu Computer Science and Engineering, University of Washington
Rejniak, Katarzyna rejniak@mbi.osu.edu Mathematical Biosciences Institute, The Ohio State University
Richmond, Barry bjr@linux9.nimh.nih.gov Laboratory of Neuropsychology, National Institute of Mental Health
Roy, Prasen pkroy@nbrc.ac.in National Brain Research Center, Ministry of Science/Technology - India
Schultz, Simon schultz@cns.nyu.edu Center for Neural Science, New York University
Sharpee, Tatyana sharpee@phy.ucsf.edu Department of Physiology, University of California, San Diego
Singh, Nandini nandini@nbrc.ac.in National Brain Research Centre, India
Stanley, Garrett gstanley@deas.harvard.edu Department of Biomedical Engineering, Georgia Institute of Technology
Stetter, Martin martin.stetter@mchp.siemens.de Corporate Technology, Siemens AG
Terman, David terman@math.ohio-state.edu Department of Mathematics, The Ohio State University
Thomson, Mitchell Mathematical Biosciences Institute, The Ohio State University
Trost, Craig craig.trost@pfizer.com Computational Medicine, Pfizer Central Research
Victor, Jonathan jdvicto@med.cornell.edu Department of Neurology & Neuroscience, Cornell University
Wechselberger, Martin wm@mbi.osu.edu Mathematical Biosciences Institute, The Ohio State University
Wright, Geraldine wright.572@osu.edu Mathematical Biosciences Institute, The Ohio State University
Yang, Ting-Hui Institute of Applied Mathematics, National Chiao Tung University
Mathematical Aspects of Population Coding

Information is believed to be represented by excitation patterns of populations of neurons in the brain. Neurons fire stochastically, depending on inputs from the outside and mutual interactions within the population. The present talk addresses some mathematical aspects underlying the scheme of population coding.



  1. Orthogonal decomposition of a firing patter into firing rates, pairwise correlations and higher-order interactions of neural firing in a population.

  2. Synfiring and higher-order interactions in a population of neurons.

  3. Fisher information and encoding/decoding accuracy in a neural field.

  4. Algebraic singularities when multiple targets are presented in a neural field, and their resolution by synfiring


References:



  1. Nakahara, H., & Amari, S. (2002). Information-geometric measure for neural spikes. Neural Computation, 14, 2269-2316.

  2. Wu, S., Nakahara, H., & Amari, S. (2001). Population coding with correlation and an unfaithful model. Neural Computation, 13, 775-797.

  3. Wu, S., Amari, S., & Nakahara, H. (2002). Population coding and decoding in a neural field: a computational study. Neural Computation, 14, 999-1026.

  4. Amari, S., Nakahara, H., Wu, S., & Sakai, Y. (2003). Synchronous firing and higher-order interactions in a neuron pool. Neural Computation, 15 (to appear).

Map-Seeking Circuits in Visual Cognition

A wide variety of visual tasks and psychophyical phenomena depend on the identification of a previously captured pattern which appears in part of the current retinal image transformed by translation, orientation, scaling and perspectivity. Realtime performance of biological circuits precludes a serial search of any sort, and to date all attempts to conceive robust solutions based on "invariances" have fallen short. By exploiting a simple ordering property of superpositions a class of simple, elegant circuits can concurrently discover a correct memory match and correct composition of transformations to parts of an input image in the midst of clutter or distractors. Termed map-seeking circuits, they have isomorphic biological, analog electronic and algorithmic implementations, and are capable of realtime performance in any of those realizations. Various recognition and shape-from-viewpoint-displacement tasks are demonstrated. As a general purpose forward/inverse transformation solver the map-seeking circuit may be applied to other biological computational problems. Application to limb inverse kinematics is demonstrated.



  1. Arathorn, D.W. (2002). Map-Seeking Circuits in Visual Cognition. Stanford University Press.

  2. Arathorn, D.W. (2001). Recognition under transformation using superposition ordering property. Electronics Letters IEE, 37:3-164.

Connecting Brains with Machines: The Neural Control of 2D Cursor Movement

Building a direct, artificial, connection between the brain and the world, requires answers to the following questions:



  1. What "signals" can we measure from the brain? From what regions? With what technology?

  2. How is information represented (or encoded) in the brain?

  3. What algorithms can we use to infer (or decode) the internal "state" of the brain?

  4. How can we build practical interfaces that exploit the available technology?


This talk will summarize our work on developing neural prostheses and will provide preliminary answers to the above questions with a focus on the problem of modeling and decoding motor cortical activity. Recent work has shown that simple linear models can be used to approximate the firing rates of a population of cells in primary motor cortex as a function of the position, velocity, and acceleration of the hand. In particular, I will describe a real-time Kalman filter for inferring hand motion from the firing rates of a population of cells recorded with a chronically implanted microelectrode array. I will also describe non-linear generalizations of this model including Generalized Linear Models (GLM), and Generalized Additive Models (GAM). Non-linear decoding is achieved using a recursive Bayesian estimator known as the "particle filter". I will illustrate these ideas by showing recent results with direct neural control of smooth 2D cursor motion.


This is joint work with John Donoghue, Elie Bienenstock, Yun Gao, Mijail Serruya, and Wei Wu.


Web page: http://www.cs.brown.edu/people/black/


Donoghue Lab home page: http://donoghue.neuro.brown.edu/


Overview of neural prosthetics project: http://www.cs.brown.edu/people/black/Papers/capriOverviewDraft.pdf


Kalman filter decoding paper: http://www.cs.brown.edu/people/black/Papers/nips02draft.pdf


Particle filtering paper: http://www.cs.brown.edu/people/black/Papers/NIPS14.pdf

Noise, not Stimulus Entropy, Determines Neural Information Rate

Recent theoretical advances allow for the determination of the information rate inherent in the spike trains of nerve cells. However, up to now, the dependence of the information rate on stimulus parameters has not been studied in any neuron in a systematic way. Here, I investigate the information carried by the spike trains of H1, a motion-sensitive visual interneuron of the blowfly (Calliphora vicina) using a moving grating as a stimulus. One might expect that, up to a certain limit, the information rate becomes the larger the richer the stimulus entropy. This, however, is not the case: Increasing either the dynamic range of the stimulus or the maximum velocity has little or no influence at all on the information rate. In contrast, the information rate steeply increases when the size or the contrast of the stimulus is enlarged. It appears that, regardless of the stimulus entropy, the neuron covers the stimulus with its whole response repertoir, with the information rate being limited by the noise of the stimulus and the neural hardware.

Adaptive Neural Codes: Function and Mechanism

Neural codes are highly adaptive and context dependent. Some results will be reviewed indicating the functional aspects of adaptive coding in sensory systems. Information theory can help in providing a quantitative understanding of these aspects. From a mechanistic point of view, maintaining an adaptive code requires both space and time flexibility of neural responses. Experiments will be described on random networks, indicating that some features of sensory adaptation arise from neural network structure with no anatomy.


References:



  1. Brenner, N., Bialek, W., & de Ruyter van Steveninck, R. (2000). Adaptive rescaling maximizes information transmission. Neuron, 26(3), 695-702.

  2. Fairhall, A., Lewen, G., Blalek, W., & de Ruyter van Steveninck, R. (2001). Efficiency and ambiguity in an adaptive neural code. Nature (London, U. K.), 412(6849), 787-792.

Neuroscience Data: Dynamic and Multivariate

Neuroscience Data: Dynamic and Multivariate

Analysis of Visual Coding with Nonlinear Methods

A major challenge in studying sensory processing is to understand the meaning of the neural messages encoded in the spiking activity of neurons. In the visual cortex, the majority of neurons have nonlinear response properties, making it difficult to characterize their stimulus-response relationships. I will discuss two nonlinear methods to analyze the input-response relationship of these cortical neurons: training of artificial neural networks with the back-propagation algorithm and the second-order Wiener Kernel analysis. Both methods can capture much of the input-response transformation in the classical receptive fields of the cortical complex cells.

Analysis and modeling of sensory systems with Rate Distortion Theory

We present an analytical approach through which the relevant stimulus space and the corresponding neural symbols of a neuron or neural ensemble can be discovered simultaneously and quantitatively, making few assumptions about the nature of the code or relevant features. The basis for this approach is to conceptualize a neural coding scheme as a collection of stimulus-response classes akin to a dictionary or 'codebook', with each class corresponding to a spike pattern 'codeword' and its corresponding stimulus feature in the codebook. The neural codebook is derived by quantizing the neural responses into a small reproduction set, and optimizing the quantization to minimize an information-based distortion function. This approach uses tools from Rate Distortion Theory for the analysis of neural coding schemes. Its success prompted us to consider the general framework of signal quantization with minimal distortion as a model for the functioning of early sensory processing. Evidence from behavioural and neuroanatomical data suggested that symmetries in the sensory environment need to be taken into account as well. We suggest two approaches - implicit and explicit - which can incorporate the symmetries in the quantization model.

Neural Engineering

Charles Anderson and I have recently proposed a unified framework for generating large-scale neurally plausible models that relies on integrating recent advances in neural coding with modern control theory (in our book 'Neural Engineering'). I will briefly describe this framework, including our approach to unifying population and temporal coding for scalar, vector, and function representation.

Visual Coding, Redundancy, and the Statistics of the Natural World

Over the last 15 years, a range of insights into visual coding have developed out of a deeper understanding of the statistics of the natural environment. The structure arising from correlations in pixel values as well as the sparse edge related structure of natural scenes have helped to provide an account of the processing of information along the visual pathway from retinae to cortex. However, the statistical dependencies in natural images occur at all levels of analysis. One can not assume that any method would be capable of finding descriptions where the units of description are independent. Independent components are simply impossible with most natural environments. Then how does one handle redundancy when independence is either not possible or impractical given the number of neurons? One insight may come from the lateral connections between oriented neurons in primary visual cortex. Here, we find conditions where small collections of neurons appear to be representing the redundant structure (e.g., the continuity of edges), rather than single neurons. Do insights from these modes of representation provide insights into higher levels of representing redundancy? This talk will probe some of the possible limits of what we can learn by understanding the redundancy of the natural world.



  1. Field, D. J. (1987). Relations between the statistics of natural images and the response profiles of cortical cells. Journal of the Optical Society of America A, 4, 2379-2394.

  2. Field, D. (1994). What is the goal of sensory coding & Neural Computation. 6, 559-601.

  3. Olshausen, B.A., & Field, D.J. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381, 607-609.

  4. Hess, R. F., & Field, D. J. (2000). Integration of contours: New insights & trends in cognitive sciences. 3, 480-486.

Information Processing Performance Limits of Neural Populations

To determine whether or not neural populations work in concert to code information has defied conventional analysis. New techniques using information theory principles seem to hold the best promise. Using them requires defining a baseline performance against which to judge population coding. Using an information processing theoretic approach, we show that the conventional baseline is misleading. We show that stimulus-induced dependence alone is sufficient to encode information perfectly, and we propose that this standard should serve as the baseline. When using this baseline, we show that cooperative populations, which exhibit both stimulus- and connection-induced dependence, can only perform better than the baseline for relatively small population sizes.

Statistical Modeling of Temporal Evolution in Neuronal Activity

My main aim in this presentation is to motivate the use of probability models in the statistical analysis of neuronal data. Probability models offer efficiency, flexibility, and the ability to make formal statistical inferences. I illustrate by considering estimation of instantaneous firing rate, variation in firing rate across many neurons, decoding for movement prediction, within-trial firing rate (non-Poisson spiking), and correlated spiking across pairs of neurons.

Decoding Spike Trains: Are correlations important?

Correlations among action potentials, both within spike trains from single neurons and across spike trains from multiple neurons, are ubiquitous. They are observed in many species, from the common house fly to the primate. The role of these correlations is unclear and has long been the subject of debate. Do correlations carry extra information -- information that can't be extracted from the uncorrelated responses -- or don't they? Part of the reason this question has been hard to answer is that it's not clear how to separate correlated from uncorrelated responses. Here we sidestep this issue, and instead rephrase the question as follows: Is it possible to extract all the information from a set of neuronal responses without any knowledge of the correlational structure? If the answer is "yes", then correlations are not important; otherwise, they are. This provides us with a rigorous method for assessing the role of correlations. We provide several examples to clarify the method, and then compare it to other approaches.



  1. Nirenberg, S., Carcieri, S.M., Jacobs, A.L., & Latham, P.E. (2001). Retinal ganglion cells act largely as independent encoders. Nature, 411, 698-701.

  2. Dan, Y., Alonso, J.M., Usrey, W.M., & Reid, R.C. (1998). Coding of visual information by precisely correlated spikes in the lateral geniculate nucleus. Nat Neurosci., 1, 501-7.

  3. Oram, M.W., Hatsopoulos, N.G., Richmond, B.J., & Donoghue, J.P. (2001). Excess synchrony in motor cortical neurons provides redundant direction information with that from coarse temporal measures. J Neurophysiol., 86, 1700-16.

  4. Pola, G., Thiele, A., Hoffmann, K.P., & Panzeri, S. (in press). An exact method to quantify the information transmitted by different methods of correlational coding. Network.

Neural Adaptation to Environmental Statistics

The receptive fields of simple cells in the primary visual cortex have been modeled in terms of Gabor wavelets, and derived theoretically from efficient coding principles. In this talk, first, I will report findings of a neurophysiological experiment that demonstrate signals with naturalistic power spectrum provide not only a more efficient but a more accurate means for identifying the kernels (receptive fields) of V1 neurons. The reason is that the neurons have been tuned to functionbest in the regime of natural stimuli rather than in other regimes.Second, I will report findings from another experiment that showsthat different stages of the neural responses in V1 are actually codingdifferent aspects of the visual scenes. While the early stage ofthe responses to a static image reflects the filtering propertiesof the neurons, the later stage of the response reflect the outcomeof perceptual inference, which is in turn influenced by top-downfeedback of the prior statistical experience of the animals intheir environment.


Some articles related to this talk are available could be found in http://www.cnbc.cmu.edu/~tai.

Sparse Coding of Time-varying Natural Images

The images that fall upon our retinae contain certain statistical regularities over space and time. In this talk I will discuss a method for modeling this structure based upon sparse coding in time. When adapted to time-varying natural images, the model develops a set of space-time receptive fields similar to those of simple-cells in the primary visual cortex. A continuous image sequence is thus re-represented in terms of a set of punctate, spike-like events in time. The suggestion is that *both* the receptive fields of V1 neurons and the spiking nature of neural activity go hand in hand---i.e., they are part of a coordinated strategy for producing sparse representations of sensory data.



  1. Olshausen, B.A. (in press). Principles of image representation in visual cortex. In The Visual Neurosciences, L.M. Chalupa, J.S. Werner, eds. MIT Press. (Currently available at ftp://redwood.ucdavis.edu/pub/papers/visn-preprint.pdf)

  2. Olshausen, B.A., & Field, D.J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381, 607-609.

  3. Dong, D.W., & Atick, J.J. (1995). Temporal decorrelation: a theory of lagged and nonlagged responses in the lateral geniculate nucleus. Network: Computation in Neural Systems, 6, 159-178.

  4. Simoncelli, E.P., & Olshausen, B.A. (2001). Natural image statistics and neural representation. Annual Reviews of Neuroscience, 24, 1193- 1215.

  5. van Hateren, J.H., & Ruderman, D.L. (1998). Independent component analysis of natural image sequences yields spatio-temporal filters similar to simple cells in primary visual cortex. Proc.R.Soc.Lond. B, 265, 2315-2320.

Probabilistic Computation in Recurrent Cortical Circuits

There has been considerable interest in Bayesian networks and probabilisitic "graphical models" in the artificial intelligence community in recent years. Simultaneously, a large number of human psychophysical results have been successfully explained using Bayesian and other probabilistic models. A central question that is yet to be resolved is how such models can be implemented neurally. In this talk, I will show how a network architecture commonly used to model the cerebral cortex can implement probabilistic (Bayesian) inference for an arbitrary Markov model. The suggested approach is illustrated using a visual motion detection task. The simulation results show that the model network exhibits direction selectivity and correctly computes the posterior probabilities for motion direction. When used to solve the well-known random dots motion discrimination task, the model generates responses that mimic the activities of evidence-accumulating neurons in cortical areas LIP and FEF. In addition, the model predicts reaction time distributions that are similar to those obtained in human psychophysical experiments that manipulate the prior probabilities of targets and task urgency.



  1. Rao, R.P.N., Olshausen, B.A., & Lewicki, M.S. (Eds.). (2002). Probabilistic models of the brain: Perception and neural function. Cambridge, MA: MIT Press.

  2. Pouget, A., Dayan, P., & Zemel, R.S. (2000). Information processing with population codes. Nature Reviews Neuroscience, 1, 125-132.

  3. Anderson, C. H., & Van Essen, D. C. (1994). Neurobiological computational systems. In Zurada, J. M., Marks II, R. J., & Robinson, C. J., eds., Computational Intelligence: Imitating Life. New York, NY: IEEE Press, pp. 213-222.

  4. Rao, R.P.N. (1999). An optimal estimation approach to visual perception and learning. Vision Research, 39(11), 1963-1989.

Decoding Spike Trains Instant-by-instant

In the brain, spike trains are generated in time, and presumably also interpreted as they unfold. Recent work suggests that in several areas of the monkey brain, individual spike times carry information because they reflect underlying rate variation. Constructing a model based on this stochastic structure allows us to apply order statistics to decode spike trains instant by instant, as spikes arrive or do not. Order statistics are time-consuming to compute in the general case. We demonstrate that data from neurons in V1 are well-fit by a mixture of Poisson processes; in this special case, our computations are substantially faster. In these data, spike timing contributed information beyond that available from spike count throughout the trial. At the end of the trial, a decoder based on the mixture of Poissons model correctly decoded about three times as many trials as expected by chance, compared to about twice as many as expected by chance using spike count only. If our model perfectly described the spike trains, and enough data were available to estimate model parameters, then our Bayesian decoder would be optimal. For 4/5 of the sets of stimulus-elicited responses, the observed spike trains were consistent with the mixture of Poissons model. Most of the error in estimating stimulus probabilities is due to not having enough data to specify the parameters of the model rather than to misspecification of the model itself.
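
For concreteness, the sketch below (Python, hypothetical names) computes the posterior over stimuli from the spikes seen up to time t under a plain inhomogeneous Poisson model; it is a simplified stand-in for the order-statistics decoder, omitting the mixture-of-Poissons machinery described above.

    import numpy as np

    def spike_train_posterior(rate_fns, spike_times, t, prior, dt=1e-3):
        """Posterior over stimuli given the spikes observed in [0, t],
        assuming stimulus s drives an inhomogeneous Poisson process whose
        (vectorized) rate function is rate_fns[s]:

            log p(spikes | s) = sum_i log lam_s(t_i) - integral_0^t lam_s(u) du
        """
        spike_times = np.asarray(spike_times, float)
        grid = np.linspace(0.0, t, max(int(t / dt), 2))
        logpost = np.log(np.asarray(prior, float))
        for s, lam in enumerate(rate_fns):
            logpost[s] += np.log(lam(spike_times)).sum()  # spike-time term
            logpost[s] -= np.trapz(lam(grid), grid)       # no-spike (integral) term
        logpost -= logpost.max()                          # numerical stabilization
        post = np.exp(logpost)
        return post / post.sum()

Re-evaluating this as each spike arrives, or fails to arrive, yields the instant-by-instant posterior.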

Spikes and the Coding of Visual Information in the Cortex

The elemental symbol manipulated by cortical neurons is the spike, or action potential. Spikes are not independent, however, and interactions between them - whether with spikes from a different cell, or from the same cell at a different time - may affect the way in which information is coded. We have developed procedures for separating out the contribution of interactions to the Shannon information content of the spike trains. In this talk I will discuss the application of information theory to a number of experiments which have led to insight about how interactions between spikes affect the neural coding of visual information. The first experiment concerns how information quantities change over the course of development of the visual system. The second concerns the effect of correlations in the spiking activity of pairs of suitably related V1 neurons - do these correlations result in synergistic or redundant pooling of information across cells? In the third experiment we examine the dynamic responses of cells in an extrastriate visual area (MT), looking for synergistic and redundant interactions between spikes. In all of these cases, we see that the spike trains cannot be approximated by Poisson processes - the amount of information represented depends upon correlations between the spikes.
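
As a minimal illustration of the pairwise question (a plug-in estimator with hypothetical names, ignoring the sampling-bias corrections that careful information estimates require), the sketch below compares the information two cells carry jointly with the sum of what they carry individually; a positive difference indicates synergy, a negative one redundancy.

    import numpy as np

    def mutual_info(p_rs):
        """I(R; S) in bits from a joint probability table p_rs[r, s]."""
        p_r = p_rs.sum(axis=1, keepdims=True)
        p_s = p_rs.sum(axis=0, keepdims=True)
        nz = p_rs > 0
        return np.sum(p_rs[nz] * np.log2(p_rs[nz] / (p_r @ p_s)[nz]))

    def synergy(p_r1r2s):
        """I(R1, R2; S) - I(R1; S) - I(R2; S) for a table p_r1r2s[r1, r2, s]."""
        n1, n2, ns = p_r1r2s.shape
        i_joint = mutual_info(p_r1r2s.reshape(n1 * n2, ns))  # (R1, R2) as one response
        i_1 = mutual_info(p_r1r2s.sum(axis=1))               # marginalize out R2
        i_2 = mutual_info(p_r1r2s.sum(axis=0))               # marginalize out R1
        return i_joint - i_1 - i_2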



  1. Panzeri, S., & Schultz, S.R. (2001). A unified approach to the study of temporal, correlational and rate coding. Neural Computation, 13(6), 1311-1349.

  2. Schultz, S.R., & Panzeri, S. (2001). Temporal correlations and neural spike train entropy. Physical Review Letters, 86(25), 5823-5826.

  3. Rust, N.C., Schultz, S.R., & Movshon, J.A. (in press). A reciprocal relationship between reliability and responsiveness in macaque striate cortex neurons during development. J. Neurosci.

Representation of Visual Information by Cortical Neurons: Are spikes merely estimators of a firing rate?

A striking feature of the activity of cortical neurons is that the spike trains are irregular, and responses to repeated presentations of the same stimulus can be quite variable. Moreover, neighboring neurons have generally similar response properties, but their variability is largely independent. It is often assumed that an inhomogeneous Poisson process is an adequate description of cortical response variability. Were this the case, individual spikes would, at best, serve as estimators of a firing rate, and decoding of a population of similar neurons would optimally be done by a population average.
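
One elementary consequence of the inhomogeneous Poisson hypothesis is that spike counts across repeats of an identical stimulus should have a variance-to-mean ratio (Fano factor) of 1. The Python check below is a generic diagnostic under that assumption, not the specific test used in the experiments.

    import numpy as np

    def fano_factor(counts):
        """Variance-to-mean ratio of spike counts over repeated trials;
        an (inhomogeneous) Poisson process predicts a value of 1.
        """
        counts = np.asarray(counts, float)
        return counts.var(ddof=1) / counts.mean()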


We test these ideas with two kinds of experiments carried out on clusters of neurons in the primary visual cortex of the macaque monkey. In the first set of experiments, we record responses to repeated presentations of pseudorandom (m-sequence) patterns, which allows for both an analysis of average response properties and a direct test of the inhomogeneous Poisson hypothesis. In the second set of experiments, we record responses to more traditional visual stimuli (spatial grating patterns), and analyze these responses via a metric-space approach. The latter approach provides a means to formalize and test a wide variety of coding hypotheses, especially as they relate to temporal representation of information.
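
The metric-space approach rests on an edit distance between spike trains. The sketch below implements the dynamic-programming recursion of the Victor-Purpura metric from the cited papers (unit cost to insert or delete a spike, cost proportional to the size of the shift to move one); normalization choices and the clustering analysis built on top of the metric are omitted.

    import numpy as np

    def spike_distance(a, b, q):
        """Victor-Purpura distance between spike-time arrays a and b.

        Edit costs: 1 to insert or delete a spike, q * |dt| to shift one.
        The parameter q (1/s) sets the temporal precision probed; q = 0
        reduces the metric to a comparison of spike counts.
        """
        na, nb = len(a), len(b)
        D = np.zeros((na + 1, nb + 1))
        D[:, 0] = np.arange(na + 1)   # delete every spike of a
        D[0, :] = np.arange(nb + 1)   # insert every spike of b
        for i in range(1, na + 1):
            for j in range(1, nb + 1):
                D[i, j] = min(D[i - 1, j] + 1,
                              D[i, j - 1] + 1,
                              D[i - 1, j - 1] + q * abs(a[i - 1] - b[j - 1]))
        return D[na, nb]

Sweeping q and asking at which value stimuli are best discriminated reveals the temporal precision at which the spike trains carry information.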


The experiments are complementary, and converge on the conclusion that spike trains are more than estimators of a firing rate, and that the detailed pattern of neural activity within individual spike trains and across neurons cannot be ignored.



  1. Lab web page: http://www-users.med.cornell.edu/~jdvicto/labonweb.html

  2. Background material on the spike metric method: http://www-users.med.cornell.edu/~jdvicto/metricdf.html

  3. Review article on temporal aspects of early visual processing, Victor, J.D. (1999). Temporal aspects of neural coding in the retina and lateral geniculate: a review. Network, 10, R1-66. http://www-users.med.cornell.edu/~jdvicto/vict99r.html

  4. Selected publications on neural coding (additional related publications at http://www-users.med.cornell.edu/~jdvicto/jdvpubsc.html)

  5. Victor, J.D., & Purpura, K. (1996). Nature and precision of temporal coding in visual cortex: a metric-space analysis. J. Neurophysiol., 76, 1310-1326. http://www-users.med.cornell.edu/~jdvicto/vipu96.html

  6. Victor, J.D., & Purpura, K.P. (1997). Metric-space analysis of spike trains: theory, algorithms, and application. Network, 8, 127-164. http://www-users.med.cornell.edu/~jdvicto/vipu97.html

  7. Reich, D.S., Mechler, F., & Victor, J.D. (2001). Independent and redundant information in nearby cortical neurons. Science, 294, 2566-2568.

  8. Mechler, F., Reich, D.S., & Victor, J.D. (2002). Detection and discrimination of relative spatial phase by V1 neurons. J. Neurosci., 22, 6129-6157. http://www-users.med.cornell.edu/~jdvicto/merevi02.html

  9. Victor, J.D. (in press). Binless strategies for estimation of information from neural data. Phys. Rev. E. http://www-users.med.cornell.edu/~jdvicto/vict03.html