Workshop 2: Cognitive Neuroscience

(December 10, 2012 - December 14, 2012)

Organizers


Peter Dayan
Gatsby Computational Neuroscience Unit, UCL
Michael Shadlen
HHMI, Howard Hughes Medical Institute
Eric Shea-Brown
Applied Mathematics, University of Washington
Murray Sherman
Neurobiology, University of Chicago

Cognitive neuroscience presents superb opportunities for mathematical contributions, especially in connecting different theoretical and experimental frameworks. On the experimental side, methods ranging from single-neuron recording to human behavioral tests are flourishing, and mathematical models are beginning to suggest how one leads to the other. Rigorous theoretical treatments from microeconomics, including Bayesian estimation and optimization, are often applied, but details of how they might be implemented in stochastic, dynamic neural circuits have only recently been proposed. By bringing together experimentalists and theorists working at different levels, the workshop will move the field closer to a long-held goal: understanding and predicting behavior in increasingly rich cognitive tasks. Each day will feature a different theme, as described in more detail below, and will emphasize work both at the level of algorithms and phenomena and at the level of implementation by circuits of spiking cells.

The organizing committee suggests that this workshop include a select group of students and post-docs as participants. Time could be made available each day for brief student/post-doc presentations (short talks and/or poster sessions), with each day capped by an evening session that encourages interaction among the students and post-docs. In addition, a student or post-doc will be chosen as "reporter" for each day to summarize the day's events and highlights. These summaries will then be assembled into an overall workshop report that could be published by pre-arrangement with an appropriate journal.

Attention: Where are attentional effects "generated," how are they coordinated across multiple brain areas, how is attention fed back to earlier levels of sensory processing, and what are the underlying mechanisms at the level of circuits of spiking cells?

Decision making: How are diverse sensory and task cues integrated over time and combined into a "single" decision signal, how are decision rules applied to this signal, and what is the role of dopamine and other modulators in this process?

Coordination of neural circuits: Under different behavioral constraints, different brain areas form cooperative units. What is the role of thalamocortical and basal ganglia circuitry here? Nascent physiological work needs a theoretical counterpart, both to reveal how signals are gated and amplified and to compare the performance and efficiency of different possible mechanisms and network architectures.

Reinforcement learning: Complex tasks require learning and updating of rules that relate reward to action in changing environments. What algorithms can perform this updating optimally in the face of uncertainty about rewards and sensory cues? What neural circuitry can implement these algorithms?
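For orientation, here is a deliberately minimal sketch of one such normative computation, Bayesian estimation of a hidden reward probability from noisy binary outcomes. The task and parameter values are invented for illustration and are not drawn from any of the talks.

    # Illustrative sketch only: Bayesian estimation of a hidden reward probability
    # from binary outcomes, using a Beta prior. Values are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    true_p = 0.7                 # hidden probability that an action is rewarded
    alpha, beta = 1.0, 1.0       # Beta(1,1) prior: all values of p equally likely

    for t in range(200):
        reward = rng.random() < true_p   # observe one binary outcome
        alpha += reward                  # posterior update: count successes...
        beta += 1 - reward               # ...and failures

    print(f"posterior mean = {alpha / (alpha + beta):.3f}, true p = {true_p}")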

Accepted Speakers

Rafal Bogacz
Computer Science, University of Bristol
Matthew Botvinick
Psychology and Neuroscience, Princeton University
Anne Churchland
Neuroscience, Cold Spring Harbor Laboratory
Sophie Deneve
Group for Neural Theory, Collège de France
Michael Frank
Cognitive, Linguistic and Psychological Science, Brown University
Stefano Fusi
Neuroscience, Columbia University
Ray Guillery
Department of Anatomy, University of Wisconsin
Okihide Hikosaka
National Eye Institute, National Institutes of Health
Phil Holmes
Program in Applied and Computational Mathematics, Princeton University
Sabine Kastner
Psychology, Princeton University
Yael Niv
Psychology & Neuroscience, Princeton University
Alex Pouget
Basic Neuroscience, University of Geneva
Antonio Rangel
HSS & CNS, California Institute of Technology
Misha Tsodyks
Neurobiology, Weizmann Institute of Science
Marty Usrey
Center for Neuroscience, University of California, Davis
Xiao-Jing Wang
Neuroscience, New York University
Monday, December 10, 2012
Time Session
09:00 AM - 10:00 AM
Murray Sherman - Thalamus plays a central role in cortical functioning
Glutamatergic inputs in thalamus and cortex can be classified into two categories: Class 1 (driver) and Class 2 (modulator). Identifying the driver pathways in thalamus and cortex offers insight into information processing and leads to the conclusion that there are two types of thalamic relay: first order nuclei like the LGN receive driver input from a subcortical source (i.e., the retina), whereas higher order nuclei like the pulvinar relay driver input from layer 5 of one cortical area to another. This division is also seen in other sensory systems: for the somatosensory system, first order is VPM/L and higher order is POm; for the auditory system, first order is MGBv and higher order is MGBd. Furthermore, this first and higher order classification extends beyond sensory systems. Indeed, it appears that most of the thalamus by volume consists of higher order relays. Many, and perhaps all, direct driver connections between cortical areas are paralleled by an indirect cortico-thalamo-cortical (transthalamic) driver route involving higher order thalamic relays. Such thalamic relays play a heretofore unappreciated role in cortical functioning, and this assessment challenges and extends conventional views of both the role of the thalamus and the mechanisms of corticocortical communication. Evidence for this transthalamic circuit, as well as speculations as to why these two parallel routes exist, will be offered.
10:30 AM - 11:30 AM
Ray Guillery - Most of the messages thalamus sends to cortex contribute to "forward models"
All thalamic inputs that are relayed to cortex arrive in axons that also send a branch to motor structures. Thus, cortex receives information from sensory receptors about the body and the world, and about subcortical activity, from first order thalamic relays (see Sherman abstract), and information about cortical processing of those inputs from higher order relays. In addition, cortex receives from all of these inputs copies of instructions for upcoming actions (efference copies) that are on their way to execution in the motor branches. That is, essentially all the information that cortex receives from thalamus, i.e. most of the information that cortex receives, concerns sensorimotor contingencies (O'Regan and Noë, 2001, Behav. Brain Sci., 24, 939-973), not purely sensory information. 'Sensory' here refers to past events, 'motor' to future ones. Wolpert & Miall (1996, Neural Netw., 9, 1265-1279) discuss how efference copies generate "forward models" of upcoming actions. The thalamus, as a gate controlling information transfer to cortex, thereby controls the generation of ubiquitous cortical forward models. Vukadinovic (2012, EJN, 34, 1031-1039) relates the thalamic gate to the control of forward models, arguing that a closed gate prevents actions from being recognized as generated by the organism, i.e. the self, and suggesting that this links functional and structural abnormalities of the thalamus to some symptoms of schizophrenia; Rolfs et al. (2011, Nature Neuroscience, 14, 252-256) demonstrate the role of forward models in attention. The ubiquity of efference copies in thalamocortical circuits suggests that key problems of the self and of attention depend on readily identifiable thalamocortical pathways.
11:45 AM - 12:45 PM
Marty Usrey - Visual attention modulates thalamocortical communication
Visual attention is believed to enhance neuronal activity across cortical areas through dynamic interactions between top-down and bottom-up pathways. To test the hypothesis that attention enhances bottom-up processing, we studied the influence of attention at the very first processing stage in primary visual cortex: the geniculocortical synapse. Animals were trained to attend to one of two drifting gratings and report the occurrence of a contrast change. While recording from identified neurons in cortical layer 4C, we delivered brief electrical shocks to retinotopically matching regions of the LGN. Shocks were delivered while animals attended toward or away from the receptive fields of the recorded neurons, during a time window just prior to the contrast change. Importantly, stimulation levels were set such that half of the stimulation trials resulted in a monosynaptic spike. Our results reveal a significant influence of attention on geniculocortical communication: the majority of cortical neurons in layer 4C show an increase in the probability of generating an electrically evoked spike when monkeys attend to the stimulus overlapping their receptive field. Attention also reduces the timing jitter of postsynaptic responses within and between cortical neurons. To our knowledge, these results represent the first study of attention at the synaptic level, and they demonstrate that attention can enhance neuronal communication at the very first synapse in visual cortex.

Work done in collaboration with Farran Briggs and George R. Mangun. This work was supported by NIH grants EY018683, EY013588, and MH055714, and NSF grant 1228535.
02:00 PM - 03:00 PM
Sabine Kastner - Neural basis of visual attention in the primate brain
Our natural environments contain too much information for the visual system to represent. Therefore, attentional mechanisms are necessary to mediate the selection of behaviorally relevant information. Much progress has been made to further our understanding of the modulation of neural processing in visual cortex. However, our understanding of how these modulatory signals are generated and controlled is still poor. In the first part of my talk, I will discuss recent functional magnetic resonance imaging and transcranial magnetic stimulation studies directed at topographically organized frontal and parietal cortex in humans to reveal the mechanisms underlying space-based control of selective attention. In the second part of my talk, I will discuss recent monkey physiology studies that suggest an important function of a thalamic nucleus, the pulvinar, in controlling the routing of information through visual cortex during spatial attention. Together, these studies indicate that a large-scale network of high-order cortical as well as thalamic brain regions is involved with the control of space-based selection of visual information from the environment.
03:30 PM - 04:30 PM
Okihide Hikosaka - Choosing valuable objects automatically - a basal ganglia mechanism
Many objects around us have values which have been acquired through our life-long history. This suggests that the values of individual objects are stored in the brain as long-term memories. We discovered that such object-value memories are represented in part of the basal ganglia including the tail of the caudate nucleus (CDt) and the substantia nigra pars reticulata (SNr). We had monkeys experience many visual objects repeatedly, each of which was consistently associated with a large reward (high-valued) or a small reward (low-valued). After learning sessions across several days, CDt and SNr neurons started showing differential responses to the objects. Many of the CDt neurons showed excitatory responses to high-valued objects more strongly than to low-valued objects. SNr neurons were inhibited by high-valued objects and excited by low-valued objects. These responses occurred even though rewards were given in a non-contingent manner. Many of the SNr neurons projected to the superior colliculus, suggesting that the reward-dependent visual signals are used for controlling saccadic eye movements. Indeed, when these visual objects were presented simultaneously, the monkeys tended to look at high-valued objects even though no reward was given. Thus, the CDt-SNr-SC system enables animals to choose and look at high-valued objects automatically.
Tuesday, December 11, 2012
Time Session
09:00 AM - 10:00 AM
Michael Frank - Linking levels of analysis in computational models of corticostriatal function
Interactions between frontal cortex and basal ganglia are instrumental in supporting motivated control over action and learning. Computational models have been proposed at multiple levels of description, from biophysics up to algorithmic approaches. I will describe recent attempts to link across levels of description so as to develop, on the one hand, mechanistic neural models with sufficient detail to make predictions about electrophysiology, pharmacology and genetic manipulations, and, on the other hand, higher-level computational descriptions which often have normative interpretations and, pragmatically, are better suited to quantitative fits of behavioral data. By fitting the outputs of neural models with reduced versions, one can derive predictions about how parametric variation of particular neural mechanisms should give rise to observable changes in latent computational parameters, even if the two levels are not perfectly isomorphic. Examples include the impact of dopamine on learning and choice incentive, prefrontal-subthalamic modulation of decision thresholds, and hierarchical control over actions across multiple corticostriatal circuits. In each case, the (optimistic) result is a better understanding of the domain than that afforded by either level of model alone.
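As a toy version of the fitting logic described above (my construction, not the models used in the talk): choices are generated from a softmax whose gain parameter stands in for a dopamine-like effect on choice incentive, and the same parameter is then recovered by fitting a reduced softmax model to the simulated behaviour.

    # Toy parameter-recovery exercise, not the models from the talk: simulate
    # choices from a softmax whose gain stands in for a dopamine-like effect on
    # choice incentive, then recover the gain by maximum likelihood.
    import numpy as np

    rng = np.random.default_rng(1)
    Q = np.array([0.2, 0.8])     # fixed action values
    true_gain = 3.0              # generative inverse temperature

    def choice_prob(gain):
        e = np.exp(gain * Q)
        return e / e.sum()

    choices = rng.choice(2, size=500, p=choice_prob(true_gain))

    def neg_log_lik(gain):
        return -np.log(choice_prob(gain)[choices]).sum()

    gains = np.linspace(0.1, 10.0, 200)
    best = gains[np.argmin([neg_log_lik(g) for g in gains])]
    print(f"recovered gain ~ {best:.2f} (true gain {true_gain})")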
10:30 AM - 11:30 AM
Matthew Botvinick - Planning as probabilistic inference
Recent developments in decision-making research have restored attention to the classic but long neglected topic of planning: the selection of actions based on a projection and evaluation of their potential outcomes. This renewed interest raises the need for an updated computational account of planning, one that makes contact with contemporary views of cognitive and neural information processing. I'll discuss two interrelated projects, both aimed at contributing toward such an account. The central gambit in both projects is to consider how planning might arise from domain-general operations for probabilistic inference. One project focuses on the core procedures involved in planning, modeling these in terms of Bayesian inversion. This approach yields a novel, unifying view of some important neurophysiological observations, and reveals a surprising continuity with drift-diffusion models of simple choice. The second project focuses on hierarchical representations in planning, applying principles from Bayesian model selection to understand how such representations might arise from experience. In addition to laying out the theoretical approach, I will describe some behavioral and neuroimaging results in which we have begun to test specific predictions.
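A deliberately tiny example of the planning-as-inference gambit, invented for illustration: treat reaching the goal as an observed event and invert a generative model with Bayes' rule to obtain a posterior over actions.

    # Toy planning-as-inference example (invented): condition on reaching the
    # goal and use Bayes' rule to obtain a posterior over actions.
    import numpy as np

    actions = ["left", "right"]
    prior = np.array([0.5, 0.5])                 # uniform prior over actions
    p_goal_given_action = np.array([0.2, 0.7])   # likelihood of reaching the goal

    posterior = prior * p_goal_given_action      # Bayesian inversion
    posterior /= posterior.sum()

    for a, p in zip(actions, posterior):
        print(f"P({a} | goal reached) = {p:.2f}")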
11:45 AM - 12:45 PM
Yael Niv - The orbitofrontal cortex as a state space for reinforcement learning
The first ingredient in any reinforcement learning model is the state space: a description of the task as a sequence of situations (states) that (hopefully, if they possess the Markov property) embody all the information needed to determine the probability of immediate rewards and state transitions given an action. For many tasks, the state space is not trivial and must be learned. I will first use a simple perceptual judgement task to demonstrate that state spaces are learned, and then argue that the orbitofrontal cortex (OFC), a region well known for its pervasive yet subtle influence on decision making, encodes a map of the states of the current task and their inter-relations. This map provides a state space for reinforcement learning elsewhere in the brain, and is especially critical in complex tasks. I will use this hypothesis to explain recent experimental findings in an odor-guided choice task (Takahashi et al., Nature Neuroscience, 2011) as well as classic findings in reversal learning and extinction. In addition, I will lay out a number of testable experimental predictions that can distinguish our theory from other accounts of OFC function.

Work with Robert C. Wilson, Samuel J. Gershman and Geoffrey Schoenbaum.
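For readers new to the formalism, a schematic sketch (not the task or model from the talk) of why the state space matters: a tabular Q-learner defined over an explicitly enumerated, Markov set of task states.

    # Schematic sketch: tabular Q-learning over an explicitly enumerated state
    # space. The three-state, two-action task below is invented for illustration.
    import numpy as np

    rng = np.random.default_rng(2)
    n_states, n_actions = 3, 2
    Q = np.zeros((n_states, n_actions))
    alpha, gamma = 0.1, 0.9

    # transition and reward tables of a made-up Markov task
    next_state = np.array([[1, 2], [2, 0], [0, 1]])
    reward = np.array([[0.0, 1.0], [0.0, 0.0], [1.0, 0.0]])

    s = 0
    for t in range(5000):
        a = rng.integers(n_actions)                             # random exploration
        s2, r = next_state[s, a], reward[s, a]
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # TD update
        s = s2

    print(np.round(Q, 2))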
Wednesday, December 12, 2012
Time Session
09:00 AM - 10:00 AM
Misha Tsodyks - Scaling laws of associative retrieval from long-term memory
The question I will address in this lecture is how information is retrieved from memory when there are no precise item-specific cues. Real-life examples are trying to recall the names of your classmates, your favorite writers, or places to see in Rome. I hypothesize that in this situation retrieval occurs in an associative manner, i.e. each recalled item triggers the retrieval of the next one. Mathematically, the problem can be reduced to one on random graphs, and general results about retrieval capacity can be derived. The main conclusion of the analysis is that retrieval capacity is severely limited, such that only a small fraction of items can be recalled, with a characteristic power-law scaling with the total number of items in memory. The theoretical results can be compared to free recall experiments, and surprisingly good agreement is observed.
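A back-of-the-envelope simulation, offered only as orientation and not necessarily the model analyzed in the talk: each of N stored items triggers one other item chosen at random, recall follows the chain until it revisits an item, and the number of items recalled already scales sub-linearly (roughly as the square root of N).

    # Toy model of associative recall: a random "next item" association for each
    # item; recall follows the chain until it revisits an item already recalled.
    import numpy as np

    rng = np.random.default_rng(3)

    def items_recalled(N, trials=200):
        counts = []
        for _ in range(trials):
            succ = rng.integers(N, size=N)        # random association per item
            seen, cur = set(), int(rng.integers(N))
            while cur not in seen:
                seen.add(cur)
                cur = int(succ[cur])
            counts.append(len(seen))
        return np.mean(counts)

    for N in (100, 1000, 10000):
        m = items_recalled(N)
        print(f"N = {N:6d}: ~{m:7.1f} items recalled ({m / N:.3f} of the total)")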
10:30 AM - 11:30 AM
Stefano Fusi - The importance of mixed selectivity in complex cognitive tasks
Single-neuron activity in prefrontal cortex (PFC) is often tuned to mixtures of multiple task-related aspects. Such mixed selectivity is highly heterogeneous, seemingly disordered and difficult to interpret.

Because of its prominence in PFC, it is natural to ask whether such heterogeneity plays a role in subserving the cognitive functions ascribed to this area. We addressed this question by analyzing the neural activity recorded in PFC during an object sequence memory task. We first show that mixed selectivity neurons can be as informative as highly selective cells. Each task-relevant aspect can be decoded from the population of recorded neurons even when the selectivity to that aspect is eliminated from individual cells. We then show that the recorded mixed selectivity neurons actually offer a significant computational advantage over specialized cells in terms of the repertoire of input-output functions that are implementable by readout neurons. The superior performance is due to the fact that the recorded mixed selectivity neurons respond to highly diverse non-linear mixtures of the task-relevant variables. This property of the responses is a signature of the high dimensionality of the neural representations.

We report that the recorded neural representations actually have the maximal dimensionality. Crucially, we also observed that this dimensionality is predictive of the animal's behavior: in error trials, the measured dimensionality of the neural representations collapses. Surprisingly, in these trials it was still possible to decode all task-relevant aspects, indicating that the errors are due not to a failure in coding or remembering the sensory stimuli, but to the way the information about the stimuli is mixed in the neuronal responses. Our findings suggest that the focus of attention should be moved from neurons that exhibit easily interpretable response tuning to the widely observed, but rarely analyzed, mixed selectivity neurons. Work done with M. Rigotti, O. Barak, M. Warden, N. Daw, X.-J. Wang, and E.K. Miller.
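A toy illustration of the dimensionality argument (tuning functions invented for this example): responses that mix two binary task variables nonlinearly span more dimensions across task conditions than purely linear mixtures do.

    # Toy dimensionality comparison across four task conditions defined by two
    # binary variables a and b. Tuning functions are invented for this example.
    import numpy as np

    rng = np.random.default_rng(4)
    conds = np.array([[a, b] for a in (0, 1) for b in (0, 1)], float)

    # "linear" population: each neuron responds to a weighted sum of a and b
    linear_resp = conds @ rng.normal(size=(2, 20))      # 4 conditions x 20 neurons

    # nonlinearly mixed population: include the interaction term a*b as well
    feats = np.column_stack([conds, conds[:, 0] * conds[:, 1]])
    mixed_resp = feats @ rng.normal(size=(3, 20))

    print("dimensionality, linear selectivity:", np.linalg.matrix_rank(linear_resp))
    print("dimensionality, nonlinear mixing:  ", np.linalg.matrix_rank(mixed_resp))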
11:45 AM - 12:45 PM
Sophie Deneve - Perceptual inference predicts divisive, input targeted inhibition
Sensory systems need to identify quickly and accurately the composition of noisy, time-varying sensory scenes. We will consider what this fundamentally implies for the response properties of sensory neurons. In particular, how sensory neurons should integrate their input and compete with each other depends strongly on assumptions about the sensory noise and the temporal statistics of sensory stimuli. Past models assumed Gaussian noise and static stimuli, which led to "predictive coding", i.e. the notion that sensory neurons should respond to the difference between their sensory input and a prediction of this input by other neurons. This implies subtractive lateral inhibition and/or center/surround receptive fields. However, sensory inputs are neither static nor corrupted by Gaussian noise. They are dynamic, strictly positive, and corrupted by signal-dependent noise (i.e., with a variance that increases with the mean).

This implies that sensory neurons should compete, not by inhibiting each other through lateral inhibition, as commonly assumed, but by selectively shunting the inputs to other neurons that they can predict. This results in "divisive normalization" and a profound reshaping of sensory receptive fields by the context and by past/surrounding stimuli. Many puzzling contextual and adaptive effects on sensory receptive fields can be explained in this manner. Thus, the concept of a "receptive field" in sensory processing is meaningless and should be replaced by a "predictive field". We will show how this model accounts for recent data from early olfactory and visual processing, and consider how these "predictive fields" could be learnt and measured experimentally.
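A cartoon contrast between the two forms of competition discussed here, subtractive predictive coding versus division by the predicted part of the input; the numbers and the specific normalization formula are illustrative only.

    # Cartoon contrast: subtractive predictive coding vs. divisive (shunting-like)
    # removal of the predicted input. Numbers and formula are illustrative only.
    import numpy as np

    drive = np.array([4.0, 1.0, 0.5])        # feedforward input to three neurons
    prediction = np.array([3.0, 0.5, 0.0])   # what the rest of the network predicts
    sigma = 1.0                              # semi-saturation constant

    subtractive = np.maximum(drive - prediction, 0)   # classic predictive coding
    divisive = drive / (sigma + prediction)           # divisive normalization

    print("subtractive:", np.round(subtractive, 2))
    print("divisive:   ", np.round(divisive, 2))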
Thursday, December 13, 2012
Time Session
09:00 AM - 10:00 AM
Anne Churchland - Integrating information across time and sensory modalities for decision-making
Sensory stimuli are frequently ambiguous and uncertain. Considerable recent research has focused on how animals can generate more accurate estimates of a parameter of interest by integrating visual information across time. I will argue that the same circumstances that lead animals to integrate information across time (ambiguous and uncertain stimuli) also lead them to integrate information across sensory modalities. My laboratory has developed a novel multisensory decision task that uses dynamic, time-varying auditory and visual stimuli. We have collected data from rats and humans on the task and report three main findings. First, for multisensory stimuli, both species show improvements in accuracy that are close to the statistically optimal prediction. Next, subjects make use of time in a similar way for unisensory and multisensory stimuli, and for reliable and unreliable stimuli. Finally, synchronous activation of auditory and visual circuitry likely does not drive the improvements in accuracy, since a comparable improvement was evident even when auditory and visual stimuli were presented asynchronously.

Taken together, these findings identify two possible strategies, integrating across time and integrating across sensory modalities, that can help animals overcome sensory uncertainty to make better decisions. Because the inherent variability of cortical neurons renders all stimuli to some degree uncertain, integrating over time or across modalities may be a strategy that is apparent in many circumstances.
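The standard optimal cue-combination benchmark that such behavioural results are usually compared against, with arbitrary numbers: if the auditory and visual estimates are unbiased with variances sigma_a^2 and sigma_v^2, the statistically optimal combined estimate is their precision-weighted average.

    # Optimal cue-combination benchmark (arbitrary numbers): the combined estimate
    # is a precision-weighted average, and its variance beats either cue alone.
    import numpy as np

    sigma_a, sigma_v = 2.0, 1.5                      # single-modality noise (a.u.)
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)     # weight on the auditory cue
    w_v = 1.0 - w_a
    sigma_combined = np.sqrt(1.0 / (1.0 / sigma_a**2 + 1.0 / sigma_v**2))

    print(f"weights: auditory {w_a:.2f}, visual {w_v:.2f}")
    print(f"predicted combined sigma: {sigma_combined:.2f} "
          f"(vs. {sigma_a} auditory, {sigma_v} visual)")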
10:30 AM - 11:30 AM
Rafal Bogacz - Do we have Bayes' theorem hardwired in the cortico-basal-ganglia circuit?
This talk will present a model which assumes that, during decision making, the cortico-basal-ganglia circuit computes the probabilities that the alternatives under consideration are correct, according to Bayes' theorem. The model suggests how the equation of Bayes' theorem is mapped onto the functional anatomy of a circuit involving the cortex, basal ganglia and thalamus. The talk will also relate the model's predictions to experimental data, ranging from detailed properties of individual neurons in the circuit to the effects of disrupting the computations in this circuit on behaviour.
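The underlying computation, written out as plain arithmetic (the anatomical mapping is the subject of the talk and is not attempted here): sequential Bayesian updating of the probability that each of two alternatives is correct, given noisy evidence samples.

    # Sequential Bayesian updating over two alternatives from noisy evidence.
    # Only the arithmetic is shown; the anatomical mapping is the talk's subject.
    import numpy as np

    rng = np.random.default_rng(5)
    means = np.array([0.0, 1.0])    # evidence mean under each alternative
    true = 1                        # alternative 1 actually generates the evidence
    log_post = np.log([0.5, 0.5])   # uniform prior

    for t in range(20):
        x = rng.normal(means[true], 1.0)            # one noisy evidence sample
        log_post += -0.5 * (x - means) ** 2         # add Gaussian log-likelihoods
        log_post -= np.logaddexp.reduce(log_post)   # renormalize

    print("posterior:", np.round(np.exp(log_post), 3))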
11:45 AM - 12:45 PM
Phil Holmes - The neural dynamics of decision making: multiple scales and a range of models
I will describe a range of models, from cellular to cortical scales, that illuminate how we accumulate evidence and make simple decisions. Large networks composed of individual spiking neurons can capture biophysical details of synaptic transmission and neuromodulation, but their complexity renders them opaque to analysis. Employing methods of mean-field and dynamical systems theory, I will argue that these high-dimensional stochastic differential equations can be approximated by simple drift-diffusion (DD) processes like those used to fit behavioral data in cognitive psychology. The DD models are analytically tractable, coincide with optimal methods from statistical decision theory, and prompt new experiments as well as questions about why we fail to optimize. If time permits, I will describe work in progress on a multi-area model of attention and decision making.

The talk will draw on joint work with Fuat Balci, Rafal Bogacz, Jonathan Cohen, Philip Eckhoff, Sam Feng, Mike Schwemmer, Eric Shea-Brown, Patrick Simen, Marieke van Vugt, KongFatt Wong-Lin and Miriam Zacksenhouse.

Research supported by NIMH and AFOSR.
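A minimal simulation of the drift-diffusion reduction mentioned in the abstract, with arbitrary parameter values: evidence accumulates with constant drift and additive noise until it reaches one of two bounds.

    # Minimal drift-diffusion simulation: evidence x drifts toward +z or -z with
    # additive noise; parameter values are arbitrary.
    import numpy as np

    rng = np.random.default_rng(6)
    A, c, z, dt = 0.3, 1.0, 1.0, 0.001    # drift, noise, bound, time step

    def one_trial():
        x, t = 0.0, 0.0
        while abs(x) < z:
            x += A * dt + c * np.sqrt(dt) * rng.normal()
            t += dt
        return x > 0, t                   # (correct choice?, decision time)

    trials = [one_trial() for _ in range(1000)]
    print(f"accuracy ~ {np.mean([ok for ok, _ in trials]):.2f}, "
          f"mean decision time ~ {np.mean([t for _, t in trials]):.2f} s")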
03:15 PM - 04:15 PM
Eric Shea-Brown - Neural integrators -- what do we need and what can we get away with?
Friday, December 14, 2012
Time Session
09:00 AM - 10:00 AM
Peter Dayan - Interactions between Model-free and Model-based Reinforcement Learning
Substantial recent work has explored multiple mechanisms of decision-making in humans and other animals. Functionally and anatomically distinct modules have been identified, and their individual properties have been examined using intricate behavioural and neural tools. I will discuss the background of these studies, and show fMRI results that suggest closer and more complex interactions between the mechanisms than originally conceived. In some circumstances, model-free methods seize control after much less experience than would seem normative; in others, temporal difference prediction errors, which are epiphenomenal for the model-based system, are nevertheless present and apparently effective. Finally, I will show that model-free and model-based methods on occasion both cower in the face of Pavlovian influences, and will try and reconcile this as a form of robust control.
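A compact, toy reminder of the two quantities being contrasted (the two-state task is invented for illustration): a model-free value learned from temporal-difference prediction errors, and a model-based value computed directly from a known transition and reward model.

    # Toy contrast (invented two-state task): model-free value from TD errors vs.
    # model-based value computed from the known model.
    import numpy as np

    rng = np.random.default_rng(7)
    gamma, alpha = 0.9, 0.1
    p_reward = 0.8                 # state 1 delivers reward with this probability

    V_mb = gamma * p_reward        # model-based: follows directly from the model

    V = np.zeros(2)                # model-free: learn the same value from samples
    for t in range(5000):
        r = float(rng.random() < p_reward)     # reward observed in state 1
        V[1] += alpha * (r - V[1])             # TD error at the terminal state
        V[0] += alpha * (gamma * V[1] - V[0])  # TD error at state 0 (no reward)

    print(f"model-based V(0) = {V_mb:.2f}, model-free V(0) = {V[0]:.2f}")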
10:30 AM - 11:45 AM
Michael Shadlen - Consciousness as a decision to engage
Name Email Affiliation
Ahmadi, Mandana mandana@gatsby.ucl.ac.uk The Gatsby Computational Neuroscience Unit, University College London
Balcarras, Matthew mbalcarr@yorku.ca Biology, York University
Barak, Omri ob2194@columbia.edu Neuroscience, Columbia University
Bernhardt-Walther, Dirk bernhardt-walther.1@osu.edu Psychology, The Ohio State University
Billock, Vincent vincent.billock.ctr@wpafb.af.mil National Research Council, U.S. Air Force Research Laboratory
Bockbrader MD PhD, Marcia marcia.bockbrader@osumc.edu Physical Medicine & Rehabilitation, The Ohio State University
Bogacz, Rafal r.bogacz@bristol.ac.uk Computer Science, University of Bristol
Botvinick, Matthew matthewb@princeton.edu Psychology and Neuroscience, Princeton University
Churchland, Anne achurchl@cshl.edu Neuroscience, Cold Spring Harbor Laboratory
Collins, Anne Anne_Collins@brown.edu CLPS, Brown University
Critch, Andrew critch@math.berkeley.edu Mathematics, University of California, Berkeley
Dayan, Peter dayan@gatsby.ucl.ac.uk Gatsby Computational Neuroscience Unit, UCL
Den Ouden, Hanneke hdo2@nyu.edu Center For Neural Science, New York University
Deneve, Sophie sophie.deneve@ens.fr Group for Neural Theory, Collège de France
Dethier, Julie jdethier@ulg.ac.be Montefiore Institute, University of Liège
Dewitt, Eric eric.dewitt@neuro.fchampalimaud.org Neuroscience, Champalimaud Neuroscience Programme
Drugowitsch, Jan jdrugo@gmail.com Laboratoire de Neurosciences Cognitives, DEC, ENS, Institut national de la sante et de la recherche medical
Frank, Michael Michael_Frank@brown.edu Cognitive, Linguistic and Psychological Science, Brown University
Freedman, David dfreedman@uchicago.edu Department of Neurobiology, The University of Chicago
Fusi, Stefano sf2237@columbia.edu Neuroscience, Columbia University
Gershman, Samuel sjgershm@princeton.edu Brain and Cognitive Sciences, Massachusetts Institute of Technology
Girardot, Kendra girardot.2@buckeyemail.osu.edu Department of Arts Administration, Education, and Policy, The Ohio State University
Golomb, Julie golomb.9@osu.edu Psychology, The Ohio State University
Guillery, Ray rguiller@facstaff.wisc.edu Department of Anatomy, University of Wisconsin
Hikosaka, Okihide oh@lsr.nei.nih.gov National Eye Institute, National Institutes of Health
Holmes, Phil pholmes@Math.Princeton.EDU Program in Applied and Computational Mathematics, Princeton University
Huys, Quentin qhuys+web@gatsby.ucl.ac.uk Translational Neuroimaging Unit, ETH Zurich and Psychiatric University Hospital Zurich
Kastner, Sabine skastner@princeton.edu Psychology, Princeton University
Majidpour, Mostafa mostafam@ucla.edu Psychology, University of California, Los Angeles
Meng, Xiangying lindamengxy@gmail.com department of biology, University of Maryland
Moreno-Bote, Ruben rmoreno@fsjd.org Research Unit, Foundation Sant Joan de Deu
Niv, Yael yael@princeton.edu Psychology & Neuroscience, Princeton University
Odegaard, Brian odegaard.brian@gmail.com Psychology, University of California, Los Angeles
Osborne, Leslie osborne@uchicago.edu Neurobiology, University of Chicago
Palmer, Stephanie sepalmer@uchicago.edu Organismal Biology and Anatomy, University of Chicago
Peters, Megan meganakpeters@ucla.edu Psychology, University of California, Los Angeles
Pouget, Alex alex@bcs.rochester.edu Basic Neuroscience, University of Geneva
Rajan, Kanaka krajan@princeton.edu Biophysics, Princeton University
Rangel, Antonio rangel@hss.caltech.edu HSS & CNS, California Institute of Technology
Reed, Michael reed@math.duke.edu Mathematics, Duke University
Rigotti, Mattia mr2666@columbia.edu Neuroscience, Columbia University
Rothkopf, Constantin rothkopf@fias.uni-frankfurt.de Frankfurt Institute for Advanced Studies, Frankfurt Institute for Advanced Studies
Sauer, Tim tsauer@gmu.edu Department of Mathematics, George Mason University
Savin, Cristina cs664@cam.ac.uk Engineering, CBL, Dept. Engineering, University of Cambridge
Schiff, Steven sjs49@engr.psu.edu Depts. Neurosurgery / Eng Science & Mechanics / Physics, Pennsylvania State University
Shadlen, Michael shadlen@u.washington.edu HHMI, Howard Hughes Medical Institute
Shea, Timothy timothy.shea@osumc.edu Physical Medicine & Rehabilitation, The Ohio State University
Shea-Brown, Eric etsb@washington.edu Applied Mathematics, University of Washington
Sherman, Murray msherman@bsd.uchicago.edu Neurobiology, University of Chicago
Simen, Patrick psimen@oberlin.edu Neuroscience, Oberlin College
Strowbridge, Ben bens@case.edu Neurosciences, Case Western Reserve University
Thomas, Rajat r.thomas@nin.knaw.nl Social Brain Lab, Netherlands Institute for Neuroscience
Tsodyks, Misha misha@weizmann.ac.il Neurobiology, Weizmann Institute of Science
Ullah, Ghanim ullah.10@osu.edu Theoretical Biology, Los Alamos National Laboratory
Usrey, W. Martin wmusrey@ucdavis.edu Center for Neuroscience, University of California, Davis
van der Meer, Matthijs mvdm@uwaterloo.ca Biology and Centre for Theoretical Neuroscience, University of Waterloo
Vega, Giovany aleph.omega@gmail.com Mathematics, University of Puerto Rico
Wang, Xiao-Jing xjwang@yale.edu Neuroscience, New York University
Wilson, Robert rcw2@princeton.edu Neuroscience Institute, Princeton University
Womelsdorf, Thilo thiwom@yorku.ca Biology, York University
Wunderlich, Klaus kwunder@gmail.com Neuroscience, Ludwig Maximilians University

Videos

Michael Frank - Linking levels of analysis in computational models of corticostriatal function
Phil Holmes - The neural dynamics of decision making: multiple scales and a range of models
Murray Sherman - Thalamus plays a central role in cortical functioning
Michael Shadlen - Consciousness as a decision to engage
Eric Shea-Brown - Neural integrators -- what do we need and what can we get away with?
Ray Guillery - Most of the messages thalamus sends to cortex contribute to "forward models"