The most common use of single microelectrodes is to record unitary action potentials from single neurons, but a variety of signals can be recorded from these electrodes, depending on the frequency band selected and the method of extracting measurements from the signal. For example, it has recently become popular to record "local field potentials" (LFPs) from microelectrodes, by using signals in the EEG band (1-150 Hz). Contributors to the LFP are thought to include currents associated with local action potentials and, especially, local synaptic potentials. Simple physical considerations suggest that signals in these frequency bands may in fact arise from very wide areas of brain. Moreover, signals in this band that are synchronized will dominate the LFP, and signals that are temporally incoherent will go undetected. Interpreting LFP recordings and relating them to local brain circuits is therefore an uncertain and perilous undertaking. Recordings of multiunit activity are also possible, using a somewhat higher frequency band, but their relationship to local circuit activity is also difficult to determine with certainty.
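The band separation at issue can be illustrated with a minimal filtering sketch (the sampling rate, frequencies, amplitudes, and window length are invented for illustration, not taken from any recording): a moving-average low-pass retains a 10 Hz "LFP-band" oscillation while almost entirely removing a 400 Hz "spike-band" component.

```python
import numpy as np

fs = 1000.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
slow = np.sin(2 * np.pi * 10 * t)             # 10 Hz "LFP-band" component
fast = 0.5 * np.sin(2 * np.pi * 400 * t)      # 400 Hz "spike-band" component
signal = slow + fast

def moving_average(x, n):
    """Crude low-pass filter: n-point moving average."""
    kernel = np.ones(n) / n
    return np.convolve(x, kernel, mode="same")

# A 25-point (25 ms) window passes the 10 Hz component but strongly
# attenuates the 400 Hz component (25 samples = 10 full cycles at 400 Hz).
lfp_estimate = moving_average(signal, 25)

# Residual spike-band energy left in the "LFP" estimate.
residual = lfp_estimate - moving_average(slow, 25)
```

In practice one would use a proper band-pass (e.g. a Butterworth design) rather than a moving average; the point here is only that the recorded band, not the electrode, determines which signal one studies.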
Recordings of action potentials from single isolated neurons, on the other hand, are readily interpreted and represent our best measurement of the basic computational currency of the brain. Such recordings are the most reliable way to deduce the function and architecture of neuronal circuits, and it is a mistake to believe that other kinds of recording can substitute. I will give some examples from my own laboratory and from the literature of the kinds of analysis that are possible only with single unit recordings, and of the dangers of using other kinds of signals without careful consideration of their possible origin.
Recent experimental measurements have suggested an increasing importance of synchronous spikes in cortical computation. These observations are difficult to reconcile with decades of single-cell recordings that have revealed the correlation of increased firing rate with behavioral measures. One possibility is that the cortex has adopted a signaling strategy that makes extensive use of synchrony for fast communication, but does so in a way that is consistent with the rate-code indications. We suggest a way in which this could be done. Specifically, we show how synchronous spike codes on both feedforward and feedback connections between the Lateral Geniculate Nucleus (LGN) and cortex can be used to form oriented receptive fields given natural images as input. The novel feature of our spike model is that it combines synchronous updating of inputs with a probabilistic signaling strategy. We show that these features allow the reproduction of synchronicity measured in the LGN as well as classical rate-code features.
One of the hallmarks of human cognition is the ability to flexibly adjust to changing internal states and environmental contingencies by exerting control over thoughts and actions. Progress in understanding the neural mechanisms that mediate this process will require a convergence of empirical studies examining the dynamics of how such neural systems operate during cognitive control and computational analyses aimed at elucidating the formal principles that underlie such operations. We have developed a computational framework that specifies core operations of cognitive control in terms of the interactions of the lateral prefrontal cortex (PFC), anterior cingulate cortex (ACC), and the mesocortical dopamine (DA) neuromodulatory system (Braver and Cohen, 2000; Botvinick et al., 2001; Braver and Cohen, 2001). In this theory, lateral PFC represents and maintains context information that can be used to bias attention and action systems in accordance with internal goals. The DA system modulates activation in lateral PFC, enabling flexible updating of representations while ensuring robust active maintenance. The ACC detects the presence of conflict in the response system (e.g., co-activation of competing response tendencies), and conveys such information to control systems such as lateral PFC, so that control states can be adjusted to reduce the occurrence of future conflict. An example case is presented in which computational and neuroimaging data were integrated in a convergent manner. In particular, we describe a study of sequential choice responding in which conflict fluctuates on a trial-by-trial basis (Jones et al., in press). We show how our model can fit detailed aspects of this complex dataset and predict trial-by-trial modulation in ACC activity.
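The conflict signal attributed to the ACC in this framework is commonly formalized (as in Botvinick et al., 2001) as the summed pairwise co-activation, i.e. the Hopfield energy, of competing response units; a minimal sketch (the activation values below are illustrative, not model fits):

```python
import numpy as np

def conflict(activations):
    """Hopfield-energy conflict measure: sum of pairwise products of
    response-unit activations. Zero when one response dominates,
    maximal when competing responses are co-active."""
    a = np.asarray(activations, dtype=float)
    total = 0.0
    for i in range(a.size):
        for j in range(i + 1, a.size):
            total += a[i] * a[j]
    return total

clean_choice = conflict([1.0, 0.0])      # one response fully active
high_conflict = conflict([0.5, 0.5])     # two responses co-active
```

On this measure, trials with co-activated response tendencies yield larger conflict values, which is the quantity proposed to drive trial-by-trial adjustments of control.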
The basal ganglia and frontal cortex together enable animals to learn and perform conditional responses that acquire rewards when prepotent responses must be suppressed. Dopamine (DA) cells in the substantia nigra pars compacta and adjacent ventral tegmental area learn to selectively respond to unexpected rewards or omissions of expected rewards (Schultz, 1998; modeled in Brown, Bullock & Grossberg, 1999). Such DA cells project widely to the striatum and frontal cortex, and these cells' phasic responses appear to act as reinforcement learning signals. A separate class of basal ganglia outputs "gate" voluntary saccades via inhibitory GABAergic projections to the superior colliculus and motor thalamus (Hikosaka & Wurtz, 1989). Anatomical and physiological studies suggest a highly differentiated pattern of interactions between these basal ganglia outputs and distinct frontal cortical regions and laminae. A new computational theory (Brown, Bullock & Grossberg, 2000) of how the laminar circuitry of the frontal cortex interacts with the basal ganglia, motor thalamus, superior colliculus, and inferotemporal and parietal cortices, offers insights into how these brain regions cooperate to learn and perform conditional behaviors under guidance by DAergic learning signals. The associated neural model, whose frontal cortical component represents the frontal eye fields, describes interactions and dynamics within these circuits with a large system of differential equations. Simulations of the model illustrate how it provides a functional explanation of the dynamics of 17 electrophysiologically identified cell types found in the modeled areas. The model emphasizes striatal feedforward competition among cortically represented plans, and explains how action planning/priming (in cortical layers III-Va and VI) is dissociated from execution (via layer Vb). 
It also explains how learning enables stimuli to serve as spatial movement targets, as discriminative cues to withhold movement, or as cues to generate remembered actions not directed toward the cue's location. The model integrates neurophysiological, anatomical, and behavioral data from a range of eye movement tasks in primates, including: single, overlap, gap, and memory-guided saccades; and gaze fixation maintained despite distractors.
In planning visually guided movements, such as pointing and reaching to visual targets, the brain transforms target information in visuo-spatial coordinates into motor commands. The internal model of this visuo-motor transformation needs to be modified in response to altered environments, such as a screen cursor rotation (Teulings et al., 2002; Kagerer et al., 1997). Under such distortions, one must practice to acquire an internal model of the novel environment, which would represent the altered relationship between the cursor movement and the hand/mouse movement. Prior studies suggest that the process of adaptation to a rotational bias depends on the time course of the distortion (Kagerer et al., 1997; Robertson and Miall, 1999). Lesion studies in non-human primates and in clinical populations (cerebellar syndrome and Parkinson's disease) indicate a differential involvement of brain structures in adaptation to gradual as compared to sudden visuo-motor distortions. In this talk, I will describe a neural network model of fronto-parietal, fronto-striatal, and parieto-cerebellar networks thought to be engaged in learning a new internal model of a kinematic distortion. These networks can be differentiated in terms of their learning rules (unsupervised learning, reinforcement learning and supervised learning), the error signals (inferior olive for the cerebellar network, dopamine for the basal ganglia system), the spatio-temporal resolution (high-resolution for cerebellar network, poor resolution for basal ganglia), and the timing of recruitment of these structures during learning. This proposal is consistent with the view that the basal ganglia may be involved in the selection of appropriate movements and/or cognitive strategies based on explicit error signals, whereas the cerebellum may be involved in the recalibration of motor commands through the adjustment and optimization of movement parameters. 
Thus, functional basal ganglia engagement should be critical in tasks that are initially effortful and in which correct responses are self-selected through trial-and-error mechanisms (Contreras-Vidal and Schultz, 1999). Once the appropriate action has been found and stabilized, the cerebellum can fine-tune the internal model through practice until the task can be performed automatically.
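The gradual-versus-sudden contrast above can be caricatured with a single trial-by-trial error-correction learner (learning rate, rotation size, and schedules are invented for illustration; the actual networks described are far richer): the same update rule experiences a large transient error under a sudden 60-degree rotation but only a small steady lag under a ramped one.

```python
import numpy as np

def adapt(rotation_schedule, lr=0.2):
    """Trial-by-trial error correction: the internal estimate of the
    imposed rotation is updated by a fraction of each trial's error."""
    estimate = 0.0
    errors = []
    for rot in rotation_schedule:
        error = rot - estimate       # angular cursor error on this trial
        errors.append(error)
        estimate += lr * error       # update the internal model
    return np.array(errors)

n = 60
sudden = np.full(n, 60.0)                # full 60 deg imposed at once
gradual = np.linspace(1.0, 60.0, n)      # ramped over 60 trials

sudden_err = adapt(sudden)
gradual_err = adapt(gradual)
```

Both schedules end with an accurate internal model, but only the sudden schedule ever produces large explicit errors, the kind of signal the basal ganglia system described above is proposed to exploit.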
We present a neurodynamical model for visual cognition. We assume that the construction of explicit mechanistic models to capture the computational aspects of cognitive processes involved in visual information processing can provide a conceptual framework for establishing and understanding the underlying basic principles. More specifically, we follow a computational neuroscience approach in order to study the role of top-down and bottom-up processes in the interaction of attention and memory in visual object perception. We adopt the theoretical framework of neurodynamics for integrating known experimental facts and hypotheses at all neuroscience levels. Neurodynamics offers a quantitative formulation for describing the dynamical evolution of single neurons, neural networks and coupled modules of networks. We explicitly model the feedforward (bottom-up) and feedback (top-down) interactions between posterior (V1, V2, V4, IT, PP) and anterior (PFC, OFC, BG) brain areas which are known to build the neural network underlying the processing and coding, modulation and storage of visual information. The main ingredients of this formulation are based on the theory of nonlinear dynamical systems and the statistical theory of neural learning. The model is developed on the basis of a concrete mathematical description of the brain mechanisms involved, allowing complete simulation and prediction of the effects of disrupting sub-mechanisms in the model. Thus the model can predict specific impairments in visual information selection, attentional modulation, and visual working memory capabilities, and their interaction, in patients suffering from focal brain injury. The simulation experiments are empirically verified by testing normal subjects and patients. The model integrates, in a unifying form, the explanation of several existing experimental data at different neuroscience levels.
At the microscopic neural level, we simulated single cell recordings; at the mesoscopic level of cortical areas, we reproduced the results of fMRI studies; and at the macroscopic perceptual level, psychophysical performance. Specific predictions at the different neuroscience levels have also been made. These predictions inspired single cell, fMRI and psychophysical experiments.
Studies of information processing by the brain span a range of spatial and temporal scales. Functional processing streams consist of dozens of distinct areas arrayed across much of neocortex, yet network function may depend critically on the activities of individual cells. Responses of the system can last for hundreds of milliseconds, but sub-millisecond timing may be functionally relevant. No single imaging methodology provides all of the information required for studies of the functional organization and dynamics of the brain. However, integrated computational models can exploit the complementary strengths and transcend limitations of individual methods.
Model-based analyses of neural electromagnetic data (MEG and EEG) allow inferences about regions and dynamics of neural activation, in spite of the fundamental ambiguity of the inverse problem. Models of tissue geometry and conductivity estimated from MRI or CT and estimates of conductivity from diffusion MRI or impedance tomography can improve the accuracy of the forward calculation required for source localization. Cortical anatomy from MRI provides useful constraints on inverse procedures that attempt tomographic reconstruction of regions of neural activity. We have developed a Bayesian technique for the analysis of MEG/EEG data that provides a powerful framework for probabilistic inference based on multiple forms of anatomical, functional and physiological data. The underlying spatial source model is a patch of contiguous cortical voxels, defined by a series of dilation operations about a seed point on the cortical surface. Cortical voxels have a prior probability for activity, with current oriented normal to the local cortical surface. In the absence of specific prior knowledge, a uniform prior probability is assumed. Alternatively, priors might be assigned based on PET or fMRI data for the individual subject, or drawn from spatial/temporal probabilistic databases based on multiple subjects and functional imaging modalities. Markov Chain Monte Carlo techniques are used to sample the source model parameter space to estimate the posterior probability distribution, allowing identification of consistent features across many possible solutions. The use of a spatio-temporal model of neural activation produces significant gains in the resolution and accuracy of probabilistic mapping. This approach allows integration of multiple, complementary (occasionally disparate) forms of image data on a macroscopic scale.
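As a toy illustration of the Markov Chain Monte Carlo machinery (the one-dimensional geometry, the lead field, the noise level, and the uniform prior here are all invented; the actual method uses cortical patch sources and a realistic forward model), a random-walk Metropolis sampler can recover a posterior distribution over a point-source position from noisy sensor data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Eight sensors measure a field that falls off with distance from a
# hypothetical point source; we sample the posterior over its position.
sensors = np.linspace(-1.0, 1.0, 8)
true_pos = 0.3

def forward(pos):
    """Assumed forward model (stand-in for the real lead field)."""
    return 1.0 / (1.0 + (sensors - pos) ** 2)

data = forward(true_pos) + 0.05 * rng.normal(size=sensors.size)

def log_post(pos, sigma=0.05):
    if not -1.0 <= pos <= 1.0:               # uniform prior on [-1, 1]
        return -np.inf
    resid = data - forward(pos)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

samples, pos = [], 0.0
lp = log_post(pos)
for _ in range(5000):
    prop = pos + 0.1 * rng.normal()          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp: # Metropolis accept/reject
        pos, lp = prop, lp_prop
    samples.append(pos)

posterior_mean = np.mean(samples[1000:])     # discard burn-in
```

The sample cloud, not a single point estimate, is the output, which is what permits "identification of consistent features across many possible solutions."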
To explore the dynamic activities of networks on a microscopic scale, we have turned to optical techniques. We have recently demonstrated the imaging of rapid intrinsic optical responses that track the electrical dynamics of neuronal activation, using a novel image probe and high performance video techniques. Near infrared illumination is provided around the perimeter of the image probe, providing dark field illumination that enhances contrast of light scattering signals. The image probe can be configured to create confocal, spectrally resolved images of tissue. Stimulation elicits characteristic spatial and temporal optical patterns within brain tissue, corresponding to at least four distinct physical processes. Two early optical components are synchronous with fast electrical evoked responses and reflect direct neural response components not seen in previous imaging studies. Such fast responses may be attributed to a number of biochemical or cellular processes, including neural swelling that accompanies activation. The slower signals reflect the hemodynamic changes that underlie functional imaging techniques such as fMRI. We have employed fast optical signals to visualize expected spatial patterns of physiological activation of rat somato-sensory "barrel" cortex. Optical signals showed evidence of high frequency structure correlated with synchronous oscillatory activity observed in simultaneous electrical recordings.
In order to account for dynamic neural behavior, we are developing simulation tools that allow us to predict experimentally observable responses of neural populations. We will use these capabilities to generate testable hypotheses, and to optimize network models that account for observed responses. The convergence of dynamic neuroimaging with neural network modeling techniques may allow us to understand the integrated function of the brain, as revealed by noninvasive macroscopic methods, in terms of the dynamic activities of networks of individual neurons.
The processes whereby our brains continue to learn about a changing world in a stable fashion throughout life are proposed to lead to conscious experiences. These processes include the learning of top-down expectations, the matching of these expectations against bottom-up data, the focusing of attention upon the expected clusters of information, and the development of resonant states between bottom-up and top-down processes as they reach a predictive and attentive consensus between what is expected and what is there in the outside world. It is suggested that all conscious states in the brain are resonant states, and that these resonant states trigger learning of sensory and cognitive representations when they amplify and synchronize distributed neural signals that are bound by the resonance. The name Adaptive Resonance Theory, or ART, summarizes the predicted link between resonance and learning. Illustrative psychophysical and neurobiological data, from early vision, visual object recognition, auditory streaming, and speech perception, among other domains, are explained using these concepts. It is noted how these mechanisms seem to be realized by known laminar circuits of sensory and cognitive neocortex. In particular, they seem to be operative at all levels of the visual system. It is suggested that sensory and cognitive processing in the What processing stream of the brain obeys top-down matching and learning laws that are often complementary to those used for spatial and motor processing in the brain's Where/How processing stream. This enables sensory and cognitive representations to maintain their stability as we learn more about the world, while allowing spatial and motor representations to forget learned maps and gains that are no longer appropriate as our bodies develop and grow from infanthood to adulthood.
Procedural memories are proposed to be unconscious because the inhibitory matching process that supports these spatial and motor processes cannot lead to resonance.
Neocortical neurons in vivo are embedded in networks with intensive ongoing activity. How this network activity affects the neurons' integrative properties and what functional consequences this may have at the network level remains largely unknown. Most of our knowledge regarding these properties is based on recordings in vitro, where network activity is strongly diminished or even absent. We have performed two complementary series of experiments based on intracellular recordings in anaesthetized rat frontal cortex measuring (i) the relationship between the excursions of a neuron's membrane potential and the level of activity of the surrounding network, (ii) how cortical neurons integrate synaptic inputs, and (iii) how integration of synaptic inputs is affected by ongoing network activity. Based on the results of these experiments, we suggest that network activity strongly affects the integration time window in cortical neurons, increasing the importance of synchrony during periods of high network activity. I will briefly introduce a new method which allows us to induce in vivo-like network activity in acute neocortical brain slices.
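The shrinking integration window can be caricatured with summed exponential EPSPs (all conductances, amplitudes, and time constants below are assumed, not measured values): background synaptic conductance lowers the effective membrane time constant, so temporally dispersed inputs that reach threshold in a quiescent cell fail to do so during high network activity, while synchronous inputs succeed in either state.

```python
import numpy as np

def membrane_tau(g_leak, g_background, c_m=1.0):
    """Effective time constant falls as background conductance rises."""
    return c_m / (g_leak + g_background)

tau_quiet = membrane_tau(0.05, 0.0)    # 20 "ms": little network activity
tau_active = membrane_tau(0.05, 0.20)  # 4 "ms": high-conductance state

def peak_depolarization(dt_between, tau, amp=0.7):
    """Peak of two summed exponential EPSPs arriving dt_between apart."""
    return amp + amp * np.exp(-dt_between / tau)

threshold = 1.0
# Inputs 10 ms apart cross threshold only with the long, quiescent tau;
# synchronous inputs (dt = 0) cross threshold even in the active state.
```

This is only the passive-membrane part of the story, but it captures why synchrony should matter more when the surrounding network is active.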
I will describe joint work with Eric Brown, Jeff Moehlis, Ed Clayton and Gary Aston-Jones in which we model the response of neurons in the brain nucleus locus coeruleus (LC) in target detection and selective attention tasks. Extracellular recordings on behaving monkeys show varying responses dependent on stimulus type and whether the LC is in its phasic or tonic mode. From membrane voltage and ion channel equations, we derive a phase oscillator model for LC neurons. Average firing probabilities of a pool of cells in response to stimuli over many trials are then computed via a probability density formulation. Using this, we show that: (1) Response is elevated in populations with lower firing rates; (2) Response decays at exponential or faster rates due to noise and distributions of neuron frequencies; and (3) Shorter stimuli preferentially cause refractory periods. These results may account for much of the observed response variability.
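A sketch of the population idea (all rates, noise levels, and stimulus parameters are invented; the actual model derives the phase description from membrane and channel equations): a pool of noisy phase oscillators "fires" whenever a phase crosses 2*pi, and a brief additive stimulus produces a larger relative response in a population with a lower baseline rate, in the spirit of point (1).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pool(omega, n_cells=500, t_max=2.0, dt=1e-3,
                  stim_on=1.0, stim_dur=0.05, stim_amp=20.0, noise=2.0):
    """Pool of noisy phase oscillators; a cell fires when its phase
    crosses 2*pi. Returns spike counts per time bin."""
    steps = int(t_max / dt)
    phase = rng.uniform(0, 2 * np.pi, n_cells)
    counts = np.zeros(steps)
    for k in range(steps):
        t = k * dt
        drive = stim_amp if stim_on <= t < stim_on + stim_dur else 0.0
        phase += dt * (omega + drive) + np.sqrt(dt) * noise * rng.normal(size=n_cells)
        fired = phase >= 2 * np.pi
        counts[k] = fired.sum()
        phase[fired] -= 2 * np.pi
    return counts

def relative_response(counts, dt=1e-3, stim_on=1.0, stim_dur=0.05):
    """Stimulus-window rate divided by pre-stimulus baseline rate."""
    base = counts[int(0.5 / dt):int(stim_on / dt)].mean()
    stim = counts[int(stim_on / dt):int((stim_on + stim_dur) / dt)].mean()
    return stim / base

slow_pool = simulate_pool(omega=2 * np.pi * 2.0)   # ~2 Hz baseline
fast_pool = simulate_pool(omega=2 * np.pi * 8.0)   # ~8 Hz baseline
```

Because the stimulus adds a fixed increment to the drive, the same absolute increment is proportionally larger against a slow baseline, which is the flavor of result the probability-density formulation makes precise.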
We explore the tendency of a perceptual system to create internal representations of features appearing in the concepts that it learns. This process is empirically demonstrated by requiring subjects to learn a set of new perceptual concepts and to verbally report their components. These reports reflect the internal representations that have been created. A neural network is trained in a similar paradigm, and used to model the mental search task and the verbal report task. The network replicates the empirical results and leads to a prediction that future learning of new concepts will be facilitated if they contain the previously acquired features. This is empirically verified by a second experiment.
This talk will present an overview of the different functional brain imaging methods (especially PET and fMRI), of the kinds of questions these methods try to address, and of some of the questions associated with functional neuroimaging data for which neural modeling must be employed to provide reasonable answers. In particular, the way computational modeling can be used to relate neural activity to functional brain imaging signals will be addressed.
Widespread areas of the cerebral cortex send input to the striatum of the basal ganglia (BG) and to the cerebellum (CB) through the pons. Improved anatomical methods have revealed quite specific channels of signal processing through BG and CB that feed back to many different areas of the cerebral cortex (Middleton and Strick, 2001). The existence of segregated subcortical channels calls for a revised interpretation of subcortical signal processing (Houk, 2001). I will discuss two computational models of signal processing in these loops. One demonstrates the suitability of the loop through BG for encoding the serial order of sensory events (Beiser and Houk, 1998). The other explores the suitability of the loop through CB for predictively regulating movement commands.
Smooth pursuit eye movements use a neural circuit that consists of multiple cortical and cerebellar areas to allow primates to track smoothly moving objects. Recordings from many loci within the pursuit circuits have revealed neurons that discharge in relation to the image motion that drives pursuit, the eye velocity of pursuit, or both. However, these observations have not elucidated different roles for different areas, nor revealed why there are so many different components to the pursuit circuit. Our goal has been to record pursuit behavior in a variety of paradigms to reveal the underlying components of pursuit, and then to use physiological techniques to determine how each area of the pursuit circuit contributes to the different components of the behavior.
It has been common to think of pursuit as a visual-motor reflex in which visual motion inputs are processed and converted to commands for eye velocity without any additional control. However, our behavioral experiments have shown that the internal gain of pursuit is subject to modulation under natural conditions. If a monkey is fixating a stationary spot, then there is only a tiny eye velocity response to a brief perturbation of the spot consisting of one cycle of a 10 Hz sine wave. If the same perturbation is presented during accurate pursuit of a moving target, then the response is much larger and can have a gain that approaches one. We interpret this observation as evidence for the existence of on-line modulation or "gain control" in pursuit. The modulation can be invoked even during fixation by microstimulation in the smooth eye movement area of the frontal eye fields ("frontal pursuit area" or "FPA"), suggesting a specific function for the frontal cortex in gain modulation.
Visual inputs for pursuit arise in extrastriate area MT and are transmitted in parallel from MT and companion parietal areas to the pontine nuclei and the cerebellum. We have used a combination of behavioral and physiological experiments to determine the nature of the transformation that converts visual responses in MT into motor commands. Our strategy was to degrade the quality of visual motion by presenting motion that was sampled at different spatial and temporal intervals, and then to measure the effect on pursuit and on the response of visual motion neurons in area MT at the same time. Our analysis revealed a behavioral illusion in which pursuit first improved and then deteriorated when motion was degraded progressively. Recordings of the population response in area MT demonstrated that the preferred speed of the most active neurons shifted toward higher values when pursuit was improved. Computational analysis showed that a vector averaging computation based on an opponent motion signal would transform the population response in MT into the smooth eye velocities we measured.
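The vector-averaging readout can be sketched as follows (the Gaussian-on-log-speed tuning curves, the population size, and the decoded stimulus are assumptions for illustration, not fits to the MT data): the decoded speed is the response-weighted mean of the neurons' preferred speeds, so a shift of activity toward neurons preferring higher speeds shifts the decoded eye velocity upward.

```python
import numpy as np

# Hypothetical MT population: 50 neurons with log-spaced preferred
# speeds; the response to a stimulus is a Gaussian bump on a log axis.
preferred = np.logspace(0, 2, 50)        # preferred speeds, 1-100 deg/s

def population_response(stim_speed, sigma=0.3):
    return np.exp(-(np.log(preferred / stim_speed)) ** 2 / (2 * sigma ** 2))

def vector_average(resp):
    """Decode speed as the response-weighted mean of preferred speeds."""
    return np.sum(resp * preferred) / np.sum(resp)

decoded = vector_average(population_response(16.0))
```

The opponent-motion stage described above would operate on the difference between oppositely tuned responses before this averaging step; the sketch keeps only the averaging computation itself.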
Our data suggest that pursuit has two separate and parallel components, one that transforms visual inputs into preliminary commands for motor outputs, and one that modulates the gain of the visual-motor transformation. We identify these pathways tentatively as arising from the parietal and frontal cortices, respectively.
The determination of the link between the mental operations considered in cognitive psychology and the biological operations of the brain has been a perpetual challenge for neuroscientists. There is growing agreement that cognitive function is carried by some combination of localized operations in a brain area and distributed interactions among several regions, though these two notions are often conceived as a dichotomy. However, it remains difficult to specify the precise mixture of these two notions that exemplifies the brain's translation of biological operations to cognition. I will propose that such a translation may be best appreciated under a principle of Neural Context. Systems-level neuroanatomy shows that most parts of the brain receive projections from many areas and send projections to many others. Neural context emphasizes that the precise pattern in which these structural connections are functionally engaged constitutes the translation of brain operations into cognitive operations. Consequently, the same region may show exactly the same pattern of activity across many different tasks, but because the pattern of interactions with other connected areas differs, the region contributes to different cognitive operations. Stated differently, the neural context within which an area is active embodies the cognitive operation. Background information for these ideas can be found at: http://psych.utoronto.ca/~mcintosh/#Pub
A large number of human psychophysical results have been successfully explained in recent years using Bayesian models. However, the neural implementation of such models remains largely unclear. In this talk, we discuss how a network architecture commonly used to model the cerebral cortex can implement Bayesian inference for an arbitrary Markov model. We illustrate the suggested approach using a visual motion detection task. Our simulation results show that the model network exhibits direction selectivity and correctly computes the posterior probabilities for motion direction. When used to solve the well-known random dots motion discrimination task, the model generates responses that mimic the activities of evidence-accumulating neurons in cortical areas LIP and FEF. In addition, the model predicts reaction time distributions that are similar to those obtained in human psychophysical experiments that manipulate the prior probabilities of targets and task urgency.
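Stripped to its core, Bayesian inference over a Markov model is recursive predict-update filtering. A minimal sketch for a two-state direction variable (the transition matrix, coherence parameter, and observation sequence are invented for illustration; the network model implements an analogous computation in its recurrent dynamics):

```python
import numpy as np

# Two-state Markov model for motion direction (0 = left, 1 = right).
transition = np.array([[0.95, 0.05],
                       [0.05, 0.95]])   # P(next state | current state)

def likelihood(obs, coherence=0.2):
    """P(obs | state): each observation weakly favors one direction."""
    p = 0.5 + coherence / 2
    return np.array([p, 1 - p]) if obs == 0 else np.array([1 - p, p])

def filter_posterior(observations, prior=(0.5, 0.5)):
    post = np.array(prior, dtype=float)
    for obs in observations:
        post = transition.T @ post          # predict step
        post = post * likelihood(obs)       # update with the evidence
        post /= post.sum()                  # normalize
    return post

# A run of mostly "0" observations drives the posterior toward state 0,
# mimicking gradual evidence accumulation.
post = filter_posterior([0, 0, 1, 0, 0, 0, 0])
```

In log-odds form this accumulation is linear in the evidence, which is why such a filter can mimic the ramping activity of LIP and FEF neurons.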
Many developing circuits show spontaneous oscillations. We study models for the slow episodic population rhythms (time scale, mins) that are seen in chick embryonic spinal cord. We use firing rate models for the population activity in a recurrent network of excitatory-coupled cells. Fast/slow methods are used to analyze the models. The primary candidate for the slow negative feedback mechanism that sets the burst period is a form of synaptic depression. The individual units have simple tonic firing properties. Specific predictions based on the model about how the rhythm is affected due to brief stimuli that switch the system from the quiescent to the active phase have now been confirmed in experiments. A positive correlation was found between episode duration and the preceding inter-episode interval, but not with the following interval, suggesting that episode onset is stochastic while episode termination occurs deterministically, when network excitability falls to a fixed level. We also formulate and analyze a minimal model that demonstrates the plausibility of a specific mechanism for depression: the slow modulation of the synaptic reversal potential (for the GABA synapses, which are depolarizing at this stage of development). Preliminary results show that a cell-based network (integrate-and-fire units) with synaptic depression can also alternate between phases of active firing and quiescence. (with J Tabak, M O'Donovan, B Vladimirski)
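A minimal firing-rate sketch of the episodic rhythm (all parameters invented; the published models are calibrated to the chick cord data): fast recurrent excitation a combined with a slow depression-like efficacy variable s produces alternating active and quiescent phases, with episode termination occurring when s falls low enough that the active state of the fast subsystem disappears.

```python
import numpy as np

def f(x, theta=0.18, k=0.05):
    """Steep sigmoidal population input-output function."""
    return 1.0 / (1.0 + np.exp(-(x - theta) / k))

def simulate(t_max=600.0, dt=0.05, tau_a=1.0, tau_s=50.0, beta=4.0):
    a, s = 0.05, 1.0        # fast activity, slow synaptic efficacy
    trace = np.empty(int(t_max / dt))
    for i in range(trace.size):
        a += dt * (f(s * a) - a) / tau_a          # fast recurrent excitation
        s += dt * ((1.0 - s) - beta * a * s) / tau_s  # slow depression
        trace[i] = a
    return trace

trace = simulate()
episode = (trace > 0.5).astype(int)
onsets = int(np.sum(np.diff(episode) == 1))       # count episode onsets
```

In this deterministic sketch both onset and termination occur at folds of the fast subsystem; adding noise to a, as the data suggest, would randomize onset times while leaving termination essentially deterministic.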
Our visual system represents an important and complex part of the brain, and is thought to contribute to precognitive and cognitive processes including attention, working memory and conflict handling. Here I concentrate on early visual processing in V1 of higher mammals and use this reduced system as an example for modeling different putatively intra-areal network effects on multiple scales in space.
After a short overview of some biological facts, a brief introduction is provided to a very simple mean-field formulation that is still capable of describing synaptic history. This framework is then used to show a few examples where mean-field approaches can describe coding and response properties on various spatial scales, going from more localized to increasingly context-dependent aspects of cortical processing.
The phenomenon of contrast-invariant orientation tuning (i.e., the separation of stimulus quality and quantity) and the role of threshold variability in the tissue-response function will be addressed first, as examples of localized processing schemes. Contextual information will then be introduced in the framework of a model which shows how the nonlinear response properties of cortical grating cells (repetitive-texture-selective neurons) could arise as the consequence of context and recurrent cortical processing. Other contextual effects, which can form the basis of texture-based segmentation or perceptual grouping, will be addressed as well. The final part of the talk touches on the question of whether and how spiking neuronal activity can be reliably linked to indirect measurements of global brain activity as produced by functional brain imaging.
Although functional human neuroimaging methods such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) have become increasingly important in the study of human brain function, there is still a large gap between the imaging results and underlying neuronal processes such as those described in single-cell animal studies. While the single-cell studies examine the spiking activity of single or relatively few neurons, usually excitatory pyramidal cells, imaging data reflects the integrated synaptic activity of large populations of mixed cortical cell types. A few efforts have been made to bridge the gap between these vastly different scales by the use of large-scale modeling (Arbib et al., 1995; Tagamets & Horwitz, 1998). Some of the factors and constraints that need to be considered in building such a model are discussed. An emphasis is placed on features of the circuits in the brain that are likely to have an effect on the relationship between imaging data and the underlying neuronal activity. In particular, it is demonstrated how the interaction of the relatively sparse inter-regional connections and rich local circuitry may have unintuitive effects on imaging data, especially in the presence of synaptic inhibition or neuromodulation.
A basic local circuit that includes both excitatory and inhibitory cells is presented, and the choice of parameters, which are based on a mix of experimental data and desired qualitative behaviors, is described. The expected effects of the parameters on both local neuronal activity and quantitative imaging results are demonstrated. Then a large-scale model of visual working memory is presented. The model is made up of four major regions that have been identified in the ventral visual pathway of both humans and non-human primates. It performs a visual working memory task that has been used in both animal single-cell recordings and human brain imaging experiments. The model includes a working memory circuit that has elements with dynamics similar to the various neuronal populations that have been identified in the frontal cortex of the monkey. At the same time the total summed synaptic activity in the different regions is quantitatively similar to human imaging data. In this model, the emphasis is on the interaction between long-range interregional connections and the local circuits. Specific experiments and predictions made by the model will be discussed. These include the expected effect of synaptic inhibition on imaging data (Tagamets & Horwitz, 2001) and comparisons of potential mechanisms for the mediation of working memory in the prefrontal cortex (Tagamets & Horwitz, 2000).
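The unintuitive effect of inhibition on imaging data can be shown in a two-unit caricature (weights, input, and time constants are invented; the "imaging" proxy here is the summed absolute synaptic activity, following the modeling assumption described above): adding an inhibitory interneuron lowers the excitatory unit's output, yet raises the region's summed synaptic signal, because inhibitory synaptic activity contributes positively to the metabolic measure.

```python
import numpy as np

def region(ext_input, w_ei, w_ie, steps=200, dt=0.1, tau=1.0):
    """Excitatory (e) and inhibitory (i) rate units; returns final
    excitatory activity and the accumulated absolute synaptic activity
    used as a stand-in for the regional imaging signal."""
    e, i, synaptic = 0.0, 0.0, 0.0
    for _ in range(steps):
        e_in = ext_input - w_ie * i          # net input to excitatory unit
        i_in = w_ei * e                      # input to inhibitory unit
        synaptic += abs(ext_input) + abs(w_ie * i) + abs(w_ei * e)
        e += dt * (-e + max(0.0, e_in)) / tau
        i += dt * (-i + max(0.0, i_in)) / tau
    return e, synaptic

e_no_inh, sig_no_inh = region(1.0, w_ei=0.0, w_ie=0.0)
e_inh, sig_inh = region(1.0, w_ei=0.8, w_ie=0.8)
```

Spiking output and the imaging proxy thus move in opposite directions, which is exactly the kind of dissociation the large-scale model is built to expose.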
The talk will present a neurocomputational framework to account for choice-RT and maintenance in working memory, as measured in tasks such as speeded choice and free/cued recall of lists of items. The model makes use of a stochastic accumulation process, in which neural leak, recurrent excitation and lateral inhibition play important roles (Usher & McClelland, 2001; Haarmann & Usher, 2001; Davelaar & Usher, 2002a). Variations in these parameters account for individual differences, and online changes of the parameters take place, in a task-dependent way, in response to attentional processes mediated by neuromodulation (Usher & Davelaar, 2002b).
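A two-choice sketch of a leaky competing accumulator of this kind (leak, inhibition, noise, and threshold values are invented, not those fitted in the cited papers): units accumulate their inputs, leak, and inhibit one another, and the first to reach threshold determines the choice and its reaction time.

```python
import numpy as np

rng = np.random.default_rng(1)

def lca_trial(inputs, leak=0.2, inhibition=0.3, dt=0.01,
              noise=0.1, threshold=1.0, max_steps=5000):
    """One trial of a leaky competing accumulator.
    Returns (chosen unit index, reaction time in simulation steps)."""
    x = np.zeros(len(inputs))
    for step in range(max_steps):
        others = x.sum() - x                 # lateral inhibition sources
        dx = inputs - leak * x - inhibition * others
        x = np.maximum(0.0, x + dt * dx
                       + np.sqrt(dt) * noise * rng.normal(size=x.size))
        if x.max() >= threshold:
            return int(np.argmax(x)), step
    return int(np.argmax(x)), max_steps

# Unit 0 receives the stronger input, so it should usually win.
results = [lca_trial(np.array([1.0, 0.7])) for _ in range(200)]
accuracy = float(np.mean([choice == 0 for choice, _ in results]))
```

Raising leak or inhibition in this sketch reshapes both accuracy and the reaction-time distribution, which is the lever the framework uses to model individual differences and attentional modulation.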