[work with Barry Richmond (LN-NIMH), Pauline Ruffiot (Nordita and Univ Joseph Fourier, Grenoble), Cristina Ursta (Nordita, Niels Bohr Institute, and West Univ, Timisoara), Gustaf Sterner (KTH Stockholm), Mandana Ahmadi (Ahvaz Univ, Iran) and Alexander Lerchner (DTU)]
The observed spike count distributions of V1 neurons are non-Poissonian: The variance generally exceeds the mean, and the variance-vs-mean relation is well-fit by a power law with an exponent greater than 1. In this work we find that the spike statistics of neurons in a model network with dynamically balanced excitation and inhibition show the same features. Our model, intended to represent a generic cortical column, comprises randomly connected excitatory and inhibitory leaky integrate-and-fire neurons driven by excitatory input from a large population of neurons external to the model. We take this input to vary in time like typical thalamic input to cortex. The synaptic strengths are chosen to produce asynchronous irregular firing at rates up to 200 Hz, depending on the strength of the input. Random variability among neurons in both firing thresholds and the strengths of external input currents is also included. The high degree of connectivity permits a mean-field description in which all input currents, both external and recurrent, can be treated as Gaussian noise, the mean and autocorrelation function of which are calculated self-consistently from the firing statistics of single model neurons.
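The flavor of the model described above can be conveyed by a minimal simulation sketch of a sparsely connected excitatory-inhibitory network of leaky integrate-and-fire neurons with inhibition-dominated recurrent coupling and suprathreshold external drive. All parameter values below (network sizes, time constants, synaptic weights, drive) are illustrative assumptions, not the values used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Network sizes and connectivity (illustrative values)
N_E, N_I = 400, 100
N = N_E + N_I
p = 0.1  # connection probability

# LIF parameters: membrane time constant (ms), threshold, reset
tau_m, v_th, v_reset = 20.0, 1.0, 0.0
dt, T = 0.1, 500.0  # time step and duration (ms)

# Synaptic weights scaled as 1/sqrt(K), K = mean number of inputs,
# with inhibition strong enough to balance excitation on average
J_E = 0.8 / np.sqrt(p * N_E)
J_I = -3.2 / np.sqrt(p * N_I)

W = np.zeros((N, N))
conn = rng.random((N, N)) < p
W[:, :N_E][conn[:, :N_E]] = J_E   # excitatory columns
W[:, N_E:][conn[:, N_E:]] = J_I   # inhibitory columns

mu_ext = 1.2 * v_th  # mean external drive, suprathreshold on its own

v = rng.random(N) * v_th
spike_counts = np.zeros(N)
for _ in range(int(T / dt)):
    spiked = v >= v_th
    spike_counts += spiked
    v[spiked] = v_reset
    I_rec = W @ spiked                       # recurrent input this step
    v += dt / tau_m * (-v + mu_ext) + I_rec  # Euler update of membrane potential

rates = spike_counts / (T / 1000.0)  # firing rates in Hz
print(f"mean E rate: {rates[:N_E].mean():.1f} Hz, "
      f"mean I rate: {rates[N_E:].mean():.1f} Hz")
```

In this regime the net recurrent input roughly cancels the external drive, so firing is driven by fluctuations, which is what produces the irregular, super-Poisson spiking the abstract describes.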
I will report on two problems under current study: (1) Balanced networks with conductance-based synapses. Here the firing statistics are controlled by the synaptic dynamics. (2) A balanced net model for a visual cortical hypercolumn. The firing statistics vary systematically with orientation: The Fano factor is largest at orientations away from the optimal one.
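The variance-vs-mean power law mentioned above can be estimated in the usual way by fitting a line to log variance against log mean of the spike counts. The sketch below uses synthetic negative-binomial counts purely as a stand-in for super-Poisson data (the dispersion parameter and sample sizes are assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic super-Poisson spike counts: negative binomial with
# variance m + m^2/r, i.e. variance grows faster than the mean.
means = np.linspace(2, 50, 20)
r = 5.0  # dispersion parameter (assumed)
mean_obs, var_obs = [], []
for m in means:
    counts = rng.negative_binomial(r, r / (r + m), size=2000)
    mean_obs.append(counts.mean())
    var_obs.append(counts.var())

# Power-law fit var = a * mean^b, i.e. a line in log-log space
b, log_a = np.polyfit(np.log(mean_obs), np.log(var_obs), 1)
print(f"fitted exponent b = {b:.2f}")  # exponent > 1 for super-Poisson counts

# Fano factor (variance / mean) exceeds 1 throughout
fano = np.array(var_obs) / np.array(mean_obs)
```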
Talk 1: Expecting what you work for: behavioral, neurophysiological and molecular studies into motivation and reward expectancy. Barry J. Richmond, Section on Neural Coding and Computation, Nat'l Inst. of Mental Health, Nat'l Institutes of Health, Dept. of Health and Human Services.
Weighing the amount of work that must be done to obtain a reward or achieve a goal is a critical step in normal behavior. We study this step by observing how monkeys use visual stimuli to predict how many trials of a simple operant task must be performed to obtain a reward. In line with many long-known observations, the number of errors increases as the visual stimulus indicates that more trials remain, e.g. there are more errors when 3 trials remain than when 2 trials remain. This shows that the monkeys have an internal working model of this simple reward schedule task. Recording single neurons in brain areas related to dopamine, a transmitter known to play an important role in reward-seeking behavior, we find signals in several brain regions related to the path through the task and the expectancy of the reward. Finally, borrowing techniques from molecular biology, we are able to inactivate the dopamine 2 (D2) receptor in one region of cortex and completely block learning of the association of visual stimuli with reward expectancy. Many aspects of this task and our findings would benefit from theoretical work.
Talk 2: The coordination of activity across neurons: are the messages redundant? Barry J. Richmond, Section on Neural Coding and Computation, Nat'l Inst. of Mental Health, Nat'l Institutes of Health, Dept. of Health and Human Services.
It is clear from information measurements of single neurons in the visual system that single neurons carry relatively small amounts of information about the outside world. The responses are variable, meaning that they appear noisy. Thus, for the brain to work as reliably as it does, it must pool the responses of many neurons. Is it likely that this is done through simple averaging, or is there evidence that the brain might combine the responses in some other way? I will present data showing that simultaneously recorded neurons share only a relatively small amount of information, suggesting that treating their signals as redundant would vastly underestimate the capacity of neuronal populations to process information.
Anyone who has watched a fly make a flawless landing on the rim of a teacup, or marvelled at a honeybee speeding home after collecting nectar from a flower patch several kilometres away, would know that insects possess visual systems that are fast, reliable and accurate. Insects cope remarkably well with their world, despite possessing a brain that carries fewer than 0.01% as many neurons as ours does. What are the secrets of their success, and can some of these navigational principles be usefully implemented in robots?
Although most insects lack stereo vision, they use a number of ingenious strategies for perceiving their world in three dimensions and navigating successfully in it. For example, distances to objects are gauged in terms of the apparent speeds of motion of the objects' images, rather than by using complex stereo mechanisms. Objects are distinguished from backgrounds by sensing the apparent relative motion at the boundary. Narrow gaps are negotiated by balancing the apparent speeds of the images in the two eyes. Flight speed is regulated by holding constant the average image velocity as seen by both eyes. Roll is stabilised by balancing the output signals from the two lateral ocelli, which function as horizon sensors [5,6]. Bees landing on a horizontal surface hold constant the image velocity of the surface as they approach it, thus automatically ensuring that flight speed is close to zero at touchdown. Foraging bees gauge distance flown by integrating optic flow: they possess a visually-driven "odometer" that is robust to variations in wind, body weight and energy expenditure [8-10]. This presentation will review some of this work, and outline applications of some of these strategies to the design of autonomous robots and flying vehicles [11-15].
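Two of the strategies above, centering through a gap by balancing lateral image speeds and gauging distance by integrating optic flow, reduce to very simple control rules. The sketch below is a minimal illustration under assumed conventions (the gain, units, and sign convention are mine, not from the talk):

```python
import numpy as np

def centering_steer(left_flow, right_flow, gain=0.5):
    """Centering response: turn away from the eye that sees faster
    image motion. Positive command = turn right. `gain` is an
    illustrative constant."""
    return gain * (left_flow - right_flow)

def visual_odometer(flow_samples, dt):
    """Visual odometer: distance flown estimated as image speed
    (rad/s) integrated over time (dt in s), giving accumulated
    image motion in radians."""
    return np.sum(flow_samples) * dt

# Flying nearer the left wall: the left image moves faster,
# so the command steers right (positive by our convention).
print(centering_steer(left_flow=2.0, right_flow=1.0))       # 0.5

# Constant flow of 3 rad/s sampled 100 times at 0.1 s intervals
# integrates to 30 rad of image motion.
print(visual_odometer(np.full(100, 3.0), dt=0.1))           # 30.0
```

Because the odometer integrates image motion rather than airspeed or effort, its reading is largely unaffected by wind or load, consistent with the robustness noted above.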
Recent work is beginning to reveal that insects may not be the simple, reflexive creatures that they were once assumed to be. Honeybees, for example, can learn rather general features of flowers and landmarks, such as colour, orientation and symmetry, and apply them to distinguish between objects that they have never previously encountered [1-5]. Bees exhibit "top-down" processing: that is, they are capable of using prior knowledge to detect poorly visible or camouflaged objects. Furthermore, bees can learn to navigate through labyrinths [7-10], to form complex associations [11, 12] and to acquire abstract concepts such as "sameness" and "difference". All of these observations suggest that there is no hard dichotomy between invertebrates and vertebrates in the context of perception, learning and 'cognition'; and that brain size is not necessarily a reliable predictor of perceptual capacity. This presentation will review some of the perceptual capacities of honeybees and speculate, where possible, on underlying mechanisms.