Mathematical analysis and modeling have played influential roles in both current and classical descriptions of sensory processing, object identification, and representation. These descriptions have largely been built on the properties of feedforward interactions, receptive fields, and firing rates or spike counts, with stimuli that are typically static in time and stereotypical (oriented bars, pure tones, ...). Successes include the work of Hubel and Wiesel (1981 Nobel Prize, shared with Roger Sperry) and of Barlow (Swartz Prize for Computational Neuroscience, 2009). There is a growing awareness that sensory processing is not passive but active (e.g., Kleinfeld, Bower), involving dynamic feedback loops and recurrent processing, with feedback that may extend down to the level of the sensory receptors. This workshop will address the evolving research area of active sensory processing, such as the top-down responsive control of whiskers in the rat somatosensory system, the mathematical modeling of these feedback systems, and the principles and optimizations that might pertain. The notion of static receptive fields, derived from over-idealized and restricted stimulus sets in laboratory settings, is also under challenge: in real-world settings, scenes are far more complex and are dynamic, constantly changing. A statistical framework for natural scene analysis seems much more appropriate, and the workshop will consider statistical representations of scenes and their possible realization in the brain. Furthermore, sensory systems are capable of rapid adaptation to scene dynamics, including the statistics of changing scenes, and models of such adaptation are under development (Fairhall, Riecke).
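As a minimal illustration of adaptation to scene statistics, consider divisive gain control in which a neuron's gain tracks a running estimate of stimulus variance, so that responses re-scale when the statistics of the scene change. This is a sketch under assumed parameters (the time constant, initial variance estimate, and the jump in stimulus variance are all illustrative), not any particular published model:

```python
import numpy as np

rng = np.random.default_rng(2)

def adaptive_response(stimulus, tau=100, g0=1.0):
    """Divisive gain control: the gain tracks a leaky running estimate
    of stimulus variance, so output variance is held roughly constant
    across changes in scene statistics. Illustrative form only."""
    var_est = 1.0                     # assumed initial variance estimate
    out = np.empty_like(stimulus)
    for t, s in enumerate(stimulus):
        var_est += (s**2 - var_est) / tau       # leaky variance estimate
        out[t] = g0 * s / np.sqrt(var_est)      # variance-normalized output
    return out

# Stimulus whose standard deviation jumps threefold mid-stream; after a
# transient, the adapted output variance returns to a common level.
stim = np.concatenate([rng.normal(0, 1.0, 5000), rng.normal(0, 3.0, 5000)])
resp = adaptive_response(stim)
```

The same normalization idea appears, in more elaborate forms, in models of contrast adaptation in early vision.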
So, what does the brain do with the processed sensory input? What scene aspects or cues are used in object identification and segregation; what commonalities group different individuals together; how do we categorize objects? These questions present modeling challenges, some of which will be addressed during the workshop. An interesting paradigm arises in the context of ambiguous scenes, such as the Necker cube or the face-vase image, in which multiple interpretations are perceived in alternation. The dynamics of such alternations are stochastic, and the differential-equation models typically involve competition, through mutual inhibition, among the model neural subpopulations hypothesized to represent the two or more percepts. In the auditory context there are dynamic ambiguous stimuli that introduce a further temporal layer and raise the question of what cues are used to define and track an auditory object through time.
Issues that arise in the neural representation of scenes lead naturally to neural coding. What language or means do neuronal systems use internally to encode the features of an image? These questions are usually addressed from an information-theoretic point of view. In what contexts is the temporal patterning of spike trains significant, and when is the mean firing rate adequate to carry the information? How do cell ensembles jointly represent features, i.e., what is the population code? Perceptions must be developed on the fly: given some sensory tuning properties, how might the parameters be chosen across cells to give the most efficient and rapid population code?
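One concrete instance of a population code is the classic population-vector readout: a bank of tuned cells emits noisy spike counts, and a stimulus direction is decoded from the vector sum of preferred directions weighted by those counts. The sketch below assumes von Mises tuning curves, Poisson spiking, and illustrative parameter values (cell count, peak rate, tuning width, readout window):

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 16
prefs = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)  # preferred directions

def rates(theta, r_max=50.0, kappa=2.0):
    """Von Mises (circular Gaussian) tuning curves; parameters illustrative."""
    return r_max * np.exp(kappa * (np.cos(theta - prefs) - 1.0))

def decode(theta, T=0.5):
    """Population-vector readout from Poisson spike counts in a window T."""
    counts = rng.poisson(rates(theta) * T)
    vec = counts @ np.exp(1j * prefs)       # counts weight the preferred directions
    return np.angle(vec) % (2 * np.pi)

theta_true = 1.0
estimates = np.array([decode(theta_true) for _ in range(200)])
err = np.angle(np.exp(1j * (estimates - theta_true)))   # circular decoding error
```

Varying `kappa` or the window `T` in this sketch shows the trade-offs the question above points at: broader tuning and longer windows change how efficiently and how rapidly the ensemble conveys the stimulus.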
Throughout the workshop we will ask about plausible mechanistic models that can implement the notions of active processing, coding strategies, adaptation features, and so on.