We will focus on dynamics and information processing in large, nonlinear networks. The aim is to highlight a set of mathematical questions that recur across neuroscience, and to discuss both recent progress and outstanding problems. The final day will feature a series of retrospective talks on the interplay of mathematics and neuroscience, leading into moderated discussions of future prospects. These sessions will align with the major themes of the workshop, which are proposed to be as follows:
Linking large-scale network structure and dynamics. The heterogeneous components and the vast scale and connectivity of neural systems make for an overwhelming range of possible networks. However, network architectures are constrained by key principles - for example, each cell produces connections of only one sign (Dale's law), so that excitatory and inhibitory connections are segregated into sign-constrained columns, which generically makes connectivity matrices non-normal (Murphy and Miller). What are the consequences of such features for general properties of network dynamics? We will focus on this and other systematic departures from random connectivity, including small-world structures, localized connection "neighborhoods," feedforward inhibition circuits, and the impact of highly recurrent, and hence bistable, network components. In addition, we will cover the latest results on how basic assumptions about single-neuron properties do and do not shape network-wide dynamics.
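As a minimal illustration of the non-normal amplification this theme concerns, the following Python sketch builds a two-population excitatory-inhibitory weight matrix obeying Dale's sign constraint, verifies that it is non-normal, and shows that input along the E-I difference mode transiently grows even though every eigenvalue of the dynamics decays. The weight value and population structure are illustrative assumptions, not parameters from the cited work.

```python
import numpy as np

# Two-population (E, I) linear rate model dx/dt = (W - I) x, in the spirit
# of "balanced amplification" (Murphy and Miller); w = 10 is an
# illustrative assumption.
w = 10.0
W = np.array([[w, -w],     # each column has a single sign (Dale's law):
              [w, -w]])    # E column positive, I column negative

# Non-normality: W is normal iff W @ W.T equals W.T @ W.
print("normal?", np.allclose(W @ W.T, W.T @ W))        # -> False

A = W - np.eye(2)
print("eigenvalues:", np.linalg.eigvals(A))            # both ~ -1

# Drive the E-I "difference" mode and integrate with forward Euler: the
# norm transiently grows by roughly a factor 2w/e before decaying, even
# though both eigenvalues of A are negative.
x = np.array([1.0, -1.0]) / np.sqrt(2.0)
dt, norms = 0.001, []
for _ in range(8000):                                  # t from 0 to 8
    x = x + dt * (A @ x)
    norms.append(np.linalg.norm(x))
print("peak amplification:", max(norms))               # ~ 7.4 here
```

The amplification factor grows with w while the spectrum stays pinned at -1: an effect invisible to eigenvalue analysis alone, which is exactly why non-normality matters for network dynamics.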
Bridging scales - mean-field models. What mathematical tools can bridge scales from networks of spiking cells to averaged statistical variables that usefully summarize the activity of large networks? Mean-field techniques have yielded major advances in mathematical neuroscience for decades, but much remains to be done, especially for networks with non-sparse connections and hence partial synchrony among spikes.
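To make the objects of this discussion concrete, here is a minimal Wilson-Cowan-style sketch of the kind of averaged variables a mean-field reduction produces: two rate variables standing in for the mean activity of large excitatory and inhibitory populations of spiking cells. All parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal Wilson-Cowan-style mean-field model: two averaged rate variables
# (r_e, r_i) summarize the activity of large E and I populations.
def phi(x):
    """Sigmoidal population transfer function."""
    return 1.0 / (1.0 + np.exp(-x))

w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0   # mean coupling strengths
tau = 10.0                                        # population time constant (ms)
I_e, I_i = 0.5, 0.0                               # external drives

r_e, r_i, dt = 0.1, 0.1, 0.1
for _ in range(5000):                             # 500 ms, forward Euler
    dr_e = (-r_e + phi(w_ee * r_e - w_ei * r_i + I_e)) / tau
    dr_i = (-r_i + phi(w_ie * r_e - w_ii * r_i + I_i)) / tau
    r_e, r_i = r_e + dt * dr_e, r_i + dt * dr_i
print(f"steady-state rates: r_e = {r_e:.3f}, r_i = {r_i:.3f}")
```

The open problems named above concern precisely when such a reduction is valid: with non-sparse connections, correlations among spikes survive the averaging, and a closed description in terms of mean rates alone breaks down.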
Information and coding in large spiking networks. Information-theoretic studies have shown that certain patterns of correlated, or partially synchronized, spiking across large networks enhance the fidelity with which those networks can transmit information. But what network dynamics lead to such patterns? We will highlight general mathematical results that connect architecture and information processing.
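One standard quantitative handle on such questions is the linear Fisher information I = f'ᵀ Σ⁻¹ f', which makes explicit how the noise covariance Σ interacts with the tuning-curve derivatives f'. The sketch below compares a population code under independent versus limited-range correlated noise; the tuning curves, correlation model, and all parameters are illustrative assumptions, and whether correlations raise or lower the information depends on how they align with f'.

```python
import numpy as np

# Linear Fisher information I = f'^T Sigma^-1 f' for a population of tuned
# neurons; tuning curves, noise model, and all parameters are illustrative.
n = 100
pref = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # preferred stimuli
s = 1.0                                                  # evaluation stimulus

f = 10.0 * np.exp(2.0 * (np.cos(s - pref) - 1.0))        # von Mises tuning
fprime = -2.0 * np.sin(s - pref) * f                     # df/ds at s

var = f + 1e-9                                           # Poisson-like variance
Sigma_ind = np.diag(var)

# Limited-range correlations that decay with preferred-stimulus distance.
d = np.abs(np.angle(np.exp(1j * (pref[:, None] - pref[None, :]))))
c = 0.2 * np.exp(-d / 0.5)
np.fill_diagonal(c, 1.0)
Sigma_corr = c * np.sqrt(np.outer(var, var))

for name, Sigma in [("independent", Sigma_ind), ("correlated", Sigma_corr)]:
    info = fprime @ np.linalg.solve(Sigma, fprime)
    print(f"{name:>11} noise: linear Fisher information = {info:.1f}")
```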
Plasticity and learning in network connections. Perhaps the most fascinating aspect of neural dynamics is how network activity drives network architecture to evolve over time, through activity-dependent plasticity rules. We will focus on mathematical tools for understanding the consequences of such rules, both in terms of the general connectivity structures that they produce and in terms of network function - e.g., encoding and releasing "memories" of past inputs. A related theme is robustness and variability in neural circuits - for example, how widely can connection strengths and intrinsic properties vary while still preserving the basic features of a network's dynamics?
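As a canonical toy model of a plasticity rule encoding and releasing memories, the sketch below applies a Hebbian outer-product rule in a Hopfield-style network; the network size, pattern count, and corruption level are illustrative assumptions. Patterns written into the weights by activity are later recovered from a degraded cue by the recurrent dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hopfield network: a Hebbian outer-product rule writes patterns into the
# weights; recurrent dynamics later "release" them from partial cues.
n, n_patterns = 200, 10                    # illustrative sizes
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian learning rule: W_ij = (1/n) sum_mu xi_i^mu xi_j^mu, no self-coupling.
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

# Cue: the first stored pattern with 20% of its bits flipped.
cue = patterns[0].copy()
flip = rng.choice(n, size=n // 5, replace=False)
cue[flip] *= -1

# Asynchronous updates until the state stops changing.
state = cue.copy()
for _ in range(20):
    prev = state.copy()
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1
    if np.array_equal(state, prev):
        break
print("overlap with stored memory:", (state @ patterns[0]) / n)   # ~ 1.0
```

With 10 patterns in 200 units the network is well below its storage capacity, so retrieval is clean; pushing the pattern count higher is one simple way to see the "robustness and variability" questions above arise in miniature.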