Research

Broadly speaking, my main research interests concern statistical models of behavior and coding in neural populations: what collective behaviors can interacting networks of neurons exhibit, what interactions give rise to those behaviors, how do networks encode and perform computations on their inputs, and how do network structure and external environments or stimuli modify these behaviors and computations? My approaches to these problems combine methods from statistical physics and information theory with recently developed techniques from computational and theoretical neuroscience.

More information about specific research projects follows below!

Coding & computation in neural circuits

One of our research aims is to understand how information is reliably encoded and transmitted throughout the brain, despite the vast degree of variability in the environment or even internal circuit activity, including noise that may corrupt neural signals at several stages during transmission. A large body of work in this area is guided by the principles of information theory, which posits that neurons ought to convey as much information about their inputs as possible, given physiological constraints like metabolic costs. 

In recent work with postdoctoral mentors Eric Shea-Brown and Fred Rieke, and graduate student Alison Weber, we studied a simplified model of signal processing in parallel neural pathways, such as those often found in sensory neural circuits. In particular, we focused on understanding how noise entering the circuit at different locations influences strategies for optimally encoding sensory input. Many studies make particular assumptions about the location and magnitude of noise in neural circuits; this work shows that the location of the noise has unexpected consequences. For instance, even when a circuit nonlinearity is approximately linear near its center, noise entering before and after the nonlinearity does not simply combine; rather, the two noise sources have opposite influences on the optimal shape of the nonlinearity. See our 2016 paper for full details.
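The role of noise location can be illustrated with a toy simulation, which is not the model from the paper: here I assume a Gaussian stimulus, Gaussian noise injected either before ("upstream") or after ("downstream") a sigmoidal nonlinearity, and use the stimulus-response correlation as a crude stand-in for encoded information.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x, gain):
    return 1.0 / (1.0 + np.exp(-gain * x))

def stimulus_response_corr(gain, noise_loc, sigma=0.5, n=100_000):
    """Correlation between a Gaussian stimulus and the output of a
    sigmoidal nonlinearity, with noise injected either before
    ("upstream") or after ("downstream") the nonlinearity."""
    s = rng.standard_normal(n)
    noise = sigma * rng.standard_normal(n)
    if noise_loc == "upstream":
        r = sigmoid(s + noise, gain)
    else:
        r = sigmoid(s, gain) + noise
    return np.corrcoef(s, r)[0, 1]

# Sweep the gain (steepness) of the nonlinearity for each noise location
# and report which gain best preserves the stimulus-response correlation.
gains = np.linspace(0.5, 8.0, 16)
best_up = max(gains, key=lambda g: stimulus_response_corr(g, "upstream"))
best_down = max(gains, key=lambda g: stimulus_response_corr(g, "downstream"))
print(f"best gain, upstream noise:   {best_up:.2f}")
print(f"best gain, downstream noise: {best_down:.2f}")
```

Even this crude criterion selects different nonlinearity shapes depending on where the noise enters; the paper carries out the full information-theoretic optimization.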

Interplay of network structure, statistics, and dynamics

The brain is not a crystal. Neurons are not arranged on a lattice, but are wired together in intricate webs. Understanding the behavior of populations of neurons will require taking into account the architecture of neural connections, and investigating the consequences of this network structure for the dynamics and statistics of neurons. Unraveling the interplay between these three aspects of neural populations is of fundamental importance for interpreting neural data and understanding how neural activity implements complex computations.

In recent work with Eric Shea-Brown, Fred Rieke, and Michael Buice, we investigated how our understanding of neural circuit architecture can be skewed by unrecorded neurons. Often, the interactions between neurons are inferred statistically, but the inferred interactions (purple in the figure above) do not necessarily represent the time course of direct synaptic connections from one neuron to another. Rather, they represent effective interactions mediated by all of the unrecorded paths through the circuit along which the pre-synaptic neuron could send a signal to the post-synaptic neuron (red and blue in the figure above). We worked out a quantitative relationship between the true synaptic connectivity and the effective connections we expect to measure when a large fraction of the network is unobserved. See our 2018 paper for details.
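For a linear rate network the idea can be sketched in a few lines; this is an illustrative simplification, not the spiking-network calculation in the paper. Marginalizing out hidden units of a linear network yields effective interactions among the observed units: the direct connections plus all paths routed through the hidden block, summed as a geometric series.

```python
import numpy as np

rng = np.random.default_rng(1)

n_obs, n_hid = 4, 16
n = n_obs + n_hid

# Random, stable connectivity for a linear rate network x' = -x + W x + s.
W = 0.1 * rng.standard_normal((n, n))
assert np.max(np.abs(np.linalg.eigvals(W))) < 1, "network must be stable"

obs = slice(0, n_obs)          # "recorded" neurons
hid = slice(n_obs, n)          # "unrecorded" neurons
W_oo, W_oh = W[obs, obs], W[obs, hid]
W_ho, W_hh = W[hid, obs], W[hid, hid]

# Effective interactions among observed neurons: direct connections plus
# every path through the hidden block, summed via (I - W_hh)^(-1).
I_h = np.eye(n_hid)
W_eff = W_oo + W_oh @ np.linalg.solve(I_h - W_hh, W_ho)

print("direct connections:\n", W_oo.round(3))
print("effective connections:\n", W_eff.round(3))
```

The discrepancy between W_eff and W_oo is exactly the kind of distortion unrecorded neurons introduce into statistically inferred connectivity.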

While this work was framed in terms of the experimental problem of unobserved neural activity, we expect the framework will be useful for investigating the effect of architecture on coding more broadly. For instance, circuits often consist of "principal neurons," which project from one circuit to another but typically not to other principal neurons within their own circuit, and interneurons, which do make connections within their own circuit. What is the functional purpose of this organization? By viewing the principal neurons as "recorded" and the interneurons as "hidden" from the perspective of downstream readout neurons, we may be able to use our framework to gain insight into this question.

Origins of pathological neural activity 

As we begin to understand the normal functions of healthy neural circuitry we can also begin to investigate how that normal operation goes awry or degrades with age--and potentially how to steer neural dynamics back on course. To this end, we want to explore how pathologies in circuit architecture, neuromodulator expression, and homeostatic regulation can conspire to give rise to pathological neural activity.

In an ongoing project with Simons Summer Research student Seth Talyansky, we are modeling how observed changes in the physiological properties of the aging brain may account for other physiological changes or functional deficits that develop with advanced age. For example, using a model of visual cortex based on EInet (A), we have shown that increases in excitatory firing cause the network to become less selective to the orientation of grating stimuli (not shown), distort both the receptive field structure and the strength of the input weights Q (B), and weaken the lateral inhibitory synapses W (C), consistent with observed decreases in inhibition with age. See our 2021 paper for details.

Emergent phenomena & phase transitions in neural circuitry

One of the more controversial ideas in neuroscience is that the brain (or portions of it) may operate near a critical point, the boundary between two different phases of collective behavior, akin to the point at which liquid water freezes into a solid as temperature is lowered. Proponents argue that operating at criticality may confer several advantages, and while there is some evidence for criticality in certain cases, it is far from clear whether actual neural circuits operate near critical points. Even if they do not, understanding the critical properties of network models can tell us much about the emergent collective behavior of a neural population away from the critical point. This may be one way to understand the origin of low-dimensional behavior in neural systems: it may arise from collective modes of activity.
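A standard toy model in this literature, and only a toy, not the networks studied in our work, is a branching process: each active neuron triggers a random number of others, and the branching ratio plays the role of the control parameter. Below the critical ratio of 1, cascades ("avalanches") die out quickly; at the critical point their sizes develop a heavy tail.

```python
import numpy as np

rng = np.random.default_rng(2)

def avalanche_size(branching_ratio, max_steps=10_000):
    """Total activity of a branching process seeded with one spike.
    Each generation of active neurons triggers a Poisson-distributed
    number of neurons in the next generation."""
    active, total = 1, 1
    for _ in range(max_steps):
        active = rng.poisson(branching_ratio * active)
        total += active
        if active == 0:
            break
    return total

# Compare a subcritical ratio (< 1) with the critical ratio (= 1).
for ratio in (0.7, 1.0):
    sizes = [avalanche_size(ratio) for _ in range(5000)]
    print(f"branching ratio {ratio}: mean size {np.mean(sizes):.1f}, "
          f"max size {max(sizes)}")
```

The qualitative change in the avalanche-size distribution as the ratio crosses 1 is the simplest example of the kind of phase transition at issue.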

Dr. Brinkman has recently adapted tools from the non-perturbative renormalization group to study phase transitions in networks of stochastic spiking neurons.