Linking connectivity, dynamics and computations in recurrent neural networks

Summary by Alan Akil

In a recent preprint, Mastrogiuseppe and Ostojic discuss an important extension of classical results on neural networks: theoretical results dating from the late 1980s [e.g. Sompolinsky, Crisanti and Sommers 1988] show how high-dimensional, random (chaotic) activity arises robustly in networks of rate units whose activity evolves according to

x'_i(t) = -x_i(t) + \sum_{j=1}^{N} J_{ij}\,\phi(x_j(t)) + I_i

for nonlinear activation functions \phi. However, this classical work assumes that the connectivity in the network is unstructured. Mastrogiuseppe and Ostojic discuss the case in which the connectivity matrix contains structure defined by a low-rank matrix. In particular, they assume that the connectivity is the sum of a low-rank and a random matrix,

J_{ij} = g\,\chi_{ij} + P_{ij}

where g is the disorder strength, \chi is an all-to-all random matrix whose entries \chi_{ij} are drawn independently from a centered Gaussian distribution with variance 1/N, and P_{ij} = \frac{m_i n_j}{N}, where m and n are N-dimensional vectors (referred to below as the right- and left-structure vectors, respectively).
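
For concreteness, here is a minimal simulation sketch (our own Python/NumPy code, not the authors'): Euler integration of the rate equation above, assuming \phi = tanh and illustrative parameter values. Later sketches reuse the variables defined here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, g, dt, T = 1000, 1.5, 0.1, 200.0

m = rng.standard_normal(N)                       # right-structure vector (output direction)
n = rng.standard_normal(N)                       # left-structure vector (input selection)
chi = rng.standard_normal((N, N)) / np.sqrt(N)   # Gaussian entries, variance 1/N
J = g * chi + np.outer(m, n) / N                 # J_ij = g chi_ij + m_i n_j / N

x = rng.standard_normal(N)                       # rate-unit activations
I = np.zeros(N)                                  # no external input yet
for _ in range(int(T / dt)):                     # Euler step of x' = -x + J phi(x) + I
    x += dt * (-x + J @ np.tanh(x) + I)
```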

Interestingly, the model remains highly tractable: the activity is either predominantly confined to a few dimensions determined by the (constant) input and by the vectors that define the low-rank component of the connectivity, or it exhibits high-dimensional chaos when the unstructured component dominates. This tractability allowed the authors to design networks that perform specific computations. Moreover, increasing the rank of the structured part of the connectivity leads to networks that support an ever wider dynamical repertoire, accompanied by an expanding computational capacity. This allows for the implementation of complex tasks such as context-dependent decision making.

The authors start by studying a recurrent network with a rank-one structured connectivity component and no external input. In this case the network supports four states of spontaneous activity that depend mainly on the disorder strength and the structure strength. For instance, strong structure and low disorder lead to heterogeneous firing rates that are approximately constant in time. The most interesting case occurs when the two strengths are comparable, leading to a structured chaotic state with approximately one-dimensional dynamics accompanied by high-dimensional temporal fluctuations. This state is characterized by the emergence of very slow timescales, which may be of separate interest [40]. Importantly, the transitions between these four states can be obtained analytically.
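
A simple numerical diagnostic for these regimes, continuing the sketch above: track the latent variable \kappa(t) = n \cdot \phi(x(t)) / N, which sets the strength of the recurrent drive along m. A nonzero mean of \kappa signals a structured state, its temporal fluctuations signal chaos along the structured direction, and the residual fluctuations signal high-dimensional chaos. The overlap value and the diagnostics themselves are our own illustrative choices, not the paper's analytical criteria.

```python
# Structured spontaneous states require an overlap between m and n,
# so give n a component along m (overlap m . n_ov / N of about 2).
n_ov = 2.0 * m + rng.standard_normal(N)
J_s = g * chi + np.outer(m, n_ov) / N

x = rng.standard_normal(N)
kappas, offaxis = [], []
for _ in range(int(T / dt)):
    phi = np.tanh(x)
    kappas.append(n_ov @ phi / N)            # latent kappa(t)
    x_par = (m @ x) / (m @ m) * m            # component of x along m
    offaxis.append(np.std(x - x_par))        # fluctuations away from m
    x += dt * (-x + J_s @ phi)

print("mean kappa        :", np.mean(kappas[-500:]))    # |mean| >> 0: structured
print("kappa fluctuations:", np.std(kappas[-500:]))     # large: chaos along m
print("off-axis activity :", np.mean(offaxis[-500:]))   # large: high-dim chaos
```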

Next, the authors examine what happens when a constant, spatially heterogeneous input drives the neurons in the network [e.g. Rajan, Abbott, Sompolinsky 2010]. In this case, the relation between the left- and right-structure vectors and the input gives rise to different network dynamics. The two structure vectors play different roles: the right-structure vector determines the output pattern of network activity, while the left-structure vector selects the inputs that give rise to patterned outputs. Increasing the external input generally suppresses chaotic and bistable dynamics.
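
The selective role of the left-structure vector can be made concrete, continuing the sketch above: only the component of the input along n is converted into output along m. The input construction below is our own, not the paper's exact protocol.

```python
def simulate(I, steps=int(T / dt)):
    """Run the rank-one network from rest under a constant input I."""
    x = np.zeros(N)
    for _ in range(steps):
        x += dt * (-x + J @ np.tanh(x) + I)
    return x

I_aligned = n.copy()                       # input pattern along the left vector
I_orth = rng.standard_normal(N)
I_orth -= (I_orth @ n) / (n @ n) * n       # remove any component along n

for label, I in [("along n   ", I_aligned), ("orthogonal", I_orth)]:
    x = simulate(I)
    print(label, "-> output along m:", m @ np.tanh(x) / N)
```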

Networks with structured connectivity can be used to perform specific computations, and the authors start with a simple Go-Nogo discrimination task (equivalent to simple classification), in which the animal has to produce a specific motor output in response to one sensory stimulus (the Go stimulus) and ignore all others (Nogo stimuli). This implementation showed very desirable computational properties, such as generalization to noisy or novel stimuli, and was extended to the detection of multiple stimuli. However, as far as we could see, although individual units are nonlinear, the network still acts as a linear discriminator.
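
A plausible unit-rank construction for this task, following the logic of the summary: pick the left vector proportional to the Go stimulus pattern so that it selects that stimulus, and read out along the output direction m. The specific vectors and the readout normalization are our own assumptions.

```python
I_go = rng.standard_normal(N)              # Go stimulus pattern
n_task = I_go.copy()                       # left vector selects the Go stimulus
m_task = rng.standard_normal(N)            # output direction
J_task = g * chi + np.outer(m_task, n_task) / N
w = m_task / N                             # linear readout along m

def readout(I):
    x = np.zeros(N)
    for _ in range(int(T / dt)):
        x += dt * (-x + J_task @ np.tanh(x) + I)
    return w @ np.tanh(x)

print("Go  :", readout(I_go))                     # clearly nonzero
print("Nogo:", readout(rng.standard_normal(N)))   # novel stimulus, near zero
```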

Rank-two structure in the connectivity matrix leads to a richer repertoire of behaviors. The authors do not provide a full dynamical description; however, they show that the two unit-rank terms in the connectivity can implement two independent input-output channels. This observation allows for the implementation of a network that performs a two-alternative forced-choice (2AFC) task, which requires two different classes of inputs to be mapped onto two different readout directions.
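
A hedged sketch of the two-channel idea, with structure vectors drawn at random (hence approximately orthogonal) for simplicity:

```python
n1, n2 = rng.standard_normal(N), rng.standard_normal(N)   # two input selectors
m1, m2 = rng.standard_normal(N), rng.standard_normal(N)   # two output directions
J2 = g * chi + (np.outer(m1, n1) + np.outer(m2, n2)) / N  # rank-two structure

def respond(I):
    x = np.zeros(N)
    for _ in range(int(T / dt)):
        x += dt * (-x + J2 @ np.tanh(x) + I)
    return m1 @ np.tanh(x) / N, m2 @ np.tanh(x) / N       # the two readouts

print("class-1 input:", respond(n1))   # drives mainly readout 1
print("class-2 input:", respond(n2))   # drives mainly readout 2
```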

Networks with a rank-two structure can also support a continuum of spontaneous states that lie on a ring in the two-dimensional m_1-m_2 plane. The points on this ring-like attractor lie on a slow manifold, and the ring structure is remarkably robust to changes in the disorder strength.
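
One way such a ring can arise, sketched under our own simplifying assumptions (a symmetric rank-two structure with Gaussian vectors and weak disorder; the paper's construction is more general): starting from different random initial conditions, the network settles at different angles but similar radii in the m_1-m_2 plane.

```python
beta, g_weak = 2.0, 0.1
m1, m2 = rng.standard_normal(N), rng.standard_normal(N)
J_ring = g_weak * chi + beta * (np.outer(m1, m1) + np.outer(m2, m2)) / N

for trial in range(5):
    x = rng.standard_normal(N)
    for _ in range(int(T / dt)):
        x += dt * (-x + J_ring @ np.tanh(x))
    k1, k2 = m1 @ np.tanh(x) / N, m2 @ np.tanh(x) / N     # position in the plane
    print(f"radius {np.hypot(k1, k2):.2f}   angle {np.arctan2(k2, k1):+.2f}")
```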

Rank-two structure can also be used to implement a context-dependent discrimination task. In this case, the stimuli are characterized by two features, A and B: the stimuli are random-dot kinematograms, and features A and B are the direction of motion and the color, respectively. The task consists of classifying the stimuli along one of the two features, indicated by an explicit contextual cue. The stimulus features are represented as independent, and thus mutually orthogonal, directions. The key requirement of this implementation is that the irrelevant feature be ignored, no matter how strong it is. The task was implemented successfully: in context A, the output was nearly independent of feature B, and similarly for context B.
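
To fix ideas, here is a sketch of the task structure itself, inputs and targets only; this is not the paper's network mechanism, and the feature directions are our own stand-ins:

```python
I_A = rng.standard_normal(N)               # direction coding feature A (motion)
I_B = rng.standard_normal(N)               # direction coding feature B (color)

def trial(c_A, c_B, context):
    """A stimulus mixes both features; the context picks the relevant one."""
    stimulus = c_A * I_A + c_B * I_B
    target = np.sign(c_A) if context == "A" else np.sign(c_B)
    return stimulus, target

stim, target = trial(c_A=+0.8, c_B=-1.5, context="A")
print(target)   # +1.0: in context A the stronger B feature must be ignored
```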

Lastly, the authors consider an example in which the geometrical configuration is such that the right- and left-structure vectors exhibit cross-overlaps. In particular, one of these cross-overlaps is negative, meaning that the corresponding vectors are anti-correlated. This gives rise to an effective negative feedback loop, which can generate oscillatory activity. In a particularly interesting regime this activity is a low-dimensional mixture of oscillatory and chaotic dynamics. Moreover, since different units have very diverse temporal activity profiles, a linear readout unit added to the network can exploit them as a rich basis set for constructing a range of periodic outputs.
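
The origin of the oscillations is visible in a two-variable reduction: linearizing the dynamics of the latents \kappa_1, \kappa_2 gives d\kappa/dt = -\kappa + S\kappa, where S_{ab} is proportional to the overlap n_a \cdot m_b / N. Antisymmetric cross-overlaps make the eigenvalues complex, i.e. rotational dynamics; the numbers below are illustrative.

```python
import numpy as np

# Effective latent dynamics dkappa/dt = -kappa + S kappa, with S built
# from the structure-vector overlaps (illustrative values).
S = np.array([[1.2,  0.8],
              [-0.8, 1.2]])                 # negative cross-overlap S_21 < 0
print(np.linalg.eigvals(-np.eye(2) + S))    # complex pair: oscillatory latents
```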

This work builds on a range of ideas in computational neuroscience, from Hopfield networks and echo-state networks (ESNs) to FORCE learning. In the framework of Hopfield networks, memory patterns are stored by adding a rank-one term for each pattern to the connectivity matrix, and there are studies in which the connectivity consists of a sum of rank-one terms and a random part [51, 52, 53]. This is similar to the approach used here, but differs in several ways. First, in Hopfield-like models the rank-one terms are symmetric, whereas here the right- and left-structure vectors can be chosen freely. Second, the rank-one terms are generally mutually uncorrelated, whereas here the structure vectors may overlap with each other and with the inputs. And third, the interest of this paper is not in fixed points of spontaneous activity, but in responses to external inputs and in input-output computations. While in Hopfield networks the focus is on stored patterns and network capacity, here the authors show that the full dynamical repertoire relies on the geometrical arrangement of the structure vectors, and that increasing the rank from one to two significantly expands the computational capacity.
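
The symmetry contrast is easy to state in code (reusing m, n and rng from the first sketch; the binary pattern is the textbook Hopfield choice):

```python
xi = rng.choice([-1.0, 1.0], size=N)          # stored Hopfield pattern
J_hopfield = np.outer(xi, xi) / N             # symmetric rank-one term
J_lowrank = np.outer(m, n) / N                # distinct m, n: generally asymmetric
print(np.allclose(J_hopfield, J_hopfield.T))  # True
print(np.allclose(J_lowrank, J_lowrank.T))    # False
```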

In the frameworks of ESNs and FORCE learning, randomly connected recurrent networks are trained to produce specified outputs using a feedback loop from the readout unit to the network. This feedback loop is equivalent to adding a rank-one term to the connectivity matrix, where the left-structure vector corresponds to the readout vector and the right-structure vector corresponds to the feedback weights. When the authors extended their analysis to this case, the predicted solutions matched those obtained with trained ESNs. Moreover, the correlations between the rank-one structure obtained through training and the particular realization of the random matrix are weak (they are exactly zero for ESNs), and the readout error scales as 1/\sqrt{N}.
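
The feedback-loop equivalence is a one-line identity, (u w^T)\,\phi(x) = u\,(w \cdot \phi(x)), sketched below with our own variable names (u for the feedback weights, playing the right-structure role, and w for the readout, playing the left-structure role):

```python
w = rng.standard_normal(N) / N      # readout vector (left-structure role)
u = rng.standard_normal(N)          # feedback weights (right-structure role)

x = rng.standard_normal(N)
phi = np.tanh(x)
z = w @ phi                                        # readout z = w . phi(x)
drive_feedback = g * chi @ phi + u * z             # random net plus feedback loop
drive_lowrank = (g * chi + np.outer(u, w)) @ phi   # same net with rank-one term
print(np.allclose(drive_feedback, drive_lowrank))  # True
```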

It is important to note that the class of networks proposed here lacks many biophysical constraints. Regardless, the authors show that in low-rank recurrent networks the representation of stimuli and outputs is high-dimensional, distributed and mixed, while the computations are based on emergent low-dimensional dynamics, as found in large-scale recordings of behaving animals [2]. Additionally, this class of networks has the property that stimulus onset reduces the variability of neural activity, which is also seen in experiments. Finally, the unit-rank structure inferred from computational constraints reproduces known properties of synaptic connectivity: if two neurons both strongly encode some stimulus, their reciprocal connections are stronger than expected by chance, in accord with experimental findings.

In conclusion, the authors describe in detail, using a mean-field analysis, the spontaneous and stimulus-evoked activity of networks whose connectivity combines a low-rank structured component with a random component. A key result is that low-rank structure in the connectivity induces low-dimensional dynamics in the network, a hallmark of population activity recorded in behaving animals. Additionally, they predict, from the connectivity and input structure, the low-dimensional subspace that contains the dominant part of the dynamics, and show that the dynamical repertoire increases sharply with the rank of the connectivity structure. Finally, they show how to implement context-dependent computations, a task that can be challenging in realistic neural networks.

Note: The authors have notified us that they can also implement a discrimination task that cannot be performed with a linear discriminator.
