Journal of Mathematical Neuroscience (2018) 8:8
https://doi.org/10.1186/s13408-018-0063-y

RESEARCH · Open Access

Investigating the Correlation–Firing Rate Relationship in Heterogeneous Recurrent Networks

Andrea K. Barreiro · Cheng Ly

Received: 5 January 2018 / Accepted: 21 May 2018
© The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Abstract  The structure of spiking activity in cortical networks has important implications for how the brain ultimately codes sensory signals. However, our understanding of how network and intrinsic cellular mechanisms affect spiking is still incomplete. In particular, whether cell pairs in a neural network show a positive (or no) relationship between pairwise spike count correlation and average firing rate is generally unknown. This relationship is important because it has been observed experimentally in some sensory systems, and it can enhance information in a common population code. Here we extend our prior work in developing mathematical tools to succinctly characterize the correlation and firing rate relationship in heterogeneous coupled networks. We find that very modest changes in how heterogeneous networks occupy parameter space can dramatically alter the correlation–firing rate relationship.

Keywords  Recurrent network · Spike count correlation · Linear response · Heterogeneity

Abbreviations
E: excitatory
I: inhibitory
Asyn: asynchronous
Strong Asyn: strong asynchronous

A.K. Barreiro (abarreiro@smu.edu), Department of Mathematics, Southern Methodist University, Dallas, USA
C. Ly (CLy@vcu.edu), Department of Statistical Science and Operations Research, Virginia Commonwealth University, Richmond, USA

1 Introduction

One prominent goal of theoretical neuroscience is to understand how spiking statistics of cortical networks are modulated by network attributes [9, 28, 42]. This understanding is essential to the larger question of how sensory information is encoded and transmitted, because the statistics of neural activity impact population coding [7, 15–17, 37]. One family of statistics that is implicated in nearly all population coding studies is trial-to-trial variability (and co-variability) in spike counts; there is now a rich history of studying how these statistics arise, and how they affect coding of stimuli [1, 10, 18, 25, 33]. Recent work has demonstrated that the information content of spiking neural activity depends on spike count correlations and their relationship (if any) with stimulus tuning [1, 6, 18, 25, 44].

An important relationship observed in many experimental studies is that pairwise correlations on average increase with firing rates. This has been observed in vitro [8] and in several visual areas: area MT [2], V4 [5] (when measured between cells in the same attentional state), V1 [19, 36], and notably, in ON–OFF directionally sensitive retinal ganglion cells [11, 44]. The retinal studies involved cells with a clearly identified function, and therefore allowed investigation of the coding consequences of the observed correlation–firing rate relationship. These studies found that the stimulus-dependent correlation structure observed compared favorably to a structure in which stimulus-independent correlations were matched to their (stimulus-)averaged levels.
This finding reflects a general principle articulated in other studies [18, 25]: stimulus-dependent correlations are beneficial when they serve to spread the neural response in a direction orthogonal to the signal space.

These findings thus provide strong motivation for understanding what network mechanisms can produce this positive (and perhaps beneficial) correlation–firing rate relationship. This correlation–firing rate trend has been explained theoretically in feedforward networks driven by common input [8, 26, 38]; however, many cortical networks are known to be dominated by strong recurrent activity [24, 34, 39]. On the other hand, theoretical studies of the mechanisms for correlations in recurrent networks have largely analyzed homogeneous networks (i.e., identical cells, aside from excitatory/inhibitory cell identity) [9, 13, 27, 28, 40, 41], and have not considered how correlations vary with firing rates in the presence of realistic intrinsic heterogeneity. Thus, how spike count correlations can vary across a population of heterogeneously-tuned, recurrently connected neurons is not yet well understood, despite its possible implications for coding.

In a previous paper, we addressed this gap by developing a mathematical method to compactly describe how second-order spike count statistics vary with both intrinsic and network attributes [4]. Specifically, we adapted network linear response theory [14, 27, 43] to account for heterogeneous and recurrent networks, which in turn allows us to identify important network connections that contribute to correlations via a single-cell susceptibility function [8]. Here, we will use this method to survey a broad family of recurrent networks to understand how three factors influence the relationship between correlations and firing rates: how the neurons occupy parameter space, the strength of recurrent excitation, and the strength of background noise.
This work thus provides a more complete picture of how even modest changes in important circuit parameters can alter the correlation–firing rate relationship.

2 Results

First, we summarize how a network linear response theory allows us to use the single-neuron firing rate function to approximate correlations. We then apply this theory to examine three factors that can modulate the correlation–firing rate relationship: direction in effective parameter space, strength of recurrent excitation, and strength of background noise.

2.1 Using the Single-Neuron Firing Rate Function to Characterize Correlation Susceptibility

The response of a neuron is generally a nonlinear function of model parameters. However, background noise linearizes this response, so that we can use a linear theory to describe the change in response to small changes in parameters. We assume we have some way to approximate the change in firing rate which occurs as a result of a change in parameter $X$:
$$\nu_i(t) = \nu_{i,0} + (A_{X,i} \ast X_i)(t); \tag{1}$$
$\nu_{i,0}$ is the baseline rate (when $X = 0$) and $A_{X,i}(t)$ is a susceptibility function that characterizes the firing rate response [8, 20, 41]. The parameter which is modulated has often been chosen to be a current bias $\mu$ [8, 41]; however, it could also be the mean or variance of a conductance, a time constant, or a spike generation parameter [29, 30].

In a coupled network, the parameter change $X_i$ arises from inter-neuron coupling; substituting $X_i(t) \to \sum_j (J_{X,ij} \ast \nu_j)(t)$ and moving to the spectral domain, we find
$$\bigl\langle \tilde{y}(\omega)\tilde{y}^*(\omega)\bigr\rangle = \bigl(I - \tilde{K}(\omega)\bigr)^{-1} \bigl\langle \tilde{y}^0(\omega)\tilde{y}^{0*}(\omega)\bigr\rangle \bigl(I - \tilde{K}^*(\omega)\bigr)^{-1}, \tag{2}$$
where $\tilde{y}_i = \mathcal{F}[y_i - \nu_i]$ is the Fourier transform of the mean-shifted process ($\nu_i$ is the average firing rate of cell $i$) and $\tilde{f} = \mathcal{F}[f]$ for all other quantities; $\tilde{K}_{ij}(\omega) = \tilde{A}_{X,i}(\omega)\tilde{J}_{X,ij}(\omega)$ is the interaction matrix in the frequency domain (which may depend on the parameter being varied, i.e. $X$); $\langle \tilde{y}^0(\omega)\tilde{y}^{0*}(\omega)\rangle$ is the power spectrum of the uncoupled neural response. Using the usual series expansion for $(I - \tilde{K}(\omega))^{-1}$, we see that the covariance $\tilde{C}(\omega) \equiv \langle \tilde{y}(\omega)\tilde{y}^*(\omega)\rangle$ naturally decomposes into contributions from different graph motifs:
$$\tilde{C}(\omega) = \bigl(I - \tilde{K}(\omega)\bigr)^{-1} \tilde{C}^0(\omega) \bigl(I - \tilde{K}^*(\omega)\bigr)^{-1} = \tilde{C}^0(\omega) + \tilde{K}(\omega)\tilde{C}^0(\omega) + \tilde{C}^0(\omega)\tilde{K}^*(\omega) + \tilde{K}(\omega)\tilde{C}^0(\omega)\tilde{K}^*(\omega) + \cdots. \tag{3}$$
Each instance of $\tilde{K}$ includes the susceptibility function in the spectral domain, $\tilde{A}_X(\omega)$, which modulates the effect of each directed connection by the responsiveness of the target neuron to the parameter of interest. As noted by previous authors [27], the validity of the expansion in Eq. (3) relies on the spectral radius of $\tilde{K}$, $\rho(\tilde{K})$, remaining less than one. We checked that this remains true for all networks we examined in this paper, with a maximum of $\rho(\tilde{K}) = 0.9564$.

We next justify why—at least for long-time correlations—we can estimate each of these terms using the single-neuron firing rate function. First, in the small frequency limit, $\tilde{A}_{X,i}(\omega)$ will coincide with the derivative of the firing rate with respect to the parameter:
$$\lim_{\omega \to 0} \tilde{A}_{X,i}(\omega) = \frac{d\nu_i}{dX}.$$
For the common input motif, $\tilde{K}(\omega)\tilde{C}^0(\omega)\tilde{K}^*(\omega)$, we can write
$$\bigl(\tilde{K}\tilde{C}^0\tilde{K}^*\bigr)_{ij} = \sum_{k \to i,\; k \to j} \tilde{K}_{ik}\,\tilde{C}^0_{kk}\,\tilde{K}^*_{jk} \tag{4}$$
$$= \sum_{k \to (i,j),\; k \in E} (\tilde{A}_{g_E,i}\tilde{J}_{ik})\,\tilde{C}^0_{kk}\,(\tilde{A}_{g_E,j}\tilde{J}_{jk})^* + \sum_{k \to (i,j),\; k \in I} (\tilde{A}_{g_I,i}\tilde{J}_{ik})\,\tilde{C}^0_{kk}\,(\tilde{A}_{g_I,j}\tilde{J}_{jk})^* \tag{5}$$
$$= |\tilde{J}_E|^2\, \tilde{A}_{g_E,i}\tilde{A}^*_{g_E,j} \sum_{k \to (i,j),\; k \in E} \tilde{C}^0_{kk} + |\tilde{J}_I|^2\, \tilde{A}_{g_I,i}\tilde{A}^*_{g_I,j} \sum_{k \to (i,j),\; k \in I} \tilde{C}^0_{kk}, \tag{6–7}$$
where we have separated excitatory (E) and inhibitory (I) contributions, using $g_E$ and $g_I$ to denote the mean conductance of each type, and assumed that the synaptic coupling kernels, $\tilde{J}_{jk}$, depend only on E/I cell identity (and have thus dropped the first subscript, which adds no additional information).
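The decomposition in Eq. (3) is straightforward to evaluate numerically. Below is a minimal sketch (Python with NumPy; the 3-neuron interaction matrix and rates are hypothetical, not taken from the networks in this paper) that compares the full linear-response covariance against the series truncated at second order, after checking the spectral-radius condition:

```python
import numpy as np

# Hypothetical 3-neuron interaction matrix K (zero-frequency limit);
# entries K[i, j] = A_i * J_ij are susceptibility-weighted coupling weights.
K = np.array([[0.0, 0.2, -0.3],
              [0.1, 0.0, -0.3],
              [0.1, 0.2, 0.0]])

# Uncoupled power spectra at omega -> 0: near-Poisson, so C0 ~ diag(rates).
rates = np.array([5.0, 8.0, 12.0])  # Hz, illustrative values only
C0 = np.diag(rates)

# The series expansion is only valid when the spectral radius of K is < 1.
rho = np.max(np.abs(np.linalg.eigvals(K)))
assert rho < 1.0

# Full linear-response covariance: C = (I - K)^-1 C0 (I - K^T)^-1.
I = np.eye(3)
Linv = np.linalg.inv(I - K)
C_full = Linv @ C0 @ Linv.T

# Truncated motif series up to second order:
# C ~ C0 + K C0 + C0 K^T + K C0 K^T  (first order + common input).
C_2nd = C0 + K @ C0 + C0 @ K.T + K @ C0 @ K.T

# The truncation error shrinks as rho(K) -> 0.
err = np.max(np.abs(C_full - C_2nd))
print(rho, err)
```

For the weak coupling chosen here, the second-order truncation already captures most of the covariance, mirroring the empirical dominance of low-order motifs discussed below.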
This provides a formula for the long-time covariance, but we are typically interested in the correlation; fortunately, the small frequency limit also allows us to obtain a normalized correlation measure from the cross-spectrum, because
$$\lim_{T \to \infty} \rho_{T,ij} = \lim_{T \to \infty} \frac{\mathrm{Cov}_T(n_i, n_j)}{\sqrt{\mathrm{Var}_T(n_i)\,\mathrm{Var}_T(n_j)}} = \frac{\tilde{C}_{ij}(0)}{\sqrt{\tilde{C}_{ii}(0)\,\tilde{C}_{jj}(0)}}, \tag{8}$$
where $\mathrm{Cov}_T(n_i, n_j)$ and $\mathrm{Var}_T(n_i)$ denote covariance and variance of spike counts in a time window $T$; that is, $\rho_{T,ij}$ is the Pearson correlation coefficient (which varies between −1 and 1).

Finally, letting $\omega \to 0$ and normalizing with the assumption that spiking is close to a Poisson process, as expected for an asynchronously firing network, so that $\tilde{C}^0_{kk} \approx \nu_k$:
$$\frac{(\tilde{K}\tilde{C}^0\tilde{K}^*)_{ij}}{\sqrt{\tilde{C}_{ii}\tilde{C}_{jj}}} = \underbrace{\frac{1}{\sqrt{\nu_i \nu_j}} \frac{d\nu_i}{dg_E} \frac{d\nu_j}{dg_E}}_{\text{modulation by firing rate function}} |\tilde{J}_E|^2 \underbrace{\sum_{k \to (i,j),\; k \in E} \tilde{C}^0_{kk}}_{\text{total E common input}} + \underbrace{\frac{1}{\sqrt{\nu_i \nu_j}} \frac{d\nu_i}{dg_I} \frac{d\nu_j}{dg_I}}_{\text{modulation by firing rate function}} |\tilde{J}_I|^2 \underbrace{\sum_{k \to (i,j),\; k \in I} \tilde{C}^0_{kk}}_{\text{total I common input}}. \tag{9}$$
The above expression summarizes how the impact of excitatory and inhibitory common input is modulated by the single-neuron firing rate function, $\nu$, and its derivatives.

Thus far, we have presented results previously obtained by others [27, 40, 41]. We now depart from these authors by focusing specifically on the behavior of the modulation term in Eq. (9); for simplicity, we consider the behavior of this modulating factor for two identical neurons, e.g.
$$\frac{1}{\sqrt{\nu_i \nu_j}} \frac{d\nu_i}{dg_I} \frac{d\nu_j}{dg_I} \to \frac{1}{\nu}\left(\frac{d\nu}{dg_I}\right)^2. \tag{10}$$
In principle, analogous expressions govern larger motifs, such as chains, that involve additional evaluations of $\nu$ and its derivatives; for example, excitatory length-3 chains arising from $\tilde{K}^3\tilde{C}^0$ would be:
$$\frac{(\tilde{K}^3\tilde{C}^0)_{ij}}{\sqrt{\tilde{C}_{ii}\tilde{C}_{jj}}} = \underbrace{\frac{1}{\sqrt{\nu_i \nu_j}} \sum_{\substack{l \to i,\; k \to l,\; j \to k \\ l \in E,\; k \in E}} \frac{d\nu_i}{dg_E} \frac{d\nu_l}{dg_E} \frac{d\nu_k}{dg_E}}_{\text{modulation by firing rate function}} \times\; |\tilde{J}_E|^3 \times \underbrace{\tilde{C}^0_{jj}}_{\text{all } E \to E \to E \to E \text{ paths}}. \tag{11}$$
However, we found that, for a wide range of networks, direct common input—and inhibitory common input in particular—was the dominant contributor to pairwise correlations (Fig. 6(A)).

We now examine how different network mechanisms modulate the correlation–firing rate relationship, focusing on three factors: direction in effective parameter space, strength of recurrent excitation, and strength of background noise.

2.2 Direction in Parameter Space Modulates the Correlation–Firing Rate Relationship

Suppose we have a firing rate function of one or more intrinsic parameters (for exposition purposes, assume a function of two parameters $(x_1, x_2)$), i.e.
$$\nu = F(x_1, x_2).$$
The parameters $x_i$ might include input bias, membrane time constant, ionic/synaptic reversal potentials, or a spiking threshold. Based on our arguments from the previous section, we will approximate correlation susceptibility by the quantity
$$\hat{S} = \frac{1}{F}\frac{\partial F}{\partial x},$$
where $x$ is an appropriately chosen parameter. Specifically, we will find, empirically, that the inhibitory common input is the dominant term, and therefore will use $x = g_I$, the mean inhibitory conductance. Thus, the units of $\hat{S}$ in all figures are Hz/[$g_I$], where $g_I$ is dimensionless (see Eq. (17)).

Heterogeneous firing rates can arise when each neuron occupies a different location in parameter space (i.e. a different $(x_1, x_2)$ point), thus causing both firing rate $F$ and susceptibility $\hat{S}$ to vary from neuron to neuron. We now ask: how does $\hat{S}$ change with firing rate? Note that this question, as stated, is ill-posed because $F$ and $\hat{S}$ are both functions of two parameters ($x_1$ and $x_2$): there is not necessarily a one-to-one or even a functional relationship between these quantities.
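In practice, $\hat{S}$ can be tabulated by finite differences. The sketch below (Python with NumPy; the sigmoidal rate surface is a hypothetical stand-in for the simulated firing rate function, chosen only so the code is self-contained) evaluates $F$ and $\hat{S} = (1/F)\,\partial F/\partial g_I$ at a point:

```python
import numpy as np

# Hypothetical firing rate surface F(theta, gI): a smooth sigmoid standing in
# for the simulated single-neuron rate function; the true F in the paper is
# computed numerically from the neuron model.
def F(theta, gI):
    return 30.0 / (1.0 + np.exp(4.0 * (theta + 0.5 * gI - 1.5)))

def S_hat(theta, gI, h=1e-5):
    # S = (1/F) dF/dgI via a central finite difference in gI.
    dF_dgI = (F(theta, gI + h) - F(theta, gI - h)) / (2.0 * h)
    return dF_dgI / F(theta, gI)

# Evaluate at one hypothetical (theta, gI) point.
rate = F(1.0, 1.0)   # 15 Hz for this surface
S = S_hat(1.0, 1.0)  # negative: more inhibition lowers the rate
print(rate, S)
```

For susceptibility with respect to inhibition, $\hat{S}$ is negative wherever the rate decreases with $g_I$; the question addressed next is how its magnitude co-varies with $F$ across a population.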
Suppose that, locally, a population of neurons can be described as lying along a parameterized path in the $(x_1, x_2)$ plane: i.e., $(x_1(s), x_2(s))$, for $s_{\min} < s < s_{\max}$. Then we can compute the directional derivative:
$$\frac{d\hat{S}}{dF} = \frac{d\hat{S}/ds}{dF/ds} = \frac{\nabla \hat{S} \cdot d\mathbf{x}}{\nabla F \cdot d\mathbf{x}}, \tag{12}$$
where we have expressed the directional derivatives in terms of the local direction of the path, i.e. $d\mathbf{x} = (\frac{dx_1}{ds}, \frac{dx_2}{ds})$, and the gradients of $F$ and $\hat{S}$. However, this depends not only on the functions $F$ and $\hat{S}$, but also on the direction $d\mathbf{x}$.

To gain some intuition for why (and when) direction in $(x_1, x_2)$ space matters, we consider the networks studied in [4]. Previously, we simplified the firing rate function by setting all but two parameters (inhibitory conductance, $g_{I,i}$, and threshold, $\theta_i$) to their population average; i.e.
$$F(g_{I,i}, \theta_i) \equiv f\bigl(\langle g_{E,i}\rangle_p, \langle \sigma_{g_E,i}\rangle_p, g_{I,i}, \langle \sigma_{g_I,i}\rangle_p, \langle \sigma_i\rangle_p, \theta_i\bigr), \tag{13}$$
where $\langle \cdot \rangle_p$ denotes the population average. In Figs. 1(A) and (B), we show $F$ and $\hat{S} \equiv (\frac{\partial F}{\partial x})/F$ thus computed, for the asynchronous network studied in that paper. We then surveyed a sequence of diagonal paths through the center (i.e., midpoint of the ranges of threshold and inhibitory conductance), with each path plotted in a different color. In Fig. 1(C) we plot firing rate (solid lines) and susceptibility (dashed-dotted lines) along each curve. Finally, in Fig. 1(D) we plot the susceptibility versus the firing rate, along each path.

Fig. 1  Firing rate and susceptibility ($\hat{S}$), computed for the asynchronous (Asyn) network studied in [4], with all other parameters besides threshold $\theta$ and mean inhibitory conductance $g_I$ set to their average values (thereby leaving a two-dimensional parameter space). Here, we traverse the plane on straight-line paths defined by their angle through the origin. Although the units of $g_I$ are dimensionless, they are shown as the units for $\hat{S}$ for completeness. The units of $\theta$ (i.e., voltage) are also scaled to be dimensionless.

This last panel shows that there is striking variability in the susceptibility–firing rate relationship, depending on the direction the path takes through the center of the $(\theta, g_I)$ plane. Any given firing rate (below ∼15 Hz) is consistent with either an increase or a decrease of susceptibility. All curves go through a single point in the $(\theta, g_I)$ plane, and therefore a single point in the $(F, \hat{S})$ plane; here the direction of the $\hat{S}$–$F$ relationship (i.e., whether $\hat{S}$ increases or decreases with $F$) can change rapidly with angle, as for the red, orange, and yellow curves.

We then extended these observations by traversing the phase space with two additional families of straight-line paths (Fig. 2); the radial paths are repeated in Figs. 2(A) and (B). For horizontal (Figs. 2(C) and (D)) and vertical (Figs. 2(E) and (F)) families, paths no longer intersect at a single point; nevertheless, a given firing rate is consistent with both increases and decreases in susceptibility. This is pronounced in Fig. 2(F), where at 15 Hz susceptibility decreases with firing rate on the orange, yellow, and light green paths, but increases on the light blue, medium blue, royal blue, and indigo paths.

Fig. 2  Susceptibility ($\hat{S}$) vs. firing rate, computed for the asynchronous network studied in [4], with all other parameters besides threshold $\theta$ and mean inhibitory conductance $g_I$ set to their average values, thereby leaving a two-dimensional parameter space (the other (averaged) parameters are $\langle \sigma_{g_E,i}\rangle_p = 0.6602$ (see Eq. (30)), $\langle \sigma_{g_I,i}\rangle_p = 0.0026$ (see Eq. (29)), $\langle \sigma_i\rangle_p = 6.3246$, $\langle g_{E,i}\rangle_p = 0.0053$ (see Eq. (17))). Here, we traverse the plane on three different families of straight-line paths. The dashed lines show paths visualized in [4]: $\theta = 1$ (vertical, aqua blue) and $g_I = 1.83$ (horizontal, orange/yellow).

We performed the same computations on the strong asynchronous network studied in [4], which has larger excitatory coupling strength; results are shown in Fig. 3. A given firing rate could be consistent with either an increase or a decrease of susceptibility; we see this in the radial paths (Figs. 3(A) and (B)) and horizontal paths (Figs. 3(C) and (D)) for rates between 5–10 Hz. However, vertical paths (Figs. 3(E) and (F)) always have susceptibility increasing with firing rate (except perhaps at the highest firing rates). As in the asynchronous network, the direction of susceptibility change (increase vs. decrease) can change rapidly with angle for paths that intersect a single point; see Figs. 3(A)–(B), red to orange to yellow.

Fig. 3  Susceptibility ($\hat{S}$) vs. firing rate, computed for the strong asynchronous (Strong Asyn) network studied in [4], with all other parameters besides threshold $\theta$ and mean inhibitory conductance $g_I$ set to their average values, thereby leaving a two-dimensional parameter space (the other (averaged) parameters are $\langle \sigma_{g_E,i}\rangle_p = 0.5884$ (see Eq. (30)), $\langle \sigma_{g_I,i}\rangle_p = 0.0378$ (see Eq. (29)), $\langle \sigma_i\rangle_p = 4.7434$, $\langle g_{E,i}\rangle_p = 0.0611$ (see Eq. (17))). Here, we traverse the plane on three different families of straight-line paths. Dashed lines show paths visualized in [4]: $\theta = 1$ (vertical, aqua blue) and $g_I = 1.46$ (horizontal, yellow/green).

2.3 Quantifying the Likelihood of a Positive Correlation–Firing Rate Relationship

In the previous section, we saw that the path a network occupies in effective parameter space can have a dramatic effect on the correlation–firing rate relationship; we next seek to formalize these observations. Let $d\mathbf{x}$ be the local direction in which we want to consider the behavior of $F$ and $\hat{S}$. If $\nabla \hat{S} \cdot d\mathbf{x}$ and $\nabla F \cdot d\mathbf{x}$ are of the same sign, then $\hat{S}$ increases with $F$; if $\nabla \hat{S} \cdot d\mathbf{x}$ and $\nabla F \cdot d\mathbf{x}$ have opposite signs, then $\hat{S}$ decreases with $F$.
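The direction dependence in Eq. (12) is easy to demonstrate numerically. In this sketch (Python with NumPy; the rate surface is a hypothetical smooth surrogate, deliberately chosen so that $\nabla F$ and $\nabla \hat{S}$ are not parallel), the sign of $d\hat{S}/dF$ flips as the path direction rotates through the same point:

```python
import numpy as np

# Hypothetical rate surface on the (theta, gI) plane, chosen so that the
# gradients of F and S are NOT parallel (otherwise direction would not matter).
def F(x):
    theta, gI = x
    return 30.0 / (1.0 + np.exp(4.0 * (theta - 1.0) + gI ** 2))

def S(x):
    # Susceptibility S = (1/F) dF/dgI, by central finite difference in gI.
    h = 1e-5
    step = np.array([0.0, h])
    return (F(x + step) - F(x - step)) / (2 * h) / F(x)

def grad(f, x, h=1e-4):
    # Two-dimensional gradient by central finite differences.
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x0 = np.array([1.0, 1.0])  # hypothetical (theta, gI) point
gF = grad(F, x0)
gS = grad(S, x0)

# dS/dF along direction dx = (cos a, sin a): ratio of directional derivatives.
for a in np.deg2rad([0.0, 60.0, 120.0]):
    dx = np.array([np.cos(a), np.sin(a)])
    print(round(np.rad2deg(a)), (gS @ dx) / (gF @ dx))
```

At this point on this surrogate surface, $d\hat{S}/dF$ is positive along the first two directions but negative along the third, the same qualitative behavior seen across the radial paths of Figs. 1–3.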
The more aligned $\nabla \hat{S}$ and $\nabla F$, the more paths lead to $\frac{d\hat{S}}{dF} > 0$; the more anti-aligned $\nabla \hat{S}$ and $\nabla F$, the more paths lead to $\frac{d\hat{S}}{dF} < 0$. For example, consider the limiting cases: (i) if $\nabla \hat{S}$ and $\nabla F$ point exactly in the same direction, then $\nabla \hat{S} \cdot d\mathbf{x}$ and $\nabla F \cdot d\mathbf{x}$ are always same-signed for any $d\mathbf{x}$; (ii) if $\nabla \hat{S}$ and $\nabla F$ point in opposite directions, then $\nabla \hat{S} \cdot d\mathbf{x}$ and $\nabla F \cdot d\mathbf{x}$ are never same-signed. Figure 4 illustrates how the alignment of these two quantities alters the region where correlation increases with firing rate.

Fig. 4  Where $\nabla \hat{S}$ and $\nabla F$ are similarly aligned, $\hat{S}$ and $F$ will both increase along most paths through that point. In each panel, gray shows the part of the $\mathbf{x}$-plane where $\frac{d\hat{S}}{dF} = \frac{\nabla \hat{S} \cdot d\mathbf{x}}{\nabla F \cdot d\mathbf{x}} > 0$, black where $\frac{d\hat{S}}{dF} < 0$. From left to right: $\nabla \hat{S}$ and $\nabla F$ positively aligned; $\nabla \hat{S}$ and $\nabla F$ orthogonal; $\nabla \hat{S}$ and $\nabla F$ negatively aligned.

To examine the utility of this idea, we reconsider the radial paths along which we previously computed susceptibility. The paths all go through a single point, so we can check $\mathrm{sign}((\nabla \hat{S} \cdot d\mathbf{x})(\nabla F \cdot d\mathbf{x}))$ for all directions through this point. In Figs. 5(A) and (C), white indicates positive and black negative. Figures 5(B) and (D) repeat the susceptibility–firing rate curves from Fig. 2(B) and Fig. 3(B). For the asynchronous network (Fig. 5(A)), the red, indigo, and royal blue paths are predicted to have negative $d\hat{S}/dF$, as we can confirm in Fig. 5(B). Yellow, green, and light blue curves are predicted to have positive $d\hat{S}/dF$. The orange curve is close to $dF = 0$; the true blue curve is close to $d\hat{S} = 0$. For the strong asynchronous network, only the red curve is in the negative region of Fig. 5(C); this is also the only path with $d\hat{S}/dF < 0$ in Fig. 5(D).

Fig. 5  Using a single number to predict $d\hat{S}/dF$. (A) Paths through parameter space for the asynchronous network: white shows the part of the $\mathbf{x}$-plane where $\frac{d\hat{S}}{dF} = \frac{\nabla \hat{S} \cdot d\mathbf{x}}{\nabla F \cdot d\mathbf{x}} > 0$, black where $\frac{d\hat{S}}{dF} < 0$, where $\nabla \hat{S}$ and $\nabla F$ are computed at the center of the diagram. (B) Correlation susceptibility vs. firing rate, for each path illustrated in (A). (C)–(D) As in (A)–(B), but for the strong asynchronous network.

Of course, this prediction only applies to portions of the path near the point at which we computed the gradients; away from this point, gradients will change along with the direction of the $\hat{S}$ vs. $F$ curve. For example, the royal blue curve in Fig. 5(B) increases with firing rate for small firing rates, and the light blue, true blue, and royal blue curves in Fig. 5(D) decrease with firing rate (for large firing rates).

This motivates a direction-independent measure to quantify the fraction of paths that lead to an increase of correlation with firing rate. This is given exactly in terms of the angle between $\nabla \hat{S}$ and $\nabla F$:
$$\cos \theta = \frac{\nabla \hat{S} \cdot \nabla F}{\|\nabla \hat{S}\|\,\|\nabla F\|}, \tag{14}$$
and in particular the fraction of $d\mathbf{x}$ directions in which $\hat{S}$ increases with $F$ is given by
$$\frac{1}{\pi}\left(\pi - \cos^{-1}\left(\frac{\nabla \hat{S} \cdot \nabla F}{\|\nabla \hat{S}\|\,\|\nabla F\|}\right)\right). \tag{15}$$
Because $\cos^{-1}$ has range $[0, \pi]$, this quantity varies between 0 and 1. The more aligned $\nabla \hat{S}$ and $\nabla F$, the more paths lead to $\frac{d\hat{S}}{dF} > 0$; the more anti-aligned $\nabla \hat{S}$ and $\nabla F$, the more paths lead to $\frac{d\hat{S}}{dF} < 0$.

2.4 Strength of Recurrent Excitation Modulates the Correlation–Firing Rate Relationship

Our use of inhibitory susceptibility (i.e. Eq. (10)) to characterize the relationship between correlations and firing rates relied on intermediate assumptions, specifically:
• Second-order motifs dominate pairwise correlations.
• Inhibitory common input is the dominant second-order motif.
• Asynchronous spiking assumption: $\mathrm{Var}_T(n_i) = T\nu_i \Rightarrow \tilde{C}_{ii} = \nu_i$.

Here, we check that these conditions are still satisfied for a range of neural network models. In [4], we examined two spiking regimes achieved by varying the strength of excitation: both recurrent excitation $W_{EE}$ and excitatory input into the inhibitory population $W_{IE}$.
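Looking back at Sect. 2.3 for a moment, the direction-fraction measure of Eq. (15) reduces to one line of code. This sanity-check sketch (Python with NumPy; the gradient vectors are hypothetical) recovers the limiting cases: fully aligned gradients give fraction 1, opposed gradients give 0, and orthogonal gradients give 1/2:

```python
import numpy as np

def fraction_increasing(gradS, gradF):
    # Eq. (15): fraction of directions dx in which S increases with F,
    # i.e. in which (gradS . dx) and (gradF . dx) share a sign.
    cos_theta = np.dot(gradS, gradF) / (np.linalg.norm(gradS) * np.linalg.norm(gradF))
    # Clip guards against round-off pushing cos_theta outside [-1, 1].
    return 1.0 - np.arccos(np.clip(cos_theta, -1.0, 1.0)) / np.pi

print(fraction_increasing([1.0, 0.0], [2.0, 0.0]))   # aligned    -> 1.0
print(fraction_increasing([1.0, 0.0], [-1.0, 0.0]))  # opposed    -> 0.0
print(fraction_increasing([1.0, 0.0], [0.0, 3.0]))   # orthogonal -> 0.5
```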
We next applied our theory to a dense grid of parameters (different networks), each identified by its location on the $(W_{EE}, W_{IE})$ plane. Both excitatory strengths were varied from a minimum of their values for the asynchronous network ($W_{EE} = 0.5$ and $W_{IE} = 5$) to a maximum of 1.4 times their value in the strong asynchronous network (to $W_{EE} = 12.6$ and $W_{IE} = 11.2$). The parameter $W_{XY}$ is a dimensionless scale factor (see Eqs. (17)–(20)); for example, in an all-to-all homogeneous network, $W_{XY} = 1$ is when the presynaptic input is exactly the average population firing rate (filtered by the synapse).

To quantify the importance of paths of different length through the network, we can define the contribution at any specific order $k$ by using the interaction matrix $\tilde{K}$:
$$R^k_{ij} = \frac{\sum_{l=0}^{k} \bigl(\tilde{K}^l \tilde{C}^0 (\tilde{K}^*)^{k-l}\bigr)_{ij}}{\sqrt{\tilde{C}_{ii}\tilde{C}_{jj}}}. \tag{16}$$
Then we regressed the total correlation ($\tilde{C}_{ij}/\sqrt{\tilde{C}_{ii}\tilde{C}_{jj}}$) against the contributions at each specific order ($R^k_{ij}$); the corresponding fraction of variance explained ($R^2$ value) gives a quantitative measure of how well the total correlation can be predicted from each term. Similarly, we quantify the importance of specific second-order motif types by regressing the total contribution from second-order motifs (the order-$k{=}2$ term $R^2_{ij}$) against the contribution from specific motifs.

We performed this computation for each network (a total of 225 networks), and summarize the results in Fig. 6; to present the data compactly, we collapse $R^2$ values (all values are $\in [0, 1]$) for a fixed $W_{EE}$ and varying $W_{IE}$ by showing mean and standard deviation only (standard deviation as error bars). Second-order contributions dominate for small to moderate $W_{EE}$ (Fig. 6(A)); other orders only compete when $W_{EE}$ has already exceeded the level of the strong asynchronous network (where the network is close to the edge of instability, and at the limit of validity for mean-field, asynchronous assumptions).

To illustrate the meaning of this statistic, in Fig. 6(B) we show contributions up to fourth order ($R^k_{ij}$, for $k = 1, \ldots, 4$) vs. total correlation ($\tilde{C}_{ij}/\sqrt{\tilde{C}_{ii}\tilde{C}_{jj}}$) for all E–E cell pairs in a network, for two individual networks included in the summary panel. Note that the second-order contributions cluster near the unity line in both cases, indicating that second-order contributions are the best predictor of total correlations, consistent with the $R^2$ values stated.

Of the second-order motifs, inhibitory common input is the dominant contribution at any value of $W_{EE}$, except perhaps the last, at which point the excitatory population has unrealistically high firing rates (Fig. 6(C)).

Fig. 6  Second-order motifs dominate pairwise correlations in a wide range of networks; inhibitory common input is the dominant second-order motif. (A) Fraction of variance explained ($R^2$) from linear regressions of total correlation ($\tilde{C}_{ij}/\sqrt{\tilde{C}_{ii}\tilde{C}_{jj}}$) against contributions from first-order (blue), second-order (red), third-order (green), and fourth-order (magenta) motifs. (B) Contributions up to fourth order ($R^k_{ij}$, for $k = 1, \ldots, 4$) vs. total correlation ($\tilde{C}_{ij}/\sqrt{\tilde{C}_{ii}\tilde{C}_{jj}}$) for all E–E cell pairs in a network, for two individual networks included in panel (A). (C) Fraction of variance explained ($R^2$) from linear regressions of the contributions to pairwise correlations from second-order motifs ($R^2_{ij}$) against contributions from the distinct types of second-order motifs: inhibitory common input (magenta), excitatory common input (red), decorrelating chains (green), and correlating chains (blue). In both (A) and (C), each data point represents the mean value from 15 networks with $W_{IE}$ between 5 and 11.2; error bars show the standard deviation across these values.

To summarize, we have confirmed that, far from being limited to a few examples, the conditions we identified in [4] as allowing us to focus on susceptibility to inhibition to explain pairwise correlations appear to hold up for a variety of networks.

We note that our findings about the relative magnitudes of terms of different orders are purely empirical; that is, they are based on concrete numerical observations, rather than a priori estimates. Thus, they should be reassessed if anything about the parameters or network connectivity is changed. In particular, a likely reason for the relative weakness of first-order terms is that in these networks excitation is almost always weaker than inhibition; while first-order terms are always excitatory, second-order terms can involve excitation and/or (comparatively strong) inhibition.

Having confirmed the validity of our approach, we computed the susceptibility with respect to inhibition for each of the networks examined in the previous section (because of instability, we restricted these computations to excitatory strengths up to 1.2 times the values used in [4]). Because background noise values varied slightly, we actually examined two network families: one in which we chose σ values as in the asynchronous network, another in which we chose σ values as in the strong asynchronous network. Also as in [4], we estimated susceptibility using network-averaged values of $g_E$, $g_I$, $\sigma_{g_E}$, and $\sigma_{g_I}$.

Surprisingly, the difference in background noise dwarfed the effect of the recurrent excitation strengths, at least in assessing the relationship between $\hat{S}$ and firing rate. In Fig. 7, we show $\hat{S}$ vs. $F$ curves, for a set of representative networks, on a single plot.
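The order-by-order bookkeeping behind Fig. 6 can be reproduced in a few lines. Below is a sketch (Python with NumPy; the sparse random interaction matrix is hypothetical and has none of the E/I structure of the actual networks, so which order dominates here need not match Fig. 6 — the point is only the computation of Eq. (16) and the $R^2$ regression):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse interaction matrix K for a small random network
# (no E/I structure; weights kept weak so the motif series converges).
n = 40
K = (rng.random((n, n)) < 0.2) * rng.normal(0.0, 0.08, size=(n, n))
np.fill_diagonal(K, 0.0)
assert np.max(np.abs(np.linalg.eigvals(K))) < 1.0  # spectral radius check

C0 = np.diag(rng.uniform(2.0, 20.0, n))  # near-Poisson uncoupled spectra
Linv = np.linalg.inv(np.eye(n) - K)
C = Linv @ C0 @ Linv.T                   # full linear-response covariance

norm = np.sqrt(np.outer(np.diag(C), np.diag(C)))
iu = np.triu_indices(n, k=1)
total = (C / norm)[iu]                   # total pairwise correlations

def order_k(K, C0, k):
    # Numerator of Eq. (16): sum over l of K^l C0 (K^T)^(k-l).
    return sum(np.linalg.matrix_power(K, l) @ C0 @ np.linalg.matrix_power(K.T, k - l)
               for l in range(k + 1))

def r_squared(x, y):
    # Fraction of variance in y explained by a linear regression on x.
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

# R^2 for predicting total correlation from each order-k contribution.
r2 = {k: r_squared((order_k(K, C0, k) / norm)[iu], total) for k in (1, 2, 3)}
print(r2)
```

In the paper's networks this regression is what identifies second-order motifs — and inhibitory common input in particular — as the dominant predictor.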
Color indicates $W_{EE}$ (shade of blue) and $W_{IE}$ (shade of red). Values of $W_{EE}$ are 0.50 (as in the asynchronous network from [4]), 6.45, 9 (as in the strong asynchronous network from [4]), and 10.7; values of $W_{IE}$ are 5 (as in the asynchronous network from [4]), 7.1, 8 (as in the strong asynchronous network from [4]), and 8.6. For reference, $W_{XY} = 1$ is when the presynaptic input is exactly the average population firing rate (filtered by the synapse) in an all-to-all homogeneous network, so these coupling strengths vary significantly. We see that, for the full range of recurrent excitation values, $\hat{S}$ vs. $F$ curves in Fig. 7(A) are mostly decreasing, while $\hat{S}$ vs. $F$ curves in Fig. 7(B) are mostly increasing for low $F$, and saturate for high $F$.

Fig. 7  Firing rate vs. susceptibility ($\hat{S}$), computed for a family of networks generated by modulating the strength of excitation ($W_{EE}$ and $W_{IE}$). (A) Background noise values $\sigma_E$, $\sigma_I$ set as in the asynchronous network from [4]. (B) Background noise values $\sigma_E$, $\sigma_I$ set as in the strong asynchronous network from [4]. Sixteen curves are chosen, for a survey of the range of networks achievable by varying the strength of recurrent excitation. Values of $W_{EE}$ are 0.50 (as in the asynchronous network from [4]), 6.45, 9 (as in the strong asynchronous network from [4]), and 10.7. Values of $W_{IE}$ are 5 (as in the asynchronous network from [4]), 7.1, 8 (as in the strong asynchronous network from [4]), and 8.6. Again, $W_{XY} = 1$ is when the presynaptic input is exactly the average population firing rate (filtered by the synapse) in an all-to-all homogeneous network, so the coupling strengths vary significantly.

2.5 Background Noise Modulates the Correlation–Firing Rate Relationship

To further explore the role of background noise, we repeated the susceptibility calculation on additional families of networks, now allowing the background noise strengths $\sigma_E$ and $\sigma_I$ (i.e. to the excitatory and inhibitory populations) to vary separately. Input to excitatory cells was varied between $\sigma_E = 1.5$ and 2.5; input to inhibitory cells was varied between $\sigma_I = 1.5$ and 3. These noise values are relatively large; see Eq. (17) and note that voltage is of order 1. In Fig. 8(A) we display susceptibility vs. firing rate curves for 12 $(\sigma_E, \sigma_I)$ pairs; asterisks indicate $\sigma_E$ and $\sigma_I$ by color (green intensity for $\sigma_E$ and blue intensity for $\sigma_I$). Within each panel, curves are colored as in Fig. 7: red intensity for $W_{IE}$ and blue intensity for $W_{EE}$.

Surprisingly, most network families (i.e. $(\sigma_E, \sigma_I)$ pairs) were associated with a decrease in correlation with firing rate. The exceptions are (0.15, 0.25) (as in the strong asynchronous network in [4]) and (0.15, 0.3); this latter pair was most robustly associated with a positive correlation–firing rate relationship. Furthermore, the shape of the susceptibility–firing rate curves did not appear to vary much with the strength of recurrent excitation (i.e., curves within each panel are similar).

Fig. 8  The strength of background noise modulates the correlation–firing rate relationship. (A) Each panel shows firing rate vs. susceptibility ($\hat{S}$), computed for a family of networks generated by modulating the strength of excitation ($W_{EE}$ and $W_{IE}$) with various background noise levels (see Eq. (17) for the definitions of $\sigma_E$ and $\sigma_I$). (B) Population-averaged effective parameters $g_I$ and $E_{rev}$, for each network displayed in (A); see Eq. (28) for $E_{rev}$.

We next sought to investigate possible mechanisms for a positive correlation–firing rate relationship, by examining the effective parameters that govern the neural response: in essence, the network's "operating point" (see Eq. (26)).
Possible choices include ⟨g_E⟩, ⟨g_I⟩, σ_{g_E}, σ_{g_I}, and the effective reversal potential E_rev; we found σ_{g_E} and σ_{g_I} to be largely functions of ⟨g_E⟩ and ⟨g_I⟩, while E_rev has a (nonlinear) functional relationship with ⟨g_E⟩ and ⟨g_I⟩. Thus two parameters suffice, and here we chose to use ⟨g_I⟩ and E_rev. In Fig. 8(B), we plot average parameter values for each curve, color-coded by (σ_E, σ_I). Any given color is consistent with a relatively tight range of g_I and a (comparatively) broad range of E_rev. As σ_I increases (increasing blue intensity), the inhibitory conductance g_I increases and the reversal potential E_rev decreases. However, it was not apparent that any particular region in (g_I, E_rev) parameter space was associated with a positive correlation–firing rate relationship, in that the values of g_I and E_rev that supported a positive relationship supported negative relationships as well.

3 Discussion

In this paper, we showed that using a single-cell firing rate function to examine the relationship between correlations and firing rates is feasible for a wide range of heterogeneous, recurrent networks. We focused on three factors that can modulate the correlation–firing rate relationship: how the network occupies effective parameter space, the strength of recurrent excitation, and the strength of background noise. Although there are many sets of parameters known to vary within a heterogeneous network of neurons, we have already demonstrated vastly different correlation–firing rate relationships with our methods, with a theory that can be readily applied to other model networks.

One possible application of this work is in designing neural networks for computational experimentation; just as modelers now modify cortical networks to obey experimental constraints on firing rates [3, 35], we could also include a constraint on the desired correlation–firing rate relationship.
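Screening candidate networks against such a constraint amounts to sweeping a parameter grid and keeping the configurations whose susceptibility–firing rate slope has the desired sign. The sketch below is ours: `toy_evaluate` is a stand-in for the self-consistent linear response calculation of Sect. 4.2, and all names and numbers are hypothetical:

```python
import numpy as np

def slope_sign(rates, susceptibilities):
    """Sign of the least-squares slope of susceptibility vs. firing rate."""
    slope = np.polyfit(rates, susceptibilities, 1)[0]
    return np.sign(slope)

def screen_grid(param_grid, evaluate):
    """Keep parameter sets with a positive correlation-firing rate slope.

    `evaluate(params)` must return (rates, susceptibilities) for the
    heterogeneous network defined by `params`; here it stands in for the
    linear response machinery of Sect. 4.2.
    """
    return [p for p in param_grid if slope_sign(*evaluate(p)) > 0]

# Toy stand-in: susceptibility increases with rate iff W_EE exceeds a cutoff.
def toy_evaluate(params):
    W_EE, W_IE = params
    rates = np.linspace(1.0, 20.0, 10)
    sus = (0.01 * rates if W_EE > 5 else -0.01 * rates) + 0.1
    return rates, sus

grid = [(W_EE, W_IE) for W_EE in (0.5, 6.45, 9.0) for W_IE in (5.0, 8.0)]
positive = screen_grid(grid, toy_evaluate)
```

Because each evaluation is a fixed-point solve rather than a Monte Carlo simulation, a sweep of this kind scales comfortably to the thousands of networks considered below.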
Here we showed that we can quickly assess a wide range of possible network configurations for a positive correlation–firing rate relationship: in Sect. 2.5, for example, we performed calculations on 15 × 15 × 12 = 2700 heterogeneous networks, with a nominal amount of computing time.

One surprising finding in our computations was the relative insensitivity of the slope of the correlation–firing rate relationship to recurrent excitation (W_EE, W_IE), as demonstrated in Figs. 7 and 8. This is striking in contrast to the strong sensitivity on display in Figs. 2(B) and 3(B). This difference is explained as follows: in every case where we computed the susceptibility for a self-consistent network (i.e. a solution of Eqs. (26)–(30) and (32)–(33)), the source of heterogeneous firing rates was neural excitability, set via a spiking threshold θ_i. The resulting effective parameters, such as the inhibitory conductance g_I, did not deviate strongly from their population mean values. In essence, all of these networks took a horizontal path through (θ, g_I) parameter space, as in Figs. 2(C), (D) and Figs. 3(C), (D). If we were to generate networks in which heterogeneity arises from another source, such as the strength or frequency of inhibitory connections [23], we might see different results. We look forward to exploring this in future work.

A priori, there is no reason to expect that the correlation–firing rate relationship in these recurrent networks can be simplified to a feedforward motif based on shared inhibitory input; this was purely an empirical observation (see Fig. 6(B)). We remark that others have shown that, in balanced networks, the effective input correlation can be canceled so that the average input (and thus output) correlation is near zero [28, 40], in contrast to some of the models considered here (i.e., the strong asynchronous regime with more net excitation).
The conditions for correlation cancellation in these model networks are beyond the scope of this study, but we note that others have shown that correlation cancellation does not always hold ([21, 22], via altering connection probabilities). The studies that demonstrate correlation cancellation often have inhibitory synaptic time scales that are faster than (or equal to) the excitatory ones, τ_I ≤ τ_E [21, 28, 32] ([40] used current-based instantaneous synapses or τ_I = τ_E = 10 ms), while in our networks the inhibitory synapses have longer time scales (Table 1). Note that Fig. S4 of [28] shows that effectively zero input correlation does not hold as the inhibitory time scales increase beyond the excitatory time scales. Finally, system size is another key parameter that could certainly affect the magnitude of the recurrent feedback. In contrast to [28, 40], we did not account for how system size would affect correlation cancellation in these heterogeneous networks.

Although affirmative answers to whether correlations increase with firing rate in experiments were cited in the Introduction [2, 5, 8, 11, 19, 36, 44], we also note that many experiments have shown that the average correlation across a population can decrease with firing rate when a global state changes or a stimulus is presented. A recent review [9] shows that stimulus-induced decorrelation (with increased firing rate) occurs in a variety of brain regions and animals. This is slightly different from the situation we examine here, where we consider the relationship between correlations and firing rates within a stimulus condition. Regardless, the fact that the relationship between correlation and firing rate is not obvious points to the continued need for theoretical studies into the mechanisms of spike statistics modulation.
Finally, our finding that correlations often decrease, rather than increase, with firing rate stands in apparent contradiction to earlier work on feedforward networks [8, 38]. On closer inspection, we can identify several reasons why our results differ: with conductance inputs (rather than currents) we have a quantitatively different relationship between input parameters and firing rates; furthermore, with more adjustable single-neuron parameters, the same sets of firing rates may be observed with single-neuron parameters set in different ways. While the current clamp experiments described in [8] found a consistent increase of correlations with firing rates, we hypothesize that parallel dynamic clamp experiments, in which pairwise correlations arise from common inhibitory input, would in fact show a decrease with firing rate. However, we also predict that whether an increase or decrease with firing rate is observed would depend on whether firing rates are modulated by varying the level of inhibitory input, or by otherwise varying the excitability of the cells (perhaps pharmacologically).

4 Methods

4.1 Neuron Model and Network Setup

We considered randomly connected networks of excitatory and inhibitory neurons. Each cell is a linear integrate-and-fire model with second-order alpha-conductances; i.e., the membrane voltage v_i is modeled with a stochastic differential equation, as long as the voltage is below the threshold θ_i:

  \tau_m \frac{dv_i}{dt} = -v_i - g_{E,i}(t)(v_i - E_E) - g_{I,i}(t)(v_i - E_I) + \sigma_i \sqrt{\tau_m}\, \xi_i(t).   (17)

Table 1 Intrinsic parameters that are fixed throughout

  Parameter:  τ_m    E_E   E_I    τ_ref   α_E   α_I   τ_{r,E}   τ_{d,E}   τ_{r,I}   τ_{d,I}
  Value:      20 ms  6.5   −0.5   2 ms    1     2     1 ms      5 ms      2 ms      10 ms

Fixed parameters for model networks; see Eqs. (17)–(20). All are dimensionless except the time scales.
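A minimal Euler–Maruyama discretization of Eq. (17) illustrates the model; the sketch below is ours, with the conductances frozen at constant values standing in for the synaptically driven g_{E,i}(t) and g_{I,i}(t) of Eqs. (19)–(20), and with the threshold/reset rule of Eq. (18). Parameter defaults follow Table 1, while the conductance and noise values are illustrative only:

```python
import numpy as np

def simulate_lif(T=2000.0, dt=0.01, tau_m=20.0, theta=1.0, tau_ref=2.0,
                 gE=0.3, gI=0.2, E_E=6.5, E_I=-0.5, sigma=0.7, seed=0):
    """Euler-Maruyama integration of the conductance-based LIF, Eq. (17).

    Time is in ms. Conductances are held constant here; in the full model
    they follow Eqs. (19)-(20). Returns the spike times.
    """
    rng = np.random.default_rng(seed)
    v, refrac, spikes = 0.0, 0.0, []
    sq_dt = np.sqrt(dt)
    for k in range(int(T / dt)):
        if refrac > 0.0:                 # refractory period: hold at reset
            refrac -= dt
            continue
        drift = -v - gE * (v - E_E) - gI * (v - E_I)
        v += (dt / tau_m) * drift + (sigma / np.sqrt(tau_m)) * sq_dt * rng.standard_normal()
        if v >= theta:                   # spike: record and reset, Eq. (18)
            spikes.append(k * dt)
            v, refrac = 0.0, tau_ref
    return np.array(spikes)

spikes = simulate_lif()
rate_hz = len(spikes) / 2.0  # T = 2000 ms = 2 s of simulated time
```

With these illustrative values the mean drive is superthreshold, so the cell fires tonically at a few tens of Hz with noise-jittered interspike intervals.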
When v_i reaches θ_i, a spike is recorded and the voltage is reset to 0 following a refractory period:

  v_i(t) \geq \theta_i \;\Rightarrow\; v_i(t + \tau_{ref}) = 0.   (18)

Each neuron receives Gaussian white background noise with magnitude σ_i depending only on the cell type; that is, ⟨ξ_i(t)⟩ = 0 and ⟨ξ_i(t) ξ_i(t+s)⟩ = δ(s). The membrane time constant, τ_m, and the excitatory and inhibitory synaptic reversal potentials, E_E and E_I, are the same for every cell in the network (see Table 1). The thresholds θ_i are a significant source of heterogeneity, and they are selected from a log-normal distribution with mean 1 and variance e^{0.2} − 1; since the system size is moderate, the θ_i's were set to have C.D.F. (cumulative distribution function) values equally spaced from 0.05 to 0.95 for both E and I cells.

Each cell responds to synaptic input through conductance terms, g_{E,i} and g_{I,i}, which are each governed by a pair of differential equations:

  \tau_{d,X} \frac{dg_{X,i}}{dt} = -g_{X,i} + g^{(1)}_{X,i},   (19)

  \tau_{r,X} \frac{dg^{(1)}_{X,i}}{dt} = -g^{(1)}_{X,i} + \tau_{r,X}\, \alpha_X \frac{W_{YX}}{N_{YX}} \sum_{j \in X,\, j \to i} \sum_k \delta(t - t_{j,k}),   (20)

where Y ∈ {E, I} denotes the type of cell i and X ∈ {E, I} denotes the type of the source neuron j. Each spike is modeled as a delta-function that impacts the auxiliary variable g^{(1)}_{X,i}; here t_{j,k} is the kth spike of cell j. The rise and decay time constants τ_{r,X} and τ_{d,X} and the pulse amplitude α_X depend only on the type of the source neuron; that is, they are otherwise the same across the population. The parameter W_{YX} denotes the strength of X → Y synaptic connections, which are (once given the type of source and target neurons) identical across the population. The "raw" synaptic weight (listed in Table 2) is divided by N_{YX}, the total number of X → Y connections received by each Y-type cell.

Table 2 shows connectivity parameters for the two example networks we discuss in Sect. 2.2. For Figs. 1–3, five parameters are set as stated in this table. In Sect. 2.4 and Figs.
6–7, W_EE was varied between 0.5 and 12.6 and W_IE between 5 and 11.2. In Sect. 2.5 and Fig. 8, W_EE was varied between 0.5 and 10.8 and W_IE between 5 and 9.6; σ_E was varied between 1.5 and 2.5 and σ_I between 1.5 and 3.

Table 2 Excitatory connection strengths mediate different firing regimes

  Parameter:       W_EI (I → E)   W_IE (E → I)   W_EE   W_II   σ_E      σ_I      Figures
  Asynchronous:    10             5              0.5    5      2/√2     3/√2     Figs. 1, 2, 5(A) and (B)
  Str. Asynch.:    10             8              9      5      1.5/√2   2.5/√2   Figs. 3, 5(C) and (D)
  % connectivity:  35 %           20 %           40 %   40 %

Connectivity details for the networks examined in Sect. 2.2. Here W_{YX} denotes X → Y connections; e.g., W_IE denotes the strength of excitatory connections onto inhibitory neurons. The parameter σ_i denotes the strength of background noise in units of (scaled) voltage, and depends only on the cell type (E or I). The parameters W_EE, W_IE, σ_E and σ_I will vary in Sects. 2.4 and 2.5; see the text for details. For reference, W_XY = 1 is when the presynaptic input is exactly the average population firing rate (filtered by the synapse) in an all-to-all homogeneous network.

4.2 Linear Response Theory

In general, computing the response of even a single neuron to an input requires solving a complicated, nonlinear stochastic process. However, it often happens that the presence of background noise linearizes the response of the neuron, so that we can describe this response as a perturbation from a background state. This response is furthermore linear in the perturbing input and is thus referred to as linear response theory [31]. The approach can be generalized to yield the dominant terms in the coupled network response as well. We will use the theory to predict the covariance matrix of spiking activity. The derivation is presented in full in [20, 29, 30]; here, we present only the main points.
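For intuition about the synaptic filtering that will enter the interaction matrix below, the second-order α-conductance of Eqs. (19)–(20) can be discretized directly. This forward-Euler sketch is our own (the `amp` argument lumps together τ_{r,X}, α_X, and W_{YX}/N_{YX}):

```python
import numpy as np

def alpha_conductance(spike_times, T=100.0, dt=0.01,
                      tau_r=1.0, tau_d=5.0, amp=1.0):
    """Second-order alpha synapse, Eqs. (19)-(20), forward Euler (time in ms).

    Each delta-function input makes the auxiliary variable g1 jump by `amp`;
    g1 is then low-pass filtered (time constant tau_d) into the conductance g.
    Returns the conductance trace g(t).
    """
    n = int(T / dt)
    g = np.zeros(n)
    g1 = 0.0
    spike_idx = {int(round(t / dt)) for t in spike_times}
    for k in range(1, n):
        kick = amp if k in spike_idx else 0.0
        g1 += (dt / tau_r) * (-g1) + kick                    # Eq. (20): delta -> jump
        g[k] = g[k - 1] + (dt / tau_d) * (-g[k - 1] + g1)    # Eq. (19)
    return g

# Impulse response to a single spike at t = 10 ms.
g = alpha_conductance([10.0])
```

With τ_{r,E} = 1 ms and τ_{d,E} = 5 ms (Table 1), the response rises over roughly τ_r, peaks about 2 ms after the spike, decays over roughly τ_d, and carries total area amp × τ_r.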
We assume we have some way to approximate the change in firing rate which occurs as a result of a change in a parameter:

  \nu_i(t) = \nu_{i,0} + (A_{X,i} * X_i)(t);   (21)

ν_{i,0} is the baseline rate (when X = 0) and A_{X,i}(t) is a susceptibility function that characterizes this firing rate response to first order [8, 20, 41]. In order to consider joint statistics, we need the trial-by-trial response of the cell. First, we propose to approximate the response of each neuron by

  y_i(t) \approx y_i^0(t) + \sum_j \big( A_{X,i} * (J_{X,ij} * y_j) \big)(t);   (22)

that is, each input X has been replaced by a filtered version of the presynaptic firing rates y_j. In the frequency domain this becomes

  \tilde{y}_i(\omega) = \tilde{y}_i^0 + \tilde{A}_{X,i}(\omega) \sum_j \tilde{J}_{X,ij}(\omega)\, \tilde{y}_j(\omega),   (23)

where \tilde{y}_i = F[y_i - \nu_i] is the Fourier transform of the mean-shifted process (ν_i is the average firing rate of cell i) and \tilde{f} = F[f] for all other quantities. In matrix form, this yields a self-consistent equation for \tilde{y} in terms of \tilde{y}^0:

  \big( I - \tilde{K}(\omega) \big)\, \tilde{y} = \tilde{y}^0 \;\Rightarrow\; \tilde{y} = \big( I - \tilde{K}(\omega) \big)^{-1} \tilde{y}^0,   (24)

where \tilde{K}_{ij}(\omega) = \tilde{A}_{X,i}(\omega) \tilde{J}_{X,ij}(\omega) is the interaction matrix in the frequency domain. The cross-spectrum is then computed via

  \langle \tilde{y}(\omega)\, \tilde{y}^*(\omega) \rangle = \big( I - \tilde{K}(\omega) \big)^{-1} \langle \tilde{y}^0(\omega)\, \tilde{y}^{0*}(\omega) \rangle \big( I - \tilde{K}^*(\omega) \big)^{-1}.   (25)

To compute the interaction matrix for a network of conductance-based neurons, we use the effective time constant approximation (as in the supplement to [41]). We first separate each conductance into mean and fluctuating parts, e.g., g_{E,i} → ⟨g_{E,i}⟩ + (g_{E,i} − ⟨g_{E,i}⟩) (see the discussion in [12]). Next we identify an effective conductance g_{0,i} and potential E_{rev,i}, and treat the fluctuating part of the conductances as noise, i.e. g_{E,i} − ⟨g_{E,i}⟩ → σ_{g_E,i} ξ_{E,i}(t), so that Eq.
(17) becomes

  \tau_m \frac{dv_i}{dt} = -g_{0,i}(v_i - E_{rev,i}) + \sigma_{g_E,i}\, \xi_{E,i}(t)(v_i - E_E) + \sigma_{g_I,i}\, \xi_{I,i}(t)(v_i - E_I) + \sigma_i \sqrt{\tau_m}\, \xi_i(t),   (26)

where

  g_{0,i} = 1 + \langle g_{E,i} \rangle + \langle g_{I,i} \rangle,   (27)

  E_{rev,i} = \frac{\langle g_{E,i} \rangle E_E + \langle g_{I,i} \rangle E_I}{g_{0,i}},   (28)

  \sigma_{g_E,i}^2 = \mathrm{Var}\big( g_{E,i}(t) \big) = E\big[ \big( g_{E,i}(t) - \langle g_{E,i} \rangle \big)^2 \big],   (29)

  \sigma_{g_I,i}^2 = \mathrm{Var}\big( g_{I,i}(t) \big) = E\big[ \big( g_{I,i}(t) - \langle g_{I,i} \rangle \big)^2 \big].   (30)

The parameters which govern the firing rate response will now be the conductance mean and variance, e.g. ⟨g_{E,i}⟩ and σ_{g_E,i}; we next compute how these depend on the incoming firing rates for second-order α-function synapses (Eqs. (19) and (20)). We first simplify the equation for the auxiliary variable (Eq. (20)):

  \tau_{r,X} \frac{dg^{(1)}_{X,i}}{dt} = -g^{(1)}_{X,i} + \tau_{r,X}\, \hat{\alpha}_{X,i} \sum_k \delta(t - t_k),   (31)

so that \hat{\alpha}_{X,i} includes all factors that contribute to the pulse size in Eq. (20), including the synapse strength and pulse amplitude. The time constants τ_{r,X}, τ_{d,X} and the synapse jump sizes \hat{\alpha}_{X,i} generally depend on cell type. Then assuming that each spike train is a Poisson process with a constant mean firing rate, i.e. each spike train is modeled as a stochastic process S(t) with

  \langle S(t) \rangle = \nu; \qquad \langle S(t) S(t+\tau) \rangle - \nu^2 = \nu \delta(\tau),

a straightforward but lengthy calculation shows that

  \langle g_{X,i}(t) \rangle = \hat{\alpha}_{X,i}\, \nu_{X,i}\, \tau_{r,X},   (32)

  \mathrm{Var}\big( g_{X,i}(t) \big) = \frac{1}{2} \hat{\alpha}_{X,i}^2\, \nu_{X,i}\, \tau_{r,X} \cdot \frac{\tau_{r,X}}{\tau_{r,X} + \tau_{d,X}} = \langle g_{X,i}(t) \rangle \times \frac{\hat{\alpha}_{X,i}}{2} \times \frac{\tau_{r,X}}{\tau_{r,X} + \tau_{d,X}},   (33)

where ν_{X,i} is the total rate of type-X spikes incoming to cell i. Notice that modulating the rate of an incoming spike train will impact both the mean and the variance of the input to the effective equation, Eq. (26) (via E_{rev,i} and σ_{g,i}). Furthermore, this impact may differ for excitatory and inhibitory neurons, giving us a total of four parameters that can be varied in the effective equation. Therefore, we have four susceptibility functions to compute: \tilde{A}_{\langle g_E \rangle,i}(\omega), \tilde{A}_{\langle g_I \rangle,i}(\omega), \tilde{A}_{\sigma^2_{g_E},i}(\omega), and \tilde{A}_{\sigma^2_{g_I},i}(\omega).
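The closed forms in Eqs. (32)–(33) are straightforward to evaluate; a small sketch (our function name, with illustrative rate and amplitude values) that also makes the second, mean-referenced form of Eq. (33) explicit:

```python
def conductance_stats(alpha_hat, nu, tau_r, tau_d):
    """Stationary mean and variance of g_X under Poisson input, Eqs. (32)-(33)."""
    mean = alpha_hat * nu * tau_r                                     # Eq. (32)
    var = 0.5 * alpha_hat**2 * nu * tau_r * tau_r / (tau_r + tau_d)   # Eq. (33)
    return mean, var

# Excitatory synapse time scales from Table 1: tau_r = 1 ms, tau_d = 5 ms;
# alpha_hat and nu (spikes/ms) are illustrative values only.
m, v = conductance_stats(alpha_hat=0.2, nu=0.05, tau_r=1.0, tau_d=5.0)
# Second form of Eq. (33): var = mean * (alpha_hat / 2) * tau_r / (tau_r + tau_d)
```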
The first two capture the change in firing rate as a result of a change in mean conductance, ⟨g_{E,i}⟩ → ⟨g_{E,i}⟩_0 + ⟨g_{E,i}⟩_1 exp(ıωt) or ⟨g_{I,i}⟩ → ⟨g_{I,i}⟩_0 + ⟨g_{I,i}⟩_1 exp(ıωt), while the final two address a change in variance, σ²_{g_E,i} → (σ²_{g_E,i})_0 + (σ²_{g_E,i})_1 exp(ıωt) or σ²_{g_I,i} → (σ²_{g_I,i})_0 + (σ²_{g_I,i})_1 exp(ıωt). Since the corresponding Fokker–Planck equation required to obtain these quantities is linear, we can compute the susceptibilities separately and combine them to get the net effect. With these pieces, we now have the interaction matrix:

  \tilde{K}_{ij}(\omega) = \begin{cases} \tilde{A}_{\langle g_E \rangle,i}(\omega)\, \tilde{J}_{ij}(\omega) + \tilde{A}_{\sigma^2_{g_E},i}(\omega)\, \tilde{L}_{ij}(\omega), & j \text{ excitatory}, \\ \tilde{A}_{\langle g_I \rangle,i}(\omega)\, \tilde{J}_{ij}(\omega) + \tilde{A}_{\sigma^2_{g_I},i}(\omega)\, \tilde{L}_{ij}(\omega), & j \text{ inhibitory}, \end{cases}   (34)

where \tilde{L}(\omega) plays a similar role to \tilde{J}, but for the effect of incoming spikes on the variance of the conductance. Its relationship to J (either in the frequency or time domain) is given by the same simple scaling shown in Eq. (33); i.e., for j excitatory,

  \tilde{L}_{ij}(\omega) = \tilde{J}_{ij}(\omega) \times \frac{\hat{\alpha}_{E,i}}{2} \times \frac{\tau_{r,E}}{\tau_{r,E} + \tau_{d,E}},   (35)

where the first factor comes from the effective spike amplitude \hat{\alpha}_{E,i} (and is the scale factor proposed in [29], Eq. (64)), and the second arises from using second-order (vs. first-order) alpha-functions.

To implement this calculation, we first solve for a self-consistent set of firing rates: that is, ν_i is the average firing rate of Eq. (26), along with Eqs. (27)–(30) and (32)–(33). We then apply Richardson's threshold integration method [29, 30] directly to Eq. (26) to compute the unperturbed power spectrum \langle \tilde{y}^0(\omega)\, \tilde{y}^{0*}(\omega) \rangle and the susceptibility functions. The software we used to implement this calculation is described more fully in [4] and can be found at https://github.com/andreakbarreiro/LR_CondBased.

4.3 Computing Statistics from Linear Response Theory

Linear response theory yields the cross-spectrum of the spike trains, \langle \tilde{y}_i(\omega)\, \tilde{y}_j^*(\omega) \rangle, for each distinct pair of neurons i and j (see Eq.
(25)). The cross-correlation function, C_{ij}(τ), measures the similarity between two processes at time lag τ, while the cross-spectrum measures the similarity between two processes at frequency ω:

  C_{ij}(\tau) \equiv \big\langle \big( y_i(t) - \nu_i \big) \big( y_j(t+\tau) - \nu_j \big) \big\rangle,   (36)

  \tilde{C}_{ij}(\omega) \equiv \big\langle \tilde{y}_i(\omega)\, \tilde{y}_j^*(\omega) \big\rangle.   (37)

The Wiener–Khinchin theorem [31] implies that {C_{ij}, \tilde{C}_{ij}} are a Fourier transform pair; that is,

  \tilde{C}_{ij}(\omega) = \int_{-\infty}^{\infty} C_{ij}(t)\, e^{-2\pi \imath \omega t}\, dt.   (38)

In principle, the cross-correlation C(t) and cross-spectrum \tilde{C}(\omega) matrices are functions on the real line, reflecting the fact that correlation can be measured at different time scales. In particular, for a stationary point process the covariance of the spike counts over a window of length T, n_i and n_j, can be related to the cross-correlation function C_{ij} by the following formula [17]:

  \mathrm{Cov}_T(n_i, n_j) = \int_{-T}^{T} C_{ij}(\tau)\, (T - |\tau|)\, d\tau.   (39)

The variance of the spike count over a time window of length T, n_i, is likewise given by integrating the autocorrelation function C_{ii}:

  \mathrm{Var}_T(n_i) = \int_{-T}^{T} C_{ii}(\tau)\, (T - |\tau|)\, d\tau.   (40)

By normalizing by the time window and taking the limit as T → ∞,

  \lim_{T \to \infty} \frac{\mathrm{Cov}_T(n_i, n_j)}{T} = \lim_{T \to \infty} \int_{-T}^{T} C_{ij}(\tau) \Big( 1 - \frac{|\tau|}{T} \Big)\, d\tau = \int_{-\infty}^{\infty} C_{ij}(\tau)\, d\tau = \tilde{C}_{ij}(0),   (41)

we can see that, for an integrable cross-correlation function, we can use \tilde{C}_{ij}(0) as a measure of long-time covariance. Similarly, the long-time limit of the Pearson correlation coefficient of the spike counts,

  \lim_{T \to \infty} \rho_{T,ij} = \lim_{T \to \infty} \frac{\mathrm{Cov}_T(n_i, n_j)}{\sqrt{\mathrm{Var}_T(n_i)\, \mathrm{Var}_T(n_j)}} = \frac{\tilde{C}_{ij}(0)}{\sqrt{\tilde{C}_{ii}(0)\, \tilde{C}_{jj}(0)}},   (42)

gives us a normalized measure of long-time correlation.

Acknowledgements This work was motivated in part by useful feedback we received at the Third Annual International Conference on Mathematical Neurosciences, held in Boulder, CO in May–June 2017. We would like to thank the organizers for an enjoyable and stimulating conference.

Funding Not applicable.
Availability of data and materials Software used to generate the computational results shown here can be found at: https://github.com/andreakbarreiro/LR_CondBased.

Ethics approval and consent to participate Not applicable.

Competing interests The authors declare that they have no competing interests.

Consent for publication Not applicable.

Authors' contributions AKB and CL designed the project. AKB and CL wrote software. AKB performed simulations. AKB and CL designed figures. AKB and CL wrote the paper. All authors read and approved the final manuscript.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References
1. Averbeck BB, Latham PE, Pouget A. Neural correlations, population coding and computation. Nat Rev Neurosci. 2006;7:358–66.
2. Bair W, Zohary E, Newsome WT. Correlated firing in macaque visual area MT: time scales and relationship to behavior. J Neurosci. 2001;21(5):1676–97.
3. Barreiro AK, Gautam SH, Shew W, Ly C. A theoretical framework for analyzing coupled neuronal networks: application to the olfactory system. PLoS Comput Biol. 2017;13(10):e1005780.
4. Barreiro AK, Ly C. When do correlations increase with firing rate in recurrent networks? PLoS Comput Biol. 2017;13(4):e1005506.
5. Cohen MR, Maunsell JHR. Attention improves performance primarily by reducing interneuronal correlations. Nat Neurosci. 2009;12:1594–600.
6. da Silveira RA, Berry MJ. High-fidelity coding with correlated neurons. PLoS Comput Biol. 2014;10(11):e1003970.
7. Dayan P, Abbott LF. Theoretical neuroscience: computational and mathematical modeling of neural systems. London: Taylor & Francis; 2001.
8. de la Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A. Correlation between neural spike trains increases with firing rate. Nature. 2007;448:802–6.
9. Doiron B, Litwin-Kumar A, Rosenbaum R, Ocker GK, Josić K. The mechanics of state-dependent neural correlations. Nat Neurosci. 2016;19(3):383–93.
10. Ecker AS, Berens P, Tolias AS, Bethge M. The effect of noise correlations in populations of diversely tuned neurons. J Neurosci. 2011;31(40):14272–83.
11. Franke F, Fiscella M, Sevelev M, Roska B, Hierlemann A, Azeredo da Silveira R. Structures of neural correlation and how they favor coding. Neuron. 2016;89:409–22.
12. Gerstner W, Kistler WM, Naud R, Paninski L. Neuronal dynamics: from single neurons to networks and models of cognition. Cambridge: Cambridge University Press; 2014.
13. Helias M, Tetzlaff T, Diesmann M. The correlation structure of local neuronal networks intrinsically results from recurrent dynamics. PLoS Comput Biol. 2014;10(1):e1003428.
14. Hu Y, Trousdale J, Josić K, Shea-Brown E. Motif statistics and spike correlations in neuronal networks. J Stat Mech Theory Exp. 2013;2013(3):03012.
15. Hu Y, Zylberberg J, Shea-Brown E. The sign rule and beyond: boundary effects, flexibility, and noise correlations in neural population codes. PLoS Comput Biol. 2014;10(2):e1003469.
16. Josić K, Shea-Brown E, Doiron B, de la Rocha J. Stimulus-dependent correlations and population codes. Neural Comput. 2009;21:2774–804.
17. Kay SM. Fundamentals of statistical signal processing, volume 1: estimation theory. New York: Prentice Hall; 1993.
18. Kohn A, Coen-Cagli R, Kanitscheider I, Pouget A. Correlations and neuronal population information. Annu Rev Neurosci. 2016;39:237–56.
19. Lin IC, Okun M, Carandini M, Harris KD. The nature of shared cortical variability. Neuron. 2015;87:644–56.
20. Lindner B, Doiron B, Longtin A. Theory of oscillatory firing induced by spatially correlated noise and delayed inhibitory feedback. Phys Rev E. 2005;72(6):061919.
21. Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat Neurosci. 2012;15(11):1498–505.
22. Litwin-Kumar A, Doiron B. Formation and maintenance of neuronal assemblies through synaptic plasticity. Nat Commun.
2014;5:5319.
23. Ly C. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity. J Comput Neurosci. 2015;39:311–27.
24. Martin KAC. Microcircuits in visual cortex. Curr Opin Neurobiol. 2002;12(4):418–25.
25. Moreno-Bote R, Beck J, Kanitscheider I, Pitkow X, Latham P, Pouget A. Information-limiting correlations. Nat Neurosci. 2014;17:1410–7.
26. Ostojic S, Brunel N, Hakim V. How connectivity, background activity, and synaptic properties shape the cross-correlation between spike trains. J Neurosci. 2009;29:10234–53.
27. Pernice V, Staude B, Cardanobile S, Rotter S. How structure determines correlations in neuronal networks. PLoS Comput Biol. 2011;7(5):e1002059.
28. Renart A, de la Rocha J, Barthó P, Hollender L, Parga N, Reyes A, Harris KD. The asynchronous state in cortical circuits. Science. 2010;327:587–90.
29. Richardson MJE. Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive. Phys Rev E. 2007;76:021919.
30. Richardson MJE. Spike-train spectra and network response functions for non-linear integrate-and-fire neurons. Biol Cybern. 2008;99:381–92.
31. Risken H. The Fokker–Planck equation: methods of solutions and applications. New York: Springer.
32. Rosenbaum R, Smith MA, Kohn A, Rubin JE, Doiron B. The spatial structure of correlated neuronal variability. Nat Neurosci. 2017;20:107–14.
33. Ruff DA, Cohen MR. Attention can either increase or decrease spike count correlations in visual cortex. Nat Neurosci. 2014;17(11):1591–7.
34. Sanchez-Vives MV, McCormick DA. Cellular and network mechanisms of rhythmic recurrent activity in neocortex. Nat Neurosci. 2000;3(10):1027–34.
35. Schuecker J, Schmidt M, van Albada SJ, Diesmann M, Helias M. Fundamental activity constraints lead to specific interpretations of the connectome. PLoS Comput Biol. 2017;13(2):e1005179.
36. Schulz DPA, Sahani M, Carandini M.
Five key factors determining pairwise correlations in visual cortex. J Neurophysiol. 2015;114:1022–33.
37. Shamir M, Sompolinsky H. Implications of neuronal diversity on population coding. Neural Comput. 2006;18:1951–86.
38. Shea-Brown E, Josić K, de la Rocha J, Doiron B. Correlation and synchrony transfer in integrate-and-fire neurons: basic properties and consequences for coding. Phys Rev Lett. 2008;100:108102.
39. Shu Y, Hasenstaub A, McCormick DA. Turning on and off recurrent balanced cortical activity. Nature. 2003;423(6937):288–93.
40. Tetzlaff T, Helias M, Einevoll GT, Diesmann M. Decorrelation of neural-network activity by inhibitory feedback. PLoS Comput Biol. 2012;8:e1002596.
41. Trousdale J, Hu Y, Shea-Brown E, Josić K. Impact of network structure and cellular response on spike time correlations. PLoS Comput Biol. 2012;8(3):e1002408.
42. van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science. 1996;274:1724–6.
43. Zhao L, Beverlin B II, Netoff T, Nykamp DQ. Synchronization from second order network connectivity statistics. Front Comput Neurosci. 2011;5:1–16.
44. Zylberberg J, Cafaro J, Turner MH, Shea-Brown E, Rieke F. Direction-selective circuits shape noise to ensure a precise population code. Neuron. 2016;89(2):369–83.
The Journal of Mathematical Neuroscience – Springer Journals
Published: Jun 6, 2018