Abstract
Many networks in the mammalian nervous system remain active in the absence of stimuli. This activity falls into two main patterns: steady firing at low rates and rhythmic bursting. How are these firing patterns generated? Specifically, how do dynamic interactions between excitatory and inhibitory neurons produce these firing patterns, and how do networks switch from one firing pattern to the other? We investigated these questions theoretically by examining the intrinsic dynamics of large networks of neurons. Using both a semianalytic model based on mean firing rate dynamics and simulations with large neuronal networks, we found that the dynamics, and thus the firing patterns, are controlled largely by one parameter, the fraction of endogenously active cells. When no endogenously active cells are present, networks are either silent or fire at a high rate; as the number of endogenously active cells increases, there is a transition to bursting; and, with a further increase, there is a second transition to steady firing at a low rate. A secondary role is played by network connectivity, which determines whether activity occurs at a constant mean firing rate or oscillates around that mean. These conclusions require only conventional assumptions: excitatory input to a neuron increases its firing rate, inhibitory input decreases it, and neurons exhibit spike-frequency adaptation. These conclusions also lead to two experimentally testable predictions: 1) isolated networks that fire at low rates must contain endogenously active cells and 2) a reduction in the fraction of endogenously active cells in such networks must lead to bursting.
INTRODUCTION
In the absence of stimuli, neurons in many areas of the mammalian CNS remain active. This activity falls into two main patterns: low, steady firing in the 1 to 5 Hz range and rhythmic bursting. The former has been widely observed in sensory cortex, motor cortex, and spinal cord and has been seen in both awake-behaving and anesthetized animals (Collins 1987; Gilbert 1977; Herrero and Headley 1997; Lamour et al. 1985; Leventhal and Hirsch 1978; Mednikova and Kopytova 1994; Ochi and Eggermont 1997; Salimi et al. 1994; Szente et al. 1988; Zurita et al. 1994); the latter is fundamental to central pattern generators, which generate rhythmic behavior such as respiration, locomotion, and chewing (for reviews see Marder and Calabrese 1996; Rekling and Feldman 1998). Both patterns may occur in the same neural tissue, with neuromodulators inducing a reversible switch between steady firing and bursting (Berkinblit et al. 1978; Kudo and Yamada 1987; Zoungrana et al. 1997).
From a theoretical point of view, it has been relatively easy to understand how large neuronal networks might generate rhythmic bursting (Feldman and Cleland 1982; Perkel and Mulloney 1974; Rekling and Feldman 1998; Rybak et al. 1997; Smith 1997) but difficult to understand how such networks might generate steady, low firing rates (Abeles 1991). Several investigators have explored the latter problem theoretically (Amit and Treves 1989; Buhmann 1989; Treves and Amit 1989). Their approach was to use simulated networks and attempt to generate low firing rates through dynamic interactions between excitatory and inhibitory neurons. The networks they used were 1) highly interconnected, 2) isolated from any external sources of input, and 3) composed of neurons whose resting membrane potentials were far enough below threshold that several near-synchronous excitatory postsynaptic potentials (EPSPs) were necessary to trigger an action potential. Networks with these seemingly standard properties were unable to produce maintained low firing rates; the lowest numerically generated mean rates were ∼20 Hz, significantly larger than the background firing rates observed in biological networks.
These experimental and numerical findings raise three main questions. First, why were networks with the above “standard” properties unable to fire at low rates? Did the numerical studies simply miss a parameter regime that would have generated low firing rates, or were the properties actually too restrictive? Second, what are the conditions that allow networks to fire robustly at low rates? And third, what is the relation between steadily firing networks and rhythmically bursting ones? In particular, is there a single parameter, or a small set of parameters, that allow networks to switch naturally from one state to the other?
To answer these questions, we developed a model that allowed us to explore analytically the intrinsic dynamics of large, isolated networks in the low firing rate regime. Our main assumptions were conventional: excitatory input to a neuron increases its firing rate, inhibitory input decreases it, and neurons exhibit spike-frequency adaptation (their firing rates decrease during repetitive firing). We found theoretically, and verified using large, simulated networks of spiking neurons, that a single parameter plays a dominant role in controlling intrinsic firing patterns. That parameter is the fraction of endogenously active cells; i.e., the fraction of cells that fire without input. Our primary result is that there is a sharp distinction between networks in which the fraction of endogenously active cells is zero and those in which it is greater than zero. When the fraction is zero, a stable, low firing rate equilibrium cannot exist; networks are either silent or fire at high rates. Only when the fraction is greater than zero are low average firing rates possible. In this regime there is a further subdivision into networks that burst and those that fire steadily, the former having fractions below a threshold and the latter having fractions above threshold. Connectivity also plays a role in shaping firing patterns, but to a lesser degree; it determines whether activity occurs at a constant mean firing rate or oscillates around that mean.
These theoretical results imply that an isolated network that fires at low rates must contain endogenously active cells. We tested this strong, parameter-free prediction experimentally in cultured neuronal networks and consistently found a large fraction of endogenously active cells, ∼30% on average, in networks that displayed low firing rates. We also investigated experimentally the transition between steady firing and bursting as the fraction of endogenously active cells changed, and we found that networks that fired steadily could be induced to burst by reducing the fraction of endogenously active cells. Both of these experimental results are presented in the following paper (Latham et al. 2000).
The above analysis applies to networks that receive external input as well as to isolated ones. This is because cells that receive sufficient external input to make them fire are effectively endogenously active, in the sense that they can fire without receiving input from other neurons within the network. These pseudo-endogenously active cells control firing patterns in the same way that truly endogenously active ones control firing patterns in isolated networks. In particular, for networks receiving external input, firing patterns follow the same set of transitions as discussed above: as activity produced by external input increases there is a transition first from silence to bursting, and then from bursting to steady firing.
METHODS
In results we make a series of predictions about the behavior of large neuronal networks, then test those predictions by performing large-scale network simulations. In this section we describe the single neuron equations and synaptic coupling used in the simulations, discuss our choice of simulation parameters, and provide a particular realization of a reduced network model that is used to illustrate the general principles derived in results.
Network simulations
Our predictions about the behavior of large neuronal networks are based on a simplified firing rate model. To verify these predictions, we perform large-scale simulations with synaptically coupled model neurons. Following are the single neuron equations used in the simulations, the prescription for choosing single neuron parameters and synaptic coupling, and a complete list of network parameters.
SINGLE NEURON EQUATIONS.
Our simulated network consists of N synaptically coupled neurons, N_{E} of which are excitatory and N_{I} of which are inhibitory. The time evolution equation for the membrane potential of neuron i, V_{i}, may be written
Rather than using conductance-based currents for I_{spike}, we model this quantity as
If we were to adhere strictly to the θ-neuron model [which is related to our model by the change of variables V_{i} ∼ tan (θ/2) + constant], an action potential would occur whenever V_{i} reaches +∞ (the spike apex), at which point the voltage would be reset to −∞ (the spike repolarization). For numerical work, however, it is necessary to use finite spike apex and repolarization voltages. The apex we use in our simulations, denoted V_{apex}, is 20 mV, and the reset value, V_{repol}, is −80 mV.
The fast and slow afterhyperpolarization currents may be written
Combining Eqs. 1–4, the network evolution equations become
The normalized synaptic current, Î_{syn}, is written
The conductance change on a postsynaptic neuron is mediated by the fraction of open channels, s_{ij}. That fraction increases instantaneously when a spike occurs at neuron j and decays exponentially between spikes: for all i
Equations 5–7, along with the rule that whenever V_{i} reaches V_{apex} an action potential is emitted and the voltage is reset to V_{repol}, constitute our complete network equations. Implementing this model as it stands, however, is costly numerically. This is because the fraction of open channels, s_{ij}(t), must be integrated separately for each synapse. These separate integrations can be avoided by noting that the relevant s-dependent quantity is the sum that appears on the right-hand side of Eq. 6. That sum can be divided into two terms
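The bookkeeping behind this optimization can be sketched in a few lines. The sketch below assumes, as in our model, that the synaptic decay constant depends only on the presynaptic type, so each neuron needs only two running sums (one per type) in place of N × N individually integrated s_{ij} variables; the network size, weights, and time constants are illustrative stand-ins, not the values in our parameter tables.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt = 100, 0.1                      # neurons, time step (ms); illustrative
tau = {'E': 5.0, 'I': 10.0}           # synaptic decay time constants (ms); assumed
is_exc = np.arange(N) < 80            # 80 excitatory, 20 inhibitory
W = rng.uniform(0.0, 0.1, (N, N))     # hypothetical weight matrix

# One running sum per neuron and presynaptic type replaces the N*N
# individually integrated s_ij variables.
S = {'E': np.zeros(N), 'I': np.zeros(N)}

def step(S, spiked):
    """Decay both sums, then add W[i, j] for every neuron j that spiked."""
    S['E'][:] = S['E'] * np.exp(-dt / tau['E']) + W[:, spiked & is_exc].sum(axis=1)
    S['I'][:] = S['I'] * np.exp(-dt / tau['I']) + W[:, spiked & ~is_exc].sum(axis=1)
    return S

spiked = rng.random(N) < 0.05         # a fake spike pattern, for illustration
S = step(S, spiked)
```

Between spikes each sum simply decays exponentially, so the cost per time step is O(N) plus O(number of spikes × N), rather than O(N²).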
DISTRIBUTION OF APPLIED CURRENTS.
The applied current, Î_{a,i}, determines how close a neuron is to the threshold for generating an action potential, or, if it is above threshold, its endogenous firing rate. This quantity is chosen probabilistically: Î_{a,i} has a boxcar distribution, uniform between 0 and Î_{max} and zero outside that range. We use this distribution because it gives us precise control over the number of neurons that are endogenously active. The precise control arises because neuron i is endogenously active if and only if Î_{a,i} > ΔV/4 ≡ Î_{thresh}. Consequently, when Î_{max} < Î_{thresh} no neurons are endogenously active, and when Î_{max} > Î_{thresh} the fraction of endogenously active cells is (Î_{max} − Î_{thresh})/Î_{max}.
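This formula is easy to verify by sampling. The values of Î_{max} and Î_{thresh} below are arbitrary placeholders (the true threshold is set by ΔV), chosen only so that a nonzero fraction of cells is endogenously active.

```python
import numpy as np

# Sample applied currents from the boxcar distribution and compare the
# empirical fraction of endogenously active cells with the prediction
# (I_max - I_thresh) / I_max. Both parameter values are illustrative.
rng = np.random.default_rng(1)
N = 10_000
I_thresh = 1.0                       # stand-in for DeltaV / 4
I_max = 1.5                          # > I_thresh, so some cells are endogenous

I_a = rng.uniform(0.0, I_max, N)     # boxcar on [0, I_max]
frac_active = (I_a > I_thresh).mean()
predicted = (I_max - I_thresh) / I_max
```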
NETWORK CONNECTIVITY.
Connectivity is specified by the weight matrix, W_{ij}. This quantity determines both whether two neurons are connected and, if they are, the strength of that connection. Like the applied current, the weight matrix is chosen probabilistically, and the probability that neuron j connects to neuron i is denoted P_{ij}. If there is a connection, its strength is set by considerations of EPSP and inhibitory postsynaptic potential (IPSP) sizes. If there is no connection, W_{ij} is set to zero.
We use two models of connectivity, infinite range and local, when constructing P_{ij}. For infinite range connectivity, P_{ij} depends only on the types of neurons i and j. The connection probabilities thus consist of four numbers, corresponding to the four combinations of pairs of excitatory and inhibitory neurons. We parameterize these four numbers with the quantities K_{Tj} and B_{Tj}, where K_{Tj} is the mean number of postsynaptic neurons a presynaptic neuron connects to and B_{Tj} is the connectivity bias
Inverting Eq. 11 yields the expressions for the connection probabilities in terms of the mean number of connections and bias
For local connectivity, P_{ij} depends on the distance between neurons. For "distance between neurons" to make sense we need to assign each neuron a spatial location. We work in a two-dimensional space and specify the location of a neuron in polar coordinates, (r, θ), where r is radius and θ is azimuthal angle. Neurons are randomly assigned a position according to the probability distribution P(r, θ), where P(r, θ)r dr dθ is the probability that a neuron falls within the area bounded by [r, r + dr] and [θ, θ + dθ]. For P(r, θ) we use the azimuthally symmetric function

Using the probability distribution given in Eq. 13 to assign a position, x_{i} ≡ (r_{i} cos θ_{i}, r_{i} sin θ_{i}), to every neuron i, the distance between two neurons is then given simply by the Euclidean norm, d_{ij} ≡ ‖x_{i} − x_{j}‖. We use a Gaussian profile for local connectivity, for which the probability of making a connection is modulated by the factor exp(−d_{ij}^{2}/2σ_{Tj}^{2}), where σ_{Tj} is a parameter that determines the axonal spread of neurons of type j. Thus
We consolidate the infinite range and local connectivity models by using the local connectivity description and setting σ_{Tj} = ∞ whenever we want to recapture the infinite range case.
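A numerical sketch of the local-connectivity rule follows. Because the spatial distribution P(r, θ) of Eq. 13 is not reproduced here, the sketch simply assumes uniform density over a disk; the base probability p0 and the spread sigma are illustrative stand-ins for the per-type parameters.

```python
import numpy as np

# Place N neurons in a disk of radius R, then connect j -> i with a base
# probability modulated by a Gaussian of the pairwise distance. Taking
# sigma -> infinity recovers the infinite range model.
rng = np.random.default_rng(2)
N, R = 500, 1.0
p0, sigma = 0.5, 0.3                       # assumed, for illustration

theta = rng.uniform(0.0, 2.0 * np.pi, N)
r = R * np.sqrt(rng.uniform(0.0, 1.0, N))  # sqrt gives uniform density over the disk
xy = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
P = p0 * np.exp(-d ** 2 / (2.0 * sigma ** 2))
connected = rng.random((N, N)) < P
np.fill_diagonal(connected, False)         # no self-connections
```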
The strength of a connection, once one is made, is determined by the size of W_{ij}. To ensure that W_{ij} is in a biophysically reasonable range, we need a relation between it and the sizes of both excitatory and inhibitory PSPs. To derive such a relation, we use Eq. 5a to compute, as a function of W_{ij}, the peak voltage in response to a single presynaptic action potential. For definiteness we consider a neuron whose resting membrane potential is the nominal one, V_{r} (which implies that Î_{a} = 0), and assume that the neuron has not produced an action potential for a sufficiently long time that both ĝ_{K} and ĝ_{K−Ca} have decayed to zero. Under these conditions, Eq. 5a for the membrane potential can be combined with Eq. 6 for the synaptic current to yield
PARAMETERS.
Table 1 contains a list of all parameters used in our simulations. Parameters followed by an asterisk are the ones we vary; for those, a range or set of values is given. Parameters not followed by an asterisk remain constant throughout the simulations. The grouping into single-cell, synaptic, and network parameters is in most cases clear; the exceptions are V_{EPSP} and V_{IPSP}, the nominal amplitudes of the excitatory and inhibitory postsynaptic potentials. These are usually thought of as synaptic or single-cell properties, but in our simulations we use them only to determine the connectivity matrix, W_{ij}, via Eq. 14. Specifically, V_{PSPij} = V_{EPSP} when neuron j is excitatory and V_{PSPij} = V_{IPSP} when neuron j is inhibitory. Thus they are listed under network parameters.
We performed simulations with two networks, which we denote network A and network B. The former was chosen to simulate networks of cortical neurons, the latter to simulate networks of cultured mouse spinal cord neurons in which we did experiments relevant to the theoretical predictions made here (Latham et al. 2000). They differ primarily in their connectivity and postsynaptic potential amplitude, network A having high, infinite range connectivity and small PSPs and network B having low, local connectivity and large PSPs. The parameters specific to each network are given in Table 2, which contains all the parameters marked with an asterisk in Table 1. A dash in Table 2 indicates parameters that are varied during the course of the simulations.
Most of the parameters in Tables 1 and 2 are either dimensionless or have physical units. There are, however, three normalized parameters: Î_{max}, which has units of voltage, and δĝ_{K} and δĝ_{K−Ca}, which are both dimensionless. To convert these to their correct physical units (amperes for current and siemens for conductance), divide by the cell membrane resistance.
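As a worked example of this conversion (the membrane resistance and parameter values below are assumed for illustration, not taken from the tables):

```python
# Normalized-to-physical unit conversion: divide by membrane resistance.
R_m = 1e8                 # membrane resistance, ohms (100 Mohm, assumed)

I_max_norm = 5e-3         # normalized current, volts (illustrative)
I_max = I_max_norm / R_m  # amperes: 5e-3 V / 1e8 ohm = 50 pA

g_norm = 0.5              # normalized conductance, dimensionless (illustrative)
g = g_norm / R_m          # siemens: 0.5 / 1e8 ohm = 5 nS
```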
Simulations were performed with a fourth-order Runge–Kutta integration scheme. In all simulations reported we used a time step of 1 ms. We occasionally checked that this was short enough by decreasing the time step to 0.5 ms, and in no case did we see a change in our results.
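The integration scheme itself is the standard one; a generic fourth-order Runge–Kutta step, independent of the particular network equations, looks like this:

```python
def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# sanity check on dy/dt = -y, whose exact solution is exp(-t)
y, t, dt = 1.0, 0.0, 0.01
while t < 1.0 - 1e-9:
    y = rk4_step(lambda t, y: -y, t, y, dt)
    t += dt
```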
Choice of simulation parameters
In choosing parameters we attempted to stay close to values observed in two different systems: networks of cortical neurons, and networks of cultured mouse spinal cord neurons in which we did experiments relevant to the theoretical predictions made here (Latham et al. 2000). These two systems differ primarily in their connectivity and postsynaptic potential amplitude, the former having high connectivity and small PSPs, the latter low connectivity and large PSPs. Thus most parameters were the same for the two model networks used in our simulations, and these were relatively standard; the parameters that differed pertained to connectivity and PSP amplitude, mirroring the differences between cortical and cultured spinal cord networks.
As is well known, connectivity in cortex is high; each neuron connects to approximately 7,000 others (Braitenberg and Schüz 1991). The size of a local circuit, assuming such a thing exists, is more difficult to determine. However, given that axonal spread is measured in hundreds of micrometers and the density of neurons is ∼10^{5}/mm^{3} (Braitenberg and Schüz 1991), a local circuit on the order of 100,000 neurons is a reasonable estimate. Unfortunately, we do not yet have the computational power to model such large networks. Consequently, we considered networks of only 10,000 neurons, each of which made ∼1,000 connections. To make up for the reduced connectivity in our model network we increased the size and frequency of the postsynaptic potentials: instead of using the more realistic numbers of 0.2 mV for EPSPs (Komatsu et al. 1988; Matsumura et al. 1996) and −1 mV for IPSPs (Tamás et al. 1998), we typically used 1.0 and −1.5 mV. The frequency of PSPs was increased in our model by ignoring failures, so neurotransmitter was released every time an action potential occurred.
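The local-circuit estimate can be checked with back-of-envelope arithmetic; the 0.6 mm radius below is an assumption standing in for "hundreds of micrometers" of axonal spread.

```python
import math

# Neurons within a sphere whose radius matches the axonal spread.
radius_mm = 0.6                                   # assumed axonal spread
density_per_mm3 = 1e5                             # Braitenberg and Schuz 1991
volume_mm3 = (4 / 3) * math.pi * radius_mm ** 3   # ~0.90 mm^3
neurons = density_per_mm3 * volume_mm3            # ~9e4, i.e., order 100,000
```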
Unlike cortex and spinal cord in vivo, cultured spinal cord networks are intrinsically localized. However, connectivity in our particular preparation has not been characterized. We thus made estimates as follows. The number of connections a neuron makes is AσP_{A}, where A is the area of the axonal arborization, σ is the number of neurons per area, and P_{A} is the probability that a neuron connects to another neuron within its axonal arborization. We measured axonal arborization from neurons visualized after intracellular injection with horseradish peroxidase (Neale et al. 1978), and found it to be 2.0 ± 0.86 mm^{2} (mean ± SD, n = 12 neurons). The density of our cultures, σ, was 23.7 ± 10.2 neurons/mm^{2} (n = 10 culture dishes). Thus a single neuron is capable of connecting to ∼475 others. Finally, in paired recordings (Nelson et al. 1981), approximately one-half the cell pairs tested were connected. This gives a connectivity of ∼240 connections/neuron. We used 200 connections/neuron in most of our simulations, although we explored a range of values.
The size of the postsynaptic potentials is large in cultured spinal cord neurons; on the order of 3–10 mV for EPSPs and approximately −6 mV for IPSPs (Nelson et al. 1981, 1983). In contrast to cortex, transmission failures are not observed; this is because a presynaptic neuron makes a large number of connections (∼100) on each of its postsynaptic targets (Nelson et al. 1983; Pun et al. 1986). In our simulations we typically used EPSP and IPSP amplitudes of 4 and −6 mV, respectively, and, for simplicity, we did not include variation.
The parameters given in Tables 1 and 2 were the ones we used in the bulk of our simulations. In appendix we studied the effects of varying these parameters. In no cases did changing parameters produce network behavior that was inconsistent with our model. This suggests that our model made robust predictions and that our simulation results were not due simply to a particular choice of parameters.
Reduced network model
The reduced network model we use in our analysis is based on the Wilson and Cowan equations (Wilson and Cowan 1972), in which the dynamical variables are the mean excitatory and inhibitory firing rates. Those equations are given in generic form in Eq. 16 of results. Here we write down a particular realization of Wilson and Cowan–type equations that we augment by including spikefrequency adaptation.
Although it would be desirable to start with the equations describing the full network simulations and derive from those a set of mean firing rate equations, this is essentially impossible to do analytically except in highly idealized cases (van Vreeswijk and Sompolinsky 1998), and extremely difficult numerically. Thus we propose a set of reduced equations designed to capture qualitative, but not quantitative, features of large-scale networks. Denoting ν̄_{E} and ν̄_{I} as the mean excitatory and inhibitory firing rates (averaged over neurons), and G_{E} and G_{I} as the mean level of spike-frequency adaptation of the excitatory and inhibitory populations, we let these variables evolve according to
The qualitative features that these equations capture are 1) the gain functions, g(…), are generally increasing functions of excitatory firing rate and decreasing functions of inhibitory firing rate [the C_{LM} are all positive and g(x) is a nondecreasing function of x], 2) increasing the firing rate increases spike-frequency adaptation (δG is positive), 3) increasing the level of spike-frequency adaptation reduces firing rates, 4) the gain functions are approximately threshold-linear, consistent with the numerical gain functions in Fig. FB1 A, and 5) the gain is reduced by a divisive term proportional to the inhibitory firing rate. The divisive term, which has been proposed as a mechanism for gain control in cortical neurons (Carandini and Heeger 1994; Carandini et al. 1997; Heeger 1992; Nelson 1994), was included for two reasons. First, it was observed in our simulations: Fig. FB1 A in appendix shows a pronounced drop in gain as the inhibitory firing rate increases. Second, curved nullclines are essential for the existence of the saddle-node bifurcation that produces bursting (Fig. 5), and the divisive term increases nullcline curvature, thus making bursting more robust.
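One minimal realization consistent with features 1–5 can be written down directly. The functional form and every constant below are assumptions chosen only to illustrate the structure of such equations, not the particular realization used here.

```python
import numpy as np

def g(x, nu_I, c=0.5, nu_max=100.0):
    """Threshold-linear gain with divisive inhibition (features 4 and 5)."""
    return np.minimum(np.maximum(x, 0.0) / (1.0 + c * nu_I), nu_max)

def derivs(state, C_EE=1.6, C_EI=1.0, C_IE=1.5, C_II=0.5,
           h_E=5.0, h_I=1.0, dG=0.05,
           tau_E=10.0, tau_I=10.0, tau_SFA=1000.0):
    """Wilson-Cowan-type rates plus slow adaptation (tau_E, tau_I << tau_SFA).
    All coupling constants are illustrative assumptions."""
    nu_E, nu_I, G_E, G_I = state
    dnu_E = (-nu_E + g(C_EE * nu_E - C_EI * nu_I - G_E + h_E, nu_I)) / tau_E
    dnu_I = (-nu_I + g(C_IE * nu_E - C_II * nu_I - G_I + h_I, nu_I)) / tau_I
    dG_E = (-G_E + dG * nu_E) / tau_SFA   # feature 2: firing builds adaptation
    dG_I = (-G_I + dG * nu_I) / tau_SFA
    return np.array([dnu_E, dnu_I, dG_E, dG_I])

# 20 s of forward Euler with dt = 1 ms, just to exercise the equations
state = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(20000):
    state = state + 1.0 * derivs(state)
```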
We are interested in understanding the dynamics represented by Eq. 15 in the regime in which τ_{E} and τ_{I} are on the order of the membrane time constant, ∼10 ms, and τ_{SFA} is on the order of the spike-frequency adaptation time, which can be as high as several seconds. Thus τ_{E}, τ_{I} ≪ τ_{SFA}, and we can use the fast/slow dissection proposed by Rinzel (1987) to analyze the dynamics. With this approach, the full four-dimensional system is reduced to two subsystems: a fast one corresponding to the population firing rates, ν̄_{E} and ν̄_{I}, and a slow one corresponding to the level of spike-frequency adaptation, G_{E} and G_{I}. Although this represents a major simplification, it still leaves us with three main behaviors: steady firing, oscillations, and bursting. The first two can be readily understood in terms of the two-dimensional firing rate dynamics with G_{E} and G_{I} held fixed; only the third, bursting, requires the interaction of all four variables. We thus briefly discuss a method for its analysis.
The idea behind the fast/slow dissection is that ν̄_{E} and ν̄_{I} relax rapidly to an attractor (either a steady state or oscillations) compared with the time scale over which G_{E} and G_{I} change. Motion in the G_{E} − G_{I} plane is then determined by Eqs. 15c and 15d with ν̄_{E} and ν̄_{I} evaluated on their attractor. In the purely bursting regime, for attainable levels of spike-frequency adaptation the network is either silent or fires steadily; the attractor is a fixed point for given values of G_{E} and G_{I}. In this steady-state regime, the mean inhibitory firing rate, ν̄_{I}, is a single-valued function of the mean excitatory rate, ν̄_{E} (see Fig. 3 B). The dynamics associated with Eq. 15 can thus be reduced to three dimensions, and the important features of the dynamics are captured by a three-dimensional bifurcation diagram that consists of a sheet whose height above the G_{E} − G_{I} plane represents the value of ν̄_{E}. Bifurcations (sudden changes in firing rate) occur at folds in the sheet.
For a particular set of parameters, the trajectory in the G_{E} − G_{I} plane is typically a closed curve that exhibits sudden changes of direction at bifurcation points. If the trajectory collapses onto, or almost onto, a single curve, we may make a further, approximate, reduction to a two-dimensional bifurcation diagram of ν̄_{E} versus G_{E}. For the parameters considered in results (Fig. 6), the curve in the G_{E} − G_{I} plane collapsed almost to a line, resulting in the two-dimensional bifurcation diagrams shown in Fig. 6, C–E. For each of these diagrams the lines were found numerically using least-squares regression. The equations for the lines were as follows: Fig. 6 C (θ_{max} = 0), G_{I} = 0.00 + 0.64G_{E}, R^{2} = 1.0000; Fig. 6 D (θ_{max} = 0.25), G_{I} = 0.01 + 0.65G_{E}, R^{2} = 0.9996; Fig. 6 E (θ_{max} = 0.52), G_{I} = 0.03 + 0.65G_{E}, R^{2} = 0.9999.
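The line fits reported above are ordinary least squares. With synthetic trajectory points standing in for the simulated (G_{E}, G_{I}) orbit, the fit and its R^{2} can be computed as follows:

```python
import numpy as np

# Fit G_I = a + b * G_E by least squares and report R^2. The trajectory
# here is synthetic (slope 0.65 plus small noise), purely for illustration.
rng = np.random.default_rng(3)
G_E = np.linspace(0.0, 1.0, 200)
G_I = 0.01 + 0.65 * G_E + 0.001 * rng.standard_normal(200)

b, a = np.polyfit(G_E, G_I, 1)     # polyfit returns highest degree first
pred = a + b * G_E
R2 = 1 - np.sum((G_I - pred) ** 2) / np.sum((G_I - G_I.mean()) ** 2)
```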
RESULTS
In this section we investigate how single neuron and network properties affect intrinsic firing patterns. We use as our model system large, isolated networks with random, sparse connectivity and develop a theory that describes the firing patterns in such networks in the low firing rate regime. We then test specific predictions of this theory by performing simulations with large networks of spiking neurons. Robustness of the theory is verified in appendix , where we examine a broad range of single neuron and network parameters.
Theory
The theoretical development is divided into three parts: 1) analysis of networks that do not exhibit adaptation, 2) analysis of the more realistic, and more biologically relevant, case of networks that do exhibit adaptation, and 3) analysis of the stability of low firing rate equilibria.
LOW FIRING RATE EQUILIBRIA IN SPARSE, RANDOMLY CONNECTED NETWORKS WITHOUT ADAPTATION.
To describe the dynamics of large, sparse, randomly connected networks, we start with the model proposed by Wilson and Cowan (1972). We use as our dynamical variables the mean excitatory and inhibitory firing rates, denoted ν̄_{E} and ν̄_{I}, respectively. In terms of these variables, the Wilson and Cowan model can be cast in the form
The first step in the analysis of Eq. 16 is to examine the equilibria. The equilibrium equations, which are found by setting both dν̄_{E}/dt and dν̄_{I}/dt to zero, are
At high firing rates the gain functions are stereotypically sigmoidal, independent of single neuron and network parameters: when ν̄_{I} is large, both gain functions approach zero as ν̄_{E} approaches zero, rise rapidly as ν̄_{E} increases past some critical value, and approach the maximum neuronal firing rate as ν̄_{E} becomes large. Similarly, when ν̄_{E} is large, the gain functions are sigmoidal versus ν̄_{I} but with negative slope. At very low firing rates, however, the gain functions become sensitive to both single neuron and network properties. Of the two, the single neuron properties have the larger effect, because at very low firing rates the gain functions are determined primarily by the responses of neurons at rest to a very small number of EPSPs. This led us to divide networks into three regimes based on single neuron response properties. Each regime produces qualitatively different gain functions and thus, as we will show, qualitatively different nullclines. Those regimes are as follows:
1) None of the cells have their resting membrane potential within one EPSP of threshold for the generation of an action potential; i.e., no cell fires in response to a single EPSP, which in turn implies that no cells are endogenously active.

2) Some cells have their resting potential within one EPSP of threshold, but none are endogenously active.

3) Some cells are endogenously active.
In Fig. 2 A we plot Φ_{E}(ν̄_{E}, 0) versus ν̄_{E} for these three regimes. We chose ν̄_{I} = 0 in this plot because it accentuates the differences among the gain functions. The three regimes are distinguished by the value and the slope of the gain function at the origin of the (ν̄_{E}, ν̄_{I}) plane. In regime 1 the cells are far enough from threshold that a single action potential cannot cause any neurons to fire, so the gain function and its slope are both zero at the origin. In regime 2, as in regime 1, the neurons cannot fire without input, so Φ_{E}(0, 0) is again zero. However, a single action potential can cause cells to fire and thus ignite network activity; consequently the origin is unstable and the slope there is greater than one. In regime 3 a fraction of the neurons fire without input, so Φ_{E}(0, 0) > 0.
In Fig. 2 B we plot Φ_{I}(0, ν̄_{I}) versus ν̄_{I}, again for the three regimes listed above. The behavior here is somewhat simpler: in regimes 1 and 2, Φ_{I}(0, ν̄_{I}) = 0 for all values of ν̄_{I}, whereas in regime 3, Φ_{I}(0, ν̄_{I}) is greater than zero when ν̄_{I} = 0 and decreases monotonically with increasing ν̄_{I}.
Armed with the shapes of the gain functions in the three regimes, we are now in a position to construct the nullclines. That construction is done graphically: starting with the excitatory nullcline, we plot the function Φ_{E}(ν̄_{E}, ν̄_{I}) versus ν̄_{E} for various values of ν̄_{I} and look for intersections with the line Φ_{E} = ν̄_{E}. This is shown in Fig. 3 A for regime 1 (no cells within one EPSP of threshold). Each of the intersections, including the one at the origin, is a solution to Eq. 17a and thus corresponds to a point on the excitatory nullcline. Dashed lines connect the intersections to the excitatory nullcline, which is drawn in gray in the two-dimensional (ν̄_{E}, ν̄_{I}) plane directly below the gain curves (Fig. 3 C). Note that the gain curves in Fig. 3 A that correspond to low values of the inhibitory firing rate have three intersections with the line Φ_{E} = ν̄_{E}. In addition, all gain curves intersect at the origin, which implies that the ν̄_{I} axis is part of the excitatory nullcline. For a similar construction in an all-excitatory network, see O'Donovan et al. (1998).
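The same construction can be carried out numerically: for each fixed ν̄_{I}, scan ν̄_{E} for sign changes of Φ_{E}(ν̄_{E}, ν̄_{I}) − ν̄_{E}; each sign change brackets a point on the excitatory nullcline. The sigmoid Φ_{E} below is a generic stand-in chosen only to illustrate the method; the actual gain functions come from the network model.

```python
import numpy as np

def Phi_E(nu_E, nu_I, gain=0.3, thresh=10.0, nu_max=100.0):
    """Generic sigmoidal gain function (illustrative, not the paper's)."""
    return nu_max / (1.0 + np.exp(-gain * (nu_E - nu_I - thresh)))

def nullcline_points(nu_I, nu_grid):
    """Roots of Phi_E(nu, nu_I) - nu = 0, i.e., excitatory-nullcline points."""
    f = Phi_E(nu_grid, nu_I) - nu_grid
    brackets = np.nonzero(np.diff(np.sign(f)))[0]
    roots = []
    for k in brackets:
        lo, hi = nu_grid[k], nu_grid[k + 1]
        for _ in range(50):                       # bisection refinement
            mid = 0.5 * (lo + hi)
            if (Phi_E(lo, nu_I) - lo) * (Phi_E(mid, nu_I) - mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

grid = np.linspace(0.0, 120.0, 2001)
roots = nullcline_points(nu_I=0.0, nu_grid=grid)
```

Repeating this over a range of ν̄_{I} values traces out the full nullcline in the (ν̄_{E}, ν̄_{I}) plane.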
The excitatory nullcline in Fig. 3 C has the following interpretation: for small enough inhibition the network can either fire stably at high rate (the negatively sloped region of the excitatory nullcline) or be completely silent (the ν̄_{I} axis). Between those two extremes is a threshold. That threshold, which lies on the region of the nullcline with positive slope, represents an unstable equilibrium at fixed inhibitory drive. These differences in stability naturally divide the excitatory nullcline: the region with positive slope is an unstable branch; the regions with negative slope, including the ν̄_{I} axis (which is considered to have a slope of −∞), are stable branches.
Construction of the inhibitory nullcline is also done graphically: again we plot Φ_{I}(ν̄_{E}, ν̄_{I}) versus ν̄_{I} for various values of ν̄_{E} and look for intersections with the line Φ_{I} = ν̄_{I} (Fig. 3 B). These intersections correspond to points on the inhibitory nullcline, which is drawn in black in the two-dimensional (ν̄_{E}, ν̄_{I}) plane to the right of the gain curves (Fig. 3 C). In contrast to the excitatory one, which has both stable and unstable branches, the inhibitory nullcline has a single branch that is everywhere stable at fixed ν̄_{E}.
Construction of the nullclines in the other two regimes is a straightforward extension of the above method. We thus turn our attention to how those nullclines affect the stability and location of firing rate equilibria. We begin with regime 1, in which no cells are within one EPSP of threshold.
In regime 1 we find three intersections between the excitatory and inhibitory nullclines, and thus three equilibria (Fig. 4 A). A necessary condition for the local stability of an equilibrium is that the inhibitory nullcline intersect the excitatory one from below (Rinzel and Ermentrout 1998) (see also the section stability of low firing rate equilibria). Consequently, the lower intersection on the unstable branch (marked “U” in Fig. 4 A) is unstable. There are thus two possible locally stable equilibria in this figure, one at zero firing rate (marked “S,” meaning absolutely stable because there are no endogenously active cells to ignite activity once the network falls silent) and the other on the rightmost branch of the excitatory nullcline (marked “M” for metastable, meaning the lifetime of the equilibrium is finite). The rightmost branch corresponds to high firing rate, so neither of these equilibria is at low, but nonzero, firing rate. Consequently, this configuration of nullclines cannot generate the low firing rates observed in vivo.
Is it possible to achieve a locally stable equilibrium at low firing rate in regime 1? Reexamining Fig. 4 A, we see that such an equilibrium could be achieved by bending the inhibitory nullcline up so that it intersects the unstable branch of the excitatory nullcline from below (Fig. 4 B; equilibrium marked M). Although such an intersection is possible, it occurs in an extremely restricted parameter regime: small changes in either network or single neuron parameters would shift the equilibrium to high firing rate or eliminate it altogether. This problem is especially severe in high connectivity networks where the nullclines are straight, as shown in Fig. 4 C (van Vreeswijk and Sompolinsky 1996, 1998). Moreover, even if the equilibrium does exist and is locally stable, it is only metastable: relatively small downward fluctuations in firing rate can drive the network to the nearby equilibrium at zero firing rate, where it would remain forever.
The reason it is difficult to achieve robust, low firing rates in regime 1 is that there is a gap between the origin of the (ν̄_{E}, ν̄_{I}) plane and the unstable branch of the excitatory nullcline, as shown in Fig. 4, A–C. The gap arises because neurons cannot fire below 0 Hz. This lower bound effectively cuts off the bottom portion of the excitatory nullclines, as seen in the nullcline construction, Fig. 3, A and C. Thus it is the lower bound on firing rate that ultimately constrains the nullclines to have the shapes depicted in Fig. 4, A–C. Given those shapes, simple geometrical arguments determine the location and stability of the firing rate equilibria.
In regime 2, in which some neurons have their resting membrane potential within one EPSP of threshold, a single action potential can cause a chain reaction that ignites the network. Consequently, an arbitrarily small excitatory firing rate is sufficient to produce network activity. This eliminates the gap in Fig. 4, A–C, resulting in the nullclines illustrated in Fig. 4 D. Figure 4 D shows a low firing rate equilibrium (marked M, again for metastable) that does not require the nullclines to have strong curvature, so it can exist even in high connectivity networks. However, the parameter range in which the equilibrium is stable is still relatively narrow: small changes in single neuron properties can cause resting membrane potentials to shift downward so that none of the cells are within one EPSP of threshold, or upward so that some cells cross threshold and become endogenously active. In addition, because fluctuations can cause a drop to zero firing rate, where the network will remain forever, the equilibrium is metastable. Again, this effect can be traced to the fact that neurons cannot fire below 0 Hz.
The nullclines for regime 3, in which some cells are endogenously active, are presented in Fig. 4 E. The addition of endogenously active cells eliminates the equilibrium along the ν̄_{I} axis, transforming the excitatory nullcline from two distinct curves into one continuous one. This transformation allows a robust, globally stable, low firing rate equilibrium at the intersection between the excitatory and inhibitory nullclines (marked S in Fig. 4 E). The corresponding set of nullclines in the high connectivity regime is shown in Fig. 4 F.
The existence of a robust, stable low firing rate equilibrium when endogenously active cells are present, and the absence of such an equilibrium when they are not, leads to the following prediction: an isolated network that fires steadily at low rate for an infinitely long time must contain endogenously active cells. This does not mean that such cells are absolutely essential for finite-lifetime, low firing rate states; long-lived metastable states can exist in networks where all cells have their resting membrane potential below threshold. However, the parameter regime for this is narrow, and, as we will see below, it vanishes altogether when spike-frequency adaptation is taken into account.
These conclusions apply whether endogenous activity arises through an intrinsic membrane property (e.g., a persistent inward current), noise-driven voltage fluctuations, or any other mechanism. They also apply to networks that receive external input. For example, consider a network that has no endogenously active cells, but instead receives sufficient external input that some cells fire without input from neurons within the network. The existence of these pseudo-endogenously active cells places the network in regime 3, which allows a robust low firing rate equilibrium to exist. If, on the other hand, the same network receives external input that is too weak to cause any neurons to fire, then a robust low firing rate equilibrium is not possible.
EFFECT OF ADAPTATION ON FIRING PATTERNS.
So far we have assumed that the gain curves, and thus the nullclines, are static. In fact, this is not the case for real neurons. Because of spike-frequency and/or synaptic adaptation, nullclines are dynamic objects that depend on the history of activity, not just the activity at a single point in time. To incorporate this history into our analysis, we assume that neurons adapt slowly compared with the time it takes a network to reach equilibrium. Consequently, at any instant in time the equilibrium firing rates occur at the intersection of the excitatory and inhibitory nullclines, just as they did above. However, the equilibria are not fixed: adaptation causes slow changes in the nullclines, and thus in the equilibrium firing rates.
This process, slow changes in the level of adaptation accompanied by rapid relaxation to a local firing rate equilibrium, can have two distinct outcomes. One is that the firing patterns change very little: the firing rates may simply shift slightly, or, perhaps, oscillate slowly as adaptation affects the equilibrium and vice versa. The other is that adaptation-induced shifts in firing rate may be large enough that firing rate equilibria are created and destroyed. This much more dramatic effect occurs as follows. Consider a network that starts in a state characterized by the nullclines given in Fig. 5 A. The black dot at the intersection of the excitatory and inhibitory nullclines corresponds to a firing rate equilibrium at a fixed level of adaptation. As the level of adaptation increases, the nullclines shift. Small shifts lead to a bistable state (Fig. 5 B) but produce only a small change in firing rate. With larger shifts, however, the nullclines pull apart (Fig. 5 C), and bistability gives way to a single stable state at zero firing rate. When this happens the network firing rate crashes to zero, at which point the level of adaptation starts to decrease. When the adaptation has dropped enough that the equilibrium at zero firing rate disappears (Fig. 5 A via Fig. 5 B), firing resumes and a new cycle starts.
The second behavior, which results in bursting, is very different from the first, which is characterized by a shift in firing rate and/or slow oscillations. Because it is the endogenously active cells that prohibit an equilibrium at zero firing rate, bursting requires the effective elimination of these cells. Because such elimination cannot happen if the adaptation is purely synaptic, synaptic adaptation alone cannot cause the periodic crashes to zero firing rate that are characteristic of bursting: even with total elimination of synaptic coupling, firing rates remain finite because of the existence of endogenously active cells (but see O'Donovan et al. 1998 for an alternate view in the context of a reduced firing rate model). Periodic transitions between low firing rates (involving only the endogenously active cells) and high rates (involving most of the cells in the network) may be driven by synaptic adaptation, but this kind of bursting is not, to our knowledge, typically observed.
In contrast to synaptic adaptation, spike-frequency adaptation (adaptation that results in a decrease in a neuron's firing rate during repetitive firing; see methods) can introduce a hyperpolarizing current sufficient to temporarily eliminate endogenously active cells, ultimately leading to bursting. Thus, in the remainder of this paper, the only form of adaptation we consider is spike-frequency adaptation.
The probability of spike-frequency adaptation eliminating endogenously active cells is highest if there are few such cells to begin with. Consequently, we expect a transition from steady firing to bursting as the fraction of endogenously active cells decreases. In addition, spike-frequency adaptation should eliminate long-lived metastable states. This is because those states rely on the existence of neurons close to threshold, but neurons that exhibit spike-frequency adaptation experience a drop in their resting membrane potential during repetitive firing, and thus are pushed well below threshold. These two observations lead to the following prediction: at a fixed, but nonzero, level of spike-frequency adaptation, a network fires steadily when there are many endogenously active cells, makes a sudden transition to bursting when the number of endogenously active cells falls below a threshold, and makes a second sudden transition to silence when all endogenously active cells disappear.
To test this prediction in a simplified setting, we investigated a firing rate model based on the Wilson and Cowan equations but augmented by spike-frequency adaptation, Eq. 15 of methods. In this model we regulate neuronal excitability with a single variable, θ_{max}, which can be thought of as the amount of constant depolarizing current that is injected into each neuron. As long as θ_{max} > 0, the fraction of endogenously active cells increases monotonically with θ_{max}; when θ_{max} = 0 the fraction of endogenously active cells vanishes; and as θ_{max} becomes negative, neurons are pushed below the threshold for the emission of an action potential. The behavior of the model is shown in Fig. 6. Figure 6 A confirms that θ_{max}, and thus the fraction of endogenously active cells, controls firing patterns: the network fires steadily when θ_{max} is large (many endogenously active cells are present), makes a transition to bursting when θ_{max} falls below a threshold, and becomes silent when θ_{max} drops below zero and endogenously active cells vanish from the network. Figure 6 B summarizes the behavior of the model in the bursting regime: the mean excitatory firing rate, ν̄_{E}, periodically makes sudden jumps in firing rate, whereas the mean level of spike-frequency adaptation for the excitatory population, denoted G_{E}, increases slowly when the network is active and decreases slowly when the network is silent. Bifurcation diagrams showing the equilibrium firing rate as a function of the level of spike-frequency adaptation (the heavy black curve) and the G_{E} nullclines (the heavy gray line) are given in Fig. 6, C–E for three regimes: silence (C), bursting (D), and steady firing (E). The thin, gray dashed path in the bifurcation diagrams indicates the trajectory in firing rate/spike-frequency adaptation space. It is the sudden jumps in this trajectory, at the “knees” of the heavy black curve, that produce bursting.
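The scenario just described, fast relaxation to a firing rate equilibrium plus slow creation and destruction of equilibria by adaptation, can be reproduced with a minimal relaxation sketch. The model below is not Eq. 15: inhibition is omitted, the sigmoid gain and all parameters are invented for the example, and theta merely plays the role of θ_{max}. It nevertheless exhibits the same three regimes: bursting at intermediate drive, steady firing at strong drive, silence at weak drive.

```python
import numpy as np

# Minimal rate model with slow adaptation (NOT Eq. 15 of methods; illustrative only).
def simulate(theta, t_max=5.0, dt=0.001, tau=0.01, tau_g=0.5):
    e, g = 0.0, 0.0
    trace = []
    for _ in range(int(t_max / dt)):
        gain = 1.0 / (1.0 + np.exp(-10.0 * (e - g + theta - 0.5)))
        e += dt * (-e + gain) / tau   # fast firing rate dynamics
        g += dt * (e - g) / tau_g     # slow adaptation tracks the rate
        trace.append(e)
    return np.array(trace)

def n_onsets(trace, level=0.5):
    """Count upward crossings of `level` (burst onsets)."""
    above = trace > level
    return int(np.sum(~above[:-1] & above[1:]))

bursting = n_onsets(simulate(theta=0.3))   # intermediate drive: repetitive bursts
steady = n_onsets(simulate(theta=0.8))     # strong drive: ignites once, fires steadily
silent = n_onsets(simulate(theta=-0.2))    # weak drive: never ignites
```

At intermediate theta the fast subsystem is bistable over a band of adaptation levels: g climbs during the active phase until the high-rate branch disappears at a knee, the rate crashes, g decays until the low branch disappears, and firing resumes, just as in Fig. 6, C–E.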
STABILITY OF LOW FIRING RATE EQUILIBRIA.
Although endogenously active cells are necessary for the existence of a stable, low firing rate equilibrium, such cells guarantee neither that the equilibrium is stable nor that it occurs at low rate. As parameters change, a low firing rate equilibrium can become unstable via a Hopf bifurcation (Marsden and McCracken 1976), or it can be driven to high firing rate. In this section we investigate stability, then discuss conditions under which a network is both stable and fires at low rate.
The stability of a firing rate equilibrium can be determined by linearizing around a fixed point of Eq. 16 and solving the resulting eigenvalue equation (Rinzel and Ermentrout 1998). Consider first the case without adaptation. In that case, the two eigenvalues of the linearized firing rate equation, denoted λ_{±}, are

λ_{±} = ½[T ± (T² − 4D)^{1/2}]  (17)

where T and D are the trace and determinant of the matrix of the linearized dynamics

[(Φ_{E,E} − 1)/τ_{E}    Φ_{E,I}/τ_{E}]
[Φ_{I,E}/τ_{I}    (Φ_{I,I} − 1)/τ_{I}]

and Φ_{L,M} ≡ ∂Φ_{L}/∂ν̄_{M}, evaluated at the equilibrium.
For an equilibrium to be stable both λ_{+} and λ_{−} must be negative. This requires that the determinant of the above matrix, D, be positive and the trace, T, negative. After minor manipulations of the determinant, we find that to satisfy the first condition, D > 0, we must have

Φ_{I,E}/(1 − Φ_{I,I}) > (1 − Φ_{E,E})/Φ_{E,I}  (18)

Because Φ_{E,I} < 0 (inhibition lowers the excitatory rate) and 1 − Φ_{I,I} > 0, the left side of Eq. 18 is the slope of the inhibitory nullcline and the right side the slope of the excitatory one; Eq. 18 is thus the geometric condition that the inhibitory nullcline intersect the excitatory one from below.
The second condition for stability, T < 0, may be written

(1 − Φ_{I,I})/τ_{I} − (Φ_{E,E} − 1)/τ_{E} > 0  (19)

Because Φ_{I,I} < 0, the first term is always positive; violating Eq. 19 therefore requires sufficiently strong recurrent excitation, Φ_{E,E} > 1, or a sufficiently slow inhibitory population.
How does spike-frequency adaptation modify this picture? Although it would be straightforward to formally incorporate such adaptation into the above stability analysis, the resulting eigenvalue equation is not especially informative. However, the qualitative effects of spike-frequency adaptation are relatively straightforward to understand. This kind of adaptation results in negative feedback: increasing firing rates raises the level of spike-frequency adaptation, which leads to a reduction in firing rates; decreasing firing rates lowers the level of spike-frequency adaptation, which leads to an increase in firing rates. Because the negative feedback enters with a delay, spike-frequency adaptation makes a network more likely to oscillate. Its primary effect, then, is to change the threshold for oscillations; qualitatively, the strict inequality that guarantees stability, Eq. 19, should be replaced by an inequality of the form (1 − Φ_{I,I})/τ_{I} − (Φ_{E,E} − 1)/τ_{E} > ε, where ε depends primarily on the level of spike-frequency adaptation.
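For concreteness, the trace and determinant test can be checked directly against the eigenvalues of the linearized dynamics. The Φ_{L,M} values below are illustrative partial derivatives, not quantities computed from the gain curves of methods.

```python
import numpy as np

# Stability of a rate equilibrium (no adaptation) via the linearization
# described in the text. phi_XY stands for the partial derivative of Phi_X
# with respect to the mean rate of population Y at the equilibrium.
def eigenvalues(phi_EE, phi_EI, phi_IE, phi_II, tau_E, tau_I):
    J = np.array([[(phi_EE - 1.0) / tau_E, phi_EI / tau_E],
                  [phi_IE / tau_I, (phi_II - 1.0) / tau_I]])
    return np.linalg.eigvals(J), J

# Illustrative values: recurrent excitation above 1, strong cross coupling,
# phi_EI < 0 because inhibitory input lowers the excitatory rate.
lams, J = eigenvalues(phi_EE=1.5, phi_EI=-2.0, phi_IE=2.0, phi_II=-0.5,
                      tau_E=0.005, tau_I=0.005)
T, D = np.trace(J), np.linalg.det(J)
stable = (D > 0) and (T < 0)   # both eigenvalues then have negative real part
```

Scanning such values shows the two failure modes discussed above: making phi_EE larger or tau_I slower drives T through zero (oscillatory instability), while weakening the cross coupling drives D negative (a saddle, the geometry of the nullclines crossing the wrong way).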
Coupling affects overall firing rate as well as stability. We can determine the relation between coupling and firing rate by first categorizing its effect on the nullclines and then examining how shifts in the nullclines affect firing rate. This is a straightforward application of previous analysis [e.g., increasing I-I coupling lowers the inhibitory nullcline in the (ν̄_{E}, ν̄_{I}) plane, which in turn increases the excitatory firing rate]. The results, however, depend on the location of the equilibrium; i.e., whether it occurs on the unstable (positively sloped) branch of the excitatory nullcline, as in Fig. 4 E, or on the stable (negatively sloped) branch, as would occur if the inhibitory nullcline in Fig. 4 E were raised slightly. In appendix we show that the excitatory nullcline has its minimum at a very small value of the excitatory firing rate. The precise value depends on single neuron and network properties, but for high connectivity networks typical of the mammalian cortex, we estimate in appendix that the minimum occurs below ∼0.1 Hz. This implies that if excitatory firing rates above ∼0.1 Hz are observed in a network firing at low rates, then the equilibrium must be on the unstable branch of the excitatory nullcline. In fact, networks exhibiting purely inhibitory activity are unusual: whole cell recordings in vivo (Calabresi et al. 1990; Metherate and Ashe 1993; Volgushev et al. 1992) and in our cultures (Latham et al. 2000) show a preponderance of EPSPs. In addition, there is recent experimental evidence indicating that the equilibrium in cortex occurs on the unstable branch (Tsodyks et al. 1997). Thus here and in the remainder of this paper we assume that all low firing rate equilibria occur on the unstable branch of the excitatory nullcline, as in Fig. 4 E.
With the location of the equilibrium set, it is straightforward to show the effect of coupling on firing rate. That effect can be summarized as follows: increasing same-type coupling (E-E or I-I) raises firing rates, increasing opposite-type coupling (E-I or I-E) lowers firing rates. Combining these effects with the above results on stability, we see that the existence of a low firing rate equilibrium that does not oscillate requires low E-E coupling, high opposite-type coupling, and I-I coupling in an intermediate range: not so high that it drives up firing rates, but not so low that it causes oscillations.
SUMMARY OF THEORETICAL ANALYSIS.
The tool we used to analyze network behavior was a simplified firing rate model, in which the dynamics of networks containing greater than 10^{4} neurons were reduced to a small number of equations describing average quantities. Rather than using a particular firing rate model, we used geometrical arguments to derive generic network behavior. Our primary assumptions were that, on average, excitatory input to a neuron increases its firing rate, inhibitory input decreases it, and neurons exhibit spike-frequency adaptation. With these assumptions, we were able to show that 1) endogenous activity is necessary for low firing rates, 2) decreasing the fraction of endogenously active cells causes a network to burst, and 3) firing patterns, especially transitions between steady firing and oscillations, are dependent on network connectivity. A specific realization of a simplified firing rate model corroborated these predictions.
Simulations
To test the predictions made in the previous section, we performed simulations with large networks of spiking model neurons, as described in methods. We examined 1) the effect of the distribution of applied depolarizing currents, I_{a}, on firing patterns and 2) how relative connection strengths among excitatory and inhibitory populations affect network behavior.
We used two networks, denoted A and B. These networks differed primarily in their connectivity and the sizes of the PSPs of their constituent neurons. Network A was designed to simulate cortical networks. It contained 8,000 excitatory and 2,000 inhibitory neurons, and each neuron connected, on average, to 1,000 others. Connectivity was infinite range (meaning the probability of 2 neurons connecting did not depend on the distance between them; see methods), and PSPs were on the order of 1 mV. Network B was chosen to be consistent with our experiments (Latham et al. 2000) and to test whether the predictions made for infinite range connectivity apply also to local connectivity, for which the connection probability depends on distance between neurons. It had fewer connections per neuron than Network A (200 instead of 1,000), local rather than infinite range connectivity, larger PSPs (on the order of 5 mV rather than 1 mV), and a higher fraction of inhibitory neurons (30% instead of 20%). A complete list of the parameters for each network is given in Tables 1 and 2 of methods.
Networks A and B behaved similarly, so in the bulk of this section we concentrate on Network A. At the end of the section we summarize the results of Network B.
ENDOGENOUS ACTIVITY.
For networks without spike-frequency adaptation, the theoretical analysis presented above led to the following predictions. If endogenously active cells are present, then a stable, low firing rate equilibrium is possible. If endogenously active cells are absent, a low firing rate equilibrium is still possible. However, it is metastable; the network will eventually decay to a zero firing rate state. In this case there is a further distinction between networks in which some cells have their resting membrane potential within one EPSP of threshold and networks in which none do. This distinction has primarily quantitative bearing on firing patterns. In networks containing cells that are one EPSP from threshold, a single action potential can ignite activity in a silent network, so finite firing rate states tend to be long lived. In networks with no cells within one EPSP of threshold, finite firing rate states tend to have shorter lifetimes.
Networks with spike-frequency adaptation differ from those without it in two ways. First, as discussed in detail in the previous section, networks with spike-frequency adaptation exhibit a transition from steady firing to bursting as the fraction of endogenously active cells decreases. Second, in networks containing cells that are one EPSP from threshold but no endogenously active ones, the lifetime of the metastable state is extremely short, on the order of the spike-frequency adaptation timescale, which is the inverse of the mean network firing rate.
To test this set of predictions, and especially to examine how networks with and without spike-frequency adaptation differed, we performed simulations in which we varied the distribution of applied current, I_{a}, both with and without spike-frequency adaptation. We used a boxcar distribution in which I_{a} was restricted to values between 0 and Î_{max}. This distribution, which is completely determined by Î_{max}, may be divided into three regimes. These correspond to the three regimes just discussed and to the ones listed in Theory.
1) Î_{max} < Î_{E1}, where Î_{E1} is the normalized applied current above which the resting membrane potential is within one EPSP of threshold. None of the cells have their resting membrane potential within one EPSP of the threshold for the generation of an action potential, which implies that none are endogenously active.
2) Î_{E1} < Î_{max} < Î_{thresh}, where Î_{thresh} is the threshold for endogenous activity. Some cells have their resting potential within one EPSP of threshold, but none are endogenously active.
3) Î_{thresh} < Î_{max}. Endogenously active cells are present.
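The three regimes can be read straight off the boxcar parameter. In the sketch below, Î_{E1} = 3.43 and Î_{thresh} = 3.5 are stand-in values, chosen only to be consistent with the 2% range quoted for regime 2 in the simulations; they are not the values used in methods.

```python
import numpy as np

# Hypothetical thresholds in normalized current units (illustrative only).
I_HAT_E1 = 3.43      # above this, resting V is within one EPSP of threshold
I_HAT_THRESH = 3.5   # above this, a cell is endogenously active

def regime(i_max):
    """Regime 1, 2, or 3 for a boxcar distribution of I_a on [0, i_max]."""
    if i_max < I_HAT_E1:
        return 1
    if i_max < I_HAT_THRESH:
        return 2
    return 3

def endogenous_fraction(i_max, n=100_000, seed=0):
    """Fraction of cells drawn from the boxcar that are endogenously active."""
    rng = np.random.default_rng(seed)
    i_a = rng.uniform(0.0, i_max, size=n)
    return np.mean(i_a > I_HAT_THRESH)
```

For a boxcar, the endogenously active fraction in regime 3 is simply (Î_{max} − Î_{thresh})/Î_{max}, which is why sweeping Î_{max} sweeps the fraction of endogenously active cells.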
For values of Î_{max} ranging from below Î_{E1} to above Î_{thresh} we ran simulations for 100 s and determined the excitatory and inhibitory firing rates as follows. For networks that fired for the full 100 s of the simulations, firing rates were simply averaged over neurons and time. For networks that burst, the averages were again over neurons, but only over the active phase of the burst. For networks that crashed to zero firing rate, we extrapolated to infinite time and assigned a value of zero to the firing rate.
Figure 7 A shows a plot of firing rate versus Î_{max} for networks whose neurons do not exhibit spike-frequency adaptation. As predicted, the networks fired steadily for the full 100 s of the simulations when Î_{max} exceeded Î_{thresh} (which placed the networks in regime 3, endogenously active cells present). In addition, long-lived (>100 s) metastable states were observed both in regime 2, where some cells are within one EPSP of threshold, and regime 1, where no cells are within one EPSP of threshold. Also as predicted, the parameter regime that supported these long-lived metastable states was small; it corresponded to a range in maximum applied current of 0.07/R_{cell}, or 2.3 pA for a 30 MΩ cell. This is only 2% (0.07/Î_{thresh}) of the rheobase current for a model cell whose resting membrane potential is 15 mV below threshold.
With spike-frequency adaptation present (Fig. 7 B), network behavior differed in two respects. First, the distinction between regimes 1 and 2 became irrelevant; in both regimes the lifetimes of the metastable states dropped considerably, to <5 s (data not shown). This was much shorter than the metastable lifetimes observed in Fig. 7 A, which were >100 s. Second, for Î_{max} above Î_{thresh} the network burst. Bursting occurred because there is an intermediate, bistable state (e.g., the nullclines illustrated in Fig. 5 B) that supports both steady firing and silence. Transitions between steady firing and silence occur when the bistability gives way to a single stable state, as indicated by the nullclines in Fig. 5, A and C. The theoretical prediction is that these transitions should be networkwide; this is confirmed in Fig. 8, which shows spike rasters for 200 neurons along with a single-neuron trace of membrane voltage.
Comparing Fig. 7 B with Fig. 6 A, we see that the network simulations and the reduced firing rate model, Eq. 15, produced similar results. There was, however, a difference in behavior at the transition from bursting to steady firing: in the network simulations the firing rate varied smoothly at this transition, whereas in the reduced model the firing rate exhibited a downward jump. Thus, although the particular realization of the simplified firing rate model captured the main qualitative result (that there is a transition from steady firing to bursting to silence as the fraction of endogenously active cells decreases), it did not capture all the quantitative details. This is not surprising, because we made no attempt to tune the parameters of the reduced model to match the network simulations.
The boxcar distribution of applied current used in the above network simulations is convenient because it produces a sharp transition between networks with and without a sufficient number of endogenously active cells to ignite network activity. However, it raises the possibility that the agreement we saw between the predictions of the Wilson and Cowan model and the network simulations was an artifact of a current distribution with a sharp cutoff. To test this we performed two additional sets of simulations. In one we used a smoother distribution of applied currents than the boxcar used to produce the results summarized in Fig. 7: the distribution was flat for 0 ≤ Î_{a} ≤ 3.5 and had a Gaussian tail for Î_{a} > 3.5. In the other, we used a boxcar distribution but introduced noise-driven fluctuations in the membrane voltage capable of generating endogenous activity. The two sets of simulations produced essentially identical results, and those results were largely in agreement with the sharp cutoff/no noise simulations. When there was no spike-frequency adaptation, as the width of the Gaussian tail or the noise increased we saw a transition from networks in which essentially only the endogenously active cells fired to networks in which most of the neurons were active. When spike-frequency adaptation was present, we observed transitions from silence to bursting to steady firing with increasing tail width or noise. The only new phenomenon was a narrow regime in which the network displayed irregular bursts. These bursts were associated with the small number of endogenously active cells, either in the tail of the distribution or produced by a relatively low level of noise. Those cells caused random transitions between the equilibria labeled “S” and “M” in Fig. 4 B.
CONNECTIVITY.
The theoretical predictions concerning the effect of coupling on stability and firing rate were 1) decreasing I-I and increasing E-E coupling both lead to oscillations in firing rate and 2) increasing same-type coupling (E-E or I-I) causes mean firing rates to go up, whereas increasing opposite-type coupling (E-I or I-E) causes mean firing rates to go down.
To test these predictions we performed simulations in which we varied the relative number of connections among the excitatory and inhibitory populations. This was done by adjusting the bias parameters, B_{E} and B_{I} (see methods), defined by B_{E} ≡ P_{IE}^{∞}/P_{EE}^{∞} and B_{I} ≡ P_{II}^{∞}/P_{EI}^{∞}, where P_{LM}^{∞} is the probability of making a connection from a neuron of type M to a neuron of type L. Increasing B_{E} increases E-I (excitatory to inhibitory) coupling and decreases E-E coupling; increasing B_{I} increases I-I coupling and decreases I-E coupling. Both occur without changing the mean number of connections per neuron.
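One simple way to realize the bias parameters while holding the mean number of connections per neuron fixed is to normalize by the expected out-degree. This construction is an assumption for illustration (the population sizes match Network A, but the normalization scheme is not necessarily the one in methods).

```python
# Connection probabilities from the bias parameters. P[L + M] is the
# probability of a connection from a type-M neuron to a type-L neuron.
# The constraint is that each neuron's expected out-degree stays K.
def connection_probs(B_E, B_I, K=1000, N_E=8000, N_I=2000):
    # From an excitatory neuron: N_E*P_EE + N_I*P_IE = K, with P_IE = B_E * P_EE.
    P_EE = K / (N_E + N_I * B_E)
    P_IE = B_E * P_EE
    # From an inhibitory neuron: N_E*P_EI + N_I*P_II = K, with P_II = B_I * P_EI.
    P_EI = K / (N_E + N_I * B_I)
    P_II = B_I * P_EI
    return {"EE": P_EE, "IE": P_IE, "EI": P_EI, "II": P_II}

p1 = connection_probs(B_E=1.0, B_I=1.0)
p2 = connection_probs(B_E=1.0, B_I=2.4)  # stronger I-I, weaker I-to-E, same out-degree
```

Raising B_{I} in this scheme trades connections from inhibitory neurons onto excitatory targets for connections onto inhibitory targets, leaving the total number of synapses unchanged.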
In terms of the bias parameters, the above predictions imply that 1) increasing B_{I} and decreasing B_{E} cause firing rates to go up and 2) decreasing both B_{E} and B_{I} leads to oscillations. In addition, because spike-frequency adaptation is most pronounced at high firing rates, we should see a transition to bursting when the firing rate is high enough.
Mean excitatory and inhibitory firing rates are plotted in Fig. 9 versus time for a range of bias parameters. These plots confirm the above predictions: increasing B_{I} and decreasing B_{E} both produced higher firing rates, networks with small B_{I} oscillated, and high firing rates were accompanied by bursting. Bursting, however, was not guaranteed: we increased B_{I} to 2.4 with B_{E} fixed at 1.0 without observing bursting, even though the excitatory firing rate at these parameters approached 40 Hz.
For the bursting network, Fig. 9 F, we plotted in gray the slow potassium conductance averaged over excitatory neurons. This quantity, which we denote ḡ_{K−Ca,E}, corresponds to the mean level of excitatory spike-frequency adaptation (ḡ_{K−Ca,E} ≡ N_{E}^{−1} Σ_{i∈E} ĝ_{K−Ca,i}, where the sum is over excitatory neurons; see Eq. 5c); ḡ_{K−Ca,E} is analogous to G_{E} in the reduced firing rate model (Eq. 15c). Like G_{E} in Fig. 6 B, ḡ_{K−Ca,E} increases when the network is active and decreases when the network is silent.
The transition from oscillations to steady firing was gradual and never quite complete; even at high values of B_{I}, where the fixed points should be attracting, there was an oscillatory component to the firing rates. However, the amplitude of the oscillations decreased when the number of neurons increased: in Fig. 9, B, C, E, and F, the variance in the firing rate (for Fig. 9 F the variance only during bursts) dropped by a factor of ∼2 when we doubled the number of neurons (data not shown). This indicates that the oscillations at large B_{I} result from the finite size of the network, which allows fluctuations that are converted to oscillations by the dynamics. In contrast, doubling the number of neurons had virtually no effect on the variance in the firing rate when the inhibitory-inhibitory coupling was low enough to produce oscillations, as in Fig. 9, A and D.
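The factor-of-two drop is what one expects if the residual fluctuations are finite-size effects: for asynchronous, effectively independent firing, the variance of the population-averaged rate scales as 1/N, so doubling N halves it. A toy check with independent Poisson neurons (rates, bin sizes, and counts invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

def rate_variance(n_neurons, rate=5.0, bin_ms=100.0, n_bins=2000):
    # Independent Poisson spike counts per neuron per time bin.
    counts = rng.poisson(rate * bin_ms / 1000.0, size=(n_bins, n_neurons))
    # Population-averaged rate in each bin, converted to Hz.
    pop_rate = counts.mean(axis=1) * 1000.0 / bin_ms
    return pop_rate.var()

v1 = rate_variance(1000)
v2 = rate_variance(2000)  # doubled population: variance should roughly halve
```

Genuine network oscillations, by contrast, are coherent across neurons, so their amplitude survives averaging and is insensitive to N, which is the signature seen at low B_{I}.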
Finally, to ensure that the oscillations in Fig. 9 A were not caused by spike-frequency adaptation, we performed simulations with the same parameters except that spike-frequency adaptation was eliminated (δĝ_{K−Ca} was set to zero; see methods). We found a decrease in the oscillation period, from 180 to 120 ms, but no other change in the firing pattern.
LOCAL CONNECTIVITY.
The above simulations were repeated with parameters corresponding to Network B, in which the connectivity is local rather than infinite range (meaning neurons that are close together are more likely to connect than those that are far apart), the number of connections per neuron is smaller than in Network A, and the PSPs are larger (see Tables 1 and 2). The results are summarized in Figs. 10 and 11.
Figure 10 shows network firing rate versus the distribution of applied currents, which, as in Fig. 7, is parameterized by Î_{max}. The results were substantially the same as for the network with infinite range connectivity: with no spike-frequency adaptation (Fig. 10 A), the network fired for the full 100 s of the simulation for a range of Î_{max} extending somewhat below Î_{E1}. With spike-frequency adaptation present (Fig. 10 B), the network burst for Î_{max} close to, but slightly above, Î_{thresh}, and the lifetimes of the metastable state dropped considerably: for Î_{max} < Î_{thresh}, the lifetimes were <14 s (data not shown).
Mean excitatory and inhibitory firing rates are plotted in Fig. 11 versus time for a range of bias parameters. For the networks that burst (Fig. 11, B, C, and F), we plotted in gray the mean level of spike-frequency adaptation averaged over excitatory neurons, ḡ_{K−Ca,E}. Again the results were substantially the same as for the network with infinite range connectivity. There were, however, differences in the details. For example, we did not see regular oscillations for any of the examples shown: for all plots the variance in the firing rate dropped by a factor of ∼2 when we doubled the number of neurons, indicating that the observed fluctuations were caused by the finite size of the network. Oscillations did occur when we reduced I-I coupling, but not until B_{I} was below ∼0.6. In addition, the local connectivity network burst more easily than the infinite range one and the bursting was more irregular. Bursting was observed for B_{E} = 1.0 and B_{I} ≥ 1.0, whereas in Fig. 9 bursting was not observed at all for B_{E} = 1.0.
These plots indicate that, in spite of the differences between Networks A and B, the two networks generated strikingly similar results. This indicates that our model is robust to changes in connectivity (especially infinite range vs. local) as well as the number of connections per neuron and PSP size.
SUMMARY OF SIMULATION RESULTS.
We performed simulations with two very different networks: an infinite range, high connectivity network with small PSPs, and a local, low connectivity network with large PSPs. We also examined a broad range of parameters, including the distribution of applied currents, spike-frequency adaptation, and network connectivity. The results of all simulations were as predicted: for networks without spike-frequency adaptation, we observed transitions from silence to steady firing as the fraction of endogenously active cells increased. For networks with spike-frequency adaptation, there was an intermediate regime in which the networks burst. Finally, when the I-I coupling was strengthened, we observed transitions from oscillations to steady firing to bursting and an increase in firing rate.
DISCUSSION
We have investigated, both theoretically and through simulations with spiking model neurons, the intrinsic dynamics in large networks of neurons. The goal of this work was twofold: 1) to understand how dynamic interactions between excitatory and inhibitory neurons lead to stable, low firing rates, such as those widely observed in the mammalian central nervous system, and 2) to determine the conditions under which networks switch from steady firing to bursting.
An understanding of the mechanism for generating stable, low firing rates in large, isolated neuronal networks has been elusive (Abeles 1991; Amit and Treves 1989; Buhmann 1989; Treves and Amit 1989). The elusiveness stems from the difficulty in controlling the powerful recurrent excitation that exists in such networks. To compensate for this recurrent excitation, inhibitory feedback is required. To stabilize low firing rates, that feedback must be strong enough that the inhibitory firing rate is more sensitive to input from excitatory cells than the excitatory firing rate is. Or, in the language of dynamics, the inhibitory nullcline in average firing rate space must be steeper than the excitatory one at the equilibrium (e.g., Fig. 4E). We found that for isolated networks, the above condition occurs robustly at low firing rates only when endogenously active cells are present; without such cells, networks are either silent or fire at a high rate. This led to the following prediction: if low firing rates are observed in an isolated network, then that network must contain endogenously active cells. This prediction was confirmed by large-scale network simulations, which included the exploration of a broad range of parameters to ensure that the simulations were robust (appendix). It was also corroborated by experiments in cultured neuronal networks, as described in the accompanying paper (Latham et al. 2000).
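The slope condition can be illustrated with a minimal two-population rate model. The sketch below is our own toy example with a hypothetical threshold-linear gain and illustrative weights (it is not the model analyzed in the paper): the low-rate equilibrium created by the depolarizing drives h_E and h_I is stable precisely because the inhibitory nullcline is the steeper of the two at the crossing.

```python
import numpy as np

# Toy two-population rate model with threshold-linear gain; weights and drives
# are illustrative assumptions, not the parameters used in the paper.
w_EE, w_EI, w_IE, w_II = 2.0, 2.5, 2.0, 0.5
h_E, h_I = 1.0, 0.5  # net depolarizing drive, e.g., from endogenously active cells

# Low-rate equilibrium: solve (I - W) nu = h in the linear (suprathreshold) regime.
A = np.array([[1.0 - w_EE, w_EI],
              [-w_IE, 1.0 + w_II]])
nu_E, nu_I = np.linalg.solve(A, [h_E, h_I])

# Nullcline slopes in the (nu_E, nu_I) plane.
slope_E = (w_EE - 1.0) / w_EI    # excitatory nullcline
slope_I = w_IE / (1.0 + w_II)    # inhibitory nullcline

# Jacobian of d(nu)/dt = -nu + [W nu + h]_+ at the equilibrium; stability
# requires all eigenvalues to have negative real part.
J = np.array([[w_EE - 1.0, -w_EI],
              [w_IE, -(w_II + 1.0)]])
stable = bool(np.all(np.linalg.eigvals(J).real < 0.0))
```

Here slope_I ≈ 1.33 exceeds slope_E = 0.4 and the equilibrium (nu_E ≈ 0.07, nu_I ≈ 0.43) is stable; setting h_E = h_I = 0 leaves silence as the only equilibrium in this toy, mirroring the claim that networks without endogenous drive are either silent or fire at a high rate.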
Although endogenously active cells are necessary for the existence of a low firing rate equilibrium, they do not guarantee its stability. In particular, we found that a high level of spike-frequency adaptation leads to bursting: repetitive firing can introduce a hyperpolarizing current sufficient to temporarily eliminate endogenously active cells, resulting in a crash to zero firing rate; after the cells stop firing, the hyperpolarizing current decays and firing resumes. (Interestingly, our analysis implies that synaptic adaptation, because it does not affect the fraction of endogenously active cells, cannot produce bursting.) The probability of bursting is highest if there are few endogenously active cells to begin with, and for that reason the fraction of such cells plays a key role in determining firing patterns. Specifically, networks with no endogenously active cells are typically silent; at some finite fraction of endogenously active cells there is a transition to bursting; and at an even higher fraction there is a second transition to steady firing at a low rate. This scenario was also confirmed by network simulations, and it was corroborated by experiments in neuronal cell culture (Latham et al. 2000).
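The burst mechanism can be caricatured by a one-population rate model with a slow adaptation variable; all parameters below are illustrative assumptions, not values from the paper. A steep gain function makes the rate bistable, the slow variable a plays the role of the adaptation-induced hyperpolarizing current, and the result is relaxation-oscillation bursting:

```python
import math

# Caricature of adaptation-driven bursting (illustrative parameters only):
# a single mean rate nu with a slow hyperpolarizing adaptation variable a.
def simulate(T=400.0, dt=0.01, tau=1.0, tau_a=20.0, w=1.0, g=1.5, h=0.1):
    phi = lambda x: 1.0 / (1.0 + math.exp(-x / 0.05))  # steep gain -> bistability
    nu, a, trace = 0.0, 0.0, []
    for step in range(int(T / dt)):
        nu += dt * (-nu + phi(w * nu - g * a + h)) / tau  # fast rate dynamics
        a += dt * (nu - a) / tau_a                        # slow adaptation
        if step * dt > 100.0:                             # discard the transient
            trace.append(nu)
    return trace

trace = simulate()
```

During the active phase a builds up until it pulls the population below threshold and the rate crashes; a then decays until firing resumes, producing repeated bursts. Setting g = 0 (no adaptation) in this sketch gives steady firing instead.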
Although the fraction of endogenously active cells plays the dominant role in determining the primary firing patterns (silence, bursting, or steady firing), another parameter, network connectivity, influences secondary features of the firing patterns. Specifically, when inhibitory-inhibitory coupling becomes too weak or excitatory-excitatory coupling becomes too strong, a low firing rate equilibrium can be destabilized via a Hopf bifurcation. This leads to oscillations in firing rate that can occur in both the "steady" firing and the bursting regimes. In the latter case, the oscillations occur during the active phase of the burst.
These theoretical findings were based on the analysis of isolated networks with random, infinite-range connectivity. However, theoretical considerations indicate that they apply to networks that receive external input, and simulations indicate that they are valid for networks with local connectivity, in which the connection probability is a decreasing function of distance between neurons. The validity of the model for local connectivity suggests that these findings will hold for the more structured architecture that exists in the cortex.
Implications
Two firing patterns that are ubiquitous in the mammalian CNS are steady firing at low rates and rhythmic bursting. We have shown that, in isolated networks, both firing patterns are controlled largely by a single parameter, the fraction of endogenously active cells. As long as the fraction of such cells is above a threshold, steady firing is possible; below that threshold the network bursts. The threshold depends on the degree of spike-frequency adaptation, a property of neurons that has been shown to be modulatable [e.g., by neuromodulators (Burke and Hablitz 1996; Cole and Nicoll 1984; McCormick et al. 1993; Pedarzani and Storm 1993; Spain 1994)]. Thus the model described here both accounts for the low firing rates observed in networks throughout the mammalian CNS and provides a natural mechanism for switching between steady firing and bursting. A switch between these two patterns is necessary for behavior that is activated episodically, such as locomotion, scratching, and swallowing (Berkinblit et al. 1978; Kudo and Yamada 1987; Zoungrana et al. 1997).
Although endogenously active cells may account for the observed firing patterns in mammalian neuronal networks, they are not necessarily responsible for those firing patterns. This is because the brain is neither isolated nor composed of a single network. Instead, it receives sensory input and consists of distinct areas that interact through long-range connections. Because external input to a network can drive cells even when they are not receiving input from within the network (making those cells effectively endogenously active), it is possible that the low firing rates observed in cortex are generated by sensory input or, as has been proposed by Amit and Brunel (1997), by input from other brain areas. The source of the input does not affect our model, however; the model applies whether the input is external or endogenous.
The theory developed here provides a framework for understanding intrinsic firing patterns and their dynamics in large networks of neurons. It does more than simply explain observed firing patterns: by linking those patterns to basic network properties, it offers a way to understand how firing patterns could be modified by patterned external input. Such a link is critical for developing models of how computations are performed by real neuronal networks.
Acknowledgments
We thank E. Neale for providing Epon-embedded cultures containing the HRP-injected neurons used to measure axonal arborization and C. Del Negro, J. Rinzel, and M. Wiener for insightful discussions and comments on this manuscript.
Footnotes

Address for reprint requests: P. E. Latham, Dept. of Neurobiology, UCLA, Box 951763, Los Angeles, CA 900951763.

The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
 Copyright © 2000 The American Physiological Society
ROBUSTNESS TO VARIATION IN PARAMETERS
To exhaustively check the robustness of our model would require a complete exploration of the full parameter space, which is impractical for the 26-dimensional space that describes our model. Instead, we varied a selected subset of the parameters. Our starting point was a set of parameters taken from two networks, Networks A and B. Recall that most of the parameters of those networks are given in Tables 1 and 2; the parameters whose values are not listed there were given the following values: Î_{max} = 5.0, B_{E} = B_{I} = 1.0, and δĝ_{K−Ca} = 0. This set of parameters resulted in mean excitatory firing rates of 5.59 and 4.51 Hz for Networks A and B, respectively, and inhibitory rates of 5.59 and 4.46 Hz. The firing patterns were steady; i.e., there was no bursting and there were no oscillations. Starting with this initial set of parameters, the ones we varied to explore the robustness of our model were as follows: 1) the amplitudes of the excitatory and inhibitory postsynaptic potentials, V_{EPSP} and V_{IPSP}; 2) the fraction of inhibitory cells; 3) the voltage threshold, V_{t}; 4) the cell time constant, τ_{cell}; 5) the mean number of connections per neuron, K; 6) the number of neurons, N; 7) the distribution of applied current; and 8) for Network B (the one with local connectivity), the axonal spreads of the excitatory and inhibitory populations, σ_{E} and σ_{I}.
The first four items in this list had the strongest effect on firing rate: for both networks, a 50% increase in EPSP amplitude, a 50% decrease in IPSP amplitude, and a 25% decrease in the fraction of inhibitory cells all caused an increase in firing rate of ∼50%; increasing the distance between the nominal resting membrane potential and threshold, V_{t} − V_{r}, from 15 to 25 mV while keeping the fraction of endogenously active cells fixed resulted in a drop in firing rate of ∼50%; and doubling the time constant resulted in a drop in firing rate of ∼40%.
Increasing the mean number of connections per neuron, K, also had a moderately strong effect on firing rate: a 20% increase in K resulted in a decrease in firing rate of 12% for Network A and 14% for Network B. Somewhat surprisingly, increasing the amplitudes of both EPSPs and IPSPs did not have the same effect as increasing the number of connections: a 20% increase in V_{EPSP} and V_{IPSP} led to a 7% decrease in firing rate for Network A and a 4% increase in firing rate for Network B. The reason for the difference is as follows. Although increasing PSP amplitude and increasing connectivity have approximately the same effect on mean synaptic drive, the former has a larger effect on the variance. Because larger input variance leads to a higher firing rate, increasing PSP amplitude is more effective in causing a cell to fire than increasing connectivity. This is why firing rates were higher with a 20% increase in PSP amplitude than they were with a 20% increase in the number of connections.
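This mean-versus-variance argument can be made concrete with shot-noise (Campbell's theorem) scaling for Poisson synaptic input; the numbers below are hypothetical, chosen only to illustrate the scaling. The mean drive from K synapses of size V grows as K·V, while the input variance grows as K·V²:

```python
# Shot-noise scaling for K Poisson synapses with PSP size V (mV), presynaptic
# rate nu (Hz), and membrane time constant tau (s); numbers are hypothetical.
def drive_stats(K, V, nu=5.0, tau=0.02):
    mean = K * V * nu * tau       # mean depolarization scales as K * V
    var = K * V ** 2 * nu * tau   # variance scales as K * V**2 (Campbell's theorem)
    return mean, var

base = drive_stats(K=1000, V=0.1)
bigger_psps = drive_stats(K=1000, V=0.12)  # 20% larger PSPs
more_inputs = drive_stats(K=1200, V=0.1)   # 20% more connections
```

Both manipulations raise the mean drive by the same 20%, but larger PSPs raise the variance by 44% versus 20% for the extra connections, which is why the two manipulations are not equivalent.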
Increasing the number of neurons had almost no effect on firing rate (<7% change when the number of neurons doubled). However, there was a substantial reduction in fluctuations: doubling the number of neurons in Network A caused the variance in the firing rate to drop by a factor of 2.3, and doubling the number in Network B reduced the variance by a factor of 1.93. Both of these are close to the factor of 2 suggested by 1/N scaling.
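The factor-of-2 drop is what independent-neuron statistics predict, as a quick Monte Carlo sketch shows (hypothetical Poisson rates, not the network model; correlations in a real network make the scaling only approximate):

```python
import numpy as np

rng = np.random.default_rng(42)

def pop_rate_var(N, trials=4000, rate=5.0):
    # Variance across trials of the population-averaged spike count of N
    # independent Poisson neurons; for independent units it scales as 1/N.
    counts = rng.poisson(rate, size=(trials, N))
    return counts.mean(axis=1).var()

# Doubling N should roughly halve the variance of the population rate.
ratio = pop_rate_var(200) / pop_rate_var(400)
```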
The analysis in the main text was based on gain curves constructed from a unimodal distribution of applied depolarizing currents, I_{a}. Such a distribution implies that the neurons in the network have a continuous range of resting membrane potentials and, if there are endogenously active cells, that the active neurons have a continuous range of firing rates. Different distributions, especially multimodal ones, can lead to different nullclines and thus different network behavior (Wilson and Cowan 1972). We thus considered a bimodal distribution in which the normalized applied current, Î_{a}, at each neuron was either 0 mV (resting membrane potential 15 mV below threshold) or 5 mV (endogenously active). For the two bimodal distributions we looked at, with 30 and 50% of the cells endogenously active, the firing rates changed by <5%. Thus, at least for these parameters, changing the distribution of applied currents had little effect on firing rate.
The effect of changing the axonal spread depended on whether we changed the spread of the inhibitory or the excitatory neurons. Increasing the normalized radius of the inhibitory axonal spread, σ_{I}, from 0.12 to 0.25 and then to 0.50 increased the firing rate by 50 and 132%, respectively. (Recall that the normalized radius of the network is 1.) These results are consistent with the view that inhibition acts to control local hot spots of excitatory activity, so overly diffuse inhibition is less effective. Increasing the excitatory axonal spread, σ_{E}, on the other hand, led to a fairly small decrease in firing rate: increasing σ_{E} from 0.12 to 0.25 and 0.50 dropped the firing rate by 4 and 13%, respectively. Our interpretation here is that longer range excitation allows the recruitment of additional inhibitory neurons, thus lowering firing rates.
In all cases, the above parameter changes produced changes in firing rate consistent with our model, indicating that the model is relatively robust. This should not be surprising in view of Fig. 4E, which guarantees an equilibrium at a reasonably low firing rate.
We end with a simulation closer to cortical parameters than those used above: Network A with IPSP and EPSP amplitudes of −1 and 0.2 mV, respectively; 20,000 neurons, of which 15% were inhibitory; a threshold voltage of −40 mV, corresponding to a nominal gap between rest and threshold of 25 mV; and a maximum for the boxcar distribution, Î_{max}, equal to 10 mV. The network with these parameters fired steadily at a mean rate of 0.9 Hz.
ESTIMATE OF THE MINIMUM OF THE EXCITATORY NULLCLINE
In this appendix we estimate the size of ν̄^{*}_{E}, the average excitatory firing rate at the minimum of the excitatory nullcline. To do this we derive a differential equation for Φ_{E}(ν̄_{E}, ν̄_{I}) valid when ν̄_{E} is small, solve it to determine the shape of the excitatory gain function, and then find the tangential intersection with the line Φ_{E}(ν̄_{E}, ν̄_{I}) = ν̄_{E}. We start with the relation
When the response to an EPSP is linear, it is not hard to show that Δt ≥ τ_{cell}. We will assume that EPSPs are small enough that the linear approximation holds, in which case Eq. B2 can be replaced with the inequality
The quantity ν_{rh} given in Eq. B14a has almost the same factors as β; the only difference is that F′ is replaced by 1/I_{rh} and the expression is inverted. For typical cells, I_{rh} is at most 0.2 nA (McCormick et al. 1985). Combining this value with the above estimate for τ_{cell}V_{EPSP}K_{EE}/R_{cell}, we find that ν_{rh} ∼ 0.6 Hz.
Finally, if excitatory and inhibitory neurons have similar PSP sizes and connectivities, ξ ∼ 1. Thus Eq. B13 can be written
Testing the validity of the above analysis requires that we construct the excitatory nullcline. This can be done in our simulations by slowly increasing the value of Î_{max} for the inhibitory neurons only, which shifts the inhibitory nullcline up without affecting the excitatory one. By recording the equilibrium firing rates as we shift the inhibitory nullcline, we can sweep out the excitatory nullcline. An excitatory nullcline constructed in this manner is shown in Fig. 12B. Consistent with the bound given in Eq. B16, the minimum of the excitatory nullcline occurs at an excitatory firing rate of 0.014 Hz.
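The sweep procedure can be mimicked in a toy threshold-linear rate model (our own illustrative construction, not the paper's simulation): increasing the inhibitory population's drive h_I shifts only the inhibitory nullcline, so the recorded equilibria all fall on the excitatory nullcline.

```python
import numpy as np

# Toy threshold-linear rate model (illustrative weights; not the paper's model).
w_EE, w_EI, w_IE, w_II = 2.0, 2.5, 2.0, 0.5
h_E = 1.0  # fixed excitatory drive

# Equilibrium in the linear regime: (I - W) nu = h.
A = np.array([[1.0 - w_EE, w_EI],
              [-w_IE, 1.0 + w_II]])

# Raising h_I shifts only the inhibitory nullcline; the resulting equilibria
# sweep out the excitatory nullcline (here traversed toward lower nu_E).
equilibria = []
for h_I in np.linspace(0.0, 0.55, 6):
    nu_E, nu_I = np.linalg.solve(A, [h_E, h_I])
    equilibria.append((nu_E, nu_I))

# Every recorded equilibrium satisfies the excitatory nullcline equation
# (1 - w_EE) nu_E + w_EI nu_I - h_E = 0, independent of h_I.
residuals = [abs((1.0 - w_EE) * nu_E + w_EI * nu_I - h_E)
             for nu_E, nu_I in equilibria]
```

In this linear toy the excitatory nullcline is a straight line with no minimum; the minimum analyzed in this appendix arises from the nonlinear gain function at low rates.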