# Self-Tuning of Neural Circuits Through Short-Term Synaptic Plasticity

## Abstract

Numerous experimental data show that cortical networks of neurons are not silent in the absence of external inputs, but rather maintain a low spontaneous firing activity. This aspect of cortical networks is likely to be important for their computational function, but is hard to reproduce in models of cortical circuits of neurons because the low-activity regime is inherently unstable. Here we show—through theoretical analysis and extensive computer simulations—that short-term synaptic plasticity endows models of cortical circuits with a remarkable stability in the low-activity regime. This short-term plasticity works as a homeostatic mechanism that stabilizes the overall activity level in spite of drastic changes in external inputs and internal circuit properties, while preserving reliable transient responses to signals. The contribution of synaptic dynamics to this stability can be predicted on the basis of general principles from control theory.

## INTRODUCTION

Cortical neurons fire not only in response to sensory inputs, but also spontaneously, as part of an ongoing network activity of the neocortex. According to Steriade (2001), the rates of spontaneous firing recorded extracellularly from motor and association areas of awake monkeys and cats are typically in the range of 10–15 Hz. Baddeley et al. (1997) found that cells in area IT (recorded extracellularly in awake macaque) fire at a rate of 14 ± 8.3 Hz during blank-screen viewing. Intracellular recordings from regular-spiking neurons (presumed to be pyramidal cells) in motor, association, primary somatosensory, and visual association areas of awake adult cats yielded a spontaneous firing rate of 9.4 ± 1.7 Hz (Steriade et al. 2001). The functional role of this spontaneous firing of neocortical neurons is unknown. It was previously conjectured that persistent neuronal activity, which is “conspicuously absent in cerebellar and basal ganglia circuits,” is an essential feature of the neocortex, which enables it “to incorporate the past into the system's present state” (Buzsáki 2006). Persistent activity was also conjectured to support “flexible cooperation among local and distant cell assemblies,” which is “believed to underlie the efficacy of cortical performance and is an essential ingredient for cognition” (Buzsáki 2006). On a more technical level, it was pointed out that not only recurrent circuits of neurons in the neocortex, but dynamical systems in general, support higher computational performance if they operate not in the ordered (or dissipative) regime, but closer to a critical state—which amounts to a substantial spontaneous firing activity in a neural circuit—so that even small external inputs can cause significant system responses (Buzsáki 2006; Legenstein and Maass 2007).

Spontaneous brain activity is commonly assumed to be generated by a combination of intrinsic electrophysiological properties of single neurons and synaptic interactions in the network (for a recent review see Destexhe and Contreras 2006). It was already pointed out in Abeles (1991) that “it seems to be very difficult to stabilize a network of inhibitory and excitatory neurons at very low levels of activity.” Through theoretical analysis and extensive computer simulations we show that the short-term dynamics of synapses—more precisely, the empirically found diversity of mixtures of paired-pulse depression and paired-pulse facilitation that depend on the type (excitatory or inhibitory) of pre- and postsynaptic neurons—endows networks of leaky integrate-and-fire (LIF) neurons, with either conductance-based or current-based synapses, with a remarkable stability of spontaneous network activity, even for rather low average firing rates such as 10 Hz.

It is well known that synapses in the neocortex do not respond in the same way to each spike in a train of spikes. Rather, the amplitude of the postsynaptic response [excitatory postsynaptic potential (EPSP) or excitatory postsynaptic current (EPSC)] to a presynaptic spike depends on the history of presynaptic firing activity. If the response amplitude for the second spike in a pair of spikes is smaller than that for the first one, one calls this effect *paired-pulse depression*. If the response amplitude for the second spike is larger, one calls this *paired-pulse facilitation*. Actually, however, most neocortical synapses exhibit a mixture of depression and facilitation and the amplitude of the postsynaptic response depends in a complex manner on the temporal pattern of several preceding presynaptic spikes. This effect is commonly referred to as synaptic dynamics or short-term plasticity. In contrast to long-term synaptic plasticity this short-term plasticity does not usually depend on the pattern of postsynaptic firing and its effect is not long-lasting (more precisely: it depends only on the temporal pattern of presynaptic spikes during the last few seconds). We refer to Abbott and Nelson (2003), Thomson (2003), and Abbott and Regehr (2004) for recent reviews of biological data and proposed functional roles of short-term synaptic plasticity. The dynamics of cortical synapses can be fitted quite well by the phenomenological model from Markram et al. (1998), where the short-term dynamics of synapses is characterized by three parameters: *U* (which roughly models the release probability of a synaptic vesicle for the first spike in a train of spikes), *D* (time constant for recovery from depression), and *F* (time constant for recovery from facilitation). Hereafter we call this set of parameters UDF. An essentially equivalent model was previously proposed in Varela et al. (1997).

In contrast to the static synapse of an artificial neural network, a dynamic synapse does not just provide a fixed amplification of presynaptic activity, but rather implements a nonlinear filter that could potentially serve a number of different purposes, such as cortical gain control (Abbott et al. 1997), burst detection (Lisman 1997), or temporal integration of information (Haeusler and Maass 2007; Maass et al. 2004). In Tsodyks et al. (1998) it was shown that depressing synapses can create a stable fixed point for the activity of a population of excitatory neurons in a mean-field model, and that dynamic synapses can create a variety of rhythmic and irregular activity patterns if one adds a population of inhibitory neurons.

One intriguing aspect of biological data on synaptic dynamics is that this short-term dynamics—more precisely, the parameters UDF—differs from synapse to synapse. Moreover, the UDF values form clusters within the three-dimensional (3D) UDF parameter space, and the cluster to which the UDF value of a particular synapse belongs depends on the type (excitatory or inhibitory) of its presynaptic neuron and in some cases also on the type of the postsynaptic neuron (Galarreta and Hestrin 1998; Gupta et al. 2000; Markram et al. 1998; Thomson 1997). It has been difficult to identify a possible functional role for the rules that assign a particular region in the 3D UDF parameter space to different types of synaptic connections. Herein we show that the assignment of different ranges of UDF values to different types of synapses makes it possible to implement a self-tuning principle for the firing rate of a cortical circuit. This self-tuning principle enables neural circuits to respond to external perturbations with a characteristic transient response in the firing rate of excitatory neurons and then to return, within a few hundred milliseconds, to a firing rate within a given target range. The firing rate of inhibitory neurons is automatically adjusted by the synaptic dynamics so that it compensates for the external perturbation. A similar self-tuning property can be demonstrated for changes within the circuit, such as those caused by long-term synaptic plasticity or changes in the concentration of neurotransmitters.

We show that this self-tuning principle in models of cortical networks of neurons is related to an abstract self-tuning principle recently proposed for much simpler dynamical systems in control theory (Moreau and Sontag 2003a,b). Their abstract self-tuning principle enables some simple two-dimensional (2D) dynamical system to return in a reliable manner to a particular operating regime (e.g., the vicinity of a bifurcation), in spite of external perturbations. They considered a dynamical system with external input *v*(*t*) that has just two variables, *x* (firing rate) and μ (synaptic strength), of the form

ẋ(*t*) = *f*[*x*(*t*)] + [μ(*t*) − μ_{0}]·*x*(*t*) + *v*(*t*)  (1)

μ̇(*t*) = *g*[*x*(*t*)]  (2)

The parameter μ_{0} is here assumed to be unknown and conditions^{1} on the functions *f* and *g* are given under which the variable *x* moves (in the absence of external input) to a steady state *x** with ẋ = 0 (i.e., μ = μ_{0}) regardless of the value of this parameter μ_{0}. The system expressed by *Eqs. 1* and *2* is obviously quite far removed from models for recurrent neural networks consisting of excitatory and inhibitory neurons with nonnegative firing rates. However, it is remarkable that its self-tuning property relies on the assumption that the strength μ of the positive feedback in *Eq. 1* varies as a function of *x*, unlike the usual “static” synaptic weights in artificial neural networks. Furthermore, the required assumptions on the functions *f* and *g* imply that μ decreases when *x* grows beyond its steady-state value *x**. Thus the resulting dynamics of the “synaptic weight” μ has some properties in common with synapses between excitatory neurons in the cortex (which have been found to be usually depressing), although apparently no relation to synaptic dynamics had been intended (or even discussed) in Moreau and Sontag (2003a,b).
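The flavor of this control principle can be illustrated with a toy simulation. The specific choices *f*(*x*) = *x*_target − *x* and *g*(*x*) = ε·(*x*_target − *x*) below are our own illustrative assumptions (Moreau and Sontag state general conditions on *f* and *g* rather than specific functions); the point is only that *x* returns to its target although μ_{0} is unknown to the adaptation rule:

```python
def simulate(mu0, x_target=0.5, eps=0.1, dt=0.01, T=400.0):
    """Euler-integrate a toy self-tuning system of the form
       dx/dt = f(x) + (mu - mu0)*x + v(t),   dmu/dt = g(x)
    with the illustrative choices f(x) = x_target - x and
    g(x) = eps*(x_target - x), and no external input (v = 0)."""
    x, mu = 0.1, 0.0                 # mu0 is unknown to this adaptation rule
    for _ in range(int(T / dt)):
        dx = (x_target - x) + (mu - mu0) * x   # f(x) plus adaptive feedback
        dmu = eps * (x_target - x)             # mu falls whenever x exceeds target
        x += dt * dx
        mu += dt * dmu
    return x, mu
```

Running this with different (unknown) values of μ_{0} shows that *x* settles at *x*_target in every case, while μ converges to μ_{0}, as described above.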
Herein we show (see the end of results and the Supplementary material^{2}) that this control principle can be generalized and adapted to more complex dynamical systems that consist of two interacting modules (e.g., a population of excitatory neurons and a population of inhibitory neurons) that have profoundly different functional roles. It turns out that an application of the control principle to such a distributed system requires not only an activity-dependent modulation of “weights” μ(*x*), but *different* rules for the modulations μ_{mn}(*x*) of different weights, that depend on the type *n* of the source of the feedback μ_{mn}(*x*_{n})·*x*_{n} and the type *m* of the target of the synaptic connection (where *m*, *n* range over excitatory and inhibitory neurons). Thus in applications to models for cortical circuits, this principle requires differential synaptic dynamics for different types of synaptic connections.

More precisely, our theoretical analysis predicts that the UDF parameters that characterize the dynamics of synapses fall into a specific region within the UDF space (which we call the “N-volume”) if a strengthening of this type of synaptic connection tends to increase the firing rate of excitatory neurons in the circuit, or into another region (the “P-volume”) if a strengthening of this type of synaptic connection tends to decrease the firing rate of excitatory neurons. A comparison of these predicted ranges of UDF values with empirically found UDF values gives a quite satisfactory (although not perfect) match. This is remarkable because many aspects that are relevant for the dynamics of cortical circuits of neurons are not reflected in the network models consisting of LIF neurons that we examine herein: different input–output behaviors of excitatory and inhibitory neurons, further differentiation of neuronal dynamics according to subclasses of inhibitory (Gupta et al. 2000) and excitatory neurons, subnetworks formed by particular subtypes of neurons (Yoshimura and Callaway 2005; Yoshimura et al. 2005), and modulation of neuronal dynamics through neuromodulators (Gulledge and Jaffe 1998, 2001).

Altogether this article provides a novel explanation for the functional role of short-term synaptic dynamics, in particular also for the empirically found diversity of synaptic dynamics in dependence on the type of pre- and postsynaptic neurons. In addition it contributes new ideas and principles for the investigation of a research topic of obvious primary importance: the remarkable stability of cortical circuits of neurons against numerous perturbations and parameter drifts. This stability has so far not been reproduced with either artificial neural networks or biologically more realistic models for cortical circuits, and thus an understanding of its prerequisites may provide important insight into design principles of cortical networks. The general program to investigate fundamental aspects of the design of neural circuits by focusing on their stability properties has already produced remarkable insight into another type of neural circuit: the pattern-generating pyloric circuit in the crustacean stomatogastric ganglion (for a recent review see Marder and Goaillard 2006). It turns out that this circuit is endowed with intricate homeostatic mechanisms that move a high-dimensional vector of internal parameters (conductances, etc.) to a different setting if one method for producing the desired firing pattern becomes unavailable, for example because of changes in the concentration of neuromodulators. Thus it is essential for the functioning of this pattern-generating circuit that it can produce “multiple solutions to the production of similar behavior” (Marder and Goaillard 2006). We propose that analogous design principles are implemented in cortical circuits of neurons, which have to maintain a stable computational function in spite of drastic changes in the intensity of external inputs and in the concentrations of neuromodulators (which affect the excitability of neurons within the network; Gulledge and Jaffe 1998, 2001).
More specifically, we propose that the short-term dynamics of synapses is essentially involved in design principles that produce a stable function of cortical networks.

## METHODS

### Neuron models

We used two leaky integrate-and-fire (LIF) neuron models: one with current-based synapses and the other with conductance-based synapses. The current-based LIF neuron is defined by

τ_{mem}(d*V*/d*t*) = −(*V* − *V*_{rest}) + *R*_{m}[Σ_{j=1}^{K_{E}} *J*_{E,j}(*t*) + Σ_{j=1}^{K_{I}} *J*_{I,j}(*t*) + *I*^{ext}(*t*)]  (3)

where *V* is the membrane voltage; τ_{mem} is the membrane time constant; *V*_{rest} is the resting membrane voltage; *R*_{m} is the membrane resistance; *J*_{E,j}(*t*) and *J*_{I,j}(*t*) are the *K*_{E} and *K*_{I} synaptic currents from the excitatory and inhibitory synapses, respectively; and *I*^{ext}(*t*) is an external input composed of a mean input current *I*^{inject} and a white noise current *I*^{noise}(*t*) with zero mean and SD σ^{noise}. The LIF neuron with conductance-based synapses is defined by

*C*_{m}(d*V*/d*t*) = −*g*_{leak}(*V* − *V*_{rest}) + Σ_{j=1}^{K_{E}} *g*_{E,j}(*t*)[*E*_{E} − *V*] + Σ_{j=1}^{K_{I}} *g*_{I,j}(*t*)[*E*_{I} − *V*] + *I*^{ext}(*t*)  (4)

where *C*_{m} is the membrane capacitance; *g*_{leak} is the leak conductance of the neuron; and *g*_{E,j}(*t*) and *g*_{I,j}(*t*) are the *K*_{E} and *K*_{I} synaptic conductances from the excitatory and inhibitory synapses, respectively. The constant *E*_{E} is the reversal potential for the excitatory synapses and the constant *E*_{I} is the reversal potential for the inhibitory synapses. The dynamics of each synapse over time are defined by

d*j*/d*t* = −*j*(*t*)/τ_{syn} + *W*·Σ_{sp} δ(*t* − τ_{sp})  (5)

where *j*(*t*) is the instantaneous synaptic current or conductance, depending on the model; *W* is the synaptic weight; and δ denotes the Dirac delta function. The currents or conductances decrease exponentially with time constant τ_{syn} and increase instantaneously, by adding *W* to the running value of *j*(*t*), whenever a spike occurs in the presynaptic neuron at time τ_{sp}. Herein we compare the use of static synapses to dynamic synapses; the synaptic weight *W* was therefore constant in some simulations and in other simulations was allowed to vary according to known biological rules of synaptic plasticity [i.e., *W*(*t*)]. In the case of dynamic conductance-based synapses we varied the maximal conductance and in the case of dynamic current-based synapses we varied the maximal current.
When static synapses were used (i.e., when *W* is constant) we use the notation *J* to denote the synaptic weight and always state whether the simulation used conductance- or current-based synapses. When dynamic synapses were used [i.e., when *W*(*t*) is implied] we use the notation μ to denote the synaptic weight.

We chose values for these LIF neurons that qualitatively reflect the high-conductance UP state measured in in vivo studies of cortical neurons (Destexhe et al. 2003). The membrane resistance was *R*_{m} = 10 MΩ, the membrane time constant was τ_{mem} = 10 ms, and, in the case of neurons with conductance-based synapses, the capacitance was set to *C*_{m} = 10 ms/10 MΩ = 1 nF. The neurons’ resting potential was *V*_{rest} = −60 mV and the firing threshold was *V*_{thresh} = −50 mV. When the neuron spiked, the membrane voltage was reset to *V*_{reset} = −60 mV, where it remained for a refractory period of τ_{refr} = 3 ms. The excitatory synaptic time constant τ_{syn} was τ_{e} = 4 ms; the inhibitory value was τ_{i} = 8 ms. In the case of neurons with conductance-based synapses *g*_{leak} = 1/*R*_{m} = 100 nS and the reversal potentials for the excitatory and inhibitory neurons were *E*_{E} = 0 mV and *E*_{I} = −80 mV. We modeled background synaptic inputs (from neurons outside of the simulated circuit) through *I*^{ext} = *I*^{inject} + *I*^{noise}, with a constant *I*^{inject} = 0.455 × 10^{−9} A (the value consistent with the mean potential reported below, given *V*_{rest} and *R*_{m}) and values of *I*^{noise} drawn from a zero-mean Gaussian with SD σ^{noise} = 6 × 10^{−9} A. *I*^{ext} caused an average membrane potential of −55.4 mV with an SD of 4.3 mV in the absence of recurrent synaptic feedback (and when the threshold *V*_{thresh} was temporarily raised to 0 mV to prevent spikes). This resulted in a firing rate of approximately 20 Hz for each neuron in the absence of recurrent synaptic input *I*^{rec}; this rate was chosen because it is the median value of the firing rates of irregular and tonic discharges reported under in vivo conditions in waking animals (Destexhe et al. 2003; Steriade et al. 2001) and is thus suitable for studying stability involving both up- and downregulation of firing rates.
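For illustration, the subthreshold statistics described above can be reproduced with a minimal Euler integration of the current-based neuron (a sketch, not the CSIM implementation; the noise value is drawn independently at each 0.1-ms step, and *I*^{inject} is set so that *V*_{rest} + *R*_{m}·*I*^{inject} = −55.45 mV, matching the reported subthreshold mean):

```python
import numpy as np

def lif_subthreshold_stats(T=20000.0, dt=0.1, seed=0):
    """Current-based LIF neuron (Eq. 3) without recurrent input; the firing
    threshold is disabled (raised), as in the text, so that subthreshold
    membrane statistics can be measured.  Units: mV, ms, nA, MOhm."""
    rng = np.random.default_rng(seed)
    tau_mem, R_m, V_rest = 10.0, 10.0, -60.0
    # I_inject chosen so that V_rest + R_m * I_inject = -55.45 mV;
    # a fresh Gaussian noise current is drawn at every 0.1-ms step
    I_inject, sigma_noise = 0.455, 6.0
    n = int(T / dt)
    noise = sigma_noise * rng.standard_normal(n)
    V = np.empty(n)
    v = V_rest
    for k in range(n):
        v += (dt / tau_mem) * (-(v - V_rest) + R_m * (I_inject + noise[k]))
        V[k] = v
    return float(V.mean()), float(V.std())
```

With these values the simulated membrane potential fluctuates around roughly −55.4 mV with an SD near 4.3 mV, consistent with the description above.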

### Spiking network model

In our simulations we created sparse, random spiking networks of 5,000 neurons: 4,000 excitatory and 1,000 inhibitory. The network models contained either purely LIF neurons with conductance-based synapses or purely LIF neurons with current-based synapses; we refer to the former as “conductance-based networks” and to the latter as “current-based networks.” We use the term *sparse* to denote that the numbers of excitatory connections *K*_{e} and inhibitory connections *K*_{i} converging on a given neuron are much smaller than both the total number of excitatory neurons *N*_{e} and the total number of inhibitory neurons *N*_{i} in the network. We use the term *random* to denote that the connections between neurons are chosen randomly and follow no other design principle. Following Vogels and Abbott (2005), the neurons were connected randomly to each other with a 2% connection probability. The network connectivity between neuron types is shown in Fig. 1.

### Dynamic synapse model

We used two equivalent forms of dynamic synapse models, for networks with spiking and with nonspiking neurons. The first set of expressions, constituting the *Eq. 6* system, is taken from Markram et al. (1998)^{3} and describes, for the case of spiking neurons, the magnitude of the synaptic weight μ_{k} (a current or conductance, depending on the network model) for the *k*th spike in a spike train with interspike intervals Δ_{1}, Δ_{2}, …, Δ_{k−1}

*u*_{1} = *U*,  *R*_{1} = 1

*u*_{k} = *U* + *u*_{k−1}(1 − *U*) exp(−Δ_{k−1}/*F*)  (6)

*R*_{k} = 1 + [*R*_{k−1} − *u*_{k−1}*R*_{k−1} − 1] exp(−Δ_{k−1}/*D*)

μ_{k} = *A*·*u*_{k}·*R*_{k}

where *u* is the running variable for synaptic utilization and *R* is the running variable for synaptic availability, with *u*, *R* ∈ [0, 1]. The constants *U*, *D*, and *F* (for brevity further denoted as *UDF*, or *AUDF* when the strength scaling factor *A* is included) represent the release probability for the first spike, the depression time constant, and the facilitation time constant, respectively. The synaptic weight for the *k*th spike is then defined by μ_{k} (i.e., *W* := μ_{k} in *Eq. 5*) and has an arbitrary strength scale set by *A*.
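The per-spike recursion can be sketched in a few lines (a sketch assuming the standard Markram et al. form with *u*_{1} = *U* and *R*_{1} = 1; the parameter values used in the usage note are illustrative, not taken from Table 1):

```python
from math import exp

def synaptic_amplitudes(isis, U, D, F, A=1.0):
    """Per-spike synaptic weights mu_k = A*u_k*R_k (Eq. 6 system).
    isis: interspike intervals Delta_1..Delta_{k-1}; D and F in the same
    time units as the intervals (here: seconds)."""
    u, R = U, 1.0                    # first spike: u_1 = U, R_1 = 1
    mus = [A * u * R]
    for delta in isis:
        # recovery from depression; resources u*R were consumed by the last spike
        R_next = 1.0 + (R - u * R - 1.0) * exp(-delta / D)
        # facilitation: utilization decays back toward U with time constant F
        u = U + u * (1.0 - U) * exp(-delta / F)
        R = R_next
        mus.append(A * u * R)
    return mus
```

For a regular 50-Hz train (intervals of 0.02 s), a depression-dominated setting such as U = 0.5, D = 1.1 s, F = 0.05 s yields decreasing amplitudes, whereas a facilitation-dominated setting such as U = 0.1, D = 0.1 s, F = 1.0 s yields increasing ones.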

An equivalent continuous, mean-field model for dynamic synapses was developed in Tsodyks et al. (1998); it can be used in nonspiking mean-field models with continuous time. The mean-field equations for dynamic synapses are

d*u*/d*t* = −*u*/*F* + *U*(1 − *u*)·*x*(*t*)

*U*^{1}(*t*) = *U* + *u*(*t*)(1 − *U*)  (7)

d*R*/d*t* = (1 − *R*)/*D* − *U*^{1}(*t*)·*R*·*x*(*t*)

μ(*t*) = *A*·*U*^{1}(*t*)·*R*(*t*)

where *x*(*t*) represents the instantaneous firing rate of the presynaptic neuron; *U*^{1} is simply an algebraic term introduced for clarity; and μ, *u*, *R*, *U*, *D*, and *F* have the same meaning as in the *Eq. 6* system. Note that the magnitude μ(*t*) of the synaptic weight at time *t* depends not only on the current presynaptic firing rate *x*(*t*), but, through the auxiliary variables *u* and *R*, also on the history of this presynaptic firing rate.

In the case of a constant presynaptic firing rate *x*(*t*) = *x**, the steady-state equations for this continuous dynamic synapse model yield

*u** = *U*·*F*·*x**/(1 + *U*·*F*·*x**)

*U*^{1*} = *U* + *u**(1 − *U*)  (8)

*R** = 1/(1 + *U*^{1*}·*D*·*x**)

μ* = *A*·*U*^{1*}·*R**

where the * notation denotes steady-state values of *u*, *U*^{1}, *R*, and μ. In particular, for *m*, *n* ∈ {*e*, *i*} the value μ*_{mn}(*x*_{n}) defined according to *Eq. 8* denotes the *steady-state synaptic weight* from any neuron in population *n* to any neuron in population *m* as a function of the presynaptic firing rate *x** = *x*_{n}.
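The closed-form steady state is easy to evaluate (a sketch assuming the standard Tsodyks–Markram steady state, i.e., u* = UFx/(1 + UFx), effective utilization U¹* = U + u*(1 − U), and R* = 1/(1 + U¹*Dx); the parameter values in the usage note are illustrative):

```python
def mu_steady(x, U, D, F, A=1.0):
    """Steady-state synaptic weight mu*(x) for a constant presynaptic
    rate x (in Hz, with D and F in seconds)."""
    u = U * F * x / (1.0 + U * F * x)     # steady-state utilization u*
    U1 = U + u * (1.0 - U)                # effective utilization U1*
    R = 1.0 / (1.0 + U1 * x * D)          # steady-state availability R*
    return A * U1 * R
```

For a depression-dominated setting (e.g., U = 0.5, D = 1.1 s, F = 0.05 s) μ*(x) falls monotonically between 10 and 100 Hz, whereas for a strongly facilitating, weakly depressing setting (e.g., U = 0.01, D = 0.005 s, F = 1.0 s) it rises over the same range; these two behaviors are exactly what distinguishes the N- and P-volumes discussed below.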

### Linking UDF parameters to network stability

We show that the fundamental property of dynamic synapses for creating a network with a stable firing rate is the relationship between the slopes of the steady-state synaptic weight curves μ*_{mn}(*x*_{n}) for the different synapse types. We searched for UDF parameters that generate a negative derivative for the connections *E* → *E* and *I* → *I*, because for our parameters an increase in the strength of either leads to a higher firing rate *x*_{e} of the excitatory neurons, and a positive derivative for the steady-state curves of the connections *E* → *I* and *I* → *E*, because an increase in their strengths leads to a lower firing rate *x*_{e} (see Supplemental materials for further explanation). More specifically, we searched for UDF settings that generated either a strictly positive or a strictly negative derivative for presynaptic firing rates between 10 and 100 Hz and that additionally were in a physiological range (see Table 1). We uniformly sampled the UDF space in the range *U* ∈ (0, 1], *D* ∈ (0, 1] s, and *F* ∈ (0, 1] s (at increments of 0.014 in each dimension) and found two topologically simple volumes of parameter space that satisfied these requirements. These were designated the “P-volume” for the positive derivative and the “N-volume” for the negative derivative. We randomly chose three assignments, R1, R2, and R3, of UDF values to the four synapse types, to create three separate dynamic synapse UDF parameter sets. Thus each parameter set *R*_{i} (*i* = 1, 2, 3) consisted of two points from the N-volume, which were randomly assigned as the means for the UDF parameters of the synaptic connections *E* → *E* and *I* → *I*, and two points from the P-volume, which were likewise randomly assigned to the synaptic connections *E* → *I* and *I* → *E*. Each individual *R*_{i} parameter set was used separately from the others in the network simulations. Additionally, we used experimental data on UDF parameters from Gupta et al. (2000) and Markram et al. (1998) for all four synapse types (see Table 1). All self-tuning simulations were carried out for these parameters based on experimental data, as well as for all three parameter assignments R1, R2, and R3 based on a theoretical analysis of desirable network performance. Performance differences among the R1–R3 parameter settings were small and we averaged the responses across the parameter sets. These UDF parameters were fixed for all self-tuning experiments and only the scaling factors *A*_{mn} of the dynamic synapses were modified on a per-simulation or per-network basis.
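The search criterion itself can be sketched directly: classify a UDF point by the sign of the slope of μ*(x) across 10–100 Hz (the `mu_star` helper below implements the standard Tsodyks–Markram steady state, an assumption on our part; the 1-Hz grid step and the example points are illustrative, not the 0.014 increments used in the actual search):

```python
import numpy as np

def mu_star(x, U, D, F):
    # steady-state weight for constant rate x (A = 1); x in Hz, D and F in s
    u = U * F * x / (1.0 + U * F * x)
    U1 = U + u * (1.0 - U)
    return U1 / (1.0 + U1 * x * D)

def classify(U, D, F, rates=np.linspace(10.0, 100.0, 91)):
    """Return 'P' if mu*(x) is strictly increasing over 10-100 Hz,
    'N' if strictly decreasing, and None otherwise (neither volume)."""
    mu = mu_star(rates, U, D, F)
    d = np.diff(mu)
    if np.all(d > 0):
        return 'P'
    if np.all(d < 0):
        return 'N'
    return None
```

Sweeping such a classifier over a grid of (U, D, F) triples recovers two contiguous regions of the kind described above: strongly depressing points land in the N-volume and weakly utilizing, strongly facilitating points in the P-volume.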

Altogether, synaptic parameters were chosen as follows. All synapses were given a uniform delay of 0.1 ms. The means of the scaling factor parameters (the *A* terms in *Eqs. 6* and *7*), the mean *UDF* parameters, and the mean values for the initial conditions *u*_{0} and *R*_{0} of the dynamic synapses were set as described earlier. After selection of a mean value, the actual value for each synapse in the network was drawn from a Gaussian distribution with an SD of 10% of the mean value (e.g., the UDF parameters for every synapse were chosen from a Gaussian distribution with an SD equal to 10% of the mean *UDF* parameters, which depended on the synapse type). If a drawn value fell below zero, the value was redrawn from a uniform distribution between zero and two times the mean.
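The per-synapse randomization rule can be sketched as follows (`jitter` is a hypothetical helper name; the 10% relative SD and the redraw rule follow the description above):

```python
import numpy as np

def jitter(mean, rng, rel_sd=0.10):
    """Draw one per-synapse parameter value: Gaussian around `mean` with
    SD = rel_sd * mean; if the draw is negative, redraw uniformly
    from (0, 2*mean), as described in the text."""
    v = rng.normal(mean, rel_sd * mean)
    if v < 0:
        v = rng.uniform(0.0, 2.0 * mean)
    return v
```

By construction every drawn value is nonnegative and the sample mean stays close to the target mean.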

### Self-tuning simulation setup

The self-tuning simulations we performed involved measuring and comparing the equilibrated (steady-state) average firing rates of excitatory neurons, first with static synapses and then with dynamic synapses, for three network models: conductance-based networks, current-based networks, and a rate-based mean-field network (see following text for the definition). Three types of perturbation simulations were run, with two parameters perturbed in each case:

*1*) mean input current strength and SD of current injection

*2*) excitatory synaptic strength and inhibitory synaptic strength

*3*) percentage of excitatory neuron inactivation (a temporary cessation of function, potentially caused by a variety of circumstances such as neuromodulators) and percentage of inhibitory neuron inactivation.

This resulted in 841 networks for the input perturbation experiments and 900 networks for the synaptic weight perturbation and neuronal inactivation experiments. Each network was generated with random connections and first run in the control case with static synapses. The network was then regenerated with the exact same parameters, but a different random number seed, and run with dynamic synapses. To test the robustness of the ideas presented, each network was tested with four different dynamic synapse parameter sets: three based on theoretical considerations, called R1, R2, and R3, and one based on experimental data. This required 841 + (841 × 4 × 3) = 10,933 separate network simulations for the input perturbation experiments and 900 + (900 × 4 × 3) = 11,700 separate network simulations for the experiments with synapse strength and neuronal inactivation perturbations. To measure the firing rates, we let the control networks with static synapses run for 1,500 ms and measured the average firing rates of excitatory and inhibitory neurons over the last 1,000 ms. The networks with dynamic synapses ran for 2,000 ms and the steady-state firing rates were measured over the last 1,000 ms, which allowed extra time for the dynamic synapses to equilibrate. The measured firing rates for the theoretically derived UDF values were then averaged across the three dynamic synapse parameter sets R1, R2, and R3. Network simulations were performed using the CSIM neural simulator (Natschläger et al. 2003) and MATLAB (The MathWorks, Natick, MA). The simulation time step was 0.1 ms. The mean-field simulations were also performed using MATLAB.

### Correlation function

Network synchrony plays a large role in the behavior of networks of sparsely connected spiking neurons (Brunel 2000). We were interested in separating asynchronous irregular activity from other types of network activity. To analyze the network correlation we used the voltage traces of randomly sampled neurons and applied the standard cross-covariance function. The covariance coefficient for the sequences *y* and *z*, with means ȳ and z̄, evaluated at the zeroth lag, is given by

*c*_{yz} = Σ_{t}(*y*_{t} − ȳ)(*z*_{t} − z̄) / √{[Σ_{t}(*y*_{t} − ȳ)²]·[Σ_{t}(*z*_{t} − z̄)²]}

To measure the network correlation, we created two random groups of neurons from the excitatory neuron population (apart from differences in firing rate, the inhibitory neurons behave in the same manner as the excitatory neurons). The two groups each consisted of 5% of the excitatory population, or 250 neurons. The cross-covariance coefficient was computed for every pair of neurons, one from each group (excluding pairs in which, by chance, the same neuron appeared in both groups), and the average cross-covariance coefficient was taken as the measure of network correlation.
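The zero-lag coefficient is straightforward to compute (a sketch; `network_correlation` and its identity-based duplicate-pair check are our own illustrative helpers, assuming each sampled neuron's trace is a distinct array object):

```python
import numpy as np

def zero_lag_cc(y, z):
    """Cross-covariance coefficient at zero lag, i.e., the normalized
    covariance (Pearson correlation) of the two traces."""
    y = np.asarray(y, float) - np.mean(y)
    z = np.asarray(z, float) - np.mean(z)
    denom = np.sqrt(np.sum(y * y) * np.sum(z * z))
    return float(np.sum(y * z) / denom)

def network_correlation(traces_a, traces_b):
    """Average zero-lag coefficient over all cross-group pairs of voltage
    traces, skipping pairs in which the same neuron was sampled into
    both groups (here detected as the identical array object)."""
    cs = [zero_lag_cc(a, b)
          for a in traces_a for b in traces_b
          if a is not b]
    return float(np.mean(cs))
```

Identical traces give a coefficient of 1, sign-flipped traces give −1, and uncorrelated traces give values near 0, so the group average is a convenient scalar index of network synchrony.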

### Mean-field models for sparsely connected neural circuits

For our analysis of network stability we developed neuronal population models—that is, mean-field firing rate models—for sparse, random networks (van Vreeswijk and Sompolinsky 1996, 1998). They model populations of current-based neurons with static and dynamic synapses and are based on a balance of excitation and inhibition that creates temporally irregular spiking. We assume that there are *N*_{e} excitatory neurons and *N*_{i} inhibitory neurons. *K*_{mn} is the number of synaptic connections from neurons in pool *n* to neurons in pool *m* (we assume that *K*_{mn} ≪ *N*_{e}, *N*_{i}). We are interested in networks with randomly chosen sparse synaptic connections because they provide reasonable first approximation models for cortical circuits.

Because the firing rate of a neuron depends on both the mean input current and the variance of the input current, we refer to its input–output function as the FMS (firing rate, mean input, SD) surface, which we denote by *F*. A sparse, random network has a parameter range that allows for asynchronous irregular activity (Brunel 2000; for a review, see Vogels et al. 2005), and in this regime the synaptic currents to a given cell are uncorrelated in time, aside from a potentially fluctuating population firing rate. In the steady-state condition, which is when all the dynamics of the system have been allowed to equilibrate, this allows one to use the time average of the synaptic input as an estimate of the synaptic input at each point in time. Thus in the asynchronous condition, and assuming Poisson spiking statistics, one can write down the average recurrent synaptic current *I*_{m}^{rec} and recurrent synaptic input variance σ_{m}^{rec2} for each neuron in pool *m* (Shriki et al. 2003) as

*I*_{m}^{rec} = Σ_{n∈{e,i}} *K*_{mn}·*J*_{mn}·τ_{n}·*x*_{n}

σ_{m}^{rec2} = (1/2) Σ_{n∈{e,i}} *K*_{mn}·*J*_{mn}²·τ_{n}·*x*_{n}  (9)

where *m*, *n* ∈ {*e*, *i*} represent indices into the post- and presynaptic pools, respectively, and τ_{n} is the time constant of synapses from pool *n*. The total average current and total input current variance for a neuron in pool *m* are then

*I*_{m}^{tot} = *I*^{inject} + *I*_{m}^{rec}

σ_{m}^{tot2} = σ^{noise2} + σ_{m}^{rec2}  (10)

where *I*^{inject} and σ^{noise2} represent the mean and variance of the external input to the neuron, and *I*_{m}^{tot} and σ_{m}^{tot2} are the mean and variance of the total current injection to each cell (see methods).
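Under these assumptions (Poisson presynaptic spiking and exponential PSCs, which yield a mean of *KJτx* and a variance of *KJ²τx*/2 per pool; the 1/2 factor is part of our reconstruction of Eq. 9), the bookkeeping is a few lines:

```python
def recurrent_stats(K, J, tau, x):
    """Reconstructed Eq. 9: mean recurrent current and its variance for a
    neuron in pool m, given Poisson inputs and exponential PSCs.
    K, J, tau, x are dicts keyed by the presynaptic pool 'e' or 'i'."""
    I_rec = sum(K[n] * J[n] * tau[n] * x[n] for n in ('e', 'i'))
    sigma2_rec = 0.5 * sum(K[n] * J[n] ** 2 * tau[n] * x[n] for n in ('e', 'i'))
    return I_rec, sigma2_rec

def total_stats(I_inject, sigma2_noise, I_rec, sigma2_rec):
    # Eq. 10: external and recurrent contributions simply add
    return I_inject + I_rec, sigma2_noise + sigma2_rec
```

Note that inhibitory weights enter the mean with their (negative) sign but contribute positively to the variance, since the variance term is quadratic in *J*.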

To use these formulas to create a working neuron model, one can sample the FMS surface of a single current-based neuron by injecting current with an arbitrary mean and Gaussian noise component and then use the *Eq. 9* system to translate the parameters (*J*_{ee}, *J*_{ie}, *x*_{e}, *J*_{ei}, *J*_{ii}, *x*_{i}) into (*I*^{tot}, σ^{tot}). Using this technique we sampled the FMS surface *F* of a current-based LIF neuron with the same neuronal parameters as the neurons used in our current-based spiking networks. From these samples we used smoothing splines to create a nonparametric approximation F̄ to *F* that can be evaluated at any given point to yield a good approximation of the postsynaptic firing rate (Fig. 2). Because the excitatory and inhibitory neurons in our current-based networks have the same neuronal parameters, we used the same neuron model for both types. Finally, the mean-field model for a network of excitatory and inhibitory neurons in an asynchronous network, which also represents a sparse, random network under the same assumptions, is defined by

τ_{mem}(d*x*_{e}/d*t*) = −*x*_{e} + F̄(*I*_{e}^{tot}, σ_{e}^{tot})  (11)

τ_{mem}(d*x*_{i}/d*t*) = −*x*_{i} + F̄(*I*_{i}^{tot}, σ_{i}^{tot})

where *x*_{e} is the average firing rate of the excitatory neurons and *x*_{i} is the average firing rate of the inhibitory neurons. For a current-based LIF network with τ_{e}, τ_{i} < τ_{mem}, the correct time constant of the model is the neuronal membrane time constant τ_{mem}.
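A toy instance of this rate dynamics can be integrated directly. The threshold-linear `F_bar` below is a hypothetical stand-in for the fitted FMS surface (the real model uses the smoothing-spline fit F̄), and all parameter values are illustrative:

```python
from math import sqrt

def F_bar(I, sigma):
    # hypothetical stand-in for the fitted FMS surface: the rate (Hz) grows
    # with both the mean current and the size of the current fluctuations
    return max(0.0, 20.0 * I + 2.0 * sigma - 5.0)

def mean_field(J, K, tau_syn, T=500.0, dt=0.1, tau_mem=10.0,
               I_inject=0.5, sigma2_noise=1.0):
    """Euler integration of the Eq. 11 rate dynamics for pools 'e' and 'i',
    with recurrent input statistics computed as in Eqs. 9 and 10."""
    x = {'e': 5.0, 'i': 5.0}                   # initial rates (Hz)
    for _ in range(int(T / dt)):
        new = {}
        for m in ('e', 'i'):
            I_rec = sum(K[m][n] * J[m][n] * tau_syn[n] * x[n] for n in ('e', 'i'))
            s2 = 0.5 * sum(K[m][n] * J[m][n] ** 2 * tau_syn[n] * x[n] for n in ('e', 'i'))
            rate = F_bar(I_inject + I_rec, sqrt(sigma2_noise + s2))
            new[m] = x[m] + (dt / tau_mem) * (-x[m] + rate)
        x = new
    return x
```

With inhibition-dominated toy weights both pools relax to a low, stable fixed-point rate, illustrating the kind of steady state that the self-tuning analysis operates on.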

This model can also be used to analyze system dynamics, even though the expressions in the *Eq. 11* system were developed from an analysis of the steady state, because the neuronal dynamics of a current-based neuron acts merely as a low-pass filter. A major assumption of this approach is that all combinations of (*J*_{ee}, *J*_{ie}, *x*_{e}, *J*_{ei}, *J*_{ii}, *x*_{i}) lead to asynchronous, irregular network firing, which is not always the case for sparse, random spiking networks. Thus for the mean-field model to correctly predict the behavior of the spiking network, the range of parameters must be limited so that this assumption holds; consequently, we measured the network correlation and the coefficient of variation (CV) of interspike intervals.
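Both diagnostics can be computed directly from recorded spike trains; a minimal sketch (the function names and bin size are ours, not from the paper):

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals
    (close to 1 for Poisson-like irregular firing)."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

def mean_pairwise_corr(spike_trains, t_max, bin_size=0.01):
    """Average pairwise correlation of binned spike counts
    (close to 0 for asynchronous network activity)."""
    bins = np.arange(0.0, t_max + bin_size, bin_size)
    counts = np.array([np.histogram(st, bins)[0] for st in spike_trains])
    c = np.corrcoef(counts)
    i, j = np.triu_indices(len(spike_trains), k=1)
    return c[i, j].mean()

# sanity check: independent Poisson trains are irregular (CV ~ 1)
# and asynchronous (mean pairwise correlation ~ 0)
rng = np.random.default_rng(1)
trains = [np.cumsum(rng.exponential(1 / 20.0, size=1000)) for _ in range(10)]
t_max = min(t[-1] for t in trains)
trains = [t[t < t_max] for t in trains]
```

An asynchronous irregular regime would then be characterized by an ISI CV near 1 together with a near-zero mean pairwise correlation.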

Finally, we introduce dynamic synapses (*Eq. 7* system) into the sparse, random mean-field model of the *Eq. 11* system. The model is defined by the expressions in *Eq. 11* with slightly different definitions for *I*_{m}^{rec} and σ_{m}^{rec2} (12) where we have replaced the static synaptic weight value *J*_{mn} with the dynamic synaptic weight value μ_{mn}(*x*_{n}, *t*).

## RESULTS

### Conditions on dynamic synapses to create network stability

The goal of introducing dynamic synapses into the sparse, random model is to set up a stable fixed point at a desired target network firing rate *x*_{e} = *x*_{i} = *x**, so that *x*_{e} is robust to various types of perturbations (i.e., the circuit is self-tuning). We view self-tuning as the ability of a system to ultimately return a system value to a given range, despite a perturbation in system parameters or input. We define the output of the network to be the average excitatory firing rate *x*_{e}, which is the firing rate that is actively controlled, perhaps by increasing or decreasing *x*_{i}. Although sparse, random networks also have steady-state firing rates as a function of static synaptic weight values *J*_{mn}, we will show that introducing dynamic synapses with relatively general constraints endows the network with much greater robustness under a variety of perturbations. This improved robustness is achieved if the UDF parameters (the parameters of the dynamic synapse model; see methods) of the dynamic synapses are chosen to satisfy the following three conditions.^{4}

*1*) Maintain the synaptic weight values if the presynaptic firing rate is equal to the desired firing rate.

*2*) Increase the strength of the *E* → *E* and *I* → *I* synapses and decrease the strength of the *E* → *I* and *I* → *E* synapses if the presynaptic firing rate is below the target firing rate.

*3*) Decrease the strength of the *E* → *E* and *I* → *I* synapses and increase the strength of the *E* → *I* and *I* → *E* synapses if the presynaptic firing rate is above the target firing rate.

The first constraint involves creating the fixed point at a target firing rate by setting the scaling factor *A* of the dynamic synapse model according to the formula (13) where *J*_{mn} represents specific synaptic weight values of a corresponding network with static synapses and *R*_{mn}* (*x**) and *U*_{mn}^{1*} (*x**) are the steady-state values for the internal dynamic synapse variables as a function of presynaptic firing rate, here set to the desired steady-state firing rate (see *Eqs. 6* and *7* in methods). *Equations 6* and *13* ensure that the steady-state dynamic synapse weight is equal to the static synaptic weight [μ_{mn}* (*x**) = *J*_{mn}] for a presynaptic firing rate *x**. We refer to μ_{mn}* (*x**) as the steady-state value for μ_{mn} (see methods, *Eq. 8*) and designate the resulting function of *x** as the *steady-state synaptic weight curve* (see Fig. 3). In the asynchronous range, when static synaptic weight values (*J*_{ee}, *J*_{ei}, *J*_{ie}, *J*_{ii}) are set for a sparse, random network, the network has a specific steady-state firing rate associated with those synaptic strengths, *x** (van Vreeswijk and Sompolinsky 1996, 1998). Because the network already has a fixed point with firing rate *x** for these static weights, we simply scale the dynamic synapses to take these static weights at *x**. We do this by setting the value of *A*_{mn} according to *Eq. 13*, which gives μ_{mn}* (*x**) = *A*_{mn}R_{mn}* (*x**)*U*^{1*}(*x**) = *J*_{mn}[*R*_{mn}* (*x**)*U*^{1*}(*x**)/*R*_{mn}* (*x**)*U*^{1*}(*x**)] = *J*_{mn}, thus preserving the firing rate *x**.
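Concretely, the *Eq. 13* scaling can be computed from the steady-state expressions of the dynamic synapse model. The sketch below assumes the standard Markram–Tsodyks steady-state formulas for *u** and *R** under constant-rate presynaptic spiking; the numerical parameter values are illustrative, not the paper's:

```python
import math

def steady_state(x, U, D, F):
    """Steady-state facilitation u* and recovery R* of a dynamic synapse
    driven at a constant presynaptic rate x (Hz); D and F in seconds.
    Assumes the standard Markram-Tsodyks steady-state expressions."""
    dt = 1.0 / x                                   # mean inter-spike interval
    u = U / (1.0 - (1.0 - U) * math.exp(-dt / F))
    R = (1.0 - math.exp(-dt / D)) / (1.0 - (1.0 - u) * math.exp(-dt / D))
    return u, R

def scaling_factor(J, x_target, U, D, F):
    """Eq. 13: choose A so that mu*(x*) = A * R*(x*) * u*(x*) equals
    the static weight J at the target rate x*."""
    u, R = steady_state(x_target, U, D, F)
    return J / (R * u)

# scale a depressing synapse (illustrative UDF values) so that its
# steady-state weight at x* = 10 Hz equals the static weight J = 0.05
A = scaling_factor(0.05, 10.0, U=0.5, D=1.1, F=0.05)
```

By construction, evaluating the resulting steady-state weight curve at the target rate recovers the static weight exactly, which is the content of the first condition.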

Satisfying the second and third constraints involves setting the UDF parameters of the dynamic synapses such that if there is a perturbation in the network that affects the steady-state firing rate then the dynamic synapses attempt to compensate and bring *x*_{e} back to *x**. The UDF parameters are responsible for the shape of the steady-state synaptic weight curve. Thus to create an attractor at *x**, which produces firing rate stability, we choose UDF parameters such that the synapse types *E* → *E* and *I* → *I* have negative derivatives in their steady-state synaptic weight curves and the synapse types *E* → *I* and *I* → *E* have positive derivatives in their steady-state synaptic weight curves (see Fig. 3). To test the validity of this analysis, we ran a parameter search over the range of physiological UDF values to determine whether there were simple volumes of UDF space that fulfill these two requirements over the range of firing rates 10–100 Hz. (See methods for a description of how and why these values were chosen.) Based on these theoretical considerations we chose three sets of UDF parameters (R1, R2, and R3) for each of the four synapse types randomly from the appropriate subvolume of UDF space: for synaptic connections *E* → *E* and *I* → *I* six points were chosen from the blue N-volume, and for *E* → *I* and *I* → *E* six points were chosen from the red P-volume (see Fig. 4, *A* and *B*). All resulting UDF parameters are listed in Table 1. The three conditions explained earlier create a stable fixed point for the firing rates of the resulting dynamical system. At the target firing rate *x*_{e} = *x*_{i} = *x**, the dynamic synapses equilibrate to (*J*_{ee}, *J*_{ei}, *J*_{ie}, *J*_{ii}), which maintains the firing rate.
At a firing rate less than *x** the synapses *E* → *E* and *I* → *I* increase their absolute synaptic weight values to values greater than *J*_{ee} and *J*_{ii} and the synapses *E* → *I* and *I* → *E* decrease their absolute strength to less than *J*_{ei} and *J*_{ie}, leading to an increased excitatory firing rate. At a firing rate greater than *x** the synapses *E* → *E* and *I* → *I* decrease their absolute synaptic weight values to less than *J*_{ee} and *J*_{ii} and the synapses *E* → *I* and *I* → *E* increase their absolute synaptic weight values to greater than *J*_{ei} and *J*_{ie}, leading to a decreased excitatory firing rate.
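The second and third conditions amount to sign constraints on dμ*/d*x* at *x**, which can be checked numerically from the steady-state weight curve. The sketch below again assumes the standard Markram–Tsodyks steady-state expressions; the two UDF triples are illustrative picks (not the R1–R3 sets of Table 1), chosen so that one synapse is depressing and the other facilitating around 10 Hz:

```python
import math

def mu_star(x, U, D, F, A=1.0):
    """Steady-state synaptic weight mu*(x) = A * u*(x) * R*(x) of a dynamic
    synapse at presynaptic rate x (standard steady-state formulas)."""
    dt = 1.0 / x
    u = U / (1.0 - (1.0 - U) * math.exp(-dt / F))
    R = (1.0 - math.exp(-dt / D)) / (1.0 - (1.0 - u) * math.exp(-dt / D))
    return A * u * R

def slope_at(x, U, D, F, h=0.1):
    """Numerical derivative of the steady-state synaptic weight curve at x."""
    return (mu_star(x + h, U, D, F) - mu_star(x - h, U, D, F)) / (2.0 * h)

# a depressing parameter set (suitable for E->E, I->I): negative slope
dep = slope_at(10.0, U=0.5, D=1.1, F=0.05)
# a facilitating parameter set (suitable for E->I, I->E): positive slope
fac = slope_at(10.0, U=0.05, D=0.02, F=1.2)
```

Parameter sets whose slopes keep these signs across the whole 10–100 Hz range would correspond to the N- and P-volumes of UDF space described above.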

In addition, we took experimentally found UDF values reported in Markram et al. (1998) and Gupta et al. (2000) and compared their contribution to network stability with that of the theoretically deduced parameters that were selected as previously described. We sampled random points from Gaussian distributions with means and SDs as reported in these articles and plotted the resulting UDF values in Fig. 4, *C*–*F*. The UDF values for *E* → *E* synapses from Markram et al. (1998) (blue circles in Fig. 4, *C* and *D*) lie clearly in the N-volume. The UDF values for *E* → *I* synapses (green circles) from that study lie much closer to the P-volume, but are not contained in it. The UDF values of GABAergic synapses of type F1 (facilitating), F2 (depressing), and F3 (recovering) reported in Gupta et al. (2000) are contained in the N-volume (see green, cyan, and black pluses in Fig. 4, *E* and *F*), but those for types F1 and F3 are close to the P-volume. This ambiguous position of UDF parameters for GABAergic synapses could be related to the fact that, from a theoretical perspective for improved homeostatic plasticity, the *I* → *I* and *I* → *E* connections may have either positive or negative derivatives depending on other network parameters (see Supplemental materials). In Wang et al. (2006) several additional clusters of UDF values for *E* → *E* synapses in prefrontal cortex are reported. These values (plotted in Fig. 4, *G* and *H*) all fall clearly into the N-volume (into which they should fall according to the proposed theoretical analysis).

### Network parameter ranges

To find ranges of synapse strengths for static synapses that were compatible with low firing rates and low network correlation in current-based or conductance-based networks, we varied the excitatory and inhibitory synaptic weights of conductance-based and current-based networks (see methods and Fig. 5). Specifically, we varied two synaptic weight values, one excitatory synaptic weight *J*_{e} and one inhibitory synaptic weight *J*_{i}, and set all synaptic weight values such that *J*_{ee} = *J*_{ie} = *J*_{e} and *J*_{ei} = *J*_{ii} = *J*_{i}. So that results between conductance- and current-based networks could be compared we scaled the conductance-based synapses so that the conductance- and current-based synapses had equal strength when the conductance-based synapses were multiplied by their respective driving forces at the neurons’ resting potential. We found a broad parameter range of synaptic weights that could potentially support dynamic synapses as a network feedback mechanism in the asynchronous, irregular regime. However, because we measured the firing rate, network correlation, and coefficient of variation of the interspike interval (ISI CV) for only a 2D manifold of the four-dimensional (4D) space of synaptic weights (*J*_{ee}, *J*_{ei}, *J*_{ie}, *J*_{ii}), these parameter maps serve only as a guide with which to start our simulations because the parameter space becomes 4D when the dynamic synapses are allowed to vary the synaptic strengths according to individual dynamic synapse formulas with different *A* and UDF parameters. We always started our networks with synaptic weight initial conditions from these 2D maps, so that the networks started in a setting where the network behavior is asynchronous irregular and the feedback current magnitudes are on par with or smaller than the level of the background current injection. Based on these maps, we used the synaptic weight parameter range shown in Fig.
5, *A*, *D*, and *G* as the starting point for the rest of our conductance- and current-based spiking network simulations, as well as our sparse, random mean-field simulations of the *Eq. 11* system, all of which used dynamic synapses. Specifically, *J*_{ee}, *J*_{ie} ∈ [0, 0.1] × 10^{−9} A and *J*_{ei}, *J*_{ii} ∈ [0, 1.67] × 10^{−9} A for current-based spiking networks and the current-based mean-field model and *J*_{ee}, *J*_{ie} ∈ [0, 1.67] × 10^{−9} S and *J*_{ei}, *J*_{ii} ∈ [0, 10] × 10^{−9} S for the conductance-based spiking networks and the dynamic synapses were set to these values using *Eq. 13*.

As a verification of our current-based mean-field model (*Eqs. 9*–*11*) without dynamic synapses, we used the same settings as in the networks of current-based spiking neurons (see Fig. 5*G*) for the mean-field model and compared the average firing rate. The mean-field model was in excellent agreement with the current-based spiking network simulations across the very wide parameter range sampled, with a mean absolute error (MAE) of 0.8 Hz and a maximum error of 3.8 Hz over the parameter range.

### Interaction of synaptic dynamics with network activity

To give an idea of how dynamic synapses interact with the network, we discuss an example in detail. The example in Figs. 6 and 7 shows a burst in a network of current-based spiking neurons followed by a convergence to a steady-state firing rate (a conductance-based example would work nearly identically, except that the effect on the synaptic conductances would be harder to interpret visually as a result of the effect of the driving force on each synapse). We also show a sparse, random mean-field simulation using the same parameters. A network was chosen that normally fires at 20 Hz in the absence of dynamic synapses, *J*_{ee} = *J*_{ie} = 0.05 × 10^{−9} A and *J*_{ei} = *J*_{ii} = −0.1 × 10^{−9} A. The network was initialized with the dynamic synapse parameter initial conditions set to a 5-Hz steady-state firing rate and the desired firing rate for the network was set to 10 Hz, by adjusting the scaling factors *A*_{mn} in the *Eq. 8* system. The firing rate initial conditions for the network were also set to 5 Hz, as closely as possible. Because a firing rate of 5 Hz gives an overall positive network gain when the target firing rate is set to 10 Hz, the firing rate shot up at the beginning of the simulation (Fig. 6, *A* and *B*). The dynamic synapses adapted to this new firing rate and brought it back down, ultimately reaching nearly the steady-state of 10 Hz for the excitatory population, whereas the inhibitory population firing rate remained near 20 Hz. Thus we see that an excitatory firing rate very near the target firing rate x_{e}* was reached from quite different initial conditions and then maintained. During the simulation, the average dynamic synapse weights were modified and caused the changes shown in the network firing rates (Fig. 6, *C*–*J*). We can assert that the dynamic synapses caused the change in network firing rate (and not the other way around) because the time constants of the dynamic synapses are much longer than any neuronal time constants.
Again there is excellent agreement between the current-based mean-field model and the current-based spiking network simulation.

A spike raster of a random subset of neurons in the current-based spiking network example (Fig. 7*A*) shows that the network functioned in the asynchronous range, aside from correlation induced by a common network firing rate. Figure 7, *B*–*D* shows data from a single example neuron in the simulation. The total synaptic current hovered around the background current and the recurrent synapses subtly influenced this current. When the network achieved the steady-state firing rate, the total current was slightly below the normal background current (Fig. 7*B*), thus producing an average firing rate of 10 Hz rather than the 20 Hz that would result without dynamic synapses. The individual synaptic currents in Fig. 7*C* show differences in magnitude of three- to fourfold as a result of the short-term plasticity.

### Self-tuning in networks of spiking neurons

To investigate the self-tuning properties of dynamic synapses on spiking networks we asked whether dynamic synapses could help tune a spiking neural network to a steady-state target firing rate even if the external input is perturbed, as occurs, for example, in primary sensory areas of the cortex in response to sensory stimuli. To test this, we used a conductance-based spiking network (*J*_{ee} = *J*_{ie} = 0.4 × 10^{−9} S and *J*_{ei} = *J*_{ii} = 8.48 × 10^{−9} S), a current-based spiking network (*J*_{ee} = *J*_{ie} = 0.013 × 10^{−9} A and *J*_{ei} = *J*_{ii} = −0.18 × 10^{−9} A), and a current-based mean-field model (*J*_{ee} = *J*_{ie} = 0.013 × 10^{−9} A and *J*_{ei} = *J*_{ii} = −0.18 × 10^{−9} A), all with a steady-state firing rate of 10 Hz. We additionally performed all of our self-tuning tests with the current-based mean-field model because it aids us in understanding the self-tuning properties of networks with dynamic synapses and shows, in particular, that the self-tuning properties of a circuit with dynamic synapses are a property of the dynamic synapse model and not something hidden in the many details and parameters of a complicated spiking neural network simulation. Every simulation was performed using both the parameters based on experimental data and the R1–R3 dynamic synapse parameters that were derived from our theoretical considerations (see methods). The scaling factors *A*_{mn} of the dynamic synapses were set only once such that the steady-state dynamic synapse curves intersected at 10 Hz. Thus each simulation had identical dynamic synapse parameters. We then perturbed in small increments both the mean and SD of the external input from −50 to +50% of their normal values (*I*^{inject} = 2.455 × 10^{−9} A and noise drawn from a 0-mean Gaussian, σ^{inject} = 6 × 10^{−9} A SD). The results, which are the steady-state excitatory firing rates from the last 1 s of each simulation, are shown in Fig. 8.
All the networks that had a steady-state firing rate between 9 and 11 Hz (a difference of ≤1 Hz from the target excitatory firing rate) are shown in blue, networks with firing rates between 8–9 or 11–12 Hz (a difference between 1 and 2 Hz) are shown in cyan, and networks with firing rates between 7–8 or 12–13 Hz (a difference between 2 and 3 Hz) are shown in green. In Fig. 8, *A*, *D*, and *G* we show the control case with static synapses for the three different network models for comparison. There were few networks that supported a 10-Hz steady-state firing rate. The maximal firing rates were >40 Hz and many networks were essentially shut down with a firing rate very close to or equal to 0 Hz. Thus the steady-state firing rates of networks with static synapses are quite dependent on mean current injection and injected noise level. When we ran the same simulations with dynamical synapses turned on (Fig. 8, remaining panels), the large firing rates were reduced. We examined two different assignments of UDF values to the four different types of synaptic connections in the network: values that are sufficient for inducing stability according to the previously (see first section of results) described analytical approach (Fig. 8, *B*, *E*, and *H*) and values taken from experimental data (Fig. 8, *C*, *F*, and *I*). Interestingly, for this perturbation test the dynamic synapse parameters based on experimental data somewhat outperformed the dynamic synapses parameters based on the analytical approach.

We emphasize that when viewing changes in external input as a perturbation to the network we are examining only the steady-state response of the network and that there is indeed a strong transient response to external input that could provide the excitation necessary for cortical computations, while still allowing for homeostatic regulation on this very short timescale. A transient response to any network perturbation results from the fact that dynamic synapses do not equilibrate instantaneously and thus the response will follow the timescale of the dynamic synapses. This is shown in Fig. 9 for the case of conductance-based spiking networks with dynamic synapse parameters based on the experimental data shown in Table 1 (Fig. 9, *A*–*C*) and R1 dynamic synapse parameters (Fig. 9, *D*–*F*) (the R2 and R3 parameters were similar). Figure 9, *A*–*F* shows 50-ms excitatory firing rate “snapshots” for each network at times 1–50, 251–300, and 451–500 ms. Additionally, Fig. 9*G* shows a single network's excitatory and inhibitory response to a constant input with constant noise variance, using experimentally observed UDF parameters; Fig. 9*H* shows the same network with R1 dynamic synapse parameters. In the case of experimentally observed synapse parameters the network compensated for the changed external input in approximately 600 ms, resulting in an excitatory steady-state firing rate near 11 Hz and an inhibitory steady-state firing rate of 24 Hz. In the case of the R1 dynamic synapse parameters the network compensated for the changed external input in approximately 150 ms, resulting in an excitatory steady-state firing rate and an inhibitory steady-state firing rate of approximately 24 Hz. This 150-ms period is in the same range as the duration of typical transient rate responses to visual stimuli in primary visual cortex (see, e.g., Fig. 2 in Lamme and Roelfsema 2000). The duration of transient rate responses reported for higher visual areas (see, e.g., Rainer et al.
2004) is somewhat longer and lies between the durations found in these network simulations for R1 dynamic synapses and dynamic synapses based on experimental data.

We next examined the effect of long-term changes in synaptic weights on firing rate stability in networks with dynamic synapses. We used the parameter range of static synaptic weight values already shown in Fig. 5 and assumed that these synaptic weights were a perturbation of some unknown but correct synaptic weights that would lead to a steady-state firing rate of 10 Hz. Taking each network with its given, possibly perturbed static synaptic weights, we then used *Eq. 13* to set the synaptic scaling factor for spiking networks with dynamic synapses assuming a target firing rate of 10 Hz. That is, the *A*_{mn} values were set to *A*_{mn} = *J*_{mn}/[*R*_{mn}* (10 Hz)*U*_{mn}^{1*} (10 Hz)] and were thus scaled by a single, constant scale factor across simulations using the same dynamic synapse parameters.

The results of the synaptic weight perturbations for conductance-based spiking networks, current-based spiking networks, and current-based mean-field models are shown in Fig. 10. In Fig. 10, *A*, *D*, and *G* are the control cases with static synapses, shown for comparison. With static synapses the steady-state firing rates for networks with high excitatory synaptic weights are in excess of 200 Hz. However, as shown in Fig. 10, *B*, *C*, *E*, *F*, *H*, and *I*, in networks with dynamic synapses, regardless of conductance-based (*A*–*C*) or current-based (*D* and *E*) synapses and regardless of which dynamic synapses parameters are used, the excessive firing rates disappear. Most important, in all but the one case of current-based networks with the dynamic synapse parameters based on experimental data, the number of perturbed networks that nevertheless had firing rates very close to the target firing rate increased dramatically. The continuous mean-field predictions of the dynamic synapses applied to the networks with synaptic weight perturbations are shown in Fig. 10, *G*–*I*. The mean-field simulations show excellent performance across nearly the entire range of perturbations and are qualitatively similar to spiking network simulations using dynamic synapses. Network correlation did not play a role in the effect the dynamic synapses had on the networks.

One may notice that some networks that had a firing rate between 9 and 11 Hz without dynamic synapses no longer had this firing rate when dynamical synapses were turned on (compare Fig. 10, *D* and *E*, *bottom left corner*). Furthermore, the mean-field model with dynamic synapses (compare Fig. 10, *G* and *H*) expands this range of target firing rates, whereas the current-based spiking network does not. Typically, the average excitatory firing rate drops by 1 or 2 Hz when dynamic synapses are turned on in the spiking network. The reason for this is somewhat subtle and it has to do with the spiking networks’ synaptic weight values, which are not always in exact agreement with the mean-field model synaptic strength values^{5} . Although this effect shows up in all three self-tuning results we show, as is clear from the figures, its effect is small, resulting in differences between the spiking networks and the mean-field models of only a few hertz at most and, nevertheless, the results with dynamic synapses show great improvement over the control case with static synapses.

Why should simply scaling the *A*_{mn} for dynamic synapses according to *Eq. 13* have a helpful effect on the target firing rate of the network if there is an unknown perturbation of the factor *J*_{mn} in that same equation? Assume, for example, that there are static synaptic strengths (*J*_{e}*, *J*_{i}*) that lead to a target firing rate of *x**. Assume further that there is an unknown perturbation of these synaptic strengths that can be written as (J_{e}* + ε_{e}, J_{i}* − ε_{i}). Because we have increased the excitation and decreased the inhibition we should expect an increase in the target firing rate; that is, the perturbed synaptic strengths (J_{e}* + ε_{e}, J_{i}* − ε_{i}) lead to *x** + *x*_{ε}. If we now examine the same network and synaptic weight perturbation but use dynamic synapses we see a change in behavior. As already outlined we set the *A*_{mn} according to *Eq. 13*, so that μ_{me}* (*x**) = J_{e}* + ε_{e} and μ_{mi}* (*x**) = J_{i}* − ε_{i} because we do not know that there is a perturbation. Now the situation with dynamic synapses is quite different from with static synapses because at *x** the dynamic synapses equilibrate to values that support a higher firing rate than *x**, specifically *x** + *x*_{ε}. At a firing rate of *x** + *x*_{ε}, however, because the derivatives of the dynamic synapses have the *correct* signs, the network supports a firing rate below *x** + *x*_{ε}. Thus we can conclude that the firing rate must be between *x** and *x** + *x*_{ε}, which is better than the case of static synapses, where the network firing rate is completely dictated by the perturbed synaptic weights and thus has a firing rate of *x** + *x*_{ε}. The degree to which the final firing rate will be closer to *x** or *x** + *x*_{ε} depends on the magnitude of these derivatives. We refer to the last part of the Supplementary material for an analytical comparison of network stability with static and dynamic synapses.
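The bracketing argument above can be illustrated with a toy one-dimensional rate model *x* = β(*wx* + *I*). All numbers below are illustrative (this is not the paper's mean-field model), and the dynamic weight is linearized around *x** with the "correct" negative derivative standing in for a depressing steady-state curve:

```python
from scipy.optimize import brentq

beta, I = 2.0, 1.0               # toy linear gain and external drive
w_star = 0.45                    # unperturbed weight: x* = beta*I/(1 - beta*w*)
x_star = beta * I / (1.0 - beta * w_star)            # = 20 Hz

w_pert = 0.46                    # unknown LTP-like perturbation (J* + eps)
x_static = beta * I / (1.0 - beta * w_pert)          # static synapses: x* + x_eps = 25 Hz

# dynamic synapses, scaled per Eq. 13 so that mu(x*) = w_pert,
# with a negative derivative d around x* (a depressing curve, linearized)
d = 0.002
mu = lambda x: w_pert - d * (x - x_star)

# fixed point of x = beta*(mu(x)*x + I): guaranteed to lie in (x*, x* + x_eps)
x_dyn = brentq(lambda x: beta * (mu(x) * x + I) - x, x_star, x_static)
```

With static synapses the perturbed fixed point sits at the full 25 Hz; with the linearized dynamic weight the fixed point is pulled back toward *x**, and increasing the magnitude of *d* pulls it closer still, matching the argument that the final firing rate depends on the magnitude of the derivatives.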

As a final self-tuning test we also investigated the effects of systematic changes in the excitability of subpopulations of neurons in the simulated circuit. Diverse changes in the excitability of neurons were previously reported to be caused by neuromodulators (especially dopamine) in various cortical areas (Gulledge and Jaffe 1998, 2001; e.g., see Fig. 2*B* in first reference). Because such changes in the excitability of neurons cannot be modeled directly in the context of LIF neurons, we simply inactivated (i.e., deleted) randomly selected subsets of neurons in the circuit. For continuity, we chose the same networks as in the input perturbation self-tuning simulation: conductance-based spiking networks (*J*_{ee} = *J*_{ie} = 0.4 × 10^{−9} S and *J*_{ei} = *J*_{ii} = 8.48 × 10^{−9} S), current-based spiking networks (*J*_{ee} = *J*_{ie} = 0.013 × 10^{−9} A and *J*_{ei} = *J*_{ii} = −0.18 × 10^{−9} A), and the current-based mean-field models (*J*_{ee} = *J*_{ie} = 0.013 × 10^{−9} A and *J*_{ei} = *J*_{ii} = −0.18 × 10^{−9} A), all with a steady-state firing rate of 10 Hz. We randomly chose excitatory and inhibitory neurons in these networks to inactivate, with the total number of inactivated neurons between 0 and 70% of their respective pool totals, in small increments. The current-based mean-field model received an analogous perturbation by modifying the *K*_{mn} parameter in *Eq. 9* according to the neuronal inactivation percentage in the spiking networks. The control simulations with static synapses are shown in Fig. 11 *A*, *D*, and *G*; results with the R1–R3 dynamic synapse parameters are shown in Fig. 11, *B*, *E*, and *H*; and results with the dynamic synapse parameters based on experimental data are shown in Fig. 11, *C*, *F*, and *I*. 
For the networks tested, the dynamic synapse parameters based on experimental data did not have a significant effect but the R1–R3 dynamic synapse parameters showed improvement over the control, with the vast majority of networks within 2 Hz of the target firing rate of 10 Hz. Again, network correlation effects did not play a role in the results of the simulations.

### Choosing other target firing rates

Implicit in the demonstrated self-tuning properties is the ability to set a specific fixed-point firing rate using dynamic synapses, through use of the scaling in *Eq. 13*. The network perturbations can then be viewed as a perturbation away from this fixed point. We thus wondered, in principle, how close a spiking network could get to a preprogrammed fixed point, as set by *Eq. 13*, regardless of what the initial synaptic strengths of the network were. Briefly, we used five networks from the current-based, spiking network synaptic parameter range of Fig. 5*D*. The networks had *J*_{e} and *J*_{i} as follows (in nA): blue [0.025, −0.05], red [0.025, −0.15], cyan [0.05, −0.1], magenta [0.075, −0.05], and green [0.075, −0.015], leading to corresponding firing rates of 20, 12, 20, 100, and 20 Hz with static synapses. We then chose as the target firing rates every integral firing rate in the range of 1–120 Hz and used *Eq. 13* to appropriately set the dynamic synapses in the five networks. We do not mean to suggest that cortical networks create fixed points at such high firing rates, rather only to test the principle behind *Eq. 13*. The results, which are the steady-state excitatory firing rates, are shown in Fig. 12*A*. All networks showed an ability to enhance their firing rates toward the desired target firing rate, although the two networks with the weakest excitatory synapses (denoted by blue and red) were unable to follow the target firing rate closely at >20 Hz. Another complication arose in that the two networks with the strongest excitatory synapses (green and magenta), although able to closely follow the target firing rate, also showed significant network correlation at >20 Hz, shown in Fig. 12*B*. Despite these caveats, the results clearly demonstrate that the ability to tune the steady-state firing rate of a network is a general property of the dynamic synapse configuration. 
The results lend further evidence to the idea that self-tuning properties of spiking networks with dynamic synapses are caused by setting a firing rate fixed point, to which the networks attempt to return after a perturbation.

### Theoretical analysis

There is little hope that one can give a complete theoretical justification for the demonstrated self-tuning properties of sparsely connected circuits of spiking neurons with dynamic synapses or for the associated mean-field model defined by *Eqs. 7*–*9*. The reason lies in the nonlinearity of these models and the fairly large number of variables that are involved. However, the demonstrated self-tuning properties of these complex systems are clearly related to known self-tuning properties of substantially simpler systems for which a theoretical analysis is feasible. These analytical results are discussed in the following sections.

### An analytical result on the assignment of differential dynamics to different types of synapses in a distributed circuit

In the first section of results we proposed a heuristic for choosing the signs of the derivatives of the synaptic weight curves μ_{mn}* (*x*_{n}) at *x**. We then demonstrated through numerical simulations of circuits of spiking neurons that this choice was justified. This heuristic can also be justified analytically for a 2D mean-field model, similar to that described in *Eq. 11* (assuming dynamic synapses with very small derivatives that equilibrate instantaneously). One can prove that the derivatives d_{mn} = (dμ_{mn}/d*x*_{n})(*x**) should have the following signs for the excitatory population to have homeostatic properties with respect to changes in the intensity of external input currents: d_{ee} < 0, d_{ii} > 0, d_{ie} > 0, d_{ei} < 0 (see Supplementary material for details). These signs agree with the previously proposed heuristic rules for the assignment of depressing and facilitating synapses to the four types of synaptic connections (see first section of results; note that the negative sign is included in μ_{ei} and μ_{ii}). For the excitatory population to also have homeostatic properties with respect to changes in the intensity of external input currents to the inhibitory population, the analytically derived condition is somewhat more complicated, involving β_{e}, the derivative of the excitatory *F*–*I* curve evaluated at the fixed point *x**. Taken together these conditions imply that regardless of the size of μ_{ee}(*x**), the *E* → *E* connection should always be depressing and the *E* → *I* should always be facilitating to allow the circuit to return to a steady-state firing rate in spite of a changed level of external input. Additionally, if μ_{ee}(*x**) > β_{e}^{−1} the *I* → *E* synapses should be facilitating and the *I* → *I* synapses should be depressing; otherwise, the *I* → *E* synapses should be depressing and the *I* → *I* synapses should be facilitating.
The fact that this analytical result does not point to a general rule for the optimal dynamics of synapses from inhibitory neurons (from the perspective of firing rate stability) could be seen as a possible explanation for the diverse distribution of experimentally observed UDF parameters for these synapses (see Fig. 4, *E* and *F*). In the Supplementary material we additionally justify our self-tuning heuristic for the parameter range used in our current-based networks.

### A theoretical analysis of the impact of changes of synaptic weights on stability properties of networks with static and dynamic synapses

One can prove for the same mean-field model with excitatory and inhibitory neurons that the steady-state firing rate *x** is less affected by small changes in synaptic weights (resulting, for example, from LTP or LTD) if the synaptic connections are dynamic and tuned according to an appropriate heuristic. An analytical derivation is provided in the second part of the Supplementary material, which also shows that our self-tuning heuristic is appropriate for the parameter regime used in the current-based spiking networks considered here; an additional example figure (Fig. S1) is given there.
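This effect can be sketched numerically in a self-contained toy model (hypothetical parameters, not the paper's fitted model): an LTP-like increase of the *E* → *E* weight shifts the steady-state excitatory rate far less when the steady-state efficacy curves are dynamic.

```python
# Toy 2-D rate model with hypothetical parameters: raise J_ee from 0.5 to 0.6
# ("LTP") and compare the resulting steady-state excitatory rate with static
# synapses (d = 0) vs. dynamic synapses whose efficacy curves are linearized
# around the target rate x* = 10 Hz with the derivative signs derived earlier.

def simulate(J_ee, d, T=400.0, dt=0.01, x_star=10.0):
    J = {"ee": J_ee, "ei": -1.0, "ie": 1.0, "ii": -0.5}   # hypothetical weights
    sgn = {"ee": -1.0, "ei": -1.0, "ie": 1.0, "ii": 1.0}  # derivative signs
    f = lambda u: max(u, 0.0)                             # threshold-linear F-I curve
    mu = lambda mn, x: J[mn] + sgn[mn] * d * (x - x_star)
    xe = xi = x_star
    for _ in range(int(T / dt)):
        ue = mu("ee", xe) * xe + mu("ei", xi) * xi + 15.0  # external drive to E
        ui = mu("ie", xe) * xe + mu("ii", xi) * xi + 5.0   # external drive to I
        xe, xi = xe + dt * (-xe + f(ue)), xi + dt * (-xi + f(ui))
    return xe

# Before "LTP" (J_ee = 0.5) the fixed point is x_e = 10 Hz in both cases.
xe_static = simulate(J_ee=0.6, d=0.0)
xe_dynamic = simulate(J_ee=0.6, d=0.1)
print(xe_static, xe_dynamic)  # rate shift is several times smaller with dynamic synapses
```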

## DISCUSSION

We have shown that the inclusion of more realistic models for synapses, which reflect their experimentally found short-term plasticity, endows models of cortical neural networks with interesting stability properties. In particular, it enables such models to return to a low but nonzero spontaneous firing rate after a large variety of perturbations, even if the perturbation, such as a change in the level of external input, a change in synaptic strength, or even neuron deletion, is of a longer-lasting nature. This surprising stability property of network models with dynamic synapses, which has apparently not been reproduced in models with static synapses, may provide a possible explanation for the surprisingly stable low but nonzero rate of spontaneous firing reported for cortical neurons in a variety of studies. Several other homeostatic processes, such as synaptic scaling, LTP and LTD, and genetic regulation of receptor numbers, are likely to support the stability of the rate of spontaneous firing of cortical neurons on a larger timescale (Turrigiano and Nelson 2004). However, none of these mechanisms would apparently be able to support stability on the timescale of seconds, say, in response to a changed level of sensory input or to a changed concentration of neuromodulators in a cortical circuit. On the other hand, we showed in Fig. 9 that the stability property endowed by short-term synaptic plasticity leaves a sufficiently large time window of one to several hundred milliseconds during which the firing rate in the circuit is affected by the external perturbation. Thus information about such external inputs can be transmitted to other cortical circuits and integrated into cortical computation. In fact, the dynamic response to an increased external input shown in Fig. 9*H* is on a timescale similar to that of typical responses to visual inputs in primary visual cortex (see, e.g., Fig. 2 in Lamme and Roelfsema 2000).
Even on a larger timescale, however, the information about the external (or internal) perturbation remains accessible to the neural system, although the firing rate of excitatory neurons is returned through synaptic dynamics to a given target level. The same firing rate of excitatory neurons is then accompanied by a different firing rate of inhibitory neurons (see Fig. 9, *G* and *H*) and this rate may therefore contain substantial information about the nature and level of the external perturbation for a much longer time period. This observation also implies that the same firing rate of excitatory neurons can be produced by a virtually infinite number of combinations of concentration levels of neuromodulators and activity levels of inhibitory neurons, analogously to the large sets of different circuit parameters that were reported to support the generation of a target firing pattern in the pyloric circuit in the crustacean stomatogastric ganglion (Marder and Goaillard 2006).

The target value of 10 Hz for the spontaneous firing rate of excitatory neurons in cortical circuits on which we have focused in our network simulations was chosen rather arbitrarily, although it lies within a range of spontaneous firing rates reported in a number of experimental studies (see introduction). We show in Fig. 12 that a large range of other target values can be achieved by choosing a suitable global scaling parameter for synaptic weights, which might for example be under genetic control. However, to demonstrate stability at very low rates (say, 1 Hz), one needs to add neuron models to the network that fire spontaneously even in the absence of synaptic input (or during a release from inhibition); otherwise, the network cannot recover from a temporary pause in firing and will remain indefinitely silent.

We have shown in this article that the stability of the firing rate of excitatory neurons endowed by short-term synaptic dynamics is rooted in general principles of control theory, recently proposed for much simpler dynamical systems. In fact, activity-dependent rescaling of feedback strength was previously postulated as a powerful mechanism for self-tuning in control theory for purely mathematical reasons (Moreau and Sontag 2003). We demonstrated that this theoretical analysis of stability properties provided by dynamic synapses can be extended to the case of a distributed system that consists of several interacting dynamical modules, such as an interacting population of excitatory and inhibitory neurons in a cortical circuit, each with thousands of neurons. In particular, we have exhibited an analytical method for deriving rules regarding the type of synaptic dynamics that should be assigned to different types of synapses for the purpose of network stability. This theoretical approach may help to provide an explanation for the diversity of the empirically found short-term plasticity of connections between modules with different dynamic roles (such as excitatory and inhibitory neurons). It turns out that this theoretical analysis provides clearer predictions for the dynamics of synapses from excitatory neurons than for GABAergic synapses. This might provide an explanation for the experimentally found diversity in the dynamics of GABAergic synapses.

Finally, we emphasize that the investigation of the relationship between stability properties of networks of spiking neurons and the dynamics of specific types of synapses that we have initiated in this article should be seen only as a first step. In a second step one also needs to investigate through computer simulations more detailed network models consisting of different types of excitatory and especially inhibitory neurons with specific firing properties and more specific connectivity patterns. A quite challenging open problem is whether one can also extend the theoretical analysis of stability properties induced by dynamic synapses to more complex networks.

## GRANTS

D. Sussillo was supported in part by a Fulbright Research Grant from the U.S. State Department. W. Maass was partially supported by the Austrian Science Fund (Projects P17229 and S9102-N13) and by the European Union Project Fast Analog Computing with Emergent Transient States (FP6-015879).

## Acknowledgments

The authors thank Drs. Stephan Haeusler and Eduardo Sontag for fruitful discussions that greatly aided this paper and Dr. Rodney Douglas for very helpful critical comments.

## Footnotes

↵1 It is shown in Moreau and Sontag (2003) that it suffices to assume that *1*) *f* is decreasing, *2*) *g* is strictly increasing, and *3*) there exists some *x** such that *f*(*x**) = *g*(μ_{0}).

↵2 The online version of this article contains supplemental data.

↵3 The variable *u*_{k−1} in the third expression in the *Eq. 6* system was erroneously replaced by *u*_{k} in Markram et al. (1998); see the discussion in Maass and Markram (2002).

↵4 See the Supplemental materials for a theoretical justification of these choices.

↵5 In a spiking network without dynamic synapses there is a distribution of firing rates, whose mean we have used as the steady-state firing rate of the network. In these networks even the tails of the neuronal firing rate distribution have the same synaptic weight values *J*_{mn}. However, in the spiking network with dynamic synapses turned on, the lower part of the firing rate distribution will typically have synapses that are stronger than the set value μ_{mn}(*x**) = *J*_{mn} for the target firing rate, and the higher part of the distribution will typically have synapses that are weaker than *J*_{mn}. Because the steady-state dynamic synapse efficacy curves were constructed only to ensure that the derivatives had the right signs, there is no reason that the expected value of the steady-state synaptic weight across all synapses from neuron type *n* to *m* should exactly equal the mean-field average [i.e., that ∫*P*(*x*_{n}) μ_{mn}(*x*_{n}) d*x*_{n} = *J*_{mn}, where *P*(*x*_{n}) is the distribution of firing rates for neuron type *n*]. Furthermore, changing the target firing rate may change the shape of the steady-state firing rate distribution of the network, thus making it effectively impossible to cancel this effect if one desires to tune the same network to different firing rates. Overall, this effect results in no more than a 1- to 2-Hz difference between the mean-field prediction of the effect of dynamic synapses and the spiking network results with dynamic synapses.
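This averaging effect can be illustrated with a toy calculation (all numbers hypothetical, not taken from the paper's networks): for a nonlinear, saturating efficacy curve μ(*x*) normalized so that μ(*x**) = *J* at the target rate, the population average of μ over a spread of presynaptic rates with mean *x** differs from the mean-field value *J*.

```python
import math
import random

# Toy illustration (hypothetical numbers): depressing-like saturating efficacy
# curve mu(x), normalized so mu(x_star) == J, averaged over a lognormal
# distribution of presynaptic rates whose mean equals x_star = 10 Hz.
# Because mu is nonlinear (here convex), E[mu(x)] != mu(x_star) = J.
random.seed(0)
x_star, c, J = 10.0, 20.0, 1.0
mu = lambda x: J * (1.0 + x_star / c) / (1.0 + x / c)  # mu(x_star) == J by construction

sigma = 0.5                               # arbitrary spread of the log-rates
m = math.log(x_star) - sigma ** 2 / 2.0   # chosen so the mean rate equals x_star
rates = [random.lognormvariate(m, sigma) for _ in range(200000)]
avg_rate = sum(rates) / len(rates)
avg_mu = sum(mu(x) for x in rates) / len(rates)
print(avg_rate, avg_mu)  # mean rate is ~10 Hz, yet E[mu(x)] exceeds J (convexity of mu)
```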

- Copyright © 2007 by the American Physiological Society