## Abstract

We investigated spontaneous activity and excitability in large networks of artificial spiking neurons. We compared three different spiking neuron models: integrate-and-fire (IF), regular-spiking (RS), and resonator (RES). First, we show that different models have different frequency-dependent response properties, yielding large differences in excitability. Then, we investigate the responsiveness of these models to a single afferent inhibitory/excitatory spike and calibrate the total synaptic drive such that they exhibit similar postsynaptic potential (PSP) peaks. Based on the synaptic calibration, we build large microcircuits of IF, RS, and RES neurons and show that the resonance property favors homeostasis and self-sustainability of the network activity. On the other hand, integration produces instability while endowing the network with other useful properties, such as responsiveness to external inputs. We also investigate other potential sources of stable self-sustained activity and their relation to the membrane properties of neurons. We conclude that resonance and integration at the neuron level might interact in the brain to promote stability as well as flexibility and responsiveness to external input and that membrane properties, in general, are essential for determining the behavior of large networks of neurons.

## INTRODUCTION

Neocortical networks of neurons in the brain have two remarkable properties: they can produce activity patterns in the absence of any sensory input (Arieli et al. 1995) and exhibit vigorous responses when stimulated (Frazor et al. 2004). The internal activity produced by the brain independently of external stimulation is present even during sleep cycles (Evarts 1964; Steriade 2006) or during sensory deprivation (Celikel et al. 2004; Dupont et al. 2003). It was previously suggested that self-sustained activity contributes to higher cognitive processes, such as working memory, decision making, and goal-directed behavior (Wang 2003), and it was recently shown that spontaneous brain activity plays an important role in synaptic maturation during development (Gonzalez-Islas and Wenner 2006).

There are two different concepts associated with self-sustained activity: spontaneous activity and persistent activity. *Spontaneous activity* is related to ongoing dynamics that is not triggered by external events (Arieli et al. 1995), whereas *persistent activity* is triggered by an external event/stimulus and is maintained in a task-relevant manner (Miyashita and Chang 1988). Persistent activity is produced on top of already existing spontaneous activity. Our main interest in this study was to investigate possible mechanisms underlying spontaneous activity in models of spiking microcircuits and their implications on the *responsiveness* to external inputs.

Spontaneous activity and the responsiveness to external inputs are very much interrelated. To be able to maintain robust ongoing activity in the absence of stimulation, a neural circuit must possess some homeostatic mechanisms (Davis 2006; Turrigiano and Nelson 2004) that avoid the “dying out” (activity completely ceases after some period of time) or the “explosion” (the network becomes epileptic) of activity. In turn, such regulation mechanisms might impair the responsiveness of the circuit to external stimuli. This issue becomes critical if the circuit receives weak inputs and must amplify them to attain a robust response. Indeed, recent evidence suggests that thalamic inputs to the cortex are quite weak. They represent <10% of the total number of afferent connections in layer IV (Douglas and Martin 2004). Nevertheless, cortical networks are very responsive to inputs (Frazor et al. 2004) and they exhibit stable spontaneous activity that interacts with input sensory signals to produce a combined effect (Arieli 2004; Arieli et al. 1996; Tsodyks et al. 1999). We suggest that stable spontaneous activity and responsiveness to inputs might be supported by very different mechanisms and that there might be a complex interplay between the two.

### Spontaneous activity

There have been numerous attempts to model self-sustained activity in artificial spiking neural networks. Although persistent activity relies on specific neuronal circuitry (Durstewitz and Seamans 2006), spontaneous activity is usually modeled using an external, nonspecific “background” input (Hansel and Mato 2001; Latham et al. 2000; Plesser and Gerstner 2000; Roxin et al. 2004; van Vreeswijk and Sompolinsky 1996). The background input is unrelated to the stimulus and is thought to arise mostly from distant cortical or subcortical structures (Amit and Brunel 1997; Durstewitz and Seamans 2006).

A background input is necessary for producing spontaneous activity because stable networks of integrator neurons have a tendency to become quiescent without external excitation (Hansel and Mato 2001). The background input can be modeled either deterministically (van Vreeswijk and Sompolinsky 1996) or stochastically (Brunel and Hakim 1999; Hansel and Mato 2001; Latham et al. 2000; Plesser and Gerstner 2000; Roxin et al. 2004). The vast majority of models use the stochastic approach. A random, usually Poisson, process is used in such cases to account for the external background drive that allows the network to maintain a baseline activity level and avoid “dying out.” Such a strategy was also motivated by the known spontaneous, apparently random, spiking of cortical neurons that would, at least in part, be induced by the random background fluctuations (Softky and Koch 1993; Tuckwell 1989). Indeed, there is increasing evidence that spontaneous neurotransmitter (spike-independent) axonal release induces miniature synaptic potentials (minis) that play an important role in promoting spontaneous network activity during sleep and anesthetized states (Paré et al. 1997, 1998; Timofeev et al. 2000). However, in the awake state, the common belief that neurons fire in a stochastic, unreliable way has been recently challenged (Gur and Snodderly 2006); thus the influence of the random background is probably differentially expressed depending on the cortical state.

It is very likely that in the cortex many mechanisms contribute to the establishment of stable spontaneous activity. Spontaneous miniature synaptic events and noncortical sources of excitation provide a “lower bound” for cortical activity, preventing it from “dying out.” On the other hand, the neocortical circuitry is highly recurrent—thus the danger of self-amplified “runaway excitation” leading to the “explosion” of activity (Vogels et al. 2005). This raises the need for an “upper bound” on neural activity (Davis 2006). The dynamic regulation of network activity is supported by numerous homeostatic mechanisms, such as spike-timing–dependent plasticity (STDP) with hard or soft bounds (Abbott and Nelson 2000), short-term synaptic facilitation/depression (Abbott and Regehr 2004), synaptic scaling (Turrigiano and Nelson 2004), intrinsic neuronal plasticity (Zhang and Linden 2003), or spike-frequency adaptation (Benda and Herz 2003; Gabbiani and Krapp 2006). Among these, intrinsic plasticity and synaptic scaling are slow, compared with the response timescale of cortical networks (tens of milliseconds). Synaptic facilitation/depression and STDP can be relatively fast and act at the level of synapses. At the cellular level, spike-frequency adaptation is a fast process that could help networks avoid firing saturation (Gabbiani and Krapp 2006). To the best of our knowledge, in the context of spontaneous activity, no fast homeostatic mechanism relying on intrinsic membrane properties has been put forward so far. Here, we propose membrane resonance as a possible candidate.

### Responsiveness to inputs

It is generally recognized that fast homeostatic processes supporting the stability of network activity also impair its responsiveness to external stimuli (Vogels et al. 2005). Slower regulating mechanisms such as intrinsic plasticity and synaptic scaling are not expected to directly interfere with response dynamics on a short timescale. However, fast mechanisms such as short-term depression and spike-frequency adaptation can in principle prevent robust excitatory waves from propagating because they reduce the gain in the network (Turrigiano and Nelson 2004). Such control mechanisms can dramatically reduce the responsiveness of the circuit to external stimulation. Reconciling stable spontaneous dynamics with network responsiveness is still an unresolved issue (for a review, see Vogels et al. 2005).

In cortical networks, the regulation of the “lower bound” of activity is achieved both by integrating “background/external” excitatory signals (minis/noncortical drive) and through intrinsic dynamical properties of the network, such as reverberating activity. The “upper bound” is regulated through network properties (that stem from a plethora of mechanisms ranging from synaptic to cellular and structural properties) and also through the global modulation of the cortical state. Here, we first study the possibility of regulating both the “lower” and “upper” bounds of activity exclusively through network properties, in the absence of any external drive. So far only a few studies have reported success in this direction (Kumar et al. 2005; Muresan et al. 2005) because of the known difficulty of stabilizing the activity of strongly coupled recurrent neural microcircuits (Vogels et al. 2005) in the general case (stable reverberating circuits can be produced using specialized circuitry; see Ben-Yishai et al. 1995; Durstewitz and Seamans 2006; Hansel and Mato 2001). Next, we propose that membrane properties of individual neurons could play a very important role in both spontaneous activity and network responsiveness. We explore the properties of different neural models that are commonly used in building large-scale networks of spiking neurons. More specifically, the leaky integrate-and-fire model (Dayan and Abbott 2001) and two variants of the Izhikevich (2003) model (regular-spiking and resonator neurons) are investigated here. The differences in terms of excitability and frequency-dependent response will be emphasized. To bring the different models of neurons to comparable activity regimes, we first calibrate the synaptic strength required to obtain the same postsynaptic potential (PSP) peak to an incoming spike.
Based on such calibrations, we are able to compare large networks of integrate-and-fire, regular-spiking, and resonator neurons and investigate their ability to produce stable spontaneous activity in the absence of any external drive. We also investigate the responsiveness of such networks to external inputs and we discuss different patterns of activity that are produced by the integrative or resonant behavior of network neurons. We finally investigate the relationship of our findings with other known processes contributing to spontaneous activity such as the influence of miniature synaptic potentials and synaptic delays.

## METHODS AND RESULTS

### Integration or resonance?

Since the early studies of Sherrington (1906) it has been widely believed that neurons behave as integrators. The integrative property implies that a neuron's response to stimulation increases monotonically with the input frequency.

However, recent experimental evidence suggests that, apart from their integrative properties, neurons also exhibit frequency-dependent behavior or resonance, having preferred stimulation frequencies in the range of 5–20 Hz for pyramidal neurons and 5–50 Hz for interneurons (Fellous et al. 2001; Hutcheon and Yarom 2000). The resonant property implies that neurons have a maximal response to a preferred stimulation frequency. It was shown earlier that the original Hodgkin–Huxley (HH) model (Hodgkin and Huxley 1952) behaves more like a resonator than an integrator (Izhikevich 2000).

In spite of such evidence, most modeling studies on networks of spiking neurons use the simple leaky integrate-and-fire (IF) model. Although it is computationally easy to simulate, the model does not exhibit realistic dynamics, comparable to that of cortical neurons (Izhikevich 2004). The IF neuron belongs to class 1 excitable systems and monotonically increases its response with the increasing stimulation frequency. The IF formalism models only one variable, the membrane potential (*v*) of the neuron (Dayan and Abbott 2001)

$$\tau \frac{dv}{dt} = -v + RI \tag{1}$$

where *v* is the membrane potential, τ is the membrane time constant (typically 10 ms), *R* is the membrane resistance (taken here to be 10 MΩ), and *I* is the total postsynaptic (input) current.
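As a concrete illustration, *Eq. 1* can be integrated with the 1-ms Euler step used throughout this study. The spike threshold, reset value, and input current below are illustrative assumptions (the IF firing mechanism is not parameterized at this point in the text):

```python
# Minimal leaky integrate-and-fire neuron (Eq. 1), Euler-integrated with a
# 1-ms step. With R in megaohms and I in nanoamperes, v is in millivolts
# (deviation from rest). Threshold and reset are illustrative assumptions.

def simulate_if(I, tau=10.0, R=10.0, dt=1.0, v_th=15.0, v_reset=0.0):
    """Integrate tau*dv/dt = -v + R*I for a list of input currents I
    (one value per time step); return (voltage trace, spike times in ms)."""
    v, trace, spikes = 0.0, [], []
    for t, i_t in enumerate(I):
        v += dt / tau * (-v + R * i_t)   # Euler step of Eq. 1
        if v >= v_th:                    # spike, then reset
            spikes.append(t * dt)
            v = v_reset
        trace.append(v)
    return trace, spikes

trace, spikes = simulate_if([2.0] * 100)  # constant 2-nA drive for 100 ms
```

With a constant drive the membrane charges toward *R·I* = 20 mV and fires regularly whenever it crosses the assumed 15-mV threshold.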

The HH models fit the dynamics of real neurons better than the linear integrate-and-fire formalism, which cannot reproduce the variable threshold behavior of cortical cells. However, the original HH model is computationally demanding, rendering it unfeasible for the study of large networks of neurons (Izhikevich 2004). A more efficient model was recently proposed (Izhikevich 2003). It is based on two-dimensional reduced Hodgkin–Huxley approximations and is very efficient in terms of computational complexity. The Izhikevich model (Izhikevich 2003) discussed here uses two variables to describe the state of the neuron: the membrane potential (*v*) and a recovery variable (*u*)

$$\frac{dv}{dt} = 0.04v^2 + 5v + 140 - u + I \tag{2}$$

$$\frac{du}{dt} = a(bv - u) \tag{3}$$

where *v* is the membrane potential, *u* is the recovery variable, *I* is the total postsynaptic current, and *a* and *b* are parameters.

When the membrane potential reaches a value >30 mV, a spike is recorded and the membrane potential is reset to its resting value, while the recovery variable is increased by a given amount

$$\text{if } v \geq 30 \text{ mV, then } \begin{cases} v \leftarrow c \\ u \leftarrow u + d \end{cases} \tag{4}$$

where *c* is the resting potential and *d* is a constant that is added to the recovery variable.

Depending on the concrete values of the four parameters (*a*, *b*, *c*, *d*), the model can reproduce a plethora of behaviors of cortical neurons, such as adaptation, postinhibitory rebound, and so forth (Izhikevich 2003). Of special interest to our study are the regular-spiking (RS) regime (*a* = 0.02, *b* = 0.1, *c* = −70 mV, *d* = 8) and the resonator (RES) regime (*a* = 0.1, *b* = 0.26, *c* = −70 mV, *d* = 2). Whereas in the regular-spiking regime the neuron behaves intermediately between an integrator and a resonator, exhibiting spike-frequency adaptation, in the resonator regime the membrane potential tends to oscillate after stimulation and the neuron can be switched between silent and spiking modes by appropriately timed stimuli (Izhikevich 2002).
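A minimal sketch of *Eqs. 2*–*4* with the RS and RES parameter sets quoted above might look as follows; the constant driving current is an arbitrary illustrative choice, and the 0.5-ms half-steps for *v* follow Izhikevich (2003):

```python
# Izhikevich model (Eqs. 2-4) in the regular-spiking and resonator regimes.
# The parameter sets are those quoted in the text; the constant drive of 15
# (arbitrary units) is an illustrative assumption.

def simulate_izhikevich(I, a, b, c, d, dt=1.0, v0=-70.0):
    """Return spike times (ms) of the model driven by the current list I."""
    v, u = v0, b * v0
    spikes = []
    for t, i_t in enumerate(I):
        for _ in range(2):               # two 0.5-ms Euler steps for v (Eq. 2)
            v += (dt / 2) * (0.04 * v * v + 5 * v + 140 - u + i_t)
            if v >= 30.0:
                break
        u += dt * a * (b * v - u)        # Eq. 3
        if v >= 30.0:                    # Eq. 4: spike, then reset
            spikes.append(t * dt)
            v, u = c, u + d
    return spikes

RS  = dict(a=0.02, b=0.1,  c=-70.0, d=8.0)   # regular spiking
RES = dict(a=0.1,  b=0.26, c=-70.0, d=2.0)   # resonator

rs_spikes  = simulate_izhikevich([15.0] * 500, **RS)
res_spikes = simulate_izhikevich([15.0] * 500, **RES)
```

With this drive, neither regime has a stable equilibrium, so both fire repetitively; the RS neuron shows the slowing of firing caused by the larger *d* (spike-frequency adaptation).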

We model the total input current (*I*) that a neuron receives as the sum of all postsynaptic currents (*psc_i*) produced at each synapse *i*

$$I = \sum_i psc_i, \qquad psc_i = A_{syn} W_{syn}\, g\, (E_{syn} - v_{post}) \tag{5}$$

$$\frac{dg}{dt} = -\frac{g}{\tau_{syn}} \tag{6}$$

where *psc_i* is the postsynaptic current contributed by synapse *i*, *A_syn* is an amplitude parameter that determines the maximal amplitude of the *psc*, *W_syn* is the synaptic strength (*W_syn* ∈ [0..1]), *g* is the instantaneous synaptic conductance, *E_syn* is the reversal potential of the synapse (taken as 0 mV for excitatory synapses and −90 mV for inhibitory synapses), *v_post* is the membrane potential of the postsynaptic neuron, and τ_syn is the time constant for the decay of the synaptic conductance after neurotransmitter release (typically 10–20 ms).

In our simulations, for one given network and one given model of neuron, there is only one synaptic amplitude (*A_syn*) for all excitatory synapses and one for all inhibitory synapses. Only the synaptic weights (*W_syn*) differ, yielding different coupling efficacies for different synapses (see *Eq. 5*). The synaptic amplitude can be regarded as a scaling factor that allows one to calibrate the range of coupling efficacies in the entire network.

Each time an afferent spike reaches the synapse, the instantaneous conductance is increased by a constant value (here 1)

$$g \leftarrow g + 1 \tag{7}$$

Throughout this study, most models (except the Izhikevich neuron) have been integrated iteratively, using the Euler method, with a step of 1 ms. To ensure numerical stability for the Izhikevich model, the integration step of the membrane potential equation (*Eq. 2*) is 0.5 ms (see Izhikevich 2003). The relatively large step for the Euler integration provides a reasonable precision for the particular models studied here and has the advantage of allowing large networks to be simulated in reasonable time.
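Putting *Eqs. 5*–*7* together, a single synapse can be sketched as follows. Holding the postsynaptic potential fixed and delivering a single afferent spike are illustrative simplifications; in the full model *v_post* evolves according to the neuron equations:

```python
# Conductance-based synapse (Eqs. 5-7): the conductance g jumps by 1 on each
# presynaptic spike (Eq. 7), decays exponentially (Eq. 6), and drives the
# postsynaptic current of Eq. 5. v_post is held fixed here for illustration.

def synapse_current(spike_times, v_post, A_syn=0.01, W_syn=1.0,
                    E_syn=0.0, tau_syn=20.0, dt=1.0, steps=100):
    """Return the postsynaptic current trace for a clamped v_post (mV)."""
    g, psc = 0.0, []
    spike_set = set(spike_times)
    for t in range(steps):
        if t * dt in spike_set:
            g += 1.0                                      # Eq. 7
        psc.append(A_syn * W_syn * g * (E_syn - v_post))  # Eq. 5
        g -= dt * g / tau_syn                             # Eq. 6 (Euler)
    return psc

psc = synapse_current([10.0], v_post=-70.0)  # one excitatory afferent spike
```

For an excitatory synapse (*E_syn* = 0 mV) onto a cell clamped at −70 mV, the current peaks at *A_syn · W_syn · 70* right after the spike and then decays with τ_syn.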

We consider next three models of neurons: integrate-and-fire (IF), regular-spiking (RS), and resonator (RES). In the first step of our analysis we assessed the frequency–response properties of these three models.

### Subthreshold behavior

Biological neurons can exhibit different dynamics depending on the regime in which they operate (Hutcheon and Yarom 2000). In subthreshold conditions, many neurons display marked resonant properties (i.e., they have preferred input frequencies to which they respond maximally). A common technique for characterizing subthreshold behavior is to stimulate the neuron with a subthreshold sinusoidal current of gradually increasing frequency (also called a ZAP function) and to compute the frequency-dependent impedance of the neuron (Puil et al. 1986). We used here the following subthreshold input stimulus

$$I(t) = \sin(\alpha t^{\beta}) \tag{8}$$

where *I*(*t*) is the time-varying input current injected into the neuron, α is a constant used to scale the frequencies (taken to be 2π × 10⁻⁷), and β is the exponent that determines how fast the frequency increases (chosen with a value of 3 here).

The frequency-dependent impedance *Z*(*f*) in the subthreshold regime can be computed by dividing the frequency spectrum of the response (membrane potential fluctuations around the resting potential) by the frequency spectrum of the input current, for each frequency component *f* (Puil et al. 1986)

$$Z(f) = \frac{FFT_v(f)}{FFT_I(f)} \tag{9}$$

where *Z*(*f*) is the input impedance of the neuron for subthreshold stimulation with frequency *f*, and *FFT_v*(*f*) and *FFT_I*(*f*) denote the fast Fourier transform values of the membrane potential and of the input current for the frequency component *f*.

We have stimulated IF, RS, and RES neurons with an input ZAP current lasting for 1,024 ms (a duration that facilitates easy computing of FFT, our sampling bin size being 1 ms) and computed the frequency spectra of the membrane potential deviations from rest for the three types of neurons (Fig. 1). For each neuron type, we next determined the frequency-dependent impedance by dividing the spectra of the membrane potentials by the spectrum of the input current and taking the complex amplitude as a value for the impedance (Puil et al. 1986).
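To illustrate the procedure, the following sketch applies the ZAP current of *Eq. 8* to a passive (non-spiking) membrane obeying *Eq. 1* and estimates the impedance via *Eq. 9*. The hand-rolled FFT and the purely passive membrane are simplifying assumptions for illustration; the IF case behaves exactly like this low-pass filter:

```python
# ZAP stimulation (Eq. 8) of a passive membrane (Eq. 1 without spiking) and
# impedance estimation via Eq. 9. A small recursive FFT avoids dependencies.
import cmath, math

def fft(x):
    """Radix-2 Cooley-Tukey FFT (len(x) must be a power of two)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * math.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

alpha, beta = 2 * math.pi * 1e-7, 3
T = 1024                                            # ms; power of two for FFT
I = [math.sin(alpha * t**beta) for t in range(T)]   # ZAP current (Eq. 8)

tau, R = 10.0, 10.0                                 # passive membrane (Eq. 1)
v, V = 0.0, []
for i_t in I:
    v += (1.0 / tau) * (-v + R * i_t)               # 1-ms Euler step
    V.append(v)

FV, FI = fft([complex(x) for x in V]), fft([complex(x) for x in I])
# Eq. 9; bins with negligible input energy are zeroed to avoid 0/0 noise
Z = [abs(fv / fi) if abs(fi) > 1.0 else 0.0 for fv, fi in zip(FV, FI)]
```

The resulting *Z*(*f*) approaches *R* at low frequencies and falls off above the membrane cutoff (~16 Hz here), i.e., the expected RC low-pass profile of the IF neuron.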

The membrane potential response and impedance reveal a passive behavior for the IF neuron, just as expected for a resistor–capacitor (RC) electrical circuit (Hutcheon and Yarom 2000). Remarkably different, the RS neuron's impedance has a very broadly tuned peak at about 40 Hz, remaining rather unselective across frequencies. Its impedance is low compared with that of the other neurons. The RES neuron is a clear resonator, acting as a band-pass filter (Hutcheon and Yarom 2000). It is well tuned to a central frequency of 52 Hz and has cutoff frequencies of nearly 25 and 84 Hz (see Fig. 1).

### Suprathreshold behavior

The subthreshold behavior is not guaranteed to be maintained in suprathreshold regimes (Hutcheon and Yarom 2000). Because the interesting regime is when neurons are often crossing the threshold and fire action potentials, it is very important to study how they respond with output spikes to input spike trains.

To determine the suprathreshold behavior of IF, RS, and RES neurons we considered a simple experiment: one neuron of each type was stimulated with input spike trains of various frequencies. The input spike trains were generated using a homogeneous Poisson process with a fixed rate (the input frequency), yielding an exponential distribution of the interspike intervals (ISIs) (Softky and Koch 1993). The Poisson model for the input spike train was chosen for simplicity because the same results would be obtained with a perfectly regular spike train having the same firing rate. However, in the latter case, the response function is not as smooth as in the former case. Using a Poisson input enables averaging over multiple trials and thus a smoother approximation of the response function (for the perfectly regular case, averaging is irrelevant because there is no variability in the input).
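The Poisson input described above can be generated by drawing exponential ISIs, a standard construction; the seed and window length below are arbitrary illustrative choices:

```python
# Homogeneous Poisson spike train: exponentially distributed ISIs with mean
# 1/rate, accumulated until the stimulation window is exhausted.
import random

def poisson_spike_train(rate_hz, window_ms, seed=42):
    """Return spike times (ms) of a homogeneous Poisson process."""
    rng = random.Random(seed)
    spikes, t = [], 0.0
    while True:
        t += rng.expovariate(rate_hz / 1000.0)  # exponential ISI, in ms
        if t >= window_ms:
            return spikes
        spikes.append(t)

train = poisson_spike_train(30.0, 1000.0)  # one trial: 30 Hz, 1-s window
```

Re-running with different seeds gives the trial-to-trial variability that makes the averaged response function smooth.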

Input frequencies were varied from 5 to 100 Hz in 5-Hz steps. For each frequency of interest, a Poisson input spike train was produced, in a window with a length inversely proportional to the frequency (see Fig. 2). The window size was increased for low frequencies such that, on average, a model neuron received roughly an equal number of input spikes for all frequencies. In addition, we varied the strength of the synapse that delivered the postsynaptic current to the model neurons.

We monitored the total number of postsynaptic spikes induced by the input spike trains for various input synapse amplitudes. For each input frequency *f* and each synaptic amplitude *A_syn*, the response of the model neurons Ψ(*f*, *A_syn*) was computed as an average of the total number of postsynaptic spikes over 200 trials (each trial with a different instantiation of the input spike train)

$$\Psi(f, A_{syn}) = \frac{1}{N_t} \sum_{i=1}^{N_t} N_s(i) \tag{10}$$

where Ψ is the response of a model neuron, *f* is the frequency of the Poisson input spike train, *A_syn* is the amplitude of the afferent synapse used to stimulate the model neuron, *N_t* is the number of trials (200 in our case), and *N_s*(*i*) is the number of postsynaptic spikes in trial *i*.

In addition to the response function Ψ, we computed the firing rate *R* of the model neurons by dividing the response function by the length of the window *W_s* corresponding to each frequency *f*, as indicated in Fig. 2, *right*

$$R(f, A_{syn}) = \frac{\Psi(f, A_{syn})}{W_s(f)} \tag{11}$$

Results in Fig. 3 show very different behaviors of the three models. The integrate-and-fire model behaves as expected, monotonically increasing its response with the increasing input frequency: the higher the stimulation frequency, the higher the response of the IF neuron (Fig. 3, *top left*).
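The trial-averaging of *Eqs. 10* and *11* amounts to the following simple computation (the spike counts and window length are made-up illustrative values):

```python
# Trial-averaged response Psi (Eq. 10) and firing rate R (Eq. 11) from
# per-trial postsynaptic spike counts.

def response_and_rate(spike_counts_per_trial, window_s):
    """Psi = mean spike count over trials; R = Psi / window length (Hz)."""
    n_t = len(spike_counts_per_trial)
    psi = sum(spike_counts_per_trial) / n_t   # Eq. 10
    return psi, psi / window_s                # Eq. 11

# four hypothetical trials of a 0.5-s stimulation window
psi, rate = response_and_rate([4, 6, 5, 5], window_s=0.5)
# psi = 5.0 spikes on average, rate = 10.0 Hz
```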

The regular-spiking and resonant neurons behave quite differently. The first observation is that their response is both nonlinear and nonmonotonic. Both RS and RES models exhibit two different regimes of activity that depend on the coupling strength (synaptic amplitude). For weak coupling, the responses are increased for frequencies between 25 and 60 Hz, indicating resonance phenomena (Fig. 3, *middle left* and *bottom left*). Resonance is more pronounced in the case of the RES model (Fig. 3, *bottom left*). For intermediate coupling, both models show an abrupt transition (indicated by arrows in Fig. 3, *left*), more pronounced for RES neurons, shifting the preferred frequency to very low frequencies, whereas high frequencies are heavily dampened. In this second activity regime, which holds for strong coupling as well, neurons tend to become increasingly less responsive as the input frequency is increased. At a closer inspection, the response landscapes of the two models are quantitatively different. The most prominent feature of the RES model is its relatively good response even for weak coupling and its more saturated regime for strong coupling. These features will become important for network homeostasis, as we shall discuss later.

We must mention that the response function Ψ depicted in Fig. 3, *left* isolates the frequency response component for suprathreshold stimulation. However, this does not mean that the firing rate of RS and RES neurons is not increasing for higher stimulation frequencies. The firing rate *R* of model neurons, shown in Fig. 3, *right*, reveals more qualitative differences between models. Although all neurons tend to increase their firing rate for higher input frequencies, the RS and RES neurons have a more moderate slope for the increase. Moreover, the RES neuron exhibits vigorous responses even for weak coupling, whereas its firing rate has a tendency to saturate for high-input frequency and strong coupling.

The response functions (Ψ) and firing rate responses (*R*) of the models suggest that the IF neurons have purely integrative properties. The RS neurons behave intermediately between integrators and resonators, having a quite rapid increase in firing rate as a function of input frequency and coupling. Finally, the RES model exhibits pronounced resonance, while having reliable responses for weak coupling and saturating for high coupling and high input frequency.

### Excitability

To compare networks based on different neural models, one must first ensure that the level of excitability is similar across all networks. In other words, for the same network architecture but different models for individual neurons, we must use different synaptic strengths to obtain the same level of activity. To achieve a better understanding of the pronounced differences in excitability, we stimulated the model neurons (IF, RS, RES) with a single afferent spike. The postsynaptic current is delivered once by an inhibitory synapse (*A_syn* = 0.01, *E_syn* = −90 mV, τ_syn = 15 ms) and once by an excitatory synapse (*A_syn* = 0.01, *E_syn* = 0 mV, τ_syn = 20 ms). For each model neuron, the hyperpolarization/depolarization of the membrane is recorded, relative to the resting potential. The peak of the membrane voltage excursion is computed as well, to give a quantitative estimate of the different levels of excitability.

The RES model is the most excitable, showing large oscillations of the membrane potential after stimulation (Fig. 4). At the other extreme, the RS neuron is much less excitable than IF and RES. For example, the peak of the depolarizing PSP of the RES model (2.6 mV) is more than fivefold larger than in the case of the RS neuron (0.51 mV). The weak response of RS neurons is explained by their low input impedance (Fig. 1).

An important observation has to be made here. Networks relying on different models of neurons are comparable only if the relative synaptic strengths are properly scaled to account for the excitability ratio between models. Because the two-dimensional reduced HH model for RS and RES neurons is highly nonlinear, we do not expect the postsynaptic potentials to scale linearly as a function of the synaptic amplitude *A_syn*. Thus we need to compute, for all synaptic amplitudes of interest, the actual ratio between the PSP peaks of the different models, to allow for a realistic calibration. As shown in Fig. 5, the RES model exhibits a strongly nonlinear scaling in its response relative to the RS model's response, and a more moderate nonlinearity with respect to the IF model. At the same time, the IF–RS pair scales linearly and has a constant ratio of excitability for all synaptic amplitudes.

Using the results in Fig. 5, one can calibrate the synaptic amplitudes for networks of IF, RS, and RES neurons, such that the maximal PSP amplitude per afferent spike is roughly the same in all networks. However, because of the different dynamical behaviors of the models to different input spike statistics, the calibration will not be perfect. The PSP is the same only if initially the membrane is at rest. The actual PSP is also a function of the current state of the postsynaptic neuron and the scaling becomes somewhat imprecise for general neuron dynamics. Moreover, because the synaptic strengths in the network can take different values for different synapses, the real amplitudes of the postsynaptic currents that each neuron receives will be different (see *Eq. 5*). This implies that the actual ratio of excitability for individual neurons might be different. Because of the nonlinear scaling between some of the models, a perfect calibration of the networks is not possible. Since we shall compare networks with exactly the same architecture, we expect that such effects will not be dramatic. In general, however, it is not clear how a perfect calibration could be achieved.

In any case, the plots in Fig. 5 show that the IF and RS networks can be properly calibrated, the IF and RES networks can be satisfyingly calibrated, whereas the RES–RS pair is the hardest to calibrate and thus to compare because of the nonlinear scaling of their excitability ratio.

For clarity, throughout the rest of the paper, synaptic coupling strengths are reported relative to a reference, that is, the coupling strength in RES networks (*A_syn-RES*). The coupling in the other networks is by default rescaled to obtain the same excitability level, using the results in Fig. 5 and the following formula

$$A_{syn\text{-}X} = A_{syn\text{-}RES} \cdot \frac{PSP^{peak}_{RES}}{PSP^{peak}_{X}}, \qquad X \in \{IF, RS\} \tag{12}$$

where the ratio between peak PSP amplitudes is reported in Fig. 5.

### Microcircuits

In the following, we explored the ways in which the properties of individual neurons influence network dynamics. We investigated large networks of IF, RS, and RES neurons, also referred to as “neural microcircuits” (Grillner et al. 2005; Maass et al. 2002).

The investigated microcircuits consist of 1,000 neurons, wired randomly with conductance-based synapses (see *Eqs. 1*–*7*). Among neurons, 80% are excitatory (IF, RS, or RES) and 20% inhibitory (fast-spiking [FS]; see Izhikevich 2003). Neurons have, on average, a 5% probability of synapsing with any other neuron in the network, such that the average number of synapses in the circuit is on the order of 50,000. The wiring is nonhomogeneous, in the sense that the number of afferent synapses per neuron varies, depending on the random instantiation. We also studied, as control conditions, homogeneous circuits, where the number of afferents per neuron is constant (only the identity of afferents is random) and also other connectivity ratios (2, 3, and 4%).
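The wiring procedure described above can be sketched as follows; excluding self-connections is our assumption, since the text does not specify it:

```python
# Random microcircuit wiring as described in the text: 1,000 neurons, 80%
# excitatory, each ordered pair connected with 5% probability, giving a
# nonhomogeneous in-degree; weights W_syn are uniform in (0..1].
import random

def build_microcircuit(n=1000, exc_fraction=0.8, p_connect=0.05, seed=1):
    rng = random.Random(seed)
    is_exc = [i < int(n * exc_fraction) for i in range(n)]
    synapses = []                         # (pre, post, W_syn) triples
    for pre in range(n):
        for post in range(n):
            if pre != post and rng.random() < p_connect:
                synapses.append((pre, post, 1.0 - rng.random()))  # (0, 1]
    return is_exc, synapses

is_exc, synapses = build_microcircuit()
```

On average this yields about 0.05 × 999,000 ≈ 50,000 synapses, matching the figure quoted in the text; copying the resulting synapse list into two sibling networks gives the architecturally identical IF/RS/RES triplets used later.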

The synaptic strengths (*W_syn*) are randomly drawn from a uniform distribution, taking values in the range (0..1]. The synaptic amplitudes (*A_syn*) are calibrated differently, depending on the experiment, as will be explained later. The networks are simulated using the “Neocortex” neural simulator developed by the authors (Muresan and Ignat 2004).

There is a high degree of randomness involved in building such networks. The spatial distributions of excitatory/inhibitory neurons, the synaptic strengths, and the connectivity patterns are all randomly instantiated. Very different networks can be produced and these might exhibit very different behaviors. In the absence of any genetically guided architecture, our networks are just random guesses. Moreover, because the behavior of networks with different architectures can be dramatically different, it becomes very difficult to generalize the observed phenomena. Nonetheless, our goal here is to compare how the firing property of individual neurons can yield qualitatively different network behaviors (Fig. 6). For this purpose, it is possible to compare networks that have the same architecture and differ only in terms of the neuron models used to build them.

To compare the networks’ behavior that is induced by the different response properties of neurons, we always built three identical networks for each experiment. In each case, the networks have the same architecture, the only difference being the type of excitatory neuron used: IF, RS, and RES, respectively. To ensure that the three networks are architecturally identical, once the synaptic connections for a network are instantiated, they are copied identically to the other two networks. In all three cases, the inhibitory interneurons are modeled as fast-spiking (FS) neurons (Izhikevich 2003). As mentioned, the three identical networks are instantiated for every experiment, such that an IF-based network, for example, is not the same in two different experiments (as a result of the random instantiation).

To calibrate the three types of networks, the amplitudes (*A_syn*) for excitatory and inhibitory synapses are rescaled. All excitatory synapses targeting an excitatory neuron, for example, share the same synaptic amplitude (the same is true for inhibitory synapses, but with a different synaptic amplitude value). The only difference between synapses of the same type (excitatory or inhibitory) is their weight. Because the *A_syn* values for excitatory and inhibitory synapses are global parameters of the network, one is able to calibrate entire networks by scaling them according to the described calibration technique (see *Excitability* and *Eq. 5*).

### Self-sustainability and stability

We first assessed the ability of different networks to produce self-sustained, spontaneous activity in the absence of any external input. Intuitively, a recurrent network is able to sustain its activity better (in the total absence of any external input) if the excitatory synapses are stronger. However, the stronger the synapses, the more likely it is that the network will be rendered unstable, eventually leading to “explosive” activity behavior (epileptic activity).

We investigated 2,000 triplets of networks, each network in a triplet having the same architecture but different models for excitatory neurons (IF, RS, and RES) and different scaling amplitudes for the synapses (*A _{syn}*). The architecture that is used to build a triplet is randomly instantiated, as described previously. In addition, we used another population of 100 input neurons, each connected randomly to 2% of neurons in the network.
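
The wiring of the input population might be sketched as follows; the function name is an illustrative assumption, and the text specifies only that each of the 100 input neurons contacts a random 2% of the network:

```python
import numpy as np

rng = np.random.default_rng(1)

def wire_input_population(n_inputs=100, n_network=1000, p_target=0.02):
    """Each input neuron projects to a random 2% subset of network neurons."""
    n_targets = int(round(p_target * n_network))
    return [rng.choice(n_network, size=n_targets, replace=False)
            for _ in range(n_inputs)]

projections = wire_input_population()   # list of target-index arrays
```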

For each triplet, the networks are tested for their ability to sustain spontaneous activity after a brief steplike input. The input is used here only for initializing the activity in the networks and is *not* to be considered an external event triggering persistent activity. The concept of persistent activity implies an external event/trigger that switches the already existing spontaneous activity to the specific multistable activity encoding the task (Miyashita and Chang 1988). Thus, even though there is an initial input, we are still studying spontaneous activity, considering the input as a mere initialization of the network activity.

The input neurons were forced to spike, for 20 ms at the beginning of each trial, according to independent Poisson processes with a mean firing rate of 30 Hz. For the remaining 200 ms (the trial length is 220 ms), the networks evolve freely. For each network, we monitored the time-dependent population firing rate

Π(*t*) = *n*(*t*, *t* + Δ*t*)/(*N*Δ*t*)  (*Eq. 13*)

where *n*(*t*, *t* + Δ*t*) is the total number of spikes in the network during the time interval *t* → *t* + Δ*t*; *N* is the number of neurons in the network (here 1,000); and Δ*t* is the time bin used to integrate the model (here 10^{−3} s).
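
A minimal sketch of the Poisson initialization and of the population rate of *Eq. 13*, assuming a 1-ms bin and the usual Bernoulli approximation of a Poisson process (function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
DT = 1e-3          # integration time bin (1 ms), as in the text
N_NEURONS = 1000

def poisson_spikes(n_neurons, rate_hz, duration_s, dt=DT):
    """Independent Poisson spike trains: spike probability rate*dt per bin."""
    n_bins = int(round(duration_s / dt))
    return rng.random((n_neurons, n_bins)) < rate_hz * dt

def population_rate(spikes, n_neurons=N_NEURONS, dt=DT):
    """Eq. 13: Pi(t) = n(t, t + dt) / (N * dt), in Hz; spikes: neurons x bins."""
    return spikes.sum(axis=0) / (n_neurons * dt)

# 100 input neurons forced to fire at 30 Hz for the first 20 ms of a trial
input_spikes = poisson_spikes(100, rate_hz=30.0, duration_s=0.02)
```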

During 200 ms of free evolution without any input, we computed the time of activity survival for all three network types in a triplet, as a function of synaptic amplitude (Fig. 7*A*). The synaptic amplitude of the RES model is taken as a reference, whereas the amplitudes for IF and RS networks are scaled according to the scaling factors determined in *Excitability* (see Fig. 5; both excitatory and inhibitory synapses are scaled). The duration of activity survival is considered to be the duration of “normal” activity, until either a “die out” or an “explosive” event is detected. The “die out” event is defined as the moment in time after which the network no longer fires any spike. The “explosive” event is considered to occur when the network starts firing at unusually high rates (>300 Hz) and maintains this high activity for ≥10 consecutive time bins (10 ms), indicating saturation. We also computed the percentage of networks that exhibit explosive behavior as a function of the synaptic coupling (Fig. 7*B*).
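
The die-out and explosion criteria above can be sketched as a scan over the population-rate trace; the function name and return convention are assumptions:

```python
import numpy as np

RATE_EXPLOSION_HZ = 300.0   # explosion threshold from the text
EXPLOSION_BINS = 10         # sustained for >= 10 consecutive 1-ms bins

def survival_time_ms(pop_rate_hz):
    """Duration of 'normal' activity until a die-out or explosive event.

    pop_rate_hz: population rate per 1-ms bin during free evolution.
    Returns (duration_ms, event), event in {'die out', 'explosive', None}.
    """
    over = 0
    for t, rate in enumerate(pop_rate_hz):
        if rate > RATE_EXPLOSION_HZ:
            over += 1
            if over >= EXPLOSION_BINS:
                # explosion starts at the first over-threshold bin of the run
                return t - EXPLOSION_BINS + 1, "explosive"
        else:
            over = 0
        if rate == 0 and np.all(pop_rate_hz[t:] == 0):
            return t, "die out"     # no spike is ever fired again
    return len(pop_rate_hz), None   # survived the whole observation window
```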

The results reported in Fig. 7*A* are averaged over 2,000 triplets of networks. Each of the 2,000 triplets is tested for all synaptic amplitudes, yielding a total of 20,000 tests. As expected, the IF networks are the poorest in maintaining self-sustained activity. Regardless of the coupling strength, the IF-based networks’ activity can survive, on average, only <30 ms after the cessation of input stimulation. Their activity either dies out or starts to explode very early, even for moderate synaptic couplings (Fig. 7*B*). The RS-based networks are not significantly better in sustaining their activity compared with IF networks. Over a certain threshold of the synaptic coupling, RS networks tend to become very unstable, most networks becoming “explosive” (Fig. 7*B*). This is partly explained by the fact that synaptic amplitudes in RS networks are much higher than those in the other two cases, as a result of the scaling to reach similar excitability (according to Fig. 5, RS excitatory synaptic amplitudes have to be almost fourfold stronger than for the case of IF networks during calibration). The impedance of RS neurons, relatively constant over different input frequencies (Fig. 1), is also likely to play a role in such unstable behavior. Also worth noting is the very abrupt transition of IF and RS networks from stable to “explosive” dynamics (Fig. 7*B*).

The most interesting behavior is exhibited by the RES networks. For very weak coupling, their activity does not survive after the cessation of input. However, for moderate coupling they start to sustain their spontaneous activity increasingly more reliably, after the initialization phase (with input). There is a threshold-like transition from quiescent to self-sustained behavior, as suggested by Fig. 7*A*. The large SD in Fig. 7*A* for a certain coupling (*A_{syn}-RES* = 0.003) indicates that some RES networks can sustain their activity for a long time, whereas for some others the activity dies out rather quickly, depending on the network architecture. For strong coupling, however, the RES networks become deterministically self-sustained. They can robustly maintain their activity for as long as the experiment lasts (here 200 ms). Most RES networks with moderate and strong coupling can sustain their activity, usually infinitely long (Muresan et al. 2005). Moreover, RES networks never displayed explosive behavior for the range of synaptic couplings considered here (Fig. 7*B*).

Because only the IF and RS networks have “explosive” activity regimes, we next investigated the dynamics of their population rate (Π), immediately before and after an “explosion” was recorded. We wanted to assess the dynamics of the “explosive” process, gaining insight into how the saturation of dynamics in the two types of networks comes about. We recorded the population rate starting 10 ms before and ≤10 ms after the “explosive” event was detected (Fig. 8). The threshold for “explosion” detection was 300 Hz, the event being validated only if the network remained over the threshold for ≥10 ms. In Fig. 8*A*, the IF and RS networks’ population rates were averaged over all networks exhibiting “explosive” activity and all synaptic coupling strengths. The slope of the population rate increase is higher for IF networks than for RS networks, indicating a faster activity buildup.

Next, for each synaptic coupling strength for which “explosion” occurs, we determined the dynamics of the population rate for IF (Fig. 8*B*) and RS (Fig. 8*C*) networks. The coupling strengths reported are the reference synaptic strengths of the RES networks (*A_{syn}-RES*), as in Fig. 7, whereas the real IF and RS coupling strengths (*A_{syn}-IF* and *A_{syn}-RS*) are rescaled according to Fig. 5, as discussed earlier.

The first observation is that the IF networks’ activity becomes “explosive” for smaller reference couplings than that of RS networks (consistent with Fig. 7*B*). Furthermore, for the same reference coupling, the slope of the rate increase is significantly higher for IF networks (Fig. 8, *B* and *C*). For example, at a reference coupling of *A_{syn}-RES* = 0.01, the slope of the population rate increase for IF networks is 233 Hz/ms, compared with only 183 Hz/ms for RS networks (although the real coupling strength is much higher for RS networks: *A_{syn}-RS* = 0.05 compared with *A_{syn}-IF* = 0.0153).

At closer inspection, the results in Fig. 8*B* reveal even more interesting aspects. For weaker couplings, the population rates are relatively high 10 ms before the “explosive” event, which indicates a slower buildup of the activity. In such cases, a network must already have significant activity before “exploding,” to be able to cross the threshold. Moreover, the population rate increases almost linearly for relatively low coupling (*A_{syn}-RES* = 0.04) in IF networks and moderate coupling (*A_{syn}-RES* = 0.06) in RS networks. For strong coupling, both types of networks exhibit “explosive” activity that increases exponentially fast in the first few milliseconds (in spite of low initial activity) and then saturates, giving rise to a sigmoidal-like “explosion curve” (Fig. 8, *B* and *C*). The slope of the rate increase depends on the coupling strength. Coupling is thus a major factor determining how unstable a network is (Fig. 7*B*) and how fast its activity “explodes” (Fig. 8, *B* and *C*).

To explain the behavior of the IF, RS, and RES networks, we need to recall the response landscapes presented in Fig. 3. The IF neuron does not exhibit any particular frequency preference, acting more like a passive RC circuit. Low input frequencies cause it to fire only rarely, whereas high input frequencies produce vigorous firing. As a result, when IF networks enter low-activity regimes the neurons tend to fire even less, pushing the network into a cascaded cessation of activity. On the other hand, when a group of IF neurons becomes very active, it tends to entrain more and more neurons with it, producing an avalanche effect and leading to the saturation of the network. Because the slope of the rate increase for IF neurons is high (Fig. 3, *top right*), “explosive” activity settles in very easily when coupling in the network is sufficiently high. Inhibition cannot compensate for this because inhibitory neurons represent only 20% of neurons in the network and have on average the same synaptic efficacies as those of excitatory synapses. If inhibitory synapses were rendered stronger, the IF networks’ activity would die out very fast. The commonly used solution to this problem has been to reduce the excitatory coupling in the network and add some nonspecific background current to excitatory neurons, so as to keep the network alive (Amit and Brunel 1997; Brunel and Hakim 1999; Hansel and Mato 2001; Latham et al. 2000; Plesser and Gerstner 2000; Roxin et al. 2004).

RS networks are somewhat more stable than IF networks for moderate coupling regimes. As their excitability landscape suggests (Fig. 3, *middle right*), RS neurons tend to have a slightly smaller slope of rate increase, while being more responsive for low frequencies than IF neurons. However, for strong coupling, the RS networks become very unstable (Fig. 7*B*). This is probably attributable to the relatively high input impedance of the RS neurons in the high-frequency range. The “explosion” mechanism is similar to that of IF networks, but the slope of the population rate increase is smaller (Fig. 8*C*).

Finally, RES networks are the most stable and able to sustain robust spontaneous activity. This arises from two key properties. First, RES neurons are easy to excite even for weak inputs (Fig. 3, *bottom right*). When the activity in the RES network diminishes, it is still easy to reignite it, if a minimal group of neurons in the network provides enough input to the others. Second, when the activity in the network builds up, RES neurons tend to become increasingly less responsive, as indicated by their firing rate activity landscape (Fig. 3, *bottom right*). Thus RES neurons can maintain a stable activity regime that keeps the network alive. For low and moderate synaptic amplitudes, the RES neurons tend to settle into a preferred low-coupling regime, with a firing rate of 30–50 Hz, matching the resonant frequency of the membrane (Figs. 1 and 3, *bottom left*, low-coupling resonant regime). If the synaptic coupling is increased, the RES networks can fire, in a sustained manner, at quite high rates (60–80 Hz) without becoming epileptic. It should be noted that RES networks are usually able to maintain their spontaneous activity for indefinitely long periods. Another interesting phenomenon is that the coupling with inhibitory neurons as well as the membrane resonance of RES neurons produces remarkable oscillatory activity (Fig. 6).

Two major factors influence the stability of the studied networks: the synaptic connectivity and the size of the network. Results reported here are taken for the case of relatively highly coupled networks (5%) and nonhomogeneous connectivity (each neuron has a different number of afferents, depending on the random instantiation of the network). We have studied, as control conditions, networks with lower connectivity (2, 3, and 4%) as well as with homogeneous (constant number of afferents per neuron, but randomly selected) and nonhomogeneous architectures. All the reported results are qualitatively preserved with remarkable accuracy. Quantitatively there are differences that are expected. For lower connectivity, the thresholds for “explosive” activity of IF and RS networks and the threshold for self-sustainability of RES networks are higher (expected, because lower connectivity is similar to scaling down synaptic efficacies). The activity of IF and RS networks has a lower probability of becoming “explosive,” although their self-sustained time drops significantly. For smaller network sizes, the “explosion” thresholds increase, whereas the self-sustained durations of IF and RS networks decrease. Additionally, the transition from stable to unstable (IF and RS) and from quiescent to self-sustained (RES) regimes becomes less sharp compared with Fig. 7. This is because smaller networks with fewer synapses have a more heterogeneous structure and behavior (some are more “explosive,” some are less), yielding a smoother average compared with Fig. 7, *A* and *B*.

A more important observation is related to the oscillatory behavior of the networks. It seems that homogeneous RES networks are much more prone to developing oscillations than their heterogeneous counterparts. This phenomenon is probably related to the homogeneous/uniform number of synapses per neuron in homogeneous networks, yielding a balance of excitation across the population and favoring the development of synchronized assemblies promoted by resonance. Revealing the exact mechanism for the development of such oscillations goes beyond the scope of the present study, although these results emphasize the importance of the network architecture.

### Network excitability

A very important issue related to network dynamics is the response property to external stimulation. A well-known property of visual sensory cortices is the pronounced response of cells to preferred visual stimuli, referred to as the “on response” (Frazor et al. 2004).

We wanted to investigate next in what way the response properties of various types of neurons (IF, RS, and RES) influence network responsiveness to external input. It is worth mentioning that the “on response” is a phenomenon related to the changes in firing rates of neurons immediately after stimulus onset, relative to the baseline spontaneous activity. To be able to study the response properties of various networks to external input, we needed to ensure first that the networks exhibit spontaneous dynamics.

Because only RES networks are self-sustained without input, the only way to compare the responsiveness of all networks is to endow IF and RS networks with spontaneous activity as well. As outlined earlier, it is very difficult if not impossible to obtain IF and RS networks with stable spontaneous activity without external drive. We will next introduce a background current to these two networks such that they would exhibit spontaneous dynamics, in line with the common practice (Brunel and Hakim 1999; Hansel and Mato 2001; Latham et al. 2000; Plesser and Gerstner 2000; Roxin et al. 2004).

The first requirement is that coupling strengths in IF and RS networks are weak enough to prevent “explosive” behavior (see Fig. 7*B*). We next chose weak synaptic couplings for these two classes of networks, respecting the excitability ratios in Fig. 5 (*A_{syn}-IF_{exc}* = 0.0009, *A_{syn}-IF_{inh}* = 0.0014, *A_{syn}-RS_{exc}* = 0.0035, *A_{syn}-RS_{inh}* = 0.005). Consequently, weak coupling leads to a fast loss of activity (Fig. 7*A*) and introduces the need for background current to enable spontaneous, ongoing activity. Each IF and RS neuron thus receives an additional, random background current at each time step, drawn from a uniform distribution between 0 and *I_{max}*. The total input current to an IF or RS neuron now becomes the sum of the synaptic and background currents

*I* = Σ_{i} *psc_{i}* + *I_{bck}*  (*Eq. 14*)

where *I* is the input current to the neuron (see *Eqs. 1* and *2*); *psc_{i}* is the postsynaptic current delivered by an afferent synapse *i* (the sum runs over all afferent synapses); and *I_{bck}* is the background input current, randomly drawn from a uniform distribution between 0 and *I_{max}* at each time step (1 ms).

For the RES networks, an intermediate coupling (*A_{syn}-RES* = 0.003) is chosen such that the network would maintain a robust self-sustained activity that is usually in the range dictated by the resonant frequency of RES neurons (30–50 Hz). The background current amplitudes for IF and RS networks were adjusted during a calibration phase, such that the spontaneous population rates (Π) of these networks would closely match the reference RES population rate, with a tolerated error of ±5%. On average, the background current amplitudes of IF and RS networks required to attain the reference firing rate were *I_{max}-IF* = 4.73 nA and *I_{max}-RS* = 27.30 nA, respectively. The calibration indicates that RS networks required even more input current relative to IF networks than predicted from Fig. 5. We need to keep this in mind because it indicates that RS networks receive on average even more background noise, relative to IF networks, for the same spontaneous firing rate.
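
*Eq. 14* amounts to a one-line update per neuron and time step; a hedged sketch (the function name is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(3)

def total_input_current(psc, i_max):
    """Eq. 14: I = sum_i psc_i + I_bck, with the background current I_bck
    drawn from a uniform distribution between 0 and I_max at each 1-ms step."""
    i_bck = rng.uniform(0.0, i_max)
    return float(np.sum(psc)) + i_bck
```

With `i_max = 0` the background term vanishes and only the synaptic drive remains, which is the regime of the self-sustainability experiments above.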

After calibration, all three networks maintain ongoing activity at the level of about 30–40 Hz. Such a high frequency does not quantitatively match the spontaneous firing rate of cortical networks, which is about 1–5 Hz (Amit and Brunel 1997). However, we were interested here in activity regimes where RES networks can robustly self-sustain activity in the absence of any input. For the particular model used in this study, the range is around 30–60 Hz. In any case, a direct quantitative comparison to cortical dynamics would be tentative and speculative.
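
The ±5% rate-matching calibration can be sketched as a bisection over *I_{max}*, assuming the spontaneous rate grows monotonically with the background amplitude; `run_network` is a placeholder for a full network simulation:

```python
def calibrate_background(run_network, target_rate_hz, tol=0.05,
                         i_lo=0.0, i_hi=50e-9, n_iter=30):
    """Bisect the background amplitude I_max until the spontaneous population
    rate matches the reference RES rate within +/-5%.

    run_network(i_max) -> mean spontaneous population rate in Hz; assumed
    to increase monotonically with i_max.
    """
    for _ in range(n_iter):
        i_mid = 0.5 * (i_lo + i_hi)
        rate = run_network(i_mid)
        if abs(rate - target_rate_hz) <= tol * target_rate_hz:
            return i_mid                 # within the tolerated +/-5% error
        if rate < target_rate_hz:
            i_lo = i_mid                 # need a stronger background drive
        else:
            i_hi = i_mid                 # need a weaker background drive
    return 0.5 * (i_lo + i_hi)
```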

After a period of stabilization of activity, the networks are stimulated using a population of 100 input neurons that project to all three networks. The input connections of the networks are structurally identical, whereas the synaptic amplitudes are calibrated to address the excitability issue discussed in *Excitability*. Thus all networks receive the same stimulation pattern. The networks are stimulated for 50 ms with a Poisson input of 20 Hz (see Fig. 9*A*). Then, they are allowed to freely evolve for another 1,450 ms, after which the process is repeated. Each such 1,500-ms interval represents a trial. During the trial, we recorded the population rates (Π) and average membrane potentials.

We investigated the response properties to input by averaging the population rates over 100 trials, aligned to the beginning of the trial (similar to peristimulus time histograms; see Gerstner and Kistler 2002), for 50 networks of each type. Results shown in Fig. 9*A* suggest that the most responsive network is by far the IF network. Because of the rather strong depolarizing background current required for spontaneous activity and the internal network bombardment, IF neurons have an average membrane potential of −53.69 mV (Fig. 9*B*), between the resting potential of −70 mV and the threshold of −45 mV. This phenomenon probably leads to a so-called high conductance state, in which neurons become more responsive to input as the slope of the gain function is modulated (for a review, see Destexhe et al. 2003). Note that even though we use a linear leaky integrate-and-fire model, the conductance-based model of the synaptic currents yields an “effective membrane conductance” that is modulated by the depolarization level of the neuron, giving rise to the “high conductance state” (Rudolph and Destexhe 2006).

Both the “high-conductance states” and recurrent self-excitation of subgroups of neurons could be the cause for such a robust response of IF networks to input. As indicated in Fig. 8, *A* and *B*, for strong coupling and without background current, the recurrent excitation produces rapid activity buildup that is hardly compensated by inhibition. This indicates that the recurrent groups are a major source for such a high responsiveness. Furthermore, the relatively slow convergence of the population activity back to the baseline of 30 Hz (Fig. 9*A*) suggests that activity reverberates for a while as a result of these intrinsic recurrent groups in the network.

Although RS networks are bombarded with background currents >5.5-fold larger in amplitude compared with IF networks, they tend to be much less responsive. As the excitability landscape in Fig. 3 (*middle left*) suggests, RS neurons have some weak resonance properties and tend to dampen high input frequencies in the suprathreshold regime. In the subthreshold regime, for low frequencies, RS neurons have relatively reduced input impedance (Fig. 1). These properties probably impair or slow down the formation of recurrently excited groups, for weak coupling in the network (consistent with Fig. 7*B*). Moreover, RS networks display a rapidly decaying tonic response followed by a very prominent dip in the activity after the stimulation ceases (Fig. 9*A*). The latter phenomenon might be related to the adaptation properties of RS neurons (Izhikevich 2003) and is clearly reflected in the hyperpolarizing phase of the average membrane potential trace (Fig. 9*B*). Adaptation is also likely to be involved in the progressive reduction in response during the stimulation (Fig. 9*A*, from 20 to 50 ms). Overall, the effect of the input stimulation is maintained for quite a long time, close to that of IF networks (Fig. 9*A*, from 0 to 120 ms), although it consists of both an excited and a hyperpolarized phase.

RES networks are the least responsive, having a quite weak response to input stimulation. The firing rate of such networks increases less, relative to IF or RS networks, even though the effective input drive is similar (calibrated based on Fig. 5, as mentioned earlier). Moreover, the effect of stimulating RES networks quickly vanishes after the stimulation is removed (they return close to the baseline level almost simultaneously with cessation of the input, displaying a dampened oscillation around baseline; see Fig. 9*A*). This indicates a high reluctance of RES networks to rate changes induced by the stimulus. They have low responsiveness to input and exhibit fast restabilization of the activity to the homeostatic baseline level. Altogether, such evidence suggests that RES networks are prominently homeostatic, with very stable activity, although unresponsive to external drive.

### Activity patterns

The patterns of activity in the three types of networks are dramatically different as also suggested in Fig. 6 (in spite of the fact that they have the same architecture). We wanted to assess the relation between the response properties of IF, RS, and RES neurons and the resulting activity patterns that are produced by the three types of networks. We considered the same protocol as presented in the previous subsection *Network excitability*, with triplets of networks firing at a baseline level of about 30–40 Hz and being stimulated with 20-Hz Poisson inputs lasting 50 ms. In each trial, we also monitored, in addition to population rates and average membrane potentials, the population interspike interval (ISI) randomness, a new measure that we introduced to characterize the activity patterns in the networks.

The population ISI randomness (*S_{ISI}*) measures the degree of disorder in the ISIs of a population of neurons during a given time window. In some sense, *S_{ISI}* is similar to the concept of entropy. We defined the population ISI randomness as follows (also see the appendix). The spikes of a population of neurons are recorded in a sliding window of 150 ms. All ISIs of all neurons in the window are computed and we denote the total number of ISIs with *N_{ISI}*, which depends both on the number of neurons considered and on the number of spikes they fired within the time window. Next, the values of the ISIs are clustered such that each cluster contains all ISIs that are within a range of ±10% of the value of the cluster center. A number *Nc_{ind}* of independent clusters is determined (see appendix for details). The population ISI randomness indicates the fraction of independent clusters relative to the total number of ISIs

*S_{ISI}* = *Nc_{ind}*/*N_{ISI}*  (*Eq. 15*)

In other words, *S_{ISI}* shows how many independent values of the ISIs (with a tolerance of 10%) are required to describe the whole window, for all neurons. One could see it as a ratio of information compression. If all neurons were firing with the same ISI (for example, being aligned to a common oscillation cycle), one number would be enough to represent all the ISIs in the window and *S_{ISI}* would be the lowest (1/*N_{ISI}*). If each neuron fired with a different ISI at different moments, such that all ISIs in the window were significantly different, the randomness measure would approach a value of 1.

The advantage of the population ISI randomness measure is that it is able to quantify the degree of disorder in the activity of a population of neurons, independently of the ISI distribution. Other techniques of estimating the degree of randomness, such as computing the coefficient of variation *Cv* (Softky and Koch 1993), deal only with one neuron at a time and are applicable only if the ISI distribution is unimodal. The RS and RES networks frequently engage in oscillatory behavior such that the ISI distributions can be bimodal (one peak for the intercycle distance and one peak for the in-cycle bursting). In some sense, the population ISI randomness would correspond to computing the coefficients of variation in different subdomains of the ISI distribution that are centered on the local peaks of the distribution.
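
A sketch of the measure follows. The paper's exact clustering procedure is given in its appendix, so the greedy one-pass clustering below is only an approximation, and the names are illustrative:

```python
import numpy as np

def population_isi_randomness(spike_times, tol=0.10):
    """Eq. 15: S_ISI = Nc_ind / N_ISI, for one 150-ms window.

    spike_times: per-neuron lists of spike times within the window.
    Greedy clustering sketch: sort all ISIs and open a new cluster whenever
    an ISI falls outside +/-10% of the current cluster center.
    """
    diffs = [np.diff(np.sort(t)) for t in spike_times if len(t) > 1]
    isis = np.concatenate(diffs) if diffs else np.array([])
    n_isi = len(isis)
    if n_isi == 0:
        return 0.0
    clusters, center = 0, None
    for isi in np.sort(isis):
        if center is None or abs(isi - center) > tol * center:
            clusters += 1          # this ISI starts a new independent cluster
            center = isi
    return clusters / n_isi
```

As in the text, a population locked to one common ISI yields the minimum 1/*N_{ISI}*, while all-distinct ISIs drive the measure toward 1.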

The disadvantage of the population ISI randomness measure is that it is influenced by the firing rate. Even for stationary firing processes, a higher firing rate yields a lower randomness, because the clustering of ISIs is more precise for smaller ISI values. Because our three networks fire with the same spontaneous rate, comparison of the three *S_{ISI}* values is not affected by this issue during the stationary nonstimulated phase (Fig. 10*A*).

After the initial stimulation phase, the networks undergo severe reorganization. This is reflected by the change in *S_{ISI}*, depicted in Fig. 10*A*. As mentioned earlier, because of the strong rate increase, the population ISI randomness decreases in all cases because the ISIs become smaller. However, after this nonstationary phase, the networks return to the stationary spontaneous activity, during which *S_{ISI}* is constant. Interestingly, the IF networks have the highest stationary ISI randomness, even though RS networks have the strongest background random drive (recall the remark from the calibration of input currents to provide spontaneous activity in IF and RS networks). This indicates that IF neurons are not capable of coupling into preferential firing patterns, unlike RS neurons, which can take advantage of their adaptation and weak resonant properties to produce preferential ISI intervals and can couple into robust oscillatory firing patterns (Fig. 6). As expected, RES networks have the lowest ISI randomness because there is no random background current and because neurons have preferred ISIs as a result of their membrane resonance, inducing a highly ordered activity (Fig. 6).

We also computed the ISI distributions of IF, RS, and RES networks during the stationary phase of a trial. IF networks have the highest spread of the ISI distribution (Fig. 10*B*), indicating the highest disorder, just as predicted by the population ISI randomness. The ISI distribution of RS networks is more compact and less spread out (Fig. 10*C*) because RS neurons fire more regularly than their IF counterparts. In the case of the RES networks, the precision of firing is remarkable, neurons having preferred ISIs (Fig. 10*D*) in the range of 25–40 ms, close to the period of subthreshold membrane oscillation (Fig. 4) and consistent with their frequency preference (Fig. 1).

The ISI population randomness and ISI distributions suggest that resonant properties endow a network with more regular and precise firing, favoring the onset of oscillatory activity. On the other hand, integrative properties endow a network with more degrees of freedom, enabling it to accommodate random patterns of activity.

### Relationship with other mechanisms contributing to stable spontaneous activity

So far we have investigated in what way the membrane properties of network neurons influence the stability of reverberating microcircuits, their responses to stimulation, and their patterns of activity. We next considered other potential mechanisms that can contribute to stable spontaneous activity of microcircuits. First, we investigate the importance of miniature synaptic potentials in providing a “lower bound” for the activity of RS circuits. Then, we search for other potential sources of stability providing the “higher bound” of network dynamics, such as synaptic delay mechanisms.

##### MINIATURE SYNAPTIC POTENTIALS (MINIS).

Following a previous study of Timofeev and colleagues (2000) and recent evidence about the properties of minis (Paré et al. 1997, 1998), we next considered the RS circuits used in *Self-sustainability and stability* and added one extra excitatory synaptic conductance to each RS neuron (mini conductance). To simulate spontaneous release, the mini conductances are increased, on average with a frequency of 20 Hz (see Paré et al. 1997). Each spontaneous conductance increase has an amplitude that is randomly drawn from a uniform distribution with a specified maximum value Δ*g-max* (in *Eq. 7*, for the case of mini conductances, *g* is increased by a random amplitude with a specified maximum of Δ*g-max*, with values of 8, 8.5, 9, 9.5, and 10, depending on the experiment). The mini-conductance parameters were chosen such that the amplitude of the mini-PSPs was in the physiological range of 0.1 to a few millivolts (*A_{syn}-mini* = 0.0035 μS, *W_{syn}* is randomly drawn from a uniform distribution with a mean of 1.56 ± 0.88, τ_{syn} = 10 ms, *E_{syn}* = 0 mV). Each network is simulated for 100 s (10^{5} steps) and results are averaged across 50 random networks (Fig. 11).
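
The mini-conductance dynamics might be sketched per 1-ms step as exponential decay plus stochastic release; the decay constant and release rate are from the text, while the update form is an assumption based on *Eq. 7*:

```python
import numpy as np

rng = np.random.default_rng(4)

DT_MS = 1.0              # simulation time step
TAU_SYN_MS = 10.0        # mini conductance decay constant (text)
MINI_RATE_HZ = 20.0      # average spontaneous release rate (Pare et al. 1997)

def step_mini_conductance(g, dg_max):
    """One 1-ms update of a mini conductance: exponential decay toward zero,
    plus, at ~20 Hz on average, a spontaneous increase drawn uniformly
    from [0, dg_max]."""
    g *= np.exp(-DT_MS / TAU_SYN_MS)
    if rng.random() < MINI_RATE_HZ * DT_MS * 1e-3:   # release this step?
        g += rng.uniform(0.0, dg_max)
    return g
```

Iterating this update for 10^{5} steps reproduces the 100-s simulation schedule described above.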

Two crucial parameters influence the behavior of these networks: the coupling strength within the network (*A_{syn}-RS*) and the average amplitude of the mini-PSPs (controlled by Δ*g-max*). For weak coupling and small amplitude of the minis, the RS networks fire in a tonic fashion with very low firing rates (0.2–0.9 Hz), dependent on the amplitude of the minis (Fig. 12*B*). As the coupling in the network is increased, for a relatively narrow range of parameters (*A_{syn}-RS* ∈ [0.02, 0.03]), the activity becomes bursty (Fig. 12*C*), with high peaks of the population rate and burstlike firing of RS neurons (Fig. 11). If the coupling is further increased, the RS networks become “explosive,” just as reported earlier in Fig. 7*B*. Note that the coupling values for the transition from tonic to bursting regimes correspond to the critical region of transition from “dying out” to “explosive” behavior of the RS networks in Fig. 7*B* (corresponding to a reference *A_{syn}-RES* between 0.04 and 0.07).

At the onset of burstlike firing, the bursts occur rather irregularly, with very long interburst intervals (on the order of tens of seconds), and have low amplitudes (computed from the population rate). As the network coupling is increased, bursting becomes more reliable and has higher amplitudes (Fig. 12, *C*, *D*, and *E*). If, in addition, the amplitude of minis is increased, the bursting process becomes increasingly more periodic (Fig. 12, *G* and *H*).

There are several interesting observations. First, although the mean population rates (Fig. 12*A*) increase rapidly with increasing network coupling and mini amplitude, the mean baseline population rates (not including the bursts) increase steadily and much more slowly (Fig. 12*B*). This indicates that the average network activity, excepting the bursts, scales slowly with network coupling and mini amplitude. The network activity becomes high only in the vicinity of bursts, indicating a “runaway-like,” recurrently amplified self-excitation process that is network dependent. This argument is supported further by the fact that the burst amplitude does not seem to depend on the amplitude of the miniature PSPs. In Fig. 12, *D* and *E*, the means of the burst amplitude distributions, for the same coupling but different mini-PSP amplitudes, are very close to each other. The means of these means and their SDs, computed for the three couplings where bursting occurs (0.025, 0.0275, 0.03) across all mini-PSP amplitudes, are shown in Fig. 12*F*. The mean burst amplitudes increase linearly and their SD (computed across different mini amplitudes) is small. On the other hand, the number of bursts for the same coupling strength depends critically on the mini-PSP amplitudes (see the dependency of the size of the distributions in Fig. 12, *D* and *E*, on the mini amplitudes).

These results suggest that the bursting of the network is not driven (although it might be initiated) by a chance synchronous accumulation of miniature PSPs but is rather a recurrent-activity self-amplification process that is network dependent. The burst process is a property of the network, which is consistent with previous findings (Timofeev et al. 2000). The amplitude of miniature PSPs, on the other hand, influences the probability that the network will burst and, in addition, whether this bursting is periodic (Fig. 12, *G* and *H*). The position of the peak of the interburst interval (IBI) distributions does not seem to depend on the network coupling but only on the mini-PSP amplitudes, although the sharpness of this peak is modulated by the network coupling (Fig. 12, *G* and *H*).

To summarize, depending on the values of the two critical parameters (network coupling and mini-PSP amplitudes), RS networks can produce three activity regimes: tonic, aperiodic bursting, and periodic bursting (Fig. 12*I*). The bursting and its dynamics (amplitude) are properties of the network, whereas the probability of initiating a burst and the periodicity of bursts depend on the amplitudes of the miniature synaptic potentials. Finally, the spike-frequency adaptation of RS neurons is very likely involved in extinguishing these bursts and preventing “explosion” within the narrow coupling range at the threshold between stable and “explosive” dynamics.

##### SYNAPTIC DELAYS.

We investigated next the influence of synaptic delays on the networks described in *Self-sustainability and stability*. We considered exactly the same type of networks and experimental setup and additionally introduced synaptic delays. We defined a maximum propagation delay for the most distant pair of neurons, ranging from 20 to 100 ms.
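The text specifies only the maximum delay for the most distant pair; how delays scale with distance for other pairs is not detailed. A minimal sketch, assuming neurons on a 2D plane with delays scaled linearly in Euclidean distance so that the most distant pair receives the chosen maximum (20–100 ms in our experiments):

```python
import math

def synaptic_delays(positions, max_delay_ms=20.0):
    """Assign each ordered neuron pair a delay linear in Euclidean
    distance, scaled so the most distant pair gets max_delay_ms.
    The linear scaling and 2D layout are illustrative assumptions."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    d_max = max(dist(a, b) for a in positions for b in positions)
    delays = {}
    for i, a in enumerate(positions):
        for j, b in enumerate(positions):
            if i != j:
                delays[(i, j)] = max_delay_ms * dist(a, b) / d_max
    return delays

# 2x2 grid of neurons: the diagonal pair receives the full maximum delay
delays = synaptic_delays([(0, 0), (0, 1), (1, 0), (1, 1)], max_delay_ms=100.0)
```

Spikes would then be delivered to each postsynaptic target after the corresponding pairwise delay, rather than instantaneously.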

Resonant networks remain very stable and become self-sustained even earlier than shown in Fig. 7, for a reference coupling of *A _{syn}-RES* = 0.001.

The self-sustained duration of IF networks’ activity does not seem to be significantly modulated by delays (Fig. 13*A*) and, although there is a weak trend toward longer self-sustained durations, there is no dramatic improvement compared with the “no delay” condition (see Fig. 7*A*). On the other hand, the RS networks’ self-sustained durations are significantly modulated by synaptic delays (Fig. 13*B*). For relatively long propagation delays, they can sustain activity up to the entire experiment duration (200 ms). However, we must mention that this phenomenon is by no means comparable in robustness to the self-sustained regime of resonant networks, which can sustain activity for indefinitely long periods.

Regarding “explosive” behavior, IF and RS networks become less “explosive”—and thus more stable—and the synaptic coupling for which runaway activity occurs is strongly modulated by the amplitude of synaptic delays (Fig. 13, *C* and *D*). Longer delays favor more stable dynamics and the “explosive” process progresses more slowly (the slopes of the population rate increase during “explosion” are smaller than those in Fig. 8).

Our results suggest that integration alone cannot take full advantage of synaptic delays in maintaining robust ongoing activity, except for the fact that IF networks become less “explosive.” The excitation in the network is quickly lost by integrator neurons because of a temporal spread of action potentials arising from delays, which impairs synchronization and affects integration. Additional factors would probably be required, such as long synaptic time constants corresponding, for example, to *N*-methyl-d-aspartate–mediated (NMDA) neurotransmission. In contrast, RS networks’ activity is much more strongly modulated by synaptic delays. The frequency adaptation and weak resonant properties of RS neurons, together with the synaptic delays that prevent “explosion,” allow RS networks to become quite robustly self-sustained. As a final conclusion, synaptic delays provide a robust “upper bound” of activity by preventing activity “explosion.” However, they are not sufficient to stabilize the “lower bound” of network activity in general, for which the membrane properties of the neurons become crucial.

## DISCUSSION

We have shown that the response properties of different neural models can dramatically influence the behavior of networks based on such models. In particular, we studied two basic properties of the membrane dynamics: integration and resonance. We discussed how different models have very different behavior in terms of excitability and response properties as a function of the temporal structure of the input spike trains. At one extreme, the integrate-and-fire model prefers high-frequency inputs and shows no adaptation. At the other extreme, resonant neurons have preferred input frequencies, dampening high-input bombardment while being quite sensitive to weak inputs. This property has important consequences for the homeostasis of network activity. We outlined the ability of networks built with resonant neurons to robustly maintain spontaneous activity in the absence of any stimulation. In turn, this homeostatic regime impairs the responsiveness of resonant networks to external stimulation. We concluded that resonance facilitates the stability of network dynamics at the cost of sacrificing responsiveness, whereas integration provides reliable responsiveness but sacrifices stability.

Additionally, we investigated the relationship between membrane properties and other processes potentially contributing to stable spontaneous activity: miniature synaptic potentials and synaptic delays. Miniature synaptic potentials can endow networks of regular-spiking neurons with spontaneous activity. The spike-frequency adaptation of these neurons together with recurrent excitation lead to burstlike firing, often observed in vivo during slow-wave sleep states (Steriade 2006; Steriade et al. 1993; Timofeev et al. 2000). Bursting is a property of the network, whereas the number of the bursts and their periodicity depend on the amplitude of mini PSPs. Regarding synaptic delays, we have shown that they can provide an upper bound on neural activity, preventing “explosive” dynamics. However, even with synaptic delays, membrane properties become essential for providing stability at the “lower bound” of neural activity and allowing network reverberations to sustain ongoing dynamics.

### Generality of results

The present study relies on specific models of neurons and it should be made clear that there is no attempt here to quantitatively model brain activity. The IF model is probably the most popular and is used for large-scale simulations of neural microcircuits, being well known for its purely integrative properties. The Izhikevich RS and RES models exhibit other interesting dynamical behaviors such as adaptation and resonance, being relevant to our discussion. We explicitly studied the properties of these specific models and there is no guarantee that they would directly translate to the brain. For example, the spontaneous firing rate attained by our resonant networks was in the range of 30–50 Hz, rather incompatible with the 1- to 5-Hz spontaneous activity of the cortex. It could be suggested that such an elevated firing rate would be more appropriate to account for persistent rather than spontaneous activity. However, persistent activity involves multistability (Durstewitz and Seamans 2006), such that the sustained activity pattern could be switched on or off. In our case, the resonant networks maintain their activity in a homeostatic manner and there is no multistability involved. The spontaneous firing rate of our resonant networks is stable around 30–50 Hz because of the particular choice for the model that resonates at such high frequencies. It is known, however, that thalamocortical neurons can display resonance at low frequencies as well (Hutcheon and Yarom 2000). This low-frequency resonance (1–2 Hz) matches well the spontaneous firing frequency in cortical networks (1–5 Hz).

Regarding the RS model, we have to mention that the classification into spiking classes has been done according to the response of cells to step input currents. In the cortex, cells can often change their firing properties such that regular-spiking cells can become intrinsically bursting during sleep or anesthesia (Timofeev et al. 2000). The fact that in our model RS cells have weak resonant properties and that in the cortex pyramidal cells are often regular-spiking cells should not be considered as evidence that excitatory neurons in the cortex do not resonate. Actually, real cells can switch between different dynamical modes and the pyramidal neurons (sometimes classified as RS cells) can display resonant behavior (Fellous et al. 2001; Hutcheon and Yarom 2000). The types of neurons used in our simulations are “toy models” compared with the complexity and versatility of biological neurons. No direct correspondence should be made between our model neurons and cortical neuron types.

Nonetheless, because most of our results are rather qualitative, based on comparing different networks, the presented findings become general enough to allow for meaningful conclusions. We only compare how integration and resonance would affect the activity of otherwise identical networks of spiking neurons and show that these two properties endow large-scale neural circuits with very different dynamical behaviors. We have to mention that the presented phenomena were remarkably robust for all network architectures and sizes that we tested. For the case of resonant networks, we show that robust self-sustained activity can be attained in the absence of any external stimulation. We are not aware of any other study showing that such robust ongoing activity can be produced in artificial neural microcircuits without background input. This opens new possibilities for studying completely deterministic networks whose spontaneous activity is solely maintained by the interactions in the network.

### Membrane properties, network dynamics, and cortical processes

As we have seen, the membrane properties of neurons in large networks dramatically determine their behavior. Integration and resonance endow networks with responsiveness and versatility, on the one hand, and temporal precision and stability, on the other. However, other neural properties that stem from membrane dynamics, such as spike-frequency adaptation, can contribute to the production of specific patterns of network activity. For example, the adaptation and the intermediate response property (between integration and resonance) of RS neurons lead to oscillatory network bursting under the influence of miniature synaptic potentials. Moreover, various network properties, such as synaptic delays, have differential effects depending on the membrane properties of the constituent neurons. In our study, we have shown that in spite of strong network coupling and synaptic delays, purely integrative networks are not able to sustain activity for long durations, as opposed to RS networks, which take advantage of both. In the context of the homeostasis of network activity, there are probably many other relevant processes, acting on multiple levels, from synapses to global neuromodulation. They are differentially expressed during various cortical states and their interaction is most likely nontrivial. It remains a future challenge to investigate the role of each such process individually and, even more so, in a collective framework.

There are several remarks that have to be made before concluding this study. First, neurons were long considered to behave as integrators, and this is reflected by the overwhelming number of studies based on the integrative paradigm, especially in modeling. Other properties of neurons, such as resonance, have only relatively recently been emphasized and explicitly explored. In the experimental field, there is accumulating evidence that real neurons have rather pronounced resonant properties. Resonance has been found in the somatosensory cortex (Hutcheon et al. 1996; Ulrich 2002), prefrontal cortex (Fellous et al. 2001), entorhinal cortex (Erchova et al. 2004; Schreiber et al. 2004), thalamus (Puil et al. 1994), hippocampus (Leung and Yu 1998), and interneurons (Pike et al. 2000). It has been suggested that resonance could play an important role in selective communication between neurons (Izhikevich 2002) and that neurons can communicate their frequency preference to postsynaptic targets, leading to a complex interplay with their firing rate (Richardson et al. 2003). It was also shown that the intrinsic properties of the cellular membrane could have a major impact on frequency-dependent information flow in the hippocampus (Erchova et al. 2004). Taken together, these facts emphasize the importance of resonance in neural circuits. In addition, we suggested here that resonance could play a very important role in the homeostasis of network activity and that it significantly affects the response properties of recurrent networks to input stimulation.

Second, there is rapidly accumulating evidence that neural systems possess homeostatic regulatory mechanisms, spanning several levels of organization (Davis 2006; Turrigiano and Nelson 2004). Such regulatory processes can act at the level of synapses as well as within the cell itself, changing its global properties. They can be slow (intrinsic plasticity, synaptic scaling) or relatively fast (short-term plasticity, spike-frequency adaptation). To the best of our knowledge, resonance has not yet been explicitly suggested to play a homeostatic role in regulating network activity. As we have shown here, resonance can dramatically contribute to the stabilization of network dynamics and we suggest that it could be a very robust homeostatic mechanism. Resonance is a global property of the cell's response (not local, like synaptic mechanisms) and it is fast. We also have to mention that the homeostatic regulation of network dynamics through resonance is more a passive process as opposed to the common notion that homeostasis is an active process involving the change of cellular and synaptic properties over time (Turrigiano and Nelson 2004).

Third, resonance at the membrane level can facilitate networks to engage into oscillatory behavior. Neurons with the same preferred interspike intervals tend to periodically lock into coherent oscillatory patterns. We also find that resonant networks display marked oscillations (Fig. 6) and our observations are consistent with the idea that intrinsic membrane properties dramatically affect the ability of networks to produce oscillatory patterns of activity (Buzsáki 2006; Geisler et al. 2005; Hutcheon and Yarom 2000; Lampl and Yarom 1997).

A very important aspect related to resonance is that it can be modulated. Recent studies suggest that resonant behavior is voltage dependent. For example, cortical pyramidal neurons have two sets of resonant frequencies (1–2 and 5–20 Hz), depending on the level of depolarization of the cell (Hutcheon and Yarom 2000; Hutcheon et al. 1996; Lampl and Yarom 1997; Puil et al. 1994). This finding is very important for the following reason. On one hand, it has been suggested that resonance underlies oscillatory behavior in cortical networks. On the other hand, there is accumulating evidence for the association between neural oscillations and attention (Fries et al. 2001). It is possible, in principle, that attention-related oscillations can be produced by modulating resonance through the adjustment of the average membrane potential. Similar suggestions, but in a different form, were made by Bazhenov et al. (2005), who proposed that resonant properties together with synaptic coupling can contribute to the control of information flow in cortical networks. The finding that robust oscillations can be mediated by resonance calls for future investigations of the complex interplay between membrane potential, resonance, and oscillatory behavior on one side and attention on the other.

Fourth, our results suggest that resonance induces more temporally structured activity than integration, at the cost of reduced responsiveness to external stimulation. Even with significantly high external noise, neural adaptation and resonance allow a network to produce more regular firing (RS in Fig. 10). Increased reliability of firing relying on membrane resonance phenomena was also described experimentally (Fellous et al. 2001) and theoretically (Bazhenov et al. 2005). Because resonance restricts the degrees of freedom for the firing of neurons, external stimulation causes temporal reorganization rather than a firing-rate response in resonant networks (Fig. 9*A*). Conversely, networks featuring pure integration are very sensitive to external excitation and respond with increased firing rate. They are more prone to firing-rate signaling and exhibit less temporally organized activity (Figs. 9 and 10). These observations hint toward two different ways of representing information in cortical networks that could be supported by resonance and integration: temporal versus firing-rate coding.

The more temporally structured firing of resonant networks can be related to other important aspects. It was recently found that homeostatic mechanisms play a crucial role in the developing nervous system, regulating spontaneous activity (Turrigiano 2006; Turrigiano and Nelson 2006). However, it is not only the level of spontaneous activity that is important, but also the spatiotemporal patterns of activity produced therein (Torborg and Feller 2005; Turrigiano 2006). Our findings suggest that resonance can be involved both in maintaining homeostatic spontaneous activity and in shaping the spatiotemporal structure of the network dynamics.

Furthermore, as we have seen, resonant neurons can fire at remarkably regular intervals (Fig. 10*D*). Such regular firing patterns were previously observed in the pyramidal tract, in the early studies of Evarts (1964). Interestingly, later it became generally accepted that neurons in the cortex fire rather irregularly (Softky and Koch 1993; Tuckwell 1989) but recently the regularity of firing was shown to depend on the cortical state, as well as on the optimality of driving the neurons (Gur and Snodderly 2006). Because there is evidence that resonance can be modulated in a voltage-dependent way, we suggest that cortical neurons might switch between resonant and integrative behavior as their membrane potentials are influenced by the cortical state. As shown in our study, these two properties could induce regularity or randomness, respectively, in the activity of recurrent microcircuits.

Finally, we believe that integration and resonance are two different mechanisms that might be functionally relevant to the cortex. Integration is favorable to processing rate codes and promotes high sensitivity of recurrent networks to inputs. At the other extreme, resonance contributes to the homeostasis of network dynamics and supports more temporal-like codes, while promoting oscillatory behavior. These two mechanisms might interact in the brain to produce dynamics that are complex and sensitive to inputs, yet stable. In general, as we have seen in this study, membrane properties of neurons determine to a large extent how various processes influence network dynamics. The equation involving voltage-dependent resonance, integration, homeostasis, excitability, oscillations, attention, activity patterning, and many other phenomena is likely to be complex, but deserves thorough and sustained future investigations.

## APPENDIX

We present in more detail the algorithm used to compute the population ISI randomness, *S _{ISI}*. Let us consider a population of three neurons that fire spikes quite randomly, with ISIs given in Fig. 14.

To compute the population ISI randomness at time *t* using a sliding window of 150 ms, the first step is to compute the ISIs of neurons for the window centered at time *t*. Next, using all the ISIs in the window, a population ISI histogram (*H _{ISI}*) is computed (it takes all ISIs from all neurons). An example histogram is depicted in Fig. 14, *bottom*. We use a 1-ms bin to compute the histogram.
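This first step can be sketched as follows (an illustration, not the original implementation; spike times are assumed to be given in milliseconds, one list per neuron):

```python
def population_isi_histogram(spike_trains, t, window_ms=150, bin_ms=1):
    """Pool the ISIs of all neurons whose spikes fall in the window
    centered at t, and bin them into a histogram with 1-ms bins."""
    half = window_ms / 2.0
    hist = [0] * (window_ms // bin_ms + 1)
    for spikes in spike_trains:                    # one spike list per neuron
        in_win = [s for s in spikes if t - half <= s <= t + half]
        for a, b in zip(in_win, in_win[1:]):       # consecutive spike pairs
            isi = int(round((b - a) / bin_ms))     # ISI in bins
            if isi < len(hist):
                hist[isi] += 1
    return hist

# two neurons: ISIs of 20 and 30 ms, and of 20 ms, pooled in one histogram
hist = population_isi_histogram([[10, 30, 60], [5, 25]], t=50)
```

The resulting histogram plays the role of *H _{ISI}* in the clustering step described next.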

Based on *H _{ISI}*, we compute the number of independent ISIs by clustering ISIs into groups that span 10% of the value of an ISI cluster center. The clustering algorithm, described below, sequentially iterates over *H _{ISI}* and tries to determine whether, for the current ISI, there is an already determined cluster center (with a lower ISI value) that is <10% apart from it. If such a cluster is found, then the ISI belongs to the previous cluster; otherwise, it becomes the center of a new cluster. The algorithm then moves to the next ISI and repeats the same procedure, until all of *H _{ISI}* has been analyzed. We obtain a number of ISIs that are independent from each other (*Nc _{ind}*) within 10% of their respective values. For example, a cluster having its center at 20 ms would contain all ISIs between 20 and 22 ms, whereas a cluster with its center at 100 ms would contain all ISIs between 100 and 110 ms.

The final value of *S _{ISI}* at time *t* is the ratio between the number of independent ISI clusters (*Nc _{ind}*) and the total number of ISIs (see Fig. 14).

The exact algorithm is given below.

### Variables

H_{ISI}: Array containing the histogram of ISIs, with 1-ms binning

N_{ISI}: Total number of ISIs in the window

Nc_{ind}: Number of independent clusters

last: Auxiliary variable used to memorize the position of the last computed cluster center

W_{ISI}: Window size used to compute S_{ISI}

found: Flag variable checking whether any nonzero ISI count value H_{ISI}[k] is found at k between 0.9*i and i-1, where i is the value of the ISI being probed

left: Starting index for counter k (taken 0.9*i)

right: Ending index for counter k (taken i-1)

S_{ISI}: Population ISI randomness

### Algorithm (Computes *S*_{ISI} for a given window of size *W*_{ISI}):

```
Compute H_{ISI};
N_{ISI} := 0;
Nc_{ind} := 0;
last := -1;

FOR i := 1 TO W_{ISI} DO
    IF (H_{ISI}[i] > 0) THEN
        // Set the bounds to search for a cluster center smaller than i
        found := FALSE;
        left  := Round(0.9*i);
        right := i - 1;

        // Try to find an ISI smaller than i, within 10% of i
        FOR k := left TO right DO
            IF (H_{ISI}[k] > 0) THEN found := TRUE;
        ENDFOR;

        // If an ISI is found, but it is not a cluster center
        IF ((found) AND ((i - last) > (i - left))) THEN
            found := FALSE;
        ENDIF;

        // Make a new cluster center if a smaller one was not found
        IF (NOT found) THEN
            Nc_{ind} := Nc_{ind} + 1;
            last := i;
        ENDIF;

        // Integrate H_{ISI} to compute N_{ISI}
        N_{ISI} := N_{ISI} + H_{ISI}[i];
    ENDIF;
ENDFOR;

S_{ISI} := Nc_{ind}/N_{ISI}.
```
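For concreteness, the pseudocode can be transcribed into Python (a sketch for illustration, not the original implementation; the histogram is a plain list indexed by ISI value in milliseconds):

```python
def s_isi(h_isi):
    """Population ISI randomness: independent ISI clusters / total ISIs.

    h_isi[i] is the count of ISIs of i ms in the window (1-ms bins).
    An ISI i joins the last cluster if that cluster's center lies in
    [round(0.9*i), i-1]; otherwise i becomes a new cluster center.
    """
    n_isi = 0       # total number of ISIs in the window
    nc_ind = 0      # number of independent cluster centers
    last = -1       # position of the last computed cluster center
    for i in range(1, len(h_isi)):
        if h_isi[i] > 0:
            left, right = round(0.9 * i), i - 1
            # is there any smaller ISI within 10% of i?
            found = any(h_isi[k] > 0 for k in range(left, right + 1))
            # it only counts if that smaller ISI is the last cluster center
            # (pseudocode: (i - last) > (i - left), i.e., last < left)
            if found and last < left:
                found = False
            if not found:
                nc_ind += 1
                last = i
            n_isi += h_isi[i]
    return nc_ind / n_isi if n_isi else 0.0
```

For example, a window containing ISIs of 20, 20, 20, 21, 21, and 100 ms yields two independent clusters (centers 20 and 100) out of six ISIs, i.e., *S _{ISI}* = 1/3.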

The algorithm searches for ISIs that occur in the window (they have *H _{ISI}* > 0). For each ISI, *i*, it tries to determine whether there is an already determined cluster center, smaller than *i* and within 10% of *i*. If such a value exists, and it is a cluster center, then *i* belongs to the last computed cluster, denoted by *last*. If no such value is found, then *i* is an independent cluster center and *Nc _{ind}* is incremented, with *last* being moved to *i*. Additionally, the algorithm computes the total number of ISIs (*N _{ISI}*) in the window by integrating the histogram *H _{ISI}*.

Finally, the *S _{ISI}* at moment *t* is computed by dividing the number of independent clusters by the total number of ISIs. The window is then shifted forward by 1 ms and the procedure is repeated for moment *t* + 1. The time-resolved *S _{ISI}* (Fig. 10*A*) is obtained by computing *S _{ISI}* for each moment in time spanning a trial.

## GRANTS

This work was supported by the Hertie Foundation.

## Acknowledgments

The authors thank D. Nikolić, W. Singer, R. V. Florian, E. M. Izhikevich, V. V. Moca, A. Iftime, O. F. Jurjut, J. Triesch, and I. Ignat for useful comments on earlier versions of the manuscript and interesting discussions.

## Footnotes

The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked “

*advertisement*” in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

- Copyright © 2007 by the American Physiological Society