Abstract
Synaptic transmission in the neocortex is dynamic, such that the magnitude of the postsynaptic response changes with the history of the presynaptic activity. Therefore each response carries information about the temporal structure of the preceding presynaptic input spike train. We quantitatively analyze the information about previous interspike intervals contained in single responses of dynamic synapses, using methods from information theory applied to experimentally based deterministic and probabilistic phenomenological models of depressing and facilitating synapses. We show that for any given dynamic synapse, there exists an optimal frequency of presynaptic spike firing for which the information content is maximal; simple relations between this optimal frequency and the synaptic parameters are derived. Depressing neocortical synapses are optimized for coding temporal information at low firing rates of 0.5–5 Hz, typical of the spontaneous activity of cortical neurons, and carry significant information about the timing of up to four preceding presynaptic spikes. Facilitating synapses, however, are optimized to code information at higher presynaptic rates of 9–70 Hz and can represent the timing of over eight presynaptic spikes.
INTRODUCTION
Synapses form the communication channels between pairs of interconnected neurons. It has classically been assumed that the main role of a synapse is to notify the postsynaptic neuron that a presynaptic spike has occurred. However, this approach may underestimate the role of neocortical synapses in information processing in the brain. Electrophysiological recordings from interconnected pairs of neocortical neurons reveal that synaptic transmission is not static. Rather, synapses typically undergo substantial activity-dependent changes in response to presynaptic spike trains, so that the magnitude of a postsynaptic response (PSR) undergoes fast changes from one spike to another, depending on the presynaptic pattern of interspike intervals (ISIs) (Magelby 1987; Markram 1997; O'Donovan and Rinzel 1997; Stratford et al. 1996; Tarczy-Hornoch et al. 1998, 1999; Thomson and Deuchars 1994; Thomson et al. 1993; Zador and Dobrunz 1997; Zucker 1989). This capacity enables synapses to encode temporal information about the timing of preceding presynaptic spikes in each single PSR.
In depressing synapses, in particular, a short ISI is most likely to be followed by a small PSR, and a long ISI by a large, recovered PSR (Fig. 1). Facilitating synapses exhibit somewhat more complicated dynamics, but, in general, the response grows with successive presynaptic spikes (Markram et al. 1998).
The magnitude of the PSR is determined not only by the preceding ISIs, but also by the probabilistic nature of neurotransmitter release, resulting in trial-to-trial fluctuations in the postsynaptic response (Allen and Stevens 1994; Korn et al. 1984; Larkman et al. 1997). The primary goal of this theoretical study was to extract the informative component from the total variability of the PSR and thereby to quantitatively explore the capacity of single responses of neocortical synapses to encode temporal information about the timing of prior presynaptic spikes. Toward this goal, it is natural to utilize methods from information theory, originally developed for the analysis of communication channels, as indeed synapses are (Borst and Theunissen 1999; Cover and Thomas 1991; Rieke et al. 1997; Shannon and Weaver 1948). Here we apply these tools to both deterministic and probabilistic phenomenological models of activity-dependent synaptic transmission, which reproduce the average response of a neocortical synapse (Fig. 1) (Abbott et al. 1997; Grossberg 1969; Markram et al. 1998; Matveev and Wang 2000; Tsodyks and Markram 1997; Varela et al. 1997).
In recent in vitro studies, it was found that short-term synaptic dynamics in the neocortex are specific to the types of neurons involved. For example, pyramidal-to-pyramidal connections typically consist of depressing synapses, whereas pyramidal-to-interneuron connections typically bear facilitating synapses (Galarreta and Hestrin 1998; Gupta et al. 2000; Markram et al. 1998; Reyes et al. 1998; Stevens and Wang 1995; Thomson and Deuchars 1994). Here we study encoding of temporal information by both these types of synapses. In particular, we focus on the following questions. 1) How does the information encoded by the synapse depend on the frequency of the presynaptic spikes? 2) How does the information depend on the biophysical parameters of the synapse? 3) How does the number of release sites affect information encoding by the synapse? 4) How many spike times are represented in a postsynaptic response?
METHODS
Phenomenological models of activitydependent synapses
THE DETERMINISTIC MODEL FOR DYNAMIC SYNAPSES.
This model is based on the concept of a limited pool of synaptic resources available for transmission (R), such as, for example, the overall amount of neurotransmitter at the presynaptic terminals. Every presynaptic spike, occurring at time t_sp, causes a fraction U_SE (analogous to the probability of release in the quantal model of synaptic transmission) of the available pool to be utilized, and the recovery time constant, τ_rec, determines the rate of return of resources to the available pool. In the depressing synapse, the synaptic parameters, U_SE and τ_rec, are constant and together determine the dynamic characteristics of transmission. The fraction of synaptic resources available for transmission evolves according to the following differential equation

dR/dt = (1 − R)/τ_rec − U_SE R δ(t − t_sp)    (1)

where the magnitude of the PSR evoked by each spike is proportional to the utilized fraction, A_SE U_SE R, with A_SE denoting the absolute synaptic efficacy (Tsodyks and Markram 1997).
The model of a facilitating synapse is an extension of the model for the depressing synapse, with U_SE being a dynamic variable that increases at each presynaptic spike and decays to its baseline level in the absence of spikes

dU_SE/dt = −U_SE/τ_facil + U1 (1 − U_SE) δ(t − t_sp)    (2)

where U1 determines the increment of U_SE at each spike and τ_facil is the facilitation time constant.
The experimental range of U_SE and τ_rec, obtained by fitting the model responses to recordings from depressing synapses between pyramidal cells in slices of rat somatosensory cortex, is 0.1–0.95 and 500–1,500 ms, respectively (Markram 1997). For facilitating synapses connecting pyramidal cells to inhibitory interneurons, the experimental ranges of U1, τ_rec, and τ_facil are 0.012–0.086, 104–694 ms, and 550–3,044 ms, respectively (Markram et al. 1998).
Unless otherwise indicated, the typical set of parameters used throughout is {U_SE = 0.5, τ_rec = 800 ms} for depressing synapses and {U1 = 0.03, τ_rec = 300 ms, τ_facil = 1,800 ms} for facilitating synapses.
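Between spikes, Eq. 1 can be integrated in closed form, so the model reduces to a per-spike update rule. The following minimal sketch (our illustration in Python; the iterative form, the variable names, and the ordering of the facilitation update at spike arrival are our own rendering of the cited Tsodyks–Markram model, not code from the study) generates the sequence of PSR amplitudes produced by a train of ISIs:

```python
import math

def psr_amplitudes(isis, u_se=0.5, tau_rec=0.8, tau_facil=None, u1=None, a_se=1.0):
    """Return PSR amplitudes for a spike train given by its ISIs (in seconds).

    Depressing synapse: u_se and tau_rec fixed (Eq. 1).
    Facilitating synapse: pass u1 and tau_facil; u_se then evolves per Eq. 2.
    """
    r = 1.0                                   # fraction of resources available (R)
    u = u1 if u1 is not None else u_se
    amplitudes = []
    for dt in [float('inf')] + list(isis):    # first spike arrives fully recovered
        # recovery of R toward 1 with time constant tau_rec (integrated Eq. 1)
        r = 1.0 - (1.0 - r) * math.exp(-dt / tau_rec)
        if tau_facil is not None:
            u = u * math.exp(-dt / tau_facil)  # decay of facilitation between spikes
            u = u + u1 * (1.0 - u)             # jump by u1*(1-u) at spike arrival (Eq. 2)
        amplitudes.append(a_se * u * r)        # PSR proportional to utilized resources
        r = r * (1.0 - u)                      # a fraction u of the pool is consumed
    return amplitudes

# Depressing synapse driven by a regular 10-Hz train: responses depress monotonically
amps = psr_amplitudes([0.1] * 9)
```

Averaged model responses of this kind are what the deterministic information analysis below operates on.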
PROBABILISTIC MODEL FOR DYNAMIC SYNAPSES.
To account for trial-to-trial fluctuations in synaptic responses, we use a probabilistic model for dynamic synapses. Many probabilistic models may be used to describe synaptic transmission (e.g., Larkman et al. 1997; Maass and Zador 1999; for a detailed comparison of different models see Matveev and Wang 2000). The model used here is an extension of the classical quantal model of synaptic transmission (Allen and Stevens 1994; del Castillo and Katz 1954; Korn and Faber 1991; Korn et al. 1984; Stevens 1993), with the dynamics of transmission included. The synaptic connection is composed of N release sites. At each site there may be, at most, one vesicle available for release, and the release from each of the sites is independent of the release from all other sites. At the arrival of a presynaptic spike at time t_sp, each site containing a vesicle will release the vesicle with the same probability, U_SE. Once a release occurs, the site can be refilled at any time interval dt with a probability dt/τ_rec. These two probabilistic processes (release and recovery) can be described by a single differential equation, which determines the probability, P_v, for a vesicle to be available for release at any time t

dP_v/dt = (1 − P_v)/τ_rec − U_SE P_v δ(t − t_sp)    (3)
To account for the variability observed in the quantal response amplitudes of single CNS synapses (Auger and Marty 2000; Bekkers 1994; Jack et al. 1990; Korn and Faber 1991; Larkman et al. 1997; Redman 1990), we assume that the postsynaptic response to the release of each vesicle (q) is not a constant value. Rather, it is chosen from a Gaussian distribution, with a mean μ and variance σ², which was cut off at the tails. The PSR is therefore determined by the number of vesicles that were released in response to the spike, together with the corresponding q values from each of the release sites, as chosen at the time of the spike.
In depressing synapses, U_SE is a constant, whereas in facilitating synapses U_SE is a dynamic variable that evolves according to the same equation as in the corresponding deterministic model (Eq. 2).
It is evident by comparing Eqs. 1 and 3 that the probabilistic model is based on the deterministic model. In the probabilistic version, the probability of a vesicle being at a release site (P_v) is analogous to the fraction of resources available for release (R) in the deterministic version, and they both evolve according to the same differential equation. Similarly, the probability for the release of a docked vesicle in the probabilistic version is analogous to the fraction of available resources being released per spike in the deterministic version (U_SE in both cases). The advantage of using this specific model for probabilistic synaptic transmission is that not only is it based on the classical quantal model of release, but it is also consistent with the deterministic model in the sense that the average response of the probabilistic synapse converges to the response of the deterministic model. In addition, preliminary experimental results from rat neocortical slices support the validity of this probabilistic model.
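The probabilistic model described above can be sketched as a Monte Carlo simulation (our illustration; the truncation bounds on the quantal size q are an assumption, since the exact cutoff of the Gaussian tails is not specified):

```python
import math
import random

def simulate_trial(isis, n_sites=5, u_se=0.5, tau_rec=0.8, mu=1.0, sigma=0.2, rng=random):
    """One trial of the probabilistic model: returns the PSR evoked by each spike."""
    filled = [True] * n_sites                 # at most one vesicle per release site
    psrs = []
    for dt in [float('inf')] + list(isis):
        # refill: an empty site recovers within dt with probability 1 - exp(-dt/tau_rec)
        p_refill = 1.0 - math.exp(-dt / tau_rec)
        filled = [f or (rng.random() < p_refill) for f in filled]
        psr = 0.0
        for i in range(n_sites):
            if filled[i] and rng.random() < u_se:          # release with probability U_SE
                q = min(max(rng.gauss(mu, sigma), 0.0), 2 * mu)  # truncated quantal size (assumed bounds)
                psr += q
                filled[i] = False
        psrs.append(psr)
    return psrs
```

Averaging many such trials approaches the deterministic response, consistent with the convergence property noted above.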
Information theoretic analysis
Two information theoretic measures are utilized in this study (Borst and Theunissen 1999; Cover and Thomas 1991; Rieke et al. 1997; Shannon and Weaver 1948). The first measure is the entropy of a random variable, which quantifies the amount of uncertainty one has about its value. For a discrete random variable X, which can take any value x from a particular set χ with probability p(x), the entropy H(X), in bits, is calculated as follows

H(X) = −Σ_{x∈χ} p(x) log₂ p(x)    (4)
The second measure is the mutual information, I(X; Y), between a pair of random variables X, Y. It is defined using the conditional entropy of X given Y, H(X|Y)

H(X|Y) = −Σ_y p(y) Σ_x p(x|y) log₂ p(x|y)    (5)

The reduction in uncertainty about a single random variable X, due to the knowledge of another variable, is quantified by the mutual information and is given by the difference between the unconditional and conditional entropies of X

I(X; Y) = H(X) − H(X|Y)    (6)

In situations where X is uniquely determined by Y, knowledge of Y dictates a single possible value x of X, such that p(X|Y = y) is nonzero only at a single value x from χ. It then follows that the conditional entropy satisfies H(X|Y) = 0, and therefore

I(X; Y) = H(X)    (7)
The entropy of a continuous random variable (as are the PSR and the ISIs) is computed, in practice, by dividing the range of X into finite bins of a chosen precision and evaluating the resulting probability distribution of the corresponding discrete variable. The computed entropy will therefore depend on the precise choice of the bin size. However, if the bin size is set constant for both conditional and unconditional entropies, then the computed mutual information is independent of the bin size.
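As a toy illustration of Eqs. 4–6 (our example, unrelated to the synaptic data): if X is uniform over four symbols and Y reveals only the parity of X, conditioning on Y removes exactly half the uncertainty, so I(X; Y) = 1 bit.

```python
import math
from collections import Counter

def entropy(samples):
    """H(X) in bits, estimated from a list of (discretized) samples (Eq. 4)."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) - H(X|Y), with H(X|Y) averaged over the values of Y (Eqs. 5-6)."""
    n = len(xs)
    h_cond = 0.0
    for y, cy in Counter(ys).items():
        # entropy of X within the slice where Y == y, weighted by p(y)
        h_cond += (cy / n) * entropy([x for x, yy in zip(xs, ys) if yy == y])
    return entropy(xs) - h_cond

xs = [0, 1, 2, 3] * 100          # X uniform over four values: H(X) = 2 bits
ys = [x % 2 for x in xs]         # Y reveals only the parity of X
```

Note that when Y determines X completely (e.g., Y = X), the conditional entropy vanishes and the mutual information equals H(X), the Eq. 7 case used for the deterministic synapse below.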
Information analysis of model synapses
We apply the formalism of information theory to phenomenological models of activity-dependent synapses. In particular, we compute the mutual information between the PSR (X in Eqs. 5–7) and the set of preceding presynaptic ISIs (Y). In the deterministic model, which describes the average behavior of a dynamic synapse, the magnitude of a PSR is determined uniquely by the history of the presynaptic spike times. Sufficiently long preceding spike trains determine the magnitude of the PSR with arbitrary precision. In this case, the information that PSRs contain about the preceding spike trains (the ISI vector) equals the unconditional entropy of the PSRs (Eq. 7). This information can therefore be calculated from the distribution of all PSRs, P(PSR) (see Eq. 4). The PSR distribution is evaluated from the histogram of simulated model responses to long presynaptic spike trains according to Eq. 1. Since the magnitude of a deterministic synaptic response is a continuous variable, its entropy is, strictly speaking, infinite. In other words, a deterministic synapse can transmit an infinite amount of information about the timing of the preceding spikes in every PSR. The information becomes finite when the histogram is discretized by choosing a finite bin size, according to the finite precision with which PSRs can be measured. For subsequent comparison with biologically more relevant stochastic models, we are mostly interested in the frequency dependence of the obtained information and not in its absolute values. We therefore chose the bin size consistently in all simulations as 1% of the maximal response amplitude, i.e., A_SE/100. We checked that the qualitative results are not sensitive to the exact choice of the bin size, as long as it is sufficiently small.
In the probabilistic model, the information content of PSRs can be calculated in the following way. Since failure of release from all sites provides the postsynaptic neuron with no information about presynaptic events, only release of one or more vesicles is considered. Note that failures do have the potential of transmitting information about the preceding pattern of spikes, but to use this information the postsynaptic neuron needs to know that the current presynaptic spike has nevertheless occurred. In the absence of a mechanism that ensures this knowledge, responses of zero amplitude cannot be informative. Therefore the probability for the release of n vesicles (nVes) is calculated according to a normalized binomial distribution, where only the values 1, … , n, … , N (number of release sites) are possible, and which is determined by P_r, the release probability from each site.
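Under our reading, the normalized binomial is Pr{n releases | at least one} = C(N, n) P_r^n (1 − P_r)^(N−n) / [1 − (1 − P_r)^N] for n = 1, …, N; a short sketch:

```python
from math import comb

def p_n_vesicles(n, n_sites, p_r):
    """Probability that exactly n of N sites release, conditioned on >= 1 release."""
    if not 1 <= n <= n_sites:
        return 0.0
    binom = comb(n_sites, n) * p_r**n * (1.0 - p_r)**(n_sites - n)
    return binom / (1.0 - (1.0 - p_r)**n_sites)   # renormalize: complete failures excluded

# Five sites with P_r = 0.5: the conditional distribution over 1..5 released vesicles
probs = [p_n_vesicles(n, 5, 0.5) for n in range(1, 6)]
```

The renormalization inflates each nonzero outcome by the same factor, so the shape of the binomial over n ≥ 1 is preserved.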
The mutual information between PSRs and the presynaptic spike trains, I(PSR; ISIs), is then computed as in Eqs. 4–6, where X and Y are replaced by PSR and P_r, respectively. Due to the probabilistic release, the information will always be less than the unconditional entropy of the responses. We may quantify the impact of probabilistic release on information coding using the information efficacy measure, which we define as the ratio between the information and the unconditional entropy of PSRs. While in the deterministic model the information efficacy is always unity, it is less than unity for the probabilistic model.
RESULTS
Coding of information by depressing synapses
Information theoretic analysis was applied to models of neocortical depressing synapses to compute the information contained in a PSR about the preceding pattern of presynaptic spikes (Fig. 1) (Markram et al. 1998; Tsodyks and Markram 1997). Both deterministic and probabilistic models were used. Comparing these two types of models elucidates the impact of probabilistic release on the information content of synaptic responses. In both cases, the presynaptic inputs were Poisson spike trains, which were shown to closely mimic the spike activity of neocortical neurons in vivo (Softky and Koch 1993). The relevance of Poisson spike trains for temporal coding may be particularly high in light of the fact that their ISI distribution maximizes the entropy of ISIs for a given firing rate (Rieke et al. 1997). The interesting issue of how information coding is affected by deviations from the Poisson statistics (see, for example, Baddeley et al. 1997) is left for a future study.
DEPENDENCE OF INFORMATION ON THE PRESYNAPTIC FREQUENCY.
The dependence of temporal information encoded by the synapse on the average frequency of the presynaptic spike train is shown in Fig. 2. The results are presented for the deterministic model (Fig. 2 A) and the probabilistic model (Fig. 2, B and C) with five release sites. For comparison, the dashed line in Fig. 2 A indicates the information contained in a PSR about the timing of the current spike that triggered this PSR, assuming that the synaptic delay is randomly distributed between 0 and 3 ms (Markram et al. 1997a). Although this value is the dominant term in the information content of a PSR, it is of no relevance to temporal coding of presynaptic spike patterns, since it is not affected by the timing of preceding spikes, which is the focus of this study.
The main difference between the temporal information content of PSRs of a deterministic synapse (Fig. 2 A) and a probabilistic synapse (Fig. 2 B) is in the absolute value of the information, which is two orders of magnitude larger in the deterministic synapse at the chosen histogram bin size. This difference is expected, as probabilistic synapses with just a few release sites are far less reliable than the corresponding deterministic synapses. However, despite the difference in absolute values, the information encoded by both deterministic and probabilistic synapses about the timing of presynaptic spikes peaks at the same frequency (vertical dotted line), which we denote as the optimal frequency, F_opt. This optimum is expected because at very low frequencies the responses are all recovered, and therefore there is no information in the magnitude of PSRs. At very high frequencies, the responses are all depressed, and, again, information about the timing of the presynaptic spikes is lost. Moreover, plotting information efficacy as a function of the presynaptic firing frequency (Fig. 2 C) clearly shows that the probabilistic effect is not uniform over all frequencies. Rather, the probabilistic component of transmission causes maximal information reduction at very low and very high frequencies. The effect is minimal at the optimal frequency for encoding, such that at this frequency, not only is the absolute information encoded maximal, but the information efficacy of the synapse is also optimal. We therefore conclude that, at the optimal frequency, the synaptic dynamics are used most efficiently for encoding temporal information by single PSRs.
Another quantity to be considered is the "information rate," i.e., the information encoded per unit time rather than per PSR (Fig. 2 D). Exact calculation of the information rate is nontrivial, since it must take into account the possible redundancy among the amplitudes of subsequent PSRs in terms of information about the previous spikes. An upper bound for the information rate can be estimated by ignoring this redundancy, as the product of the presynaptic frequency and the information per PSR. We found that beyond the optimal presynaptic frequency, the information gradually decreases along a curve inversely proportional to the frequency (Fig. 2 B). This observation indicates that the information rate saturates at high frequencies, i.e., that a further increase of the presynaptic rate does not provide more information to the postsynaptic neuron (Fig. 2 D). It is interesting to note that the frequency at which saturation occurs is close to the limiting frequency of the depressing synapses, as defined by Tsodyks and Markram (1997). Beyond the limiting frequency, the synapse loses sensitivity to the average firing rate. Both the optimal frequency defined above and the limiting frequency exhibit the same dependence on synaptic parameters.
Because each neocortical synapse is characterized by unique response dynamics (Tsodyks and Markram 1997), different synapses would be expected to have different optimal frequencies for information encoding. We therefore repeated the analysis for different combinations of synaptic parameters, all in the physiological range. The empirical results showed that a very good approximation for the optimal frequency for temporal information encoding is given by

F_opt ≈ 1/(U_SE τ_rec)    (14)
For synaptic parameters within the physiological range (see METHODS), F_opt for depressing pyramidal-pyramidal synapses ranges between 0.7 and 20 Hz, with the majority of cases below 5 Hz. This result may be related to the fact that most of the time neocortical neurons are active at low, spontaneous firing rates of a few spikes per second. Only rarely do they reach higher rates (Abeles 1991).
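The optimal frequency can also be located numerically: drive the deterministic depressing model with Poisson trains of varying rate, discretize the PSR amplitudes into bins of 1% of the maximal response, and take the entropy of the resulting histogram (equal to the information in the deterministic case, Eq. 7). A self-contained sketch (our illustration; the scan grid and spike counts are our own choices, parameters as in METHODS):

```python
import math
import random

def psr_entropy(rate_hz, u_se=0.5, tau_rec=0.8, n_spikes=20000, seed=1):
    """Entropy (bits) of deterministic PSR amplitudes for a Poisson input train.

    PSR = A_SE * U_SE * R, so binning R into 100 levels is equivalent to binning
    the PSR into bins of 1% of its maximal value.
    """
    rng = random.Random(seed)
    r, counts = 1.0, {}
    for _ in range(n_spikes):
        psr_bin = min(int(r * 100), 99)                 # 1%-of-maximum amplitude bins
        counts[psr_bin] = counts.get(psr_bin, 0) + 1
        r *= (1.0 - u_se)                               # depression at the spike (Eq. 1)
        dt = rng.expovariate(rate_hz)                   # Poisson train: exponential ISIs
        r = 1.0 - (1.0 - r) * math.exp(-dt / tau_rec)   # recovery between spikes
    return -sum((c / n_spikes) * math.log2(c / n_spikes) for c in counts.values())

rates = [0.1, 0.5, 1, 2, 5, 10, 20, 50, 100]
info = {f: psr_entropy(f) for f in rates}
f_opt = max(info, key=info.get)
```

For the typical depressing parameters, the peak of this scan falls near the 1/(U_SE τ_rec) = 2.5 Hz prediction of Eq. 14, with the information dropping off at both frequency extremes.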
DEPENDENCE OF TEMPORAL INFORMATION ENCODING ON SYNAPTIC PARAMETERS.
We next considered the case of a presynaptic spike train with a fixed frequency and studied the dependence of the encoded information on synaptic parameters. We observed that synapses with different parameter combinations differ in their capacity for information encoding at a given presynaptic firing rate. We therefore studied whether there exists an optimal combination of synaptic parameters that maximizes information encoding at a given input frequency.
First we analyzed the dependence of the information on the time constant of recovery from depression, τ_rec, for a fixed value of U_SE. The plots of the encoded information as a function of τ_rec, for a fixed frequency F, have a clear peak (Fig. 3 A). Thus at any presynaptic average firing rate, there is an optimal value of τ_rec (optimal τ_rec), which maximizes information encoding. Moreover, by repeating the analysis for many different firing frequencies (F) and for many values of U_SE, we found that optimal τ_rec is well approximated by the following relation, analogous to Eq. 14

optimal τ_rec ≈ 1/(U_SE F)    (15)
The dependence of the optimal value of τ_rec on U_SE is summarized in Fig. 3 B for a firing rate of 2 Hz. In agreement with Eq. 15, there is a clear tradeoff between U_SE and τ_rec values, such that the larger the U_SE, the smaller τ_rec should be for optimal encoding. The optimal values for τ_rec, calculated for U_SE ranging from 0.1 to 0.9, are in broad agreement with experimental data obtained for pyramidal-pyramidal connections in neocortical slices (160–1,500 ms) (Markram 1997).
Figure 3 C depicts the effect of the U_SE parameter on the encoded information. The information grows monotonically with U_SE, such that the optimal value for U_SE is always 1, i.e., the maximal value it may attain. The same result holds for all frequencies (not shown). Since the experimental values for U_SE are intermediate between 0 and 1, this finding suggests that the value of U_SE is not tuned to maximize information encoding but is determined by some other factor. In contrast, the range of optimal values of τ_rec lies within the range found in neocortical slice preparations, suggesting that in neocortical depressing synapses, τ_rec is tuned for optimizing information encoding by the synapse.
DEPENDENCE OF TEMPORAL INFORMATION ENCODING ON THE NUMBER OF RELEASE SITES.
Synaptic connections between pyramidal neurons typically have several contacts (at least 3, with an average of around 5–6) (Larkman et al. 1997; Markram et al. 1997b). The results presented above were obtained for synapses with five release sites. To examine what bearing the variable number of release sites may have on information encoding, we studied the dependence of information contained in PSRs on the number of release sites in the probabilistic model. Repeating the calculation described above for a variable number of release sites, we found the same qualitative results as presented in Figs. 2 and 3 (not shown). However, the actual values of information strongly depend on the number of release sites.
In Fig. 4 A, temporal information encoded by a synapse about the presynaptic ISIs is plotted as a function of the number of release sites. The information grows steadily with the number of release sites. This is consistent with the fact that for infinitely many release sites, the model behaves as a deterministic one, for which the information diverges to infinity (see METHODS). As can be seen in Fig. 4 B, not only do the absolute values of information increase with the number of release sites, but so does the information efficacy, i.e., the fraction of the informative component within the total entropy of the responses (see METHODS). This dependence was found for a wide variety of model parameters that lie within the physiological range. The advantage of having multiple release sites from an information theoretic point of view was previously observed in models of linear synapses (Manwani and Koch 1999; Zador 1998). Our results suggest that if a synapse has a certain amount of neurotransmitter at its disposal, then, in terms of information coding, it is advantageous to divide the neurotransmitter among more release sites rather than to put more of it into each synaptic vesicle. In both cases, the average response would be the same. However, in the first case, the trial-to-trial fluctuations decrease, and the information efficacy therefore increases.
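The closing argument can be checked with a small calculation (our illustration): distributing the same expected response over N independent binomial release sites, rather than loading one site with an N-fold quantal size, leaves the mean response unchanged while dividing the squared coefficient of variation by N.

```python
def response_stats(n_sites, p_release, quantum):
    """Mean and variance of a binomial release response:
    n_sites independent sites, each releasing one quantum with probability p_release."""
    mean = n_sites * p_release * quantum
    var = n_sites * p_release * (1.0 - p_release) * quantum**2
    return mean, var

m5, v5 = response_stats(5, 0.5, 1.0)   # five sites, unit quanta
m1, v1 = response_stats(1, 0.5, 5.0)   # one site, five-fold quantum: same mean
```

The single large site has five times the variance (here 6.25 vs. 1.25) at identical mean, i.e., larger trial-to-trial fluctuations and hence, by the argument above, lower information efficacy.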
HOW MANY SPIKE TIMES ARE REPRESENTED IN A POSTSYNAPTIC RESPONSE?
So far we showed that a single synaptic response carries information about the timing of preceding presynaptic spikes. It is clear, however, that a synapse can only “report” about the timing of a finite number of such spikes. Hence we wondered how many spike times are represented in a PSR.
To address this question, we calculated the mutual information between PSRs and the times of preceding presynaptic spikes in the input train. In Fig. 5, the information content of PSRs is plotted against the sequential number of the preceding presynaptic spike (a larger number on the abscissa implies that the spike occurred further back in time). In the case shown, the information in the PSR about the two most recent spikes is more or less the same, but the information decreases rapidly for spikes that occurred further back in time. From the analysis of different model synapses with parameters in the physiological range, we found that the part of the curve in which the information about preceding spikes is comparable to the information about the timing of the most recent spike extends up to four preceding spikes. This finding suggests that depressing synapses can encode information about the timing of at most four preceding spikes (see DISCUSSION).
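A calculation of this kind can be approximated with a binned plug-in estimator (our sketch; the bin counts, sample sizes, and parameter values are our own choices, not those of the study): simulate the deterministic depressing synapse and estimate the mutual information between the discretized PSR and the ISI that occurred k spikes earlier.

```python
import math
import random
from collections import Counter

def binned_mi(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples,
    using I = H(X) + H(Y) - H(X,Y) (equivalent to Eqs. 4-6)."""
    n = len(xs)
    def h(counts):
        return -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h(Counter(xs)) + h(Counter(ys)) - h(Counter(zip(xs, ys)))

def mi_about_kth_isi(k, rate_hz=2.0, u_se=0.5, tau_rec=0.8, n_spikes=30000, seed=2):
    """MI between the (binned) PSR and the ISI that occurred k spikes earlier."""
    rng = random.Random(seed)
    r, isis, psr_bins, isi_bins = 1.0, [], [], []
    for i in range(n_spikes):
        if i >= k:
            psr_bins.append(min(int(r * 16), 15))                     # 16 amplitude bins
            isi_bins.append(min(int(isis[i - k] * rate_hz * 4), 15))  # ISI in quarter-mean-ISI units
        r *= (1.0 - u_se)                                 # depression at the spike
        dt = rng.expovariate(rate_hz)                     # Poisson input train
        isis.append(dt)
        r = 1.0 - (1.0 - r) * math.exp(-dt / tau_rec)     # recovery between spikes
    return binned_mi(psr_bins, isi_bins)
```

Consistent with Fig. 5, the estimate for the most recent ISI (k = 1) is substantially larger than for an ISI several spikes back (e.g., k = 5).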
Coding of information by facilitating synapses
The information analysis was also performed for models of facilitating synapses. The main results are similar to those found for depressing synapses. As in depressing synapses, in facilitating synapses each postsynaptic response carries information about the timing of preceding spikes. The amount of information contained in a single response depends on the synaptic parameters, as well as on the presynaptic firing rate. For each facilitating synapse there is an optimal input frequency at which the information contained in the synaptic response is maximal (see Fig. 6 A).
For facilitating synapses with parameters in the physiological range, the optimal frequency of information coding lies between 9 and 70 Hz. The optimal frequency, F_opt, of a facilitating synapse thus tends to be higher than that of depressing synapses. An extensive analysis of facilitating synapses with parameters in the physiological range shows that F_opt is proportional to the expression given in Eq. 16.
We have further observed that, as in the case of depressing synapses, the information contained in a PSR of a facilitating synapse is proportional to the number of release sites (Fig. 6 B). Both the information and the information efficacy (not shown) increase nearly linearly with the number of release sites.
Figure 6 C depicts the mutual information between the PSR of a probabilistic facilitating synapse and the timing of preceding presynaptic spikes, plotted as a function of the sequential number of the spike in the past. As in the case of depressing synapses, the information decreases for spikes that have occurred far in the past. However, the main difference between depressing and facilitating synapses with parameters within the physiological range is that the region of the curve in which the computed information is comparable to the information contained about the timing of the most recent spike (or even larger) is more extended in facilitating synapses. This implies that while a depressing synapse carries significant information about the timing of at most four preceding spikes, a facilitating synapse is capable of representing the timing of at least eight preceding spikes.
DISCUSSION
The present theoretical study explores the capacity of single responses of neocortical synapses to encode temporal information about the timing of presynaptic spikes. This capacity results from the short-term activity-dependent changes in the amplitudes of the postsynaptic response that characterize different types of synaptic connections (Galarreta and Hestrin 1998; Gupta et al. 2000; Hempel et al. 2000; Markram et al. 1998; Reyes et al. 1998; Stevens and Wang 1995; Thomson and Deuchars 1994). The activity dependence of synaptic transmission can be captured by phenomenological models characterized by a small number of parameters, each of which has a clear functional meaning, such as the probability of release and the time constants of recovery from depression and facilitation (Abbott et al. 1997; Markram et al. 1998; Tsodyks and Markram 1997; Varela et al. 1997). The physiological ranges of these parameters have been identified for several major types of neocortical synapses in slice preparations (Gupta et al. 2000; Markram et al. 1998; Tsodyks and Markram 1997). This enables one to quantitatively estimate the information content of postsynaptic responses and analyze the dependence of the information on the synaptic parameters and input conditions. Here we have presented the results for two types of neocortical connections, depressing synapses between pyramidal neurons and facilitating synapses between pyramidal neurons and interneurons.
One of the main results of the analysis is that, for every synaptic connection, the information contained in the postsynaptic response is maximal at a particular input frequency, unique to each synapse. For depressing synapses, this optimal frequency was found to be surprisingly low, typically below 5 Hz, i.e., in the range of spontaneous activity of in vivo neocortical networks (Abeles 1991). It is usually assumed that the spontaneous activity of cortical networks does not carry significant information, in contrast to the evoked activity characterized by much higher firing rates. Several recent studies regard this spontaneous activity as a "background" that provides a "context" for interpreting the evoked input (Bernander et al. 1991; Ho and Destexhe 2000; Rapp et al. 1992). Our finding that depressing synapses in the neocortex are actually "tuned" to encode information at the spontaneous rates indicates that old notions of what is "noise" in brain activity may have to be revised; namely, important information processing may take place during the spontaneous activity of cortical networks (Arieli et al. 1996). However, the resolution of this issue may have to wait for in vivo studies of synaptic transmission. As the optimal frequency for information encoding via depressing synapses was found to be inversely proportional to the time constant of recovery from depression, finding similar time constants in vivo and in vitro would confirm our suggestion. In contrast, finding significantly shorter time constants in vivo would imply a higher optimal frequency and would thus weaken our conjecture regarding the importance of the spontaneous activity.
As a complementary issue, we also analyzed the dependence of the encoded information on synaptic parameters for a fixed presynaptic frequency. Important differences between the effects of these parameters emerged. For the U_SE parameter, representing the probability of neurotransmitter release, we found that optimal encoding always occurs at the highest possible value, i.e., at U_SE = 1. On the other hand, for the time constant underlying recovery from depression, τ_rec, intermediate values were found to maximize the information content. The range of optimal values for τ_rec, calculated for low presynaptic frequency, was found to be in broad agreement with experimental data. These results indicate that the exact value of the usage parameter for a given synaptic connection is not tuned to maximize information coding. Rather, plasticity of this parameter was found to occur on the basis of the temporal relationship between the activity of pre- and postsynaptic neurons, in a Hebbian manner (Markram and Tsodyks 1996; Markram et al. 1997b; Stevens and Wang 1994). On the other hand, the recovery time constant may well be tuned to optimize information coding in a non-Hebbian manner, according to the typical frequency of presynaptic neurons. We found an inverse relationship between the optimal value of the recovery time constant and the usage parameter. This prediction could be tested experimentally.
Finally, we analyzed the dependence of information coding on the number of synaptic release sites. As a general rule, we found that increasing the number of release sites always improves the information efficacy of the synapse by reducing the trial-to-trial fluctuations of the responses. Indeed, synaptic connections between pyramidal neurons usually have several contacts, with a nonuniform distribution of the number of contacts that is biased toward higher values (Larkman et al. 1997; Markram et al. 1997a). We therefore suggest that not only is the dynamic time constant adapted to optimize coding of temporal information, but even the morphological properties of synaptic connections may be determined according to the principle of optimizing the information content of postsynaptic responses.
Several interesting differences between depressing and facilitating synapses have emerged from our analysis. In particular, facilitating synapses are tuned to significantly higher frequencies, more reminiscent of the evoked activity of pyramidal cells. Facilitating synapses were also shown to code information about longer spike patterns. Mathematically, both of these properties of facilitating synapses result from the low values of the U_SE parameter, i.e., a low initial probability of release. The functional significance of these results will have to be elucidated in future studies. One can speculate that the flow of temporal information in the neocortex recruits interneurons only when the activity is driven by sensory stimuli, rather than during spontaneous activity.
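Why a low U_SE lets a facilitating synapse reflect longer spike histories can be seen in a minimal sketch of the facilitation recursion from the phenomenological model (Markram et al. 1998), here with depression ignored for simplicity; the parameter values (U_SE = 0.03, τ_fac = 0.5 s) are illustrative assumptions, not fitted values:

```python
import math

def facilitating_train(n_spikes, f, u_se=0.03, tau_fac=0.5):
    """Per-spike utilization of a purely facilitating synapse driven at
    rate f (Hz), from the recursion
        u_{n+1} = u_n e^{-dt/tau_fac} + U_SE (1 - u_n e^{-dt/tau_fac}),
    where the amplitude of response n is proportional to u_n."""
    dt = 1.0 / f
    decay = math.exp(-dt / tau_fac)  # facilitation remaining between spikes
    u, amps = u_se, []
    for _ in range(n_spikes):
        amps.append(u)
        u = u * decay + u_se * (1.0 - u * decay)
    return amps

amps = facilitating_train(10, f=20.0)
```

Because each increment adds only a small fraction U_SE of the remaining headroom, the utilization climbs gradually over many spikes, so a single response carries the imprint of a long stretch of the preceding train; with U_SE near 1, the response would instead be dominated by the most recent interval.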
The theoretical analysis presented here complements a previous study, which analyzed the ability of depressing synapses to signal the population firing rates of presynaptic neuronal ensembles (Tsodyks and Markram 1997). In particular, we have shown that beyond the optimal frequency of depressing synapses, the instantaneous rate of temporal information gradually saturates. This saturation occurs near the limiting frequency of the synapse, defined as the frequency above which it cannot transmit information about the presynaptic rates (Tsodyks and Markram 1997). The present finding therefore supports the idea that the functional significance of the limiting frequency is that it defines the operational range for depressing synapses. The same is true for facilitating synapses, in which the optimal frequency given by Eq. 16 is proportional to the peak frequency of these synapses, at which the average amplitude of PSRs is maximal (Markram et al. 1998).
The ability of dynamic synapses to encode information about the timing of preceding presynaptic spikes supports the suggestion that a temporal code is used for information processing in the neocortex (Ferster and Spruston 1995; O'Donovan and Rinzel 1997; Richmond and Optican 1990; Rieke et al. 1997; Senn et al. 1998; Tovee et al. 1993). This study focused on the ability of neocortical synapses to encode temporal information at the level of a single isolated presynaptic spike train. Since neocortical neurons have numerous synaptic contacts, an important challenge for future work is to analyze the ability of dynamic synapses to signal temporal patterns in the presence of many presynaptic neurons impinging on the postsynaptic cell (Abeles 1991; Hopfield 1995).
Acknowledgments
We thank M. London for helpful comments during this study and A. Cowan and C. Stricker for providing preliminary experimental data.
This work was supported by the US Office of Naval Research, the Israeli Science Foundation, and the US–Israel Binational Science Foundation.
Footnotes

M. Tsodyks (Email: misha{at}weizmann.ac.il).
 Copyright © 2002 The American Physiological Society