Journal of Neurophysiology


Role of synaptic dynamics and heterogeneity in neuronal learning of temporal code

Ziv Rotman, Vitaly A. Klyachko


Temporal codes are believed to play important roles in neuronal representation of information. Neuronal ability to classify and learn temporal spiking patterns is thus essential for successful extraction and processing of information. Understanding neuronal learning of temporal code has been complicated, however, by the intrinsic stochasticity of synaptic transmission. Using a computational model of a learning neuron, the tempotron, we studied the effects of synaptic unreliability and short-term dynamics on the neuron's ability to learn spike timing rules. Our results suggest that such a model neuron can learn to classify spike timing patterns even with unreliable synapses, albeit with a significantly reduced success rate. We explored strategies to improve correct spike timing classification and found that firing clustered spike bursts significantly improves learning performance. Furthermore, rapid activity-dependent modulation of synaptic unreliability, implemented with realistic models of dynamic synapses, further improved classification of different burst properties and spike timing modalities. Neuronal models with only facilitating or only depressing inputs exhibited preference for specific types of spike timing rules, but a mixture of facilitating and depressing synapses permitted much improved learning of multiple rules. We tested applicability of these findings to real neurons by considering neuronal learning models with the naturally distributed input release probabilities found in excitatory hippocampal synapses. Our results suggest that spike bursts comprise several encoding modalities that can be learned effectively with stochastic dynamic synapses, and that distributed release probabilities significantly improve learning performance. Synaptic unreliability and dynamics may thus play important roles in the neuron's ability to learn spike timing rules during decoding.

  • neural learning
  • short-term plasticity
  • synapse
  • synaptic dynamics
  • temporal code

Neural representation of information in the brain is thought to be encoded, in part, by the precise timing of action potentials (APs) and highly synchronized coincidence firing of neuronal populations (Ainsworth et al. 2012). This so-called temporal coding is thought to serve, for example, in representation of auditory, olfactory, and visual information in the cortex (deCharms and Merzenich 1996; Meister et al. 1995; Neuenschwander and Singer 1996; Wehr and Laurent 1996) and spatial information in the hippocampus (Buzsaki 1994; Huxter et al. 2003). The ability of neurons to discriminate and learn various temporal spiking patterns thus represents an essential process for decoding of temporal code. Many models of neuronal and network processing have been developed to examine how neural learning schemes can be built to capture real neuronal sensory representations. Our understanding of temporal coding and decoding is, however, complicated by the intrinsic stochasticity of central synapses. Many spike timing learning rules that have been formulated so far for model neurons and neural networks predominantly use deterministic synapses (Buonomano 2000; Carvalho and Buonomano 2011; Gutig and Sompolinsky 2006; Jabri and Flower 1992; Mazzoni et al. 1991; Unnikrishnan and Venugopal 1994) and thus do not account for natural synaptic unreliability.

Central synapses are indeed characterized by a highly variable degree of unreliability, as often experimentally determined by their wide range of release probabilities (Murthy et al. 1997). The computational advantage of building neural circuits with unreliable synapses has been one of the central questions in neuroscience. While adding significant complexity to neural computations, such synaptic unreliability has been hypothesized to play important roles in many forms of neural processing (Abbott and Regehr 2004; Goldman et al. 1999, 2002; Tsodyks and Markram 1997). Synaptic stochasticity has been suggested to modify network memory capacity (Mejias and Torres 2009; Varshney et al. 2006) and signal propagation (Guo and Li 2010; Kumar et al. 2010), but little research has addressed its role in the learning process itself. Interestingly, one conceptual link between synaptic unreliability and neural learning has been suggested to occur via the modulation of synaptic dynamics (Carvalho and Buonomano 2011; Seung 2003). Indeed, short-term synaptic dynamics, also known as short-term plasticity (STP), acts on millisecond-to-minute timescales to rapidly modulate synaptic unreliability (i.e., release probability) in an activity-dependent manner (Zucker and Regehr 2002). Extensive experimental evidence implicates synaptic dynamics/STP in many forms of neural computations (Carlson 2009; Edwards et al. 2007), and numerous computational studies suggest critical roles for STP in information transmission and processing (Abbott and Regehr 2004; Buonomano 2000; Deng and Klyachko 2011; Fortune and Rose 2001). The role of synaptic unreliability in neuronal learning and decoding of spike timing remains, however, largely unexplored.

Because these problems are difficult to address experimentally, we attempted to examine these questions conceptually with a computational model of a learning neuron, the tempotron, equipped with a temporal learning scheme to learn spike timing patterns (Gutig and Sompolinsky 2006). Our results suggest that spike timing can be learned, although with a reduced success rate, by a model neuron even in the presence of synaptic unreliability and that the learning success rate can be improved significantly by firing spike bursts. A major further improvement in learning of temporal coding can be achieved if rapid synaptic dynamics is taken into account. Dynamic synapses allow decoding and classification of a variety of burst properties, including spike timing statistics and burst duration and timing, suggesting that bursts could represent a complex encoding strategy beyond their timing. Interestingly, while facilitating or depressing synapses have a preference for one type of spike timing/burst timing rule, their mixture is able to successfully learn variable spike timing rules. The relevance of this observation to real neurons is supported by considering spike timing learning in models with naturally distributed input release probabilities. Our results thus suggest a potential role for synaptic dynamics in neural learning and a conceptual advantage for widely distributed synaptic release probabilities in neural decoding.


Tempotron learning neuron.

Our numerical simulations were based on the model of a learning neuron, the tempotron (Gutig and Sompolinsky 2006), by considering the membrane voltage dynamics of a leaky integrate-and-fire neuron driven by exponentially decaying synaptic currents according to the equations

V(t) = Σi ωi Σti K(t − ti) + Vrest

K(t − ti) = V0[exp(−(t − ti)/τ) − exp(−(t − ti)/τs)]

where ωi is the synaptic weight of the ith synapse, ti is the time of a spike from the ith synapse, K is the postsynaptic potential kernel, τ and τs are the membrane and synaptic time constants, respectively, and V0 is a normalization factor that sets the peak of K to 1. The learning neuron receives inputs from N different synapses, each from a different neuron, so that spike timing at different inputs is uncorrelated. The ratio between the membrane and synaptic time constants was held constant at τ = 4τs, as previously optimized for the case of reliable synaptic transmission (Gutig and Sompolinsky 2006).
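
For concreteness, a minimal Python sketch of these two equations is given below. It is our own illustration, not code from the original tempotron implementation, and the function and parameter names (psp_kernel, membrane_potential, v_rest) are ours.

import numpy as np

def psp_kernel(t, tau=15.0, tau_s=15.0 / 4.0):
    """Tempotron PSP kernel K(t - t_i): difference of exponentials, peak-normalized to 1.

    Times are in ms; tau = 4 * tau_s as in the text. Contributions before the spike are zero.
    """
    t = np.asarray(t, dtype=float)
    t_peak = tau * tau_s / (tau - tau_s) * np.log(tau / tau_s)    # peak time of the raw kernel
    v0 = 1.0 / (np.exp(-t_peak / tau) - np.exp(-t_peak / tau_s))  # normalization factor V0
    tt = np.maximum(t, 0.0)                                       # avoid overflow for t < 0
    k = v0 * (np.exp(-tt / tau) - np.exp(-tt / tau_s))
    return np.where(t > 0.0, k, 0.0)

def membrane_potential(t, spike_times, weights, v_rest=0.0):
    """Subthreshold voltage V(t) = sum_i omega_i * sum_{t_i} K(t - t_i) + V_rest.

    spike_times: one array of presynaptic spike times (ms) per input.
    weights: one synaptic weight omega_i per input.
    """
    v = v_rest
    for w_i, t_i in zip(weights, spike_times):
        v += w_i * psp_kernel(t - np.asarray(t_i, dtype=float)).sum()
    return v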

For spike timing classification and burst duration classification studies, we used the same parameters for the tempotron that were reported previously (Gutig and Sompolinsky 2006). Neuronal parameters were τ = 15 ms, τ = 4τs, and N = 500, and spikes were uniformly distributed between t = 0 and T = 500 ms unless otherwise stated. Throughout this report the teaching load (α) was set to 0.2 (unless noted otherwise) to reduce computational costs. For purposes of studying more complex spike patterns, model parameters were changed such that τ = 20 ms, τ = 4τs, N = 500, and spike patterns were distributed between t = 0 and T = 5 s.

Modeling the facilitating synapse.

For a model of the facilitating synapse, we used our previously developed mechanistic model of the Schaffer collateral (hippocampal CA3 to CA1) synapse (Kandaswamy et al. 2010). This model relies on two vesicle pools, namely, readily releasable and reserve pools with activity-dependent occupancy and interpool dynamics. Three components of short-term enhancement with distinct time constants and amplitudes determine the release probability per vesicle in the readily releasable pool (RRP), while the occupancy of the RRP determines short-term depression through depletion of the RRP. The model has been shown to correctly predict synaptic response to arbitrary spike trains, including natural spike trains recorded in exploring animals, without free parameters (Kandaswamy et al. 2010). We used the same model parameters in the present study that we used previously (Rotman et al. 2011). During short spike bursts, which were used as inputs in the present study, the synaptic dynamics of the CA3-CA1 excitatory synapses (and thus of the model synapse) was dominated by short-term enhancement, and thus represented an appropriate model of a facilitating synapse for this study.

Modeling the depressing synapse.

For a model of the depressing synapse, a general phenomenological model developed by Tsodyks and Markram (1997) was used. The formulation chosen in this study was previously used to examine the function of STP in neural network models of short-term memory (Barak et al. 2010). In the model of a depressing synapse, the response was determined by two parameters: the resources (x) and utilization (u). These parameters were controlled by two equations:

dx/dt = (1 − x)/τD − u·x·δ(t − tsp)

du/dt = (U − u)/τF + U·(1 − u)·δ(t − tsp)

where δ(t − tsp) denotes a presynaptic spike at time tsp and U is the resting value of u.

This formulation models depressing synapses when τD > τF. Since the resting value of x is set to 1 and the synaptic output is given by the product of u and x, the basal synaptic output is U. In our stochastic implementation, the model is used to predict release probabilities; therefore U is the basal synaptic release probability Pr. In this study we used depressing synapses with two different sets of parameters to test the robustness of our findings. For the studies of burst classification, the model parameters were Pr = 0.6, τD = 150 ms, and τF = 20 ms. In the section describing the natural distribution of synaptic unreliability, the basal Pr was determined from this distribution (see below), and the time constants were τD = 100 ms and τF = 0 ms.
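
A minimal event-driven implementation of this formulation is sketched below in Python. It reflects our reading of the model, with each spike releasing u·x before the facilitation increment is applied, so that an isolated spike releases with the basal probability U as stated above; function and variable names are ours.

import numpy as np

def tm_release_probs(spike_times, U=0.6, tau_D=150.0, tau_F=20.0):
    """Per-spike release probability from the Tsodyks-Markram-style model (hedged sketch).

    spike_times are in ms; U is the basal release probability Pr. Between spikes the
    resources x recover toward 1 with time constant tau_D and the utilization u relaxes
    toward U with time constant tau_F; each spike releases u*x, depletes x by that
    amount, and increments u. Depression dominates when tau_D > tau_F; tau_F = 0
    pins u at U (no facilitation).
    """
    x, u = 1.0, U
    prev = None
    probs = []
    for t in spike_times:
        if prev is not None:
            dt = t - prev
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_D)                    # resource recovery
            u = U if tau_F <= 0 else U + (u - U) * np.exp(-dt / tau_F)   # utilization relaxation
        p = u * x                # release probability of this spike
        x -= p                   # depletion of resources by release
        u += U * (1.0 - u)       # facilitation affecting subsequent spikes
        probs.append(p)
        prev = t
    return np.array(probs)

For example, with the burst-classification parameters (U = 0.6, τD = 150 ms, τF = 20 ms), a regular 40-Hz train, tm_release_probs(np.arange(0, 250, 25)), yields release probabilities that start at 0.6 and decline thereafter, i.e., a depressing synapse.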


Errors were estimated from the fluctuation of success rates after learning was completed. Multiple sources of variability within the same training set caused the training process to reach a steady state in which the synaptic weights were not constant but fluctuated. During training, synaptic weights were modified 5·N·α times, where N·α is the number of trains in the set, and the averaged correct response was then evaluated for 300 randomly chosen trains from the learned set. The average success rate in the last 20 rounds of training was used to evaluate the correct response for the training set. For each task, learning was repeated 50 times to account for training set variability. Reported values are means ± SE of success in the task.


Spike timing classification with unreliable synapses.

Assessment of neuronal learning in general, and specifically in application to temporal coding, has been difficult to achieve experimentally. These kinds of investigations are typically carried out by examining how well neurons learn to respond (in the form of a spike) or not to respond to various ensembles of input spike patterns distributed over multiple inputs. Because several hundred inputs are typically assessed in such approaches, neural learning is currently studied predominantly with computational modeling. To assess the effects of unreliable synaptic transmission on neural learning of temporal code, we used a learning neuron model, the tempotron, described previously (Gutig and Sompolinsky 2006). The tempotron was originally formulated as a learning scheme for temporal code in sensory input and was implemented with reliable synapses. We replaced the perfectly reliable synapses studied previously (Gutig and Sompolinsky 2006) with either static synapses of constant Pr or dynamic synapses (see below). A population of 500 inputs with random spiking patterns was generated, in which each presynaptic neuron fired once in a 500-ms interval (Fig. 1A). These populations of input spike patterns were categorized as either positive or negative, depending on whether the postsynaptic neuron was expected to respond (with a spike) at least once to such an input pattern (positive) or not to respond (negative). The learning process then involved stimulating the model neuron with one of the selected spiking patterns (training trial) and modifying the synaptic weights of the inputs until the neuronal response matched the desired response. Unless otherwise noted, 50 positive and 50 negative population spike patterns were used for training; after the training stage, the same spike trains were used to assess learning success.
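
A single training step of this process can be sketched as follows, using the error-driven tempotron update of Gutig and Sompolinsky (2006), in which, after a classification error, each weight is changed in proportion to its input's summed kernel contribution at the time of the postsynaptic voltage maximum. The sketch reuses psp_kernel and membrane_potential from the sketch in materials and methods; the learning rate lam, threshold, and time grid are illustrative assumptions rather than the exact parameters of this study.

import numpy as np

def tempotron_update(pattern, weights, is_positive, threshold=1.0, lam=1e-3,
                     t_grid=np.arange(0.0, 500.0, 1.0)):
    """One tempotron training step on a single input pattern (hedged sketch).

    pattern: list of spike-time arrays, one per input, already reflecting the
    stochastic releases of this trial. On an error, each weight moves by
    lam * (summed kernel contribution at the time of the voltage maximum),
    toward firing after a miss on a positive pattern and away from firing
    after a false alarm on a negative pattern.
    """
    v = np.array([membrane_potential(t, pattern, weights) for t in t_grid])
    t_max = t_grid[np.argmax(v)]
    fired = v.max() >= threshold
    if fired == is_positive:
        return weights                              # correct response, no change
    sign = 1.0 if is_positive else -1.0             # miss -> potentiate, false alarm -> depress
    contrib = np.array([psp_kernel(t_max - np.asarray(t_i)).sum() for t_i in pattern])
    return weights + sign * lam * contrib

With stochastic synapses, the spike times passed to this function on each trial are a fresh subsample of the pattern's spikes, each retained independently with probability Pr, so the same nominal pattern produces a different update on every presentation.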

Fig. 1.

Spike timing classification with unreliable synapses. A: illustration of sample spike patterns being classified, showing spike times (vertical lines) of 20 inputs for positive (black) and negative (red) activity patterns. Single spike trains are displayed. B: neuronal learning of timed spikes with reliable and unreliable synapses. Learning performance is plotted as a function of release probability (Pr) for different teaching loads (α), as indicated. The model contained 500 input connections, each having a random spike once in a 500-ms simulated time frame. Connections were modeled as static unreliable synapses showing increases in correct classification with Pr. The dependence of classification success on Pr was qualitatively similar for all teaching loads. C: neuronal classification improves with increases in firing rate. Except for the number of spikes, the model parameters were the same as in A, showing improvement in classification success with increased numbers of spikes. Learning performance was examined for a range of Pr, showing an increase of classification success with larger Pr.

Previous studies have shown that under conditions of perfectly reliable transmission (Pr = 1) synaptic weights in the tempotron model can be adjusted to correctly classify temporal patterns of single-spike inputs with 100% accuracy (Gutig and Sompolinsky 2006). With unreliable synapses, however, each spike of the training trial results in stochastic synaptic release according to the chosen Pr; thus the same spike train results in different release patterns in each learning trial. The same effect on the postsynaptic response can, in principle, be attributed to axonal spike failure rather than synaptic release failure. However, since these cases cannot be distinguished within the present framework, we will use synaptic unreliability as the general condition modifying neurotransmitter release. As a result of synaptic unreliability, for Pr < 1, classification accuracy of the model neuron was imperfect (Fig. 1). We quantified the success rate of neuronal learning in the case of unreliable transmission as the ratio of the sum of correct responses (fire for positive patterns and no fire for negative patterns) over the sum of all trials (positive and negative) (Fig. 1B). To examine learning capacity under our conditions, we considered a range of “teaching loads” (α), which represented the number of patterns the neuron was expected to learn, expressed as a ratio to the number of synaptic inputs (Gutig and Sompolinsky 2006). We started our analysis by assuming that all inputs have the same Pr. We then extended this analysis to a realistic distribution of Pr values (see below). This analysis demonstrated that for every teaching load the learning success rate decreased monotonically with decreasing Pr, and no perfect classification was observed at any teaching load for Pr values < 1 (Fig. 1B). We also observed a decrease in classification accuracy with increases in teaching load (Fig. 1B). This effect appeared even at low teaching loads (α < 1), well below the critical teaching load reported for perfectly reliable synapses (α ≈ 3) (Gutig and Sompolinsky 2006). These results suggest that the model neuron can learn to classify spike timing-based patterns even in the face of synaptic unreliability, albeit with a reduced success rate, which declines as Pr decreases and as the teaching load increases.

Since unreliable transmission leads to failures and thus reduces the effective release rate during the trains, we also examined how release rate contributes to spike timing classification success. We generated training sets with different numbers of spikes in the simulated time window (500 ms) and found that classification performance improved with increases in the number of spikes per synaptic input (Fig. 1C). The same qualitative relationship remained between success rate and decreases in Pr (Fig. 1C) or increases in the teaching load (data not shown). Classification success thus depends on the input firing rate. We note that because of the single-compartment nature of the tempotron, the increased firing rate in the above analysis could be interpreted as being equivalent to increasing the number of inputs rather than the number of spikes per input, thus effectively decreasing the teaching load. The above results indicate, however, that increasing firing rate and reducing teaching load are not equivalent in improving the classification success rate. This can be understood by considering that the number of independent synaptic weights that are adjusted in the model remains the same at any firing rate. Thus for the tempotron, changes in the input firing rate are not equivalent to an increase in the input number.

Since the dependence of classification success on the input firing rate is not the major focus of the present study, we will proceed to compare the success rates for classifying trains with the same firing rate.

Neuronal classification of spike bursts.

It has been suggested previously that firing spike bursts represents a means to overcome synaptic unreliability in information transmission (Kepecs and Lisman 2003; Lisman 1997). To study how clustering the spikes into bursts influences the ability of a model neuron to classify temporal spike patterns, we studied classification of spike bursts with variable durations (Fig. 2A). We used three spikes in each burst and burst durations of 15–200 ms, corresponding to average within-burst firing rates of 15–200 Hz (shorter bursts giving higher rates). Spiking patterns were generated by randomly choosing the start time of the burst and randomly distributing three spikes within the predetermined burst duration (Fig. 2B). Burst firing improved spike timing classification compared with trains containing the same number of spikes but randomly distributed over the entire simulated time window (Fig. 2C). In general, the ability of the model neuron to perform burst classification improved as the burst duration shortened and the burst became similar to a “noisy” spike. Spike bursts may thus indeed serve, in part, as a means to compensate for synaptic unreliability to improve learning of spike timing rules. Interestingly, we noted that the greatest improvement in performance was in the case of lower-Pr synapses, particularly at and above the firing rate of 30 Hz. This observation could be particularly relevant to real neurons, since lower Pr values are more representative of natural synaptic unreliability in the CNS.
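
A sketch of how such single-burst input patterns can be generated, and how stochastic static synapses can be applied to them on each trial, is given below. The sampling details and function names (make_burst_pattern, transmit) are our assumptions for illustration, not code from the study.

import numpy as np

def make_burst_pattern(n_inputs=500, n_spikes=3, burst_dur=80.0, T=500.0, rng=np.random):
    """One input pattern: each of n_inputs fires a single burst of n_spikes random spikes.

    The burst start is drawn uniformly so that the whole burst fits in [0, T] (ms),
    and the spikes are uniformly distributed within the burst duration.
    """
    pattern = []
    for _ in range(n_inputs):
        start = rng.uniform(0.0, T - burst_dur)
        spikes = np.sort(start + rng.uniform(0.0, burst_dur, size=n_spikes))
        pattern.append(spikes)
    return pattern

def transmit(pattern, Pr, rng=np.random):
    """Stochastic static synapses: each spike is released independently with probability Pr."""
    return [spikes[rng.uniform(size=spikes.size) < Pr] for spikes in pattern]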

Fig. 2.

Fine-tuning spike time learning by modulation of burst properties. A: schematic of the experimental paradigm. The schematic visualizes the multiple parameters that were modified in various classification tasks employed in C–E. In each task, the model neuron shown at bottom is trained to respond by producing a spike for the “positive” input (left) and not producing a spike for the “negative” input (right). Each input train consists of a single randomly distributed burst in each of the synaptic inputs. Parameters of the bursts, such as the burst's duration or spike timing distribution, that are modified in a specific task, are reflected at top. In addition, the type of synaptic dynamics of the input synapses is modified in some tasks as indicated along the arrows. For this and subsequent figures, S, F, D, FD, and N.FD correspond to static, facilitating, depressing, facilitating and depressing mixture, and naturally distributed facilitating and depressing mixture synapses, respectively. B, left: schematic shows the classification task with positive and negative sets having the same burst duration and random spike timing transmitted through static synapses. Right: sample spike times (vertical lines) for 20 inputs for positive (black) and negative (red) activity patterns. The timed bursts displayed were generated with burst durations of 80 ms. C: neuronal learning decreases with burst duration. The model parameters were the same as in Fig. 1. Bursts consisted of 3 randomly timed spikes within the predetermined burst duration. Connections were modeled as static unreliable synapses with Pr in the range of 0.4–0.8. Classification success decreased monotonically with prolongation of burst duration, independently of Pr. D: classification of timed bursts improved with increases in firing rate. Bursts consisted of 3 randomly timed spikes within the predetermined burst duration. Bursts were fired within the different simulated time periods, effectively modifying the firing rate of the learned pattern. Learning was examined for a range of Pr of 0.4–0.9 with qualitatively similar results; only the data for Pr = 0.4 are shown for clarity. E: classification of timed bursts improved with increases in spikes per burst. Bursts consisted of spikes randomly distributed over 400 ms. Bursts were fired with different numbers of spikes and different simulated time periods. The increase in correct classification is observed in all simulated time periods. Learning was examined for burst durations of 200–600 ms and simulated time periods of 1–3 s with qualitatively similar results; only data for 400 ms are shown for clarity.

We further determined the contribution of the input firing rate to classification success in the case of burst firing by varying the overall input duration (T), which effectively alters the average firing rate. Similarly to the above experiment, spiking patterns were generated by randomly choosing the start time of the burst and randomly distributing three spikes within a predetermined burst duration; in addition, altering the input duration imposed changes on the start time distribution. The input duration was varied over a range of 400–800 ms, effectively varying the overall input firing rate by a factor of 2. We observed that the classification success rate increased with increase in the input rate (Fig. 2D). This effect, however, did not appear to change the relationship between the success rate and burst duration (Fig. 2D) or Pr (data not shown). The firing rate can also be modified by changing the number of spikes in each burst. We observed that the classification success rate increased with increase in the number of spikes in the burst (Fig. 2E). This increase was observed for various simulated durations from 1,000 to 2,000 ms. Classification success for a constant number of spikes increased with input rate (implemented as decreased simulated period) (Fig. 2E) as described above in detail (Fig. 2D).

In summary, burst firing improved classification of temporal code with unreliable synapses, yet for the burst rates studied here (up to 200 Hz) classification remained imperfect under most conditions. The success rate of spike timing classification may thus be further improved by additional coding/decoding strategies that we explore below.

Fine-tuning spike time learning by modulation of burst properties.

The above results suggest that bursts may represent a useful coding strategy to overcome synaptic unreliability. Thus we next examined how burst properties can be used to tune and further improve classification accuracy. In this analysis we increased the number of spikes to 11 per burst and used a prolonged, 5-s input duration, thereby effectively reducing the firing rate. We chose a reduced firing rate in these experiments to decrease the classification success rate and thus allow a greater dynamic range for modulation; classification results were otherwise qualitatively similar (Fig. 2, D and E). We started by studying how different ways of distributing spike times within the burst influence classification success. We compared classification success for bursts with randomly distributed spikes with that for constant-frequency bursts with evenly (deterministically) spaced spikes (Fig. 3A). More complex distributions of spike timing (linear, parabolic, or Gaussian) effectively decrease the width of a burst, which mixes the sources of observed changes and complicates the interpretation. To minimize contributions from altered burst duration, we chose durations in the range of 600–1,300 ms, within which little effect of burst duration on classification accuracy was observed (Fig. 3, C and F). We observed that classification success was not strongly affected by different spike distributions, at least for the static synapses examined thus far (Fig. 3, B and C); there were only minor improvements in performance for the equally spaced spike bursts observed with increased burst duration (Fig. 3, C and F, black trace). Similar results were observed for a shorter simulated period of 3 s with 11 spikes per burst (data not shown). Real synapses, however, are not only unreliable but also dynamic, rapidly changing their Pr during transmission of the burst (Dobrunz and Stevens 1997; Zador 1998; Zucker and Regehr 2002). Therefore, we next considered whether the presence of dynamic synapses could, in combination with varying burst properties, further improve the success of burst classification.

Fig. 3.

Spike timing classification during bursts depends on synaptic dynamics. A: spike times (vertical lines) of 20 inputs for positive (black) and negative (red) activity patterns. Patterns of equally spaced spikes (left) and random spikes (right) are displayed. The timed bursts displayed were generated with a burst duration of 600 ms. B: synaptic transmission during bursts: averaged excitatory postsynaptic potential (EPSP) generated by equally spaced spikes (left) and random spikes (right) transmitted through different types of synapses. The response was averaged over 100 repetitions. au, Arbitrary unit. C: success rate for classifications of random (both + and − patterns) or equally spaced (both + and − patterns) bursts with static synapses. Here and throughout, the schematic of the experimental design is shown on left of corresponding data plots. Bursts consisted of 11 spikes within 600–1,300 ms. Connections were modeled by static synapses with Pr = 0.3. D: same as C for the tempotron model with facilitating synapses. Classification success of random spike trains was improved in the presence of facilitating synapses. E: same as C for the tempotron model with depressing synapses. Classification success of equally spaced spike trains was improved in the presence of depressing synapses. F: absolute change in success rates between classifications of bursts with random and equally spaced spikes.

Short-term synaptic dynamics improves spike timing learning.

Rapid changes in synaptic Pr due to STP are an activity-dependent phenomenon and are therefore sensitive to changes in spike timing statistics. In addition, such short-term dynamics depends on which form(s) of STP dominates in a given synapse; synapses with low basal Pr have a tendency to express dominant short-term enhancement during brief spike bursts, while synapses with high Pr typically express dominant short-term depression during bursts (Dobrunz and Stevens 1997; Zucker and Regehr 2002). We thus studied the role of short-term synaptic dynamics in the spike timing learning success by considering two types of synapses. Facilitating synapses were simulated with a mechanistic model of excitatory CA3-CA1 synapses at low (0.2) basal Pr (Kandaswamy et al. 2010) (but see below for considerations of the entire Pr distribution); these synapses mainly facilitated at the burst durations and rates studied here (Fig. 3B). For high-Pr depressing synapses, we used a generalized model of a depressing synapse (Fig. 3B) (Tsodyks and Markram 1997) (see materials and methods for model parameters).

We found that, unlike the case of static synapses, learning performance with dynamic synapses depended strongly on the parameters of the burst. Specifically, our analysis showed that the presence of facilitating synapses selectively improved the classification success in tasks discriminating randomly distributed spike bursts, with the improvements increasing for shorter bursts (Fig. 3, D and F), while the presence of depressing synapses selectively improved classification success in tasks discriminating evenly spaced spike bursts, with performance improving for longer bursts (Fig. 3, E and F). Such preferential improvement can be explained by the changes these different spike patterns impose on the Pr for the individual spikes within the bursts. The interspike interval (ISI) distribution of the random bursts contains shorter ISIs than that of the evenly spaced bursts; these short ISIs enhance Pr and transmission through facilitating synapses while reducing Pr and transmission through depressing synapses. These differences in ISI distribution increase overall classification performance with facilitating synapses for random bursts and with depressing synapses for evenly spaced bursts. We note that the observed effects of dynamic synapses are relatively modest, but they are instructive nevertheless in showing that dynamic synaptic transmission is advantageous for learning spike timing modalities.

Short-term dynamics improves learning of spike timing statistics during bursts.

On the basis of the above findings, we performed a series of studies to examine which spike input parameters, when modulated, make the presence of dynamic synapses particularly advantageous in spike timing learning tasks. First, we examined the ability of a model neuron with dynamic inputs to differentiate bursts with different spike timing statistics. In this task, as in previous tasks using bursts, each input contained a randomly timed single burst. Here, in addition to differences in burst timing, we sought to distinguish trains with random spike bursts (designated as “positive” inputs) from trains with evenly spaced bursts (designated as “negative” inputs), or vice versa (Fig. 4A). In the current task, as well as in other burst discrimination tasks described below, the differences in the classification success rate were expressed as changes in success rates between the conditions. Our analysis showed that in the presence of facilitating synapses randomized bursts were identified with ∼30% higher accuracy than the constant-frequency bursts (Fig. 4B). On the other hand, in the presence of only depressing synapses, an improvement of >20% in classification accuracy was observed selectively for evenly spaced bursts (Fig. 4C). In both cases, however, classification success decreased for the alternative spike patterns: classification of constant-frequency bursts in the presence of facilitating synapses and classification of randomized bursts in the presence of depressing synapses were both decreased (Fig. 4, B and C). Thus different forms of input synaptic dynamics can strongly enhance the model neuron's ability to learn to differentiate spike bursts with particular types of spike timing statistics, with the success rate in a given task depending strongly on the specific type of synaptic dynamics.

Fig. 4.

Role of short-term dynamics in differentiation of bursts with different spike timing statistics. A: spike times (vertical lines) of 20 inputs for positive (black) and negative (red) activity patterns. Patterns differentiating randomly distributed spikes (left) and equally spaced spikes (right) are displayed. The timed bursts displayed were generated with burst durations of 600 ms. B and C: classification success as a function of burst duration for distinguishing randomly distributed spike bursts (designated as “positive”) from equally spaced spike bursts (designated as “negative”) (B) or for the reverse task (with the opposite positive/negative assignment; C).

Short-term dynamics improves differentiation of bursts with variable duration.

Next, based on our previous results with static synapses, we examined whether modulation of burst duration would further enhance classification success with dynamic synapses. We designed a biased burst duration classification task in which the “positive” and “negative” inputs had different burst durations with random spike timing (Fig. 5A). The positive and negative trains were generated by modifying the burst duration from a chosen (central) duration. The bias was defined as the difference between the positive/negative burst duration and the central duration; the positive and negative burst durations deviated from the central duration by equal and opposite amounts. This input structure allowed for controlled continuous modulation of the bias instead of the discrete set of distributions we explored previously. To compare the learning performance, the deviation in success rate from the condition of zero bias was determined. A model neuron with static inputs showed minor changes in classification (<3%) when burst duration was modified by up to 240 ms from a 600-ms central duration (Fig. 5B). In the case of dynamic synapses, however, a strong increase in successful classification was observed for either negative bias (for facilitating synapses) or positive bias (for depressing synapses) (Fig. 5B). This increased success was accompanied by a decline in success in the opposing classification task, in which each type of dynamic synapse performed no better than chance. This decrease in performance for opposing classification tasks led us to further examine neuronal classification with a nonhomogeneous mixture of synaptic dynamics. In particular, an equal mixture of facilitating and depressing synapses led to increased performance for both positive and negative bias (with 22% and 26% improvement, respectively, at the largest bias of ±240 ms examined), with little change in performance around zero bias (Fig. 5B). This result suggests that with the mixture of dynamic inputs the learning neuron has no a priori preference for either bias, but through adjustment of synaptic weights it can learn to respond selectively to both shorter and longer bursts.
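
To make the structure of this task concrete, the positive and negative training sets can be built as sketched below, reusing make_burst_pattern from the earlier sketch; the function name and defaults are our assumptions.

def biased_burst_task(central=600.0, bias=120.0, n_patterns=50, **kwargs):
    """Training sets for the biased burst duration task (hedged sketch).

    Positive and negative bursts deviate from the central duration by equal and
    opposite amounts; bias > 0 makes the positive bursts longer than the negative
    ones, and bias < 0 makes them shorter. Spike timing within each burst is random;
    remaining keyword arguments are passed to make_burst_pattern.
    """
    pos = [make_burst_pattern(burst_dur=central + bias, **kwargs) for _ in range(n_patterns)]
    neg = [make_burst_pattern(burst_dur=central - bias, **kwargs) for _ in range(n_patterns)]
    return pos, neg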

Fig. 5.

Role of short-term dynamics in differentiation of bursts with variable duration. A: spike times (vertical lines) of 20 inputs for positive (black) and negative (red) activity patterns. Patterns differentiating narrow bursts (negative bias; left) and wide bursts (positive bias; right) are displayed. B: classification success for discrimination of bursts with different burst durations, designed as a biased burst duration task. The learning success was measured relative to that at zero bias. C: learning success in a biased burst duration task as a function of burst duration. D: learning success in a biased burst duration task with nonhomogeneous mixture of input synaptic dynamics.

Distribution of input synaptic dynamics improves learning during burst firing.

We further examined how the neuron's ability to learn to discriminate bursts of different durations can be modulated by modifying the ratio of facilitating to depressing synapses and the “central” burst duration. Modifying the ratio of facilitating to depressing synapses shifted the success change toward the dominant synaptic dynamics in the mixture (Fig. 5D). When an equal mixture of facilitating and depressing inputs was considered, a short “central” burst duration led to increased learning success for the negative bias (Fig. 5C), similar to the performance of facilitating synapses alone. This facilitation-dominated behavior results from the presence of short ISIs, which increase the Pr and contribution of facilitating synapses and, at the same time, cause strong depression and a reduced contribution of depressing synapses. In contrast, a long central duration favors positive bias, similar to depressing synapses, since long ISIs result in weaker depression and thus higher Pr at depressing synapses, which then dominate the overall transmission (Fig. 5, C and D). Thus the distribution of synaptic dynamics across the input population, combined with appropriate burst structure, could strongly contribute to improved neuronal learning.

It is important to note that these multiple contributions to more successful classification can be combined to further optimize classification. For instance, a perfect (errorless) classification of spike bursts could be achieved when a model neuron with facilitating synapses was used to classify random from evenly spaced spike bursts with a negative bias and short burst duration (negative bias of 120 ms from a central burst duration of 300 ms). Similarly, a model neuron with only depressing synapses was also able to classify without errors equally spaced bursts from random bursts with positive bias and a relatively long central duration (positive bias of 240 ms around central duration of 1,200 ms). Thus the presence of synaptic dynamics in combination with modulation of spike burst statistics can lead to greatly improved and (under optimized conditions) even perfect spike timing learning.

Neural learning of spike timing with natural distribution of synaptic unreliability.

The above results suggest that a neuronal input containing a mixture of facilitating and depressing synapses is necessary for successful neuronal classification of both wide and narrow spike bursts. We examined the relevance of this finding to real neurons by considering an example of excitatory hippocampal synapses. In the hippocampus, excitatory CA3-CA1 synapses have a wide distribution of Pr and resulting synaptic dynamics (Dobrunz and Stevens 1997). We modeled realistic CA3 synaptic input onto a CA1 pyramidal cell and examined whether the physiologically relevant distribution of Pr allows successful classification of bursts with varying duration. The basal synaptic Pr was discretely sampled from the experimentally obtained distribution (Murthy et al. 1997). In our approximation, synapses with low Pr (<0.4) were modeled as predominantly facilitating (454 synapses) and synapses with high Pr (>0.4) as predominantly depressing (46 synapses). As in the above experiments, synaptic dynamics at each input was determined according to the input's basal Pr based on the models of STP that closely approximated experimentally recorded synaptic dynamics in excitatory and inhibitory synapses (Kandaswamy et al. 2010; Rotman et al. 2011; Tsodyks and Markram 1997) (see materials and methods for details). Using a biased burst duration classification task (with 11 spikes/burst), we found a robust improvement in learning in a wide range of central burst durations of 800–2,200 ms (Fig. 6A). Classification for the shorter bursts was similar to that with facilitating synapses only, as would be expected from having the vast majority (90%) of the synapses facilitating in our approximation. Classification for the longer bursts, on the other hand, was similar to that with depressing synapses only, for all but the strongest negative bias. These results suggest that a naturalistic distribution of input Pr permits improved learning for a much wider range of burst properties than is possible with a uniform distribution of synaptic unreliability and dynamics.
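
The assignment of synapse type by basal Pr can be sketched as follows; the pr_samples argument stands in for the discrete samples drawn from the Murthy et al. (1997) distribution, which we do not reproduce here, and the function name is ours.

import numpy as np

def assign_synapse_types(pr_samples, threshold=0.4):
    """Split sampled basal release probabilities into facilitating and depressing inputs.

    Inputs with basal Pr below the threshold are treated as predominantly facilitating
    and the rest as predominantly depressing, as in the approximation used in the text
    (454 facilitating vs. 46 depressing of 500 inputs).
    """
    pr_samples = np.asarray(pr_samples, dtype=float)
    return {
        "facilitating": pr_samples[pr_samples < threshold],
        "depressing": pr_samples[pr_samples >= threshold],
    }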

Fig. 6.

Learning with naturalistic distribution of input synaptic release probabilities. A: biased burst duration classification with naturally distributed release probabilities and dynamics. Bursts with 11 randomly distributed spikes in a 5-s simulated time window were used. The learning success was measured relative to that at zero bias. B: asymmetry index and shift of minimal performance point relative to performance at zero bias as a function of burst duration. Both parameters showed optimal (minimal) values for burst duration of 1,000 ms. C: classification success change measured for biased burst duration task around the “optimal” central duration of 1,000 ms.

To quantify the conditions at which learning performance is optimal (in the sense that the neuron's performance is improved over the widest range of input burst properties), we defined two measures: 1) the asymmetry index and 2) the minimal performance point. The asymmetry index is the absolute difference in classification success between discrimination tasks with maximal negative and maximal positive bias (set at a ±0.4 relative deviation from the central burst duration); the minimal performance point characterizes the asymmetry between tasks with negative versus positive bias and is defined as the absolute bias (in relative units) at which classification success is minimal. Using a biased burst duration classification task and a naturalistic distribution of input Pr, we found that both measures are optimal (minimal) at a burst duration of 1,000 ms (Fig. 6B). With a 1,000-ms “central” burst duration, corresponding to a mean firing rate of ∼11 Hz, the naturalistic distribution of synaptic dynamics allowed improved classification over a wide range of burst durations distributed around this central duration (Fig. 6C). Interestingly, we noted that this optimal central burst duration of 1,000 ms is comparable to the average burst duration recorded in vivo in hippocampal pyramidal cells during exploration (Fenton and Muller 1998; O'Keefe and Dostrovsky 1971). These results suggest that a wide distribution of synaptic Pr may contribute to widening the dynamic range of neural decoding during burst firing.
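
In code, these two measures reduce to the following (our formulation of the definitions above; bias holds the tested bias values in relative units and success the corresponding classification success rates):

import numpy as np

def asymmetry_index(bias, success):
    """Absolute difference in success between the maximal positive and maximal negative bias."""
    bias, success = np.asarray(bias), np.asarray(success)
    return abs(success[np.argmax(bias)] - success[np.argmin(bias)])

def minimal_performance_point(bias, success):
    """Absolute bias (in relative units) at which classification success is lowest."""
    bias, success = np.asarray(bias), np.asarray(success)
    return abs(bias[np.argmin(success)])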


Intrinsic stochasticity of synaptic transmission presents major challenges to our understanding of neural coding/decoding (Goldman et al. 2002; Guo and Li 2010; Kumar et al. 2010). Furthermore, the computational significance of synaptic unreliability remains poorly understood (Abbott and Regehr 2004; Maass and Natschlager 2000; Maass and Zador 1999). Here we used computational modeling to gain conceptual insights into the possible roles of synaptic unreliability and dynamics in the neuronal ability to learn spike timing-related tasks, tasks believed to be critical to sensory coding/decoding (Ainsworth et al. 2012). We found that our model neuron can learn to make spike timing-based decisions even under conditions of unreliable synaptic transmission, although with a reduced success rate. Such imperfect learning performance is amenable to coding strategies that could strongly improve learning of temporal code. Our results suggest that one such strategy is firing of high-frequency spike bursts instead of single spikes. Importantly, burst firing is particularly beneficial to neuronal learning performance in the presence of short-term synaptic dynamics. Dynamic synapses permit learning and classification of a variety of burst properties, suggesting that bursts could represent a complex encoding strategy beyond simply their timing. We further found that while a singular type of synaptic dynamics, either facilitating or depressing, improves learning performance for a specific type of spike timing statistics or task, a wide distribution of input Pr and thus of short-term synaptic dynamics is particularly beneficial for successful learning of a variety of spike timing modalities. We confirmed the applicability of this observation to realistic neurons by considering an experimentally determined distribution of synaptic unreliability in the hippocampus. These results suggest that synaptic unreliability and dynamics may play a major role in the neuron's ability to learn spike timing during neural decoding, and provide one conceptual advantage of widely distributed synaptic Pr in neural decoding.

Spike bursts and strategies for improved learning in spike timing-dependent tasks.

Our results indicate that synaptic unreliability significantly reduces the learning success in spike timing-related classification tasks. Since synaptic unreliability is typical in CNS neurons, what are the coding strategies that neurons may use to overcome this intrinsic “limitation”? We found that one such strategy could be firing of spike bursts instead of single spikes. Indeed, burst firing is a common feature of many central neurons (Lisman 1997), and bursts have been previously suggested as a useful coding strategy owing to their much more reliable transmission compared with that of a single spike (Kepecs and Lisman 2003; Klyachko and Stevens 2006; Lisman 1997). Recent studies indeed suggested that burst frequency and its temporal filtering can be used to encode/decode sensory information (Izhikevich et al. 2003; Oswald et al. 2007). The presented work generalizes the notion of burst firing as a coding strategy beyond burst frequency coding by demonstrating that many burst properties such as spike timing statistics, frequency, and duration can carry information that can be discriminated and learned by a model neuron.

A trade-off in using bursts as a coding strategy is that stochastic transmission of multiple input spikes introduces additional “noise” in the postsynaptic response timing and amplitude. Neuronal classification with noisy spike timing has been studied previously (Manwani et al. 2002), particularly for the tempotron (Gutig and Sompolinsky 2006), and misclassification was shown to increase with noise amplitude.

Our study, by virtue of examining stochastic transmission of spike bursts, exposed the tempotron learning scheme to a much larger timing variation (noise), in addition to the resulting large variations in postsynaptic response amplitude. We observed that in the case of stochastic but static synapses, learning improves somewhat with burst firing even in the presence of large spike timing noise. Yet the advantages of burst firing were apparent predominantly as the burst was shortened and thus resembled a single spike, since improved classification was apparent mainly for high-frequency (>30 Hz) firing. We found, however, that this limited improvement in learning is characteristic only of static synapses, whereas the presence of inhomogeneous dynamic synapses makes the neuron capable of much-improved learning of a wide range of spike timing-dependent tasks during burst firing.

Synaptic dynamics as a mechanism of improved learning of spike timing rules.

The requirement of multispike firing as a coding strategy introduces new degrees of freedom associated with the spike timing within the burst. While neural classification with static synapses appears not to be able to take advantage of this additional encoding strategy, our results suggest that the presence of synaptic dynamics provides neurons with a capability to exploit this factor as a means for improved learning. For instance, classification with dynamic synapses showed increased learning success in distinguishing spike bursts with two different spike distributions, random and evenly spaced, with facilitating and depressing types of synaptic dynamics showing preference for one or the other type of spike timing statistics (Figs. 3 and 4). Similarly, burst duration can be used in addition to burst timing to improve learning success when transmitted through dynamic synapses (Fig. 5). Moreover, transmission through a population of synapses with mixed dynamics allowed further improved classification of bursts with variable width for both short and long bursts (Fig. 5). These results support and extend the recent modeling studies based on reliable synapses showing improved temporal classification with nonhomogeneous or random distribution of synaptic dynamics over static synapses (Carvalho and Buonomano 2011). These results further suggest that facilitating and depressing synapses can effectively filter either long or short bursts. Filtering capabilities of dynamic synapses were previously suggested to play a role in neuronal computation (Abbott and Regehr 2004; Izhikevich et al. 2003), in which complex filtering was hypothesized to originate in complex synaptic dynamics, which comprises both facilitation and depression in the same synapse. Our study suggests a more nuanced picture of dynamic synapse capabilities when a mixture of inputs with depressing and facilitating synapses is considered. Here we showed that a neuron with such a mixture of dynamic inputs has filtering capabilities that are a combination of the capabilities of its inputs. Complex filtering therefore does not necessarily require complex synaptic dynamics at individual synapses.

We note that the single-compartment nature of the tempotron model limits our analysis of temporal learning to synaptic processes and does not address dendritic computation. The tempotron was chosen in our analysis as a useful model that provides a systematic approach for examining temporal learning, with no alternative multicompartment-based approach to temporal learning available, to our knowledge. Dendritic computations have indeed been suggested to play important roles in many forms of neuronal information processing (London and Hausser 2005), and they provide an additional level of complexity that extends the presented synaptic processing.

It is also important to note that in this study we primarily aimed to examine how synaptic unreliability and dynamics may influence neuronal learning, and not necessarily to find the optimal firing pattern for successful learning. This means that our simulated experiments were designed to separate the multiple effects controlling spike timing classification success. Specifically, since the number of spikes or firing rate influenced classification success, we compared trains with the same firing rate and chose the range of burst widths that minimized the direct influence of burst duration on learning performance. The spike timing schemes were also limited to broad distributions to avoid changing the “effective burst duration” and introducing ambiguous interference between burst duration and spike timing.

Distribution of synaptic unreliability as means for improved learning of spike timing-based decisions.

In many brain areas, populations of morphologically similar synapses, such as excitatory CA3-CA1 synapses in the hippocampus, have a wide distribution of Pr and of corresponding short-term dynamics (Dobrunz and Stevens 1997; Murthy et al. 1997). The computational benefits of such heterogeneity in synaptic properties are poorly understood. We found that low-Pr facilitating synapses and high-Pr depressing synapses are complementary in their selectivity during burst discrimination tasks for both burst spike timing and duration. Our results thus predicted that a mixture of input synaptic Pr and of corresponding dynamics should lead to improved learning for a wide range of input burst statistics. Indeed, we found this to be the case for a wide range of facilitating-to-depressing synapse ratios and, most importantly, when synaptic dynamics was modeled based on a realistic distribution of synaptic unreliability in the hippocampus (Fig. 6). Moreover, we found that the extent of improvement depends on the ratio between the number of facilitating and depressing inputs and on the burst properties. In the case of the excitatory hippocampal synapses we considered, this ratio was ∼90%/10%, and major improvements were observed for burst durations distributed around ∼1 s. Interestingly, this “optimal” burst duration is close to the mean duration of hippocampal place field discharges, the spike bursts of hippocampal place cells that are believed to carry information about the animal's position in the environment (Lisman 1997; O'Keefe and Dostrovsky 1971). Thus our results suggest one conceptual advantage of a wide range of synaptic Pr, in that it may contribute to expanding the dynamic range of neural decoding during burst firing. In addition, these results support the idea that a certain correspondence between neuronal burst firing patterns and the distribution of synaptic Pr (and thus of synaptic dynamics) may have evolved, in part, to optimize the neuron's ability to process temporal code, particularly during burst firing. Since we observed improved learning with a wide range of facilitating-to-depressing input synapse ratios, this principle may not be limited to the hippocampus but could be common in many brain areas. If this is the case, our results predict that neuronal burst firing statistics would be accordingly modified in a given circuit to maximize learning, given the specific dynamics of the inputs.


This work was supported in part by grants to V. A. Klyachko from the National Institute of Neurological Disorders and Stroke (R01-NS-081972) and the Whitehall Foundation.


No conflicts of interest, financial or otherwise, are declared by the author(s).


Author contributions: Z.R. and V.A.K. conception and design of research; Z.R. performed experiments; Z.R. analyzed data; Z.R. and V.A.K. interpreted results of experiments; Z.R. prepared figures; Z.R. and V.A.K. drafted manuscript; Z.R. and V.A.K. edited and revised manuscript; Z.R. and V.A.K. approved final version of manuscript.


We thank Dr. Valeria Cavalli and Diana Owyoung for their constructive comments on the manuscript.

