Journal of Neurophysiology

Temporally Precise Cortical Firing Patterns Are Associated With Distinct Action Segments

Tomer Shmiel, Rotem Drori, Oren Shmiel, Yoram Ben-Shaul, Zoltan Nadasdy, Moshe Shemesh, Mina Teicher, Moshe Abeles

Abstract

Despite many reports indicating the existence of precise firing sequences in cortical activity, serious objections have been raised regarding the statistics used to detect them and the relations of these sequences to behavior. We show that in behaving monkeys, pairs of spikes from different neurons tend to prefer certain time delays when measured in relation to a specific behavior. Single-unit activity was recorded from eight microelectrodes inserted into the motor and premotor cortices of two monkeys while they were performing continuous drawinglike hand movements. Repeated scribbling paths, termed drawing components, were extracted by data-mining techniques. The set of the least predictable relations between drawing components and pairs of neurons was determined and represented by one statistic termed the relations score. The chance probability of the relations score was evaluated by teetering the spike times: 1,000 surrogates were generated by randomly teetering the original time of each spike in a small window. In nine of 13 experimental days the precision was better than 12 ms and, in the best case, spike precision reached 0.5 ms.

INTRODUCTION

Most of what is known about the response properties of neurons and the functional architecture of the cortex is based on studying the changes in the firing rates of neurons. Even studies based on functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) draw on the assumption that when a group of neurons engage in a certain function, their average firing rate increases and thus there will be a concomitant increase in their oxygen demand and blood flow.

The fact that each cortical neuron is affected by thousands of other cells, yet each neuron typically fires at a low rate, seems to support the notion that the exact time of each spike is random. In the motor cortex, from which our measurements were derived, a response is typically characterized by counting the number of spikes in 20- to 100-ms epochs (e.g., see Georgopoulos et al. 1982; Kakei et al. 1999; Moran and Schwartz 1999). However, there are numerous reports of precise firing sequences in the cortex in which the interval between spikes ranges from precise synchrony to several seconds (Abeles 1982; Abeles et al. 1983; Dayhoff and Gerstein 1983; Frostig et al. 1990; Hatsopoulos et al. 1998; Klemm and Sherry 1981; Landolt et al. 1985; Legendy and Salcman 1985; Lestienne et al. 1987, 1996, 1999; Nadasdy et al. 1999; Prut et al. 1998; Riehle et al. 1997; Tetko and Villa 2001; Villa and Fuster 1992; Villa et al. 1990, 1999). Some quantitative efforts to define the temporal accuracy of the activity of cortical neurons have produced a figure of around 10–25 ms (Nadasdy et al. 1999; Victor and Purpura 1998).

In recent years it has been claimed that what seemed to be precise firing sequences could be accounted for by chance if the appropriate firing statistics are used (Baker and Lemon 2000; Oram et al. 1999). These testing methods can be applied when there is a clear indication of the time to which the neuron responses can be related or when the interspike intervals (ISIs) of a single neuron can be accurately described by a gamma density of a fixed order throughout the recording period. Neither of these scenarios applies to the data analyzed here.

An alternative way of approaching the statistics issue is based on the following experiment suggested by Bienenstock, which begins with a hypothetical illustrative scenario. Suppose there were a drug that could create a random delay between the time that the membrane potential of a neuron hits threshold and the actual time that the neuron emits a spike. If, after consuming this drug, all our thought processes were incapacitated, then precise timing must be very important. If, on the other hand, nothing went wrong, the exact spike time would have no significance. This idea was formalized by Geman and Bienenstock in the following way. The null hypothesis is that spike time is random up to a time window of width J. If this is true, then teetering each spike at random within ±J/2 should not affect any statistic extracted from the spike trains. To test the probability that this null hypothesis holds, we repeatedly teeter the spike trains within J, recompute the statistic after each teetering, and obtain the distribution of these statistics for the teetered data. If the value of the statistic derived from the real data falls at the Xth percentile, then the probability that the data are consistent with the null hypothesis is (1 − X).

Our data are taken from recordings from several electrodes in the motor cortex of two monkeys while they performed continuous drawinglike hand movements. Testing the above null hypothesis calls for studying the relations between thousands of possible firing sequences and tens of motion components of the same type for both real data and thousands of teetered data. This appears to be a formidable computing task. However, data-mining techniques (Mannila et al. 1995, 1996) can efficiently carry out such calculations.

A preliminary report of this study that described only part of the results was published in Shmiel et al. (2005). Note that the results in this article were based on data recorded from two monkeys in two different experiments (see description in methods).

METHODS

The experiments

Two Macaca fascicularis monkeys were trained to hold a two-joint manipulandum and perform continuous drawinglike motions in the horizontal plane. The monkeys held the manipulandum that was placed under a white screen where they could not see it. A yellow cursor projected onto the screen indicated the position of the monkey's hand. In one of the monkeys (monkey U) free scribbling was induced by the following task. The working space was tiled by 19 invisible hexagons, one of which was randomly selected as an invisible target. When the monkey hit the target, a short beep was sounded, the monkey obtained a juice reward, and the target jumped at random to another location. In this way the monkey was encouraged to produce continuous movements. In the second monkey (monkey F) the task started by showing a circular initial target. Once the monkey placed the cursor inside the initial target one of 40 curved trajectories was displayed in gray. An elongated wormlike target was shown in front of the cursor. Once the monkey entered the cursor into the “tail” of this worm, the worm advanced slightly along the trace. Thus the monkey was induced to “chase” the wormlike target along the trace. When it reached the end, the monkey was reinforced by a drop of juice. The manipulandum position was sampled 100 times/s.

After training, a recording chamber and a head holder were attached to the skull under full surgical anesthesia in aseptic conditions. The chamber position was verified by MRI. Proximal arm regions were identified by cortical microstimulation and the border between M1 and PreM by histology. After recovery, daily recordings of cortical activity with eight individually movable microelectrodes commenced. After recording for several weeks, the monkeys were trained to alternate between trials of scribbling and trials of a standard center-out motion. Data were initially stored in the computer memory and after 10 rewards, or 1 min with no reward, were transferred to disk files. Each file contained only trials with similar behavior.

Here, only data obtained for blocks of continuous drawing motion were analyzed. The statistics ideally require long stretches of stationary recording with multiple neurons. Therefore we looked for recording days with penetrations in which many cells showed arm-related activity based on intracortical microstimulations (train duration 100–200 ms; 300 pulses/s, each lasting 0.2 ms; amplitude ≤50 μA) and/or passive manipulations of arm joints. Only days with ≥400 rewards were selected. All isolated units were tested for stability of total spike count in a fixed window ranging from 1,000 ms before the reward event until the reward. For each unit we used only a continuous range of files in which activity was judged to be stable, in the sense of only small variations of mean firing rates.

Based on this criterion, eight recording sessions from monkey U and five recording sessions from monkey F were selected for further analysis. For three of these sessions spike sorting was done on-line and time resolution was 1 ms. For the other 10 days, spike shapes were sorted off-line (Alpha-Sort, Alpha Omega, Nazareth, Israel) and stored at a resolution of 0.1 ms. Only shapes regarded as well isolated or as a mixture of two spike shapes at most were used. Units whose averaged firing rate was below two spikes/s were not analyzed. These days contained three to 21 isolated units recorded through two to eight electrodes that showed stationary activity in a range of five to 136 files.

All handling and treatments were in compliance with the law, were approved by the Institutional Ethics Committee, and were supervised by a veterinarian.

Analysis overview

The study presented here focuses on analyzing the relations between hand motion and patterns of spikes. In our analysis we focused on finding relations that involve exact time delays between spikes. The statistical significance of the results was evaluated by comparing with the results from a large set of surrogate spike trains having the same firing-rate undulations as the actual data. The number of possible relations is large. In a typical day we had over 8,000 pairwise intervals that had to be studied for tens of motion patterns. All these had to be analyzed for both the real data and thousands of surrogate spike trains.

In each experimental day we first detected the set of the most frequently repeating patterns of drawing. Then we computed all the possible pairwise cross-correlations near the beginning of each of these drawings. We estimated the probability that the outstanding bins could have arisen by chance, rather than from precise timing control, by teetering (more details below). Finally we summarized the data by a single number we term the relations score. Intuitively the relations score grows larger as the chance probabilities of timing relations (between pairs of spikes and drawing patterns) become smaller. The next question was whether its value was higher than what we would expect from random data that have similar firing rate modulations. Thus to evaluate the significance of the computed relations score for the real data, we used an independent set of 1,000 more surrogates, for which we computed the relations scores following exactly the same procedure. By comparing the relations score of the real data with the 1,000 surrogate relations score values, we estimated the statistical significance of the real relations score. The details of this analysis are given below.

Drawing data analysis

The main goal of the drawing data analysis was to find repeating patterns of drawing, using data-mining techniques (Mannila et al. 1995, 1996). In computer science there are many algorithms for detecting patterns in data. A major family of such algorithms comes from the field of data mining, where the input data can be interpreted as a series of events, each with an associated time. Variations of common data-mining models usually assume an input of the form 〈T1, E1〉, 〈T2, E2〉, … , 〈Tn, En〉, where Ei is the ith event in some described activity and Ti is its occurrence time. Efficient methods have already been developed to solve such problems. The results given in this article are based on the algorithms for discovering serial episodes, as described below.

A serial episode S is an ordered set of events, denoted Ej1 → Ej2 → … → Ejk. Given a time window W, we say that S occurs at time t if the following two conditions hold.

1) The event Ej1 occurs at time t.

2) The events Ej2, … , Ejk occur in this order within the time interval [t, t + W]. Note that they can be separated by other events.

The frequency of S, noted as freq(S), is defined as the maximal number of nonoverlapping occurrences of S. A serial episode whose frequency has crossed some predefined threshold is called frequent. Given a series of events, a time window W, and a frequency threshold f, the problem of discovering serial episodes is the problem of finding all serial episodes that are frequent with respect to W and f.
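
The occurrence and frequency definitions above can be illustrated with a minimal Python sketch. The greedy left-to-right scan below (function and variable names are ours, and it is a simplification of the published serial-episodes algorithm) counts non-overlapping occurrences of a serial episode within a window W:

```python
def count_occurrences(events, episode, window):
    """Greedily count non-overlapping occurrences of a serial episode.

    events: list of (time, label) pairs sorted by time.
    episode: ordered labels Ej1 -> ... -> Ejk.
    window: maximal time span W of one occurrence.
    """
    count, i, n = 0, 0, len(events)
    while i < n:
        t0, label = events[i]
        if label == episode[0]:
            # try to complete the episode within [t0, t0 + window];
            # intervening events with other labels are allowed
            pos, j = 1, i + 1
            while j < n and events[j][0] <= t0 + window and pos < len(episode):
                if events[j][1] == episode[pos]:
                    pos += 1
                j += 1
            if pos == len(episode):
                count += 1
                i = j  # occurrences must not overlap
                continue
        i += 1
    return count
```

Note that a greedy scan approximates freq(S); the maximal non-overlapping count of the formal definition can require a more careful search.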

For this purpose, the continuous drawings were converted into sequences of events (drawing events). A drawing event was marked as occurring at the time at which a certain drawing property changed from one range of values to another. The property itself could be arbitrarily chosen. However, in this research, we quantized all possible drawing directions into 12 sections of 30° each, and all possible drawing velocities into 10 sections of 20 cm/s each (i.e., 0–20, 20–40, … , 180–200). Transitions from one section to another were considered drawing events (for both types of sections). For example, a drawing event was said to occur when the drawing direction varied from 27 to 32° or when the drawing velocity varied from 101 to 99 cm/s. In one experimental day there could be hundreds of such transitions. Had we based our transitions on criteria other than direction and velocity (e.g., acceleration or curvature), a different set of transitions would have emerged.
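
As an illustration of this quantization step, the following Python sketch (our own simplification, ignoring the pruning parameter introduced below) converts a sampled direction signal into section-transition events:

```python
def direction_events(angles_deg, section_width=30):
    """Convert a sampled direction signal into section-transition events.

    angles_deg: drawing directions in degrees, one per sample.
    Returns a list of (sample_index, (old_section, new_section)) events,
    emitted whenever the direction crosses a 30-degree section boundary.
    """
    events = []
    prev = int(angles_deg[0] // section_width) % 12
    for i, a in enumerate(angles_deg[1:], start=1):
        sec = int(a // section_width) % 12
        if sec != prev:
            events.append((i, (prev, sec)))
            prev = sec
    return events
```

Velocity events follow the same scheme with 20-cm/s sections.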

To detect repeating patterns among these transitions on a timescale, we implemented the serial-episodes algorithm (Mannila et al. 1995, 1996). The detected repeating patterns were called drawing components, which consisted of either direction transitions or velocity transitions. A mixture of the two types in the same pattern was not allowed, but a single pattern might contain many transitions of the same type.

During the translation of the data into a series of transitions, an additional parameter was used for pruning indecisive transitions. The idea behind this parameter is that a nominal crossing of a section boundary is sometimes not sufficient evidence of a genuine transition. As in a Schmitt trigger, we required that the trend of the transition continue beyond the boundary after the crossing was first detected. In practice, this parameter is a real number between 0 and 1, which determines how far into the target section the signal must go to show conclusively that the transition is not noise. If it goes deep enough, a transition is recorded at the time the signal first entered the target section. For example, if we set this parameter at 25%, a transition between section [0, 20] and section [20, 40] is noted only if the channel reaches (or exceeds) value 25 within the target section, where 25 = 20 + 0.25(40 − 20). In this way we avoided generating transitions that are caused by noise inherent to the recorded channels.
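
A hysteresis rule of this kind can be sketched as follows; the function below (names are ours) confirms an upward crossing into the adjacent section only once the signal penetrates it by the given depth, and stamps the transition at the first sample inside the target section:

```python
def confirmed_transitions(values, boundary, width, depth=0.25):
    """Detect upward crossings of `boundary` into the section
    [boundary, boundary + width], confirmed only when the signal
    reaches boundary + depth * width (Schmitt-trigger style)."""
    threshold = boundary + depth * width
    events = []
    entered_at = None   # first sample inside the target section
    confirmed = False   # has the current excursion been confirmed?
    for i, v in enumerate(values):
        if v < boundary:
            # fell back below the boundary: the crossing was noise
            entered_at, confirmed = None, False
        else:
            if entered_at is None:
                entered_at = i
            if not confirmed and v >= threshold:
                events.append(entered_at)  # record the entry time
                confirmed = True
    return events
```

With depth = 0.25, boundary = 20, and width = 20 this reproduces the [0, 20] to [20, 40] example in the text.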

Figure 1 illustrates typical direction and velocity drawing components.

FIG. 1.

Examples of 2 drawing components; 30 s of drawing is shown in each box. Hand position was sampled 100 times/s (dots in the drawing). Monkey mostly drew in a counterclockwise direction. A: occurrences of the drawing component: transitions of drawing direction from a range of 180–210° to a range of 210–240° are marked by thicker dots. B: occurrences of the drawing component: transitions of drawing velocity from a range of 20–40 cm/s to a range of 0–20 cm/s are marked by thicker dots.

Relations between drawing components and pairs of neurons

Once the occurrence times of the most frequently repeating drawing components based on direction and velocity transitions had been obtained (for a given experimental day), we turned to finding relations between these components and pairs of neurons. Formally, a relation is defined as a triple 〈C, n1, n2〉, where C is a drawing component and n1, n2 are different neurons. To simplify our analyses we focused on only the 15 most frequent components based on drawing direction and the 15 most frequent components based on drawing velocity (the more frequent the component, the more spike activity there is to analyze near its occurrences). For each such possible relation we tabulated the occurrences of each possible time interval between a spike from n1 and a spike from n2, adjacent to all occurrences of C. Because the brain activity is expected to precede hand motion, the exact time regions in which we counted these intervals were determined by trying several 100-ms offset windows before each occurrence of C. Ten different offset windows were checked, starting at offsets 100, 200, … , 1,000 ms, respectively, before each occurrence of C. Formally, if 1 ≤ k ≤ 10 is the index of the offset window that we want to try, and T1, T2, T3, … , Tn are the occurrence times of C, then for each time interval delta we counted the number of windows of the form [Ti − k × 100 ms, Ti − (k − 1) × 100 ms], where i = 1, 2, … , n, in which n1 emitted a spike and, after an exact time delay of delta, n2 emitted a spike as well. Note that only the first spike (from n1) is required to be within the offset window. For example, if C occurs at times 500, 2,000, and 7,500 ms, and we want to count an interval of 28 ms for the first predefined offset window, we need to count how many of the time windows [400 ms, 500 ms], [1,900 ms, 2,000 ms], [7,400 ms, 7,500 ms] contain a spike of n1 followed 28 ms later by a spike of n2.

The total number of different intervals between each pair of neurons was limited to 50, and the time units in which they were measured were either 1 or 2 ms (depending on the recorded time resolution of the analyzed day). Data for 13 recording days were analyzed. On 10 of these days spike times were recorded at a resolution of 0.1 ms and on the three others at a resolution of 1 ms. For the first group, time intervals were measured in units of 1 ms, and for the second group in units of 2 ms. Thus for the 3 days in the second group, the 50 possibilities for time intervals were: {0–1 ms, 2–3 ms, … , 98–99 ms}. This is equivalent to computing the cross-correlation between the two spike trains with bins of 1 (or 2) ms and for delays 0 to 50 (or 0 to 100) ms.
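
The counting rule can be sketched in Python as follows, with times in integer milliseconds; each interval delta is counted at most once per offset window, and only the first spike must fall inside the window (names and the list-based bookkeeping are ours):

```python
from bisect import bisect_left, bisect_right

def interval_counts(spikes1, spikes2, occ_times, k, max_delta=50):
    """For each interval delta (ms), count the number of 100-ms
    offset windows [T_i - k*100, T_i - (k-1)*100] that contain a
    spike of n1 followed exactly delta ms later by a spike of n2.

    spikes1, spikes2: sorted spike times (integer ms) of n1, n2.
    occ_times: occurrence times T_i of the drawing component C.
    k: index of the offset window (1..10 in the text).
    """
    s2 = set(spikes2)
    counts = [0] * max_delta
    for T in occ_times:
        lo, hi = T - k * 100, T - (k - 1) * 100
        seen = set()
        # only the first spike (from n1) must lie inside the window
        for t1 in spikes1[bisect_left(spikes1, lo):bisect_right(spikes1, hi)]:
            for d in range(max_delta):
                if (t1 + d) in s2:
                    seen.add(d)
        for d in seen:        # each delta counted once per window
            counts[d] += 1
    return counts
```

This reproduces the 28-ms example in the text: a spike of n1 at 450 ms inside the window [400, 500] followed by a spike of n2 at 478 ms increments the 28-ms bin.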

Spike sorting by shapes that were recorded through a single electrode can result in mislabeling. Moreover, precise intervals generated by intracellular properties could be confused with precise timing generated by the organization of activity in the network. Therefore we included only neural components consisting of two neurons that were recorded through different electrodes. For example, if we have three electrodes recording spikes from two neurons each, there are 30 − 6 = 24 valid ordered pairs of neurons (note that the pairs 〈n1, n2〉 and 〈n2, n1〉 are different). Altogether, this yields 24 × 50 = 1,200 potential intervals, some of which may never occur.
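
The pair count generalizes directly to any electrode configuration; a small sketch:

```python
def valid_pairs(units_per_electrode):
    """Number of ordered cross-electrode pairs: all ordered pairs of
    units, minus the ordered pairs recorded on the same electrode."""
    total = sum(units_per_electrode)
    same = sum(u * (u - 1) for u in units_per_electrode)
    return total * (total - 1) - same
```

For three electrodes with two units each this gives 6 × 5 − 3 × 2 = 24, as in the text.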

Given a relation R = 〈C, n1, n2〉 and an offset window W, we used the notations CC(R,W) and CC_peak(R,W) to represent the frequencies of the interspike intervals and the maximal frequency, respectively. Formally, for each 0 ≤ d ≤ 49, the dth bin of CC(R,W) is the total number of occurrences of C around which (with respect to W) the time delay d exists between some spike of n1 and some spike of n2. Following this notation, CC_peak(R,W) is the maximal value of CC(R,W) over all time intervals. Figure 2 illustrates how intervals are counted in three different windows of 100 ms assuming a resolution of 1 ms.

FIG. 2.

Example of counting spike intervals in 3 different windows. Small higher/lower linelets represent spikes from 2 different neurons n1/n2, respectively. In this example counting intervals for the ordered pair (n1, n2) was limited to 3 occurrences of some drawing component. All existing intervals are marked below the time axis and are aligned with the time they occur. Note that only the first spike of each counted interval (the n1) is required to occur within the relevant window (e.g., the rightmost window). Because intervals are not counted more than once for a certain window, the total frequency of the 19-ms interval is 3. Both the 40- and 41-ms intervals have a frequency of 2, unlike the other intervals (5, 39, and 83 ms), which have a frequency of 1. Had intervals been counted assuming a resolution of 2 ms (instead of 1 ms), both 40- and 41-ms intervals would have fallen in the same bin (with a frequency of 3).

Preparing expected distribution for CC_peak based on surrogate trains

Once CC(R,W) and CC_peak(R,W) were computed for all possible relations in all 10 predefined windows, our next goal was to determine the likelihood of obtaining CC_peak(R,W) by chance, given the null hypothesis that there is nothing precise about the spike times. In other words, given a relation R and an offset window W, we wanted to estimate the probability of CC_peak(R,W) to occur by chance. For this purpose we used the idea of teetered data (Date et al. 2000; Hatsopoulos et al. 2003). We generated surrogate spike trains by randomly teetering the time of each spike within 10 ms around its real time. Formally, for the data recorded at a resolution of 0.1 ms, suppose the time of a certain spike in the original data is t; then its time in the teetered data is uniformly distributed in [t − 4.9 ms, t + 5 ms] up to a resolution of 0.1 ms. For the data recorded at a resolution of 1 ms its time is uniformly distributed in [t − 4 ms, t + 5 ms] up to a resolution of 1 ms. Note that teetering destroys the accurate times of original spikes while preserving the averaged firing rate of each neuron around a certain time point.
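
For the 0.1-ms data, the teetering step can be sketched as follows, with spike times as integers in units of 0.1 ms, so that a jitter of −49 … +50 units reproduces the [t − 4.9 ms, t + 5 ms] range in the text:

```python
import random

def teeter(spike_times, half_width=50, rng=random):
    """Generate one surrogate train by jittering each spike uniformly
    over the 100 values t - 49 ... t + 50 (units of 0.1 ms), which
    preserves the slow firing-rate undulations of the neuron."""
    return sorted(t + rng.randint(-half_width + 1, half_width)
                  for t in spike_times)
```

One thousand independent calls to this function yield the surrogate set used to build CC_peak_distribution(R,W).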

After the teetered data were created, we used the values to again compute CC(R,W) and then CC_peak(R,W) for every R and for every W, following the same steps as before (recounting intervals during the same time slices, etc.). We repeated this process 1,000 times, creating an estimated distribution of 1,000 values for CC_peak(R,W). We labeled this distribution CC_peak_distribution(R,W) and it was used to compute the relations score in the next section. Figure 3 shows CC(R,W) and CC_peak_distribution(R,W) for one of the significant cases in our data. Note that the value of the peak of the real data (dashed line in Fig. 3) stayed outside the distribution even when the data were teetered 1,000,000 times. In all the cases used for further analysis the peaks for the real data were outside the distribution or in its far tail.

FIG. 3.

Probability of a peak in cross-correlation. Top: cross-correlation between units (4, 2) and (2, 4), which had a peak at 25 ms. Its height was 30 counts. What is the probability of obtaining this value by chance if the spike times are not precise? Actual data were teetered 1,000 times within a window of J = 10 ms. Each time, the highest peak of the cross-correlation was noted. Bottom: distribution of these 1,000 noted peaks. Broken line is the peak value obtained in the real data (top). Clearly the probability of a chance hit is ≪0.001.

The significant relation shown in Fig. 3 was found for the offset window W = [−0.6, −0.5], i.e., it starts 0.6 s before the onset of the drawing component and lasts for 0.1 s. This drawing component consisted of a single transition in the drawing direction from [120°, 150°] to [150°, 180°]. An extremely significant time interval of 25 ms was found (measured at a resolution of 1 ms) between spikes from two specific neurons, where the first spike was within the offset window (related to the occurrences of the drawing component). Figure 4 provides a dot display of these two neurons within the offset window.

FIG. 4.

Dot display showing occurrences of a frequent interspike interval around occurrences of a drawing component in one of the relations. Top: firing times of unit (4, 2). Bottom: firing times of unit (2, 4). Each linelet represents a single spike. Each of the 30 lines in both panels shows spikes occurring around the appropriate 30 occurrences of the drawing component preceded at least once by the 25-ms interval in the 0.6- to 0.5-s window before the start of the drawing component. Rasters were aligned on the first spike of the selected interval. If the same interval appeared more than once around the same movement, the last occurrence was used for alignment. Onset time of the drawing component appears in blue. Trials are sorted by increasing delays between the neural intervals and the drawing components. Gray line in each panel represents the average firing rate considering all 30 rasters.

Evaluating the relations score for the original data

For each R and W, we wanted to roughly estimate the probability of obtaining the computed value CC_peak(R,W) by chance in the original data. We labeled this probability CC_peak_prob(R,W) and estimated it by the geometrical average of the two following probabilities.

1) CC_peak_prob1(R,W)—the probability of obtaining the value CC_peak(R,W) or more, assuming that the other intervals of CC(R,W) in the original data follow a Poisson-like distribution.

2) CC_peak_prob2(R,W)—the probability of obtaining the value CC_peak(R,W) or more, assuming a Normal-like distribution of CC_peak_distribution(R,W), the heights of the peaks from all teetered data. Note that if that distribution contains K > 0 values that are greater than or equal to CC_peak(R,W), then CC_peak_prob2(R,W) is simply estimated by K/1,000.

These two probabilities might be dependent, so we used their geometrical average as a rough estimation to measure the precise timing relations between drawing components and pairs of neurons. As mentioned earlier in the Analysis overview section, this imprecise estimation is of little importance because at the very end we repeat the exact computations for another independent set of 1,000 teetered data.
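
A sketch of this estimate is given below, under our own simplifying assumptions about the two tails (the exact tail conventions used in the analysis may differ):

```python
import math

def cc_peak_prob(peak, cc_bins, surrogate_peaks):
    """Rough chance probability of a cross-correlation peak: geometric
    mean of (1) a Poisson upper tail with lambda set to the mean of
    the other bins and (2) a Normal upper tail fitted to the
    surrogate peak heights."""
    # Poisson-like tail P(X >= peak); clamp against rounding error
    others = list(cc_bins)
    others.remove(peak)
    lam = sum(others) / len(others)
    p1 = max(1.0 - sum(math.exp(-lam) * lam ** k / math.factorial(k)
                       for k in range(peak)), 0.0)
    # Normal-like tail from the surrogate peak-height distribution
    n = len(surrogate_peaks)
    mu = sum(surrogate_peaks) / n
    sd = (sum((x - mu) ** 2 for x in surrogate_peaks) / n) ** 0.5
    p2 = 0.5 * math.erfc((peak - mu) / (sd * math.sqrt(2)))
    return math.sqrt(p1 * p2)
```

The geometric mean is a deliberately conservative combination, since the two probabilities need not be independent.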

Conceptually, given R = 〈C, n1, n2〉, the value CC_peak_prob(R,W) provides a rough estimation of how unlikely it would be to obtain a delayed synchronization between spikes of neurons n1 and n2, within the offset window W, around the occurrence times of C. Because the computation of CC_peak_prob(R,W) relies on two approximations to Normal and to Poisson distributions, its evaluated value may at times not reflect the real probability of the relation R to occur by chance. Thus given an offset window W, we did not want to analyze CC_peak_prob(R,W) for all possible relations R, but only for those that had the highest likelihood of being estimated properly. In our analysis the only relations used were the 2% in which the mean value of CC(R,W) was the highest, considering its average in all teetered data. The rationale for this idea is that distributions consisting of discrete integer values have a better chance of being estimated properly when their mean value is relatively large. If the mean approaches zero, the total number of distinct distribution values becomes small (because they must be integers). Thus statistical metrics such as the SD become less reliable. Using relations that have relatively higher averages decreases the estimation errors.

Once we had all the estimated probabilities CC_peak_prob(R,W) for each used relation R and for each predefined offset window W (for a given experimental day), a statistic called the relations score was extracted. The relations score sums up the likelihoods of the estimated relations into a single score value. Intuitively, the relations score should become larger as the likelihood of the CC_peak(R,W) values decreases for this day. Note that its evaluation is based solely on the window in which the found relations were most unlikely, using the following steps.

1) For each predefined offset-window W compute relations-score(W) as follows.

1.1) Sort all relations into buckets, where each bucket contains relations having the same pair of neurons and the same peak interval (considering W).

1.2) For each bucket, remove all relations except the one of the lowest CC_peak_prob(R,W) probability.

1.3) Sort the remaining relations in ascending order by their CC_peak_prob(R,W) probabilities.

1.4) If there are more than 10 relations, remove the kth relation for all k > 10.

1.5) relations-score(W) is defined as ∑ −log2 (CC_peak_prob(R,W)), where the sum runs over the remaining relations R.

2) The result relations score is defined as the maximal value of relations-score(W) out of all possible W.

In step (1) we score each possible offset window based on the unlikely relations in it. In step (2) we choose the maximal score out of all 10 offset windows and define it as the final relations score.
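
Steps 1 and 2 can be sketched as follows; the dictionary-based bookkeeping and names are ours:

```python
import math

def relations_score(prob, windows):
    """Compute the relations score from CC_peak_prob values.

    prob: dict mapping (window, neuron_pair, peak_interval, component)
          -> estimated chance probability.
    windows: candidate offset windows.
    Per window: keep the least likely relation per (pair, interval)
    bucket, take the 10 smallest probabilities, sum -log2; then
    return the best window's score.
    """
    best = 0.0
    for w in windows:
        buckets = {}
        for (win, pair, interval, comp), p in prob.items():
            if win != w:
                continue
            key = (pair, interval)
            if key not in buckets or p < buckets[key]:
                buckets[key] = p
        if not buckets:
            continue
        top10 = sorted(buckets.values())[:10]
        best = max(best, sum(-math.log2(p) for p in top10))
    return best
```

Smaller probabilities thus contribute larger −log2 terms, so the score grows as the observed relations become less likely.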

RESULTS

Estimating the P value using a test set of surrogate trains

We were interested in testing the null hypothesis that spike times are random within a window of width J. If this null hypothesis is true, then replacing the time of each spike by a randomly selected time within J around its true time should not affect any of the statistics extracted from the spike times (Date et al. 2000; Hatsopoulos et al. 2003). To implement this idea we used the relations score statistic as defined in methods. This statistic becomes larger when there are several pairwise cross-correlations, around drawing components, with outstanding bins. Once the relations score was computed for the actual data (denoted by S0), the next step was to assess the probability that it could occur by chance. In other words, we needed to test whether S0 is significantly higher than what we would expect from random data with similar firing rates. To simulate random data that preserve firing rates of neurons, we randomly teetered the time of each original spike within a small window of J milliseconds in units of 0.1 ms. For example if J = 10 ms and the original time of some spike is 125 ms, its new time after teetering may be any time within {120.1 ms, 120.2 ms, 120.3 ms, … , 130 ms}. Using this technique, we generated 1,000 such surrogate spike trains. Each surrogate train was given a relations score (S1, S2, S3, … , S1,000, respectively) following exactly the same procedure as for computing S0 (including reselection of the best offset window that led to the largest relations score). These 1,000 values were used to estimate the probability [denoted by p(S0)] of obtaining the value of S0 by chance (see Fig. 5). For example, if only 50 surrogate trains exceeded the relations score of the actual data, then p(S0) was estimated by 50/1,000 = 5%.
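
The final estimate is then a simple exceedance fraction; a minimal sketch (counting ties conservatively):

```python
def p_value(score_real, surrogate_scores):
    """Fraction of surrogate relations scores that meet or exceed
    the relations score of the actual data."""
    return sum(s >= score_real for s in surrogate_scores) / len(surrogate_scores)
```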

FIG. 5.

Distribution of relations scores for surrogate spike trains and the actual data; 1,000 surrogate spike trains were independently generated by teetering spike times of an experimental day within 10 ms. For each of these, the relations score was extracted. The distribution of these relations score values was estimated by a histogram using bins of size 3. Abscissa shows the start value of each bin and the ordinate shows the number of surrogate trains whose relations scores fell into that bin. The relations score of the actual data was 121.351 (arrow). None of the 1,000 surrogate trains had a value above it; thus the P value for the actual data was estimated as <1/1,000.

Figures 3 and 4 illustrate a part of the actual data leading to the high relations score found in Fig. 5. The relations score was based on the 10 least likely relations. This method was used to estimate the probability p of the relations score value of the actual data for each experimental day. Significant values for p were observed for nine of 13 recording days in this study.

Detecting the time precision of the synchronization

For each of the experimental days that showed a significant relations score (using a J = 10-ms teetering window), we recomputed the value of p for different widths of teetering windows. Figure 6 shows that significant values of p could be obtained for much smaller teetering windows. The smallest windows producing significant results for monkey U were 0.5, 2, 3, 4, 8, and 12 ms and for monkey F were 2, 6, and 12 ms. Note that these represent an upper bound for the resolution. A different statistic could yield even higher temporal precision.

FIG. 6.

Significance levels for different teetering windows for both monkeys. The significance level for different teetering windows is shown for each experimental day whose p(S0) was <2.5% at 10-ms precision. The abscissa is the teetering window (from 10 down to 0.3 ms); the ordinate is the probability value p(S0). Because we looked for significant values in drawing components based on direction and on velocity separately, all probabilities have to be multiplied by 2. That is, a significance of 2.5% in this figure stands for a significance of 5%, and so forth. As can be seen, the days shown have significant values for teetering at 0.5, 2, 2, 3, 4, 6, and 8 ms. Monkey U is represented by continuous lines and monkey F by dashed lines.
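The search for the smallest teetering window that still yields significance can be sketched as follows. This is a minimal illustration in Python, in which `p_value(J)` is assumed to stand for the full surrogate procedure (1,000 teetered trains and relations-score recomputation) at window width J, and the 2.5% threshold reflects the factor-of-2 correction for the two movement parameters:

```python
def smallest_significant_window(p_value,
                                windows_ms=(10, 8, 6, 4, 3, 2, 1, 0.5, 0.3),
                                alpha=0.025):
    """Scan teetering windows from wide to narrow; return the narrowest
    window at which teetering still destroys the relations score, i.e.,
    an upper bound on the temporal precision of the spikes.
    Returns None if no window is significant."""
    best = None
    for J in sorted(windows_ms, reverse=True):
        if p_value(J) < alpha:
            best = J      # teetering within J still impairs the score
        else:
            break         # precision is no better than the last J
    return best
```

The early stop assumes the significance curve is roughly monotonic in J, which is a simplification; the curves in Fig. 6 need not be.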

A significant value for this probability indicates that the relations between drawing components and pairs of neurons (on that day) are impaired as a result of teetering within J milliseconds. From this we can conclude that around the occurrences of similar behavior, pairs of spikes tend to prefer some specific time delays. Clearly, teetering within 0.5 ms already had a significant effect on the experimental day of August 26, 1999. Thus the spike times of the cortical neurons are accurate to within 0.5 ms. Figure 7 supplies information about the type of movements and their associated temporal structures in 10 unlikely relations found in our data.

FIG. 7.

Example of 10 significant unlikely relations. A: direction range was divided into 12 subranges of 30° each (D1, D2, …). B: velocity range was divided into 5 subranges of 20 cm/s each (V1, V2, …). C: most unlikely relations are shown. First 2 columns list the experimental data (date and monkey); next 2 columns indicate the type of movement and the drawing component (using the notations presented in A and B where “→” denotes a transition). Fifth column indicates the time period around these components during which they were most significantly related to neural activity in that day. In the last 3 columns, units 1 and 2 refer to the electrode ID and to the cell ID, and delta indicates the time interval between the firing times of these 2 units.

Reliability tests

To ensure the reliability of the results reported in this article, we carried out the following verification tests.

  1) We took the data for all significant days and, before starting, teetered the spike times within a window of J = 10 ms. Then we repeated the entire procedure step by step from the beginning (including reteetering the data, etc.). No significant relations scores were found for any of these days.

  2) Similar to 1), but instead of teetering the original spikes, we replaced the occurrence times of each drawing component used with randomly generated times (within the boundaries of the trials). No significant relations scores were found in this case either.

  3) When evaluating the probability of a relation whose peak was outside the generated 1,000-value distribution of peaks, we used a probability of 0.001 instead of approximating by normal/Poisson distributions. This change did not affect the results reported here.

  4) In addition to the brute-force teetering method, we tried another teetering method, one that preserves the refractory period between spikes of the same neuron. We obtained the same results.
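Reliability test 4 can be sketched as a variant of the teetering routine. The following is a minimal sketch in Python; the 1-ms refractory period is an assumed illustrative value, not one taken from the recordings:

```python
import numpy as np

rng = np.random.default_rng(1)

def teeter_with_refractoriness(spike_times_ms, J=10.0,
                               refractory_ms=1.0, max_tries=100):
    """Teeter each spike within +/- J/2 ms of its original time, but
    redraw any candidate that falls within the refractory period of the
    previously placed spike of the same neuron. If no valid candidate
    is found after `max_tries` draws, the last candidate is kept so
    that the sketch always terminates."""
    out = []
    last = -np.inf
    for t in np.sort(np.asarray(spike_times_ms, dtype=float)):
        cand = t
        for _ in range(max_tries):
            cand = t + rng.uniform(-J / 2.0, J / 2.0)
            if cand - last >= refractory_ms:
                break
        out.append(cand)
        last = cand
    return np.array(out)
```

Plain teetering can place two spikes of the same neuron arbitrarily close together; this variant rules that out, so agreement between the two methods argues against refractoriness artifacts driving the result.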

DISCUSSION

If each spike in the cortex could be tagged in a way that would allow grouping of all spikes that belong to the same process, then many computations could be carried out much faster (von der Malsburg 1981, 1994). Such tagging could be provided by the exact time at which the spike is emitted. Precise synchrony for all the associated spikes could then be used to identify them as belonging to the same process (Singer and Gray 1995). Phase-locked oscillations can play a similar role (Eckhorn et al. 1988; Gray and Singer 1989). Sequences of synchronized activity in small groups of neurons could be used as well (Abeles et al. 1993a,b; Bienenstock 1995; Shastri and Ajjanagadde 1993). It is worth emphasizing that the concepts of coding by precise timing and coding by firing rates are not mutually exclusive and the two codes could well be multiplexed. Neurons may elevate their firing rates when an appropriate stimulus, or mental state, is present, and within this elevated rate each spike could be tagged by its exact time in relation to other spikes.

Several conditions must be met, and several lines of experimental evidence provided, to demonstrate that precise firing times are used in the cortex.

  1) Cortical neurons should be capable of emitting precisely timed spikes.

  2) There should be networks that are capable of controlling spike times and reading them.

  3) There should be experimental evidence that there are precisely timed spikes in the cortex.

  4) There should be experimental evidence that precisely timed spikes are strongly linked to specific stimuli, actions, or mental states.

Cortical neurons can emit precisely timed spikes, as was demonstrated by repeatedly injecting pseudorandom current into neurons (Mainen and Sejnowski 1995). Synfire chains (Abeles 1991), synfire braids (Bienenstock 1996), and polychronous groups (Izhikevich et al. 2004) can generate precisely timed spikes. This was demonstrated by theoretical analysis (Abeles 1991; Herrmann et al. 1995), numerical analysis (Diesmann et al. 1999), simulations (Abeles et al. 1993a,b; Izhikevich et al. 2004; Morrison et al. 2005), and in a brain slice (Reyes 2003). The same structures can recognize the precise firing sequences of other networks and lock onto them. This ability can be used to dynamically bind some elements to a complex object while ignoring others (Abeles et al. 1993b; Hayon et al. 2004, 2005).

Despite multiple reports on the existence of precise firing sequences in the brains of both anesthetized and behaving animals (Abeles 1982; Abeles et al. 1983; Baker and Lemon 2000; Dayhoff and Gerstein 1983; Frostig et al. 1990; Hatsopoulos et al. 1998; Klemm and Sherry 1981; Landolt et al. 1985; Legendy and Salcman 1985; Lestienne et al. 1987, 1996, 1999; Nadasdy et al. 1999; Prut et al. 1998; Riehle et al. 1997; Tetko and Villa 2001; Villa and Fuster 1992; Villa et al. 1990, 1999), the statistical validity of these findings has been challenged (Oram et al. 1999). Testing by teetering the spikes seems to be the most assumption-free approach, although it does present several pitfalls.

Precisely timed spikes may be evoked when activity is tightly locked to an external event. Such precise timing will be detected by teetering spike times, but is of little interest with respect to coding by precise firing sequences. In our experiments, when the monkey scribbles freely, there is no precise external event and the firing rates of individual neurons wax and wane gradually in time. Pace-maker activity of a single neuron will produce precise time intervals between its spikes, a structure that will be destroyed by teetering. Careful examination of all dot displays of sequences that contributed to the final score on each day did not reveal any periodicities. Other intracellular processes (aside from pace-maker activity) can also contribute to precise spike timing. These include fast recovery from refractoriness and bursting mechanisms. Using only intervals across neurons reduces the likelihood that such intracellular processes will show up as precise across-neuron intervals. Furthermore, eliminating all bursts and short interspike intervals (ISIs) did not affect the results. Spike sorting has an inherent dead time in which two partially overlapping spikes cannot be detected. This dead-time structure, too, may be destroyed by teetering and thereby alter the results. By using only intervals between spikes of neurons recorded through two different electrodes, we eliminated the effects of such a dead time.

There are several variants of spike teetering. It is possible to use softer teetering by wedge-shaped or bell-shaped windows. One can prevent two teetered spikes from ending up closer together than a predetermined refractory period. G. L. Gerstein (2004) suggested using the joint ISI distribution to produce a teetered spike train with the same joint distribution. In this study, we simply teetered the time of each spike and disregarded all other spikes. In the teetered spike train, refractoriness may be violated. If this occurs, however, it may only increase the likelihood of obtaining high counts in the same bin of a cross-correlation. To clarify this point, we used reliability test 4.
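The softer window shapes mentioned above can be illustrated briefly. The following is a minimal sketch in Python, in which a triangular density stands in for the wedge-shaped window and a truncated Gaussian (with an assumed sigma of J/6) stands in for the bell-shaped one:

```python
import numpy as np

rng = np.random.default_rng(2)

def soft_teeter(spike_times_ms, J=10.0, shape="triangular"):
    """Teeter with a non-uniform window so that small displacements are
    more likely than large ones. 'triangular' draws offsets from a
    wedge-shaped density on (-J/2, J/2); anything else uses a
    bell-shaped (Gaussian) density truncated to the same window."""
    n = len(spike_times_ms)
    if shape == "triangular":
        offsets = rng.triangular(-J / 2.0, 0.0, J / 2.0, size=n)
    else:
        offsets = np.clip(rng.normal(0.0, J / 6.0, size=n),
                          -J / 2.0, J / 2.0)
    return np.asarray(spike_times_ms) + offsets
```

Relative to flat teetering, these windows concentrate the surrogate spikes near their original times, so they test a somewhat stricter null hypothesis for the same nominal J.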

Finally, the choice of the statistic is crucial as well. For example, Oram and Richmond studied responses of single neurons in LGN and V1 to presentations of gratings. They looked for cases where the same ISI repeated twice (or more) after a single stimulus presentation. The number of cases in which such repetitions occurred did not exceed chance. These results may indicate that there were no precise spike timings in their experiment, but may also indicate that their statistic was not appropriate for detecting precise timing. Our statistic was based on the number of repetitions of the same across-neuron spiking interval during repetitions of the same drawing behavior. It is based on the assumption that precise timing is generated by synfire chains. The precision found in our experiments is an upper bound on the actual precision. Possibly a smarter statistic may reveal even higher precision. Note that our findings are consistent with the assumption that activity is organized in synfire chains, but do not prove it. There may be other networks that can produce such precise timing.

Our results provide a positive answer to 3) above: There is experimental evidence showing precisely timed spikes, whose temporal accuracy may reach 0.5 ms. Our data partially provide evidence for 4) above: We were able to find evidence for precise timing only when data within a set time range before certain drawing components were considered. When the same number of time slices was randomly selected, no evidence for precise timing was found. Further research should focus more directly on ways of discriminating between stimuli, movements, or mental states by precise firing sequences.

GRANTS

This project was funded in part by Center of Excellence Grant 8006/00, administered by the Israel Science Foundation, and grants from the German–Israeli Foundation, Deutsche–Israelische Projektkooperation, the Horowitz Fund, and the Rich Center. M. Abeles is Shamoon professor at Bar-Ilan University.

Acknowledgments

T. Shmiel was responsible for the analysis and was aided by O. Shmiel. R. Drori was responsible for the experimental part and was aided by Y. Ben-Shaul, Z. Nadasdy, and M. Abeles. M. Shemesh conducted the off-line spike sorting. Y. Ben-Shaul and M. Abeles developed the behavioral hardware and software, M. Teicher supervised the analysis, and M. Abeles directed the project as a whole.

Footnotes

  • The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

REFERENCES
