## Abstract

Oscillatory neurons integrate their synaptic inputs in fundamentally different ways than normally quiescent neurons. We show that the oscillation period of invertebrate endogenous pacemaker neurons wanders, producing random fluctuations in the interspike intervals (ISI) on a time scale of seconds to minutes, which decorrelates pairs of neurons in hybrid circuits constructed using the dynamic clamp. The autocorrelation of the ISI sequence remained high for many ISIs, but the autocorrelation of the ΔISI series had on average a single nonzero value, which was negative at a lag of one interval. We reproduced these results using a simple integrate and fire (IF) model in which the adaptation current is carried by a stochastic population of channels whose fluctuations are integrated on a slow time scale, suggesting that a similar population of channels underlies the observed wander in the period. Using autoregressive integrated moving average (ARIMA) models, we found that a single integrator and a single moving average with a negative coefficient could simulate both the experimental data and the IF model. Feeding white noise into an integrator with a slow time constant is sufficient to produce the autocorrelation structure of the ISI series. Moreover, the moving average clearly accounted for the autocorrelation structure of the ΔISI series and is biophysically implemented in the IF model using slow stochastic adaptation. The observed autocorrelation structure may be a neural signature of slow stochastic adaptation, and wander generated in this manner may be a general mechanism for limiting episodes of synchronized activity in the nervous system.

- variability
- stochastic models
- noise
- oscillations
- synchronization
- neural synchrony
- ARIMA

## NEW & NOTEWORTHY

We study the firing statistics of spontaneously firing pacemaker neurons. The autocorrelation structure of the firing events reveals slow trends in the period as well as a tendency to minimize short-term fluctuations. A phenomenological model (autoregressive moving average) and a biophysical model (integrate and fire) with stochastic and deterministic components capture the autocorrelation structure. We suggest that this structure is a neural signature of a stochastic, slowly adapting population of ion channels.

Oscillatory neurons integrate their synaptic inputs in fundamentally different ways than normally quiescent neurons (Wilson 2014). Evidence exists for the contribution of intrinsic mechanisms to variability in the cycle-to-cycle period in spontaneously active vertebrate neurons. Long-term (minutes to hours) trends in the spontaneous spike rate have been observed in neurons in the external segment of the globus pallidus (GPe) in anesthetized rats and brain slice preparations (Deister et al. 2012). We distinguish these slow fluctuations, termed “wander” in the period, from cycle-to-cycle jitter in the interspike interval (ISI). Elegant experiments (Deister et al. 2012) using paired recordings of multiple neurons revealed that the spike rates of each cell wandered independently. This experimental design eliminates temperature or other variables that the cells have in common as the source of the wander. Moreover, the study suggested that the heterogeneity in firing rates produced by the wander could decrease the propensity of the GPe to inherit correlations in its input as well as decrease the tendency for synchrony within the GPe. In healthy animals, low background levels of synchronization in the basal ganglia are suppressed before and during movement (Courtemanche et al. 2003). In Parkinson's disease, baseline levels of synchrony can be both elevated and more resistant to suppression. Excessive synchrony within the basal ganglia-cortical loop in the beta-frequency band may contribute to bradykinesia (the slowing of movements) in Parkinson's disease (Brown 2007; Hammond et al. 2007). Since abnormal synchrony is implicated in the etiology of Parkinson's disease, powerful decorrelation of activity due to independent wander resulting from intrinsic stochastic sources may be necessary for healthy operation of the basal ganglia.

We have previously studied hybrid circuits of one real and one model spiking pacemaker neuron (Thounaojam et al. 2014). The hybrid circuits tended to alternate between periods of phase locking and periods in which the faster neuron continuously advanced within the cycle of the slower neuron, and we plot some representative data from Thounaojam et al. (2014) in Fig. 1. These neurons were reciprocally connected via virtual chemical synapses implemented using the dynamic clamp (Sharp et al. 1993a,b; Raikov et al. 2004; Preyer and Butera 2005). In Fig. 1*A*, the neurons did not synchronize immediately after the coupling was turned on, but then they synchronized for ∼150 s before desynchronizing. In Fig. 1*B*, the neurons synchronized initially but spontaneously desynchronized after ∼100 s. This lack of consistent 1:1 synchrony for more than a few minutes was common in these hybrid circuit experiments.

Figure 1 represents two experiments from over 35 presented, described, and analyzed in our previous paper (Thounaojam et al. 2014). In some experiments in our previous paper, coupling between the neurons was turned on and off, and the period of the isolated biological neuron was measured before each episode of coupling. The prediction of network activity was then adjusted based on the newly measured period and was able to account for changes in observed activity. For example, in Fig. 9*B* of that paper, we showed that the measured change in the average period of the biological neuron from 440.2 to 405 ms between coupling episodes was sufficient to convert phase locking to phase walkthrough in *experiment 22* from that paper, confirming that the changes in firing rate are on the same time scale as the changes in phase locking and the former can account for the latter.

These studies motivated us to further study spike trains in isolated pacemaking neurons to identify mechanisms that could underlie wander in the period and, furthermore, to identify neural signatures that could theoretically be used to detect the presence of these mechanisms in single unit spike train data from intact animals.

## METHODS

#### Experimental methods.

Late juvenile *Aplysia californica* (41–75 g) were ordered from the Miami National Resource Facility for *Aplysia* (Miami, FL) and housed in saltwater tanks (Instant Ocean, Blacksburg, VA) at room temperature for ∼2.5 wk or less. Animals were anesthetized with an injection of 50% of their weight in 330 mM MgCl_{2} (Jhala et al. 2011). The abdominal ganglion was harvested and pinned out in a Sylgard-lined (Dow Corning, Auburn MI) 35-mm petri dish filled with high Mg^{2+} low Ca^{2+} recording solution (Nowotny et al. 2003) to block synaptic activity and opened using forceps and fine scissors. Occasional inhibitory inputs could be seen in some data sets. Features such as autocorrelation and fitted model parameters did not differ qualitatively between data sets with and without inhibitory inputs, so these data were still included. Most saline reagents were from Sigma-Aldrich (St. Louis, MO), with the exception of potassium chloride, which was from Fisher Scientific (Pittsburgh PA).

The biological preparation was gravity perfused throughout the experiment with the high Mg^{2+} low Ca^{2+} recording solution. The preparation was cooled by placing the dish and a length of inlet tubing containing perfusion saline onto a Peltier device (TECA, Chicago, IL); temperature for the Peltier device was controlled using a proportional integrative derivative (PID) controller (Watlow, St. Louis, MO). In addition to the resistive thermal device (RTD) supplying temperature feedback to the PID controller, temperature was measured using thermistor (Warner Instruments) and thermocouple (Physitemp Instruments, Clifton, NJ) probes connected to independent thermometers. Temperature remained within ∼1°C for the duration of the experiment in five of the eight cells.

The lower left quadrant (LLQ) of the *Aplysia* abdominal ganglion contains endogenously active spiking neurons (Frazier et al. 1967), and most data sets were taken from this region, with the exception of *data set 5*. Electrodes consisted of chlorided silver wire and glass micropipettes pulled on a P-97 puller (Sutter Instrument, Novato, CA) to a resistance of 8–15 MΩ and filled with 3 M potassium acetate. An Axoclamp 2B amplifier (Molecular Devices, Foster City, CA) was used to record electrical signals. Data were sampled at 10 kHz with a digital acquisition (DAQ) card (National Instruments, Austin, TX) connected to custom real-time software (Real Time eXperiment Interface: www.rtxi.org) (Lin et al. 2010; Ortega et al. 2014). All experiments lasted for a minimum of 2 total hours. At least the first 20 min of each recording after cell puncture were discarded as transient.

Eight data sets were collected overall. Spike times were determined by positive crossings of a −20 mV threshold; for *data set 1*, this threshold was set to −35 mV to accommodate low spike heights. We retained this data set because spike widths, action potential waveforms, and responses to conductance injection appeared otherwise consistent with a healthy preparation. ISI is the time between two successive spikes. The cycle-to-cycle change in interspike interval (or ΔISI) is the difference between two sequential ISIs and is calculated as ΔISI[*n*] = ISI[*n*] − ISI[*n* − 1]. Two ISIs >3,500 ms were removed from *data set 8* as extreme outliers (mean and median ISI of *data set 8* with the 2 outliers removed is 1,742 and 1,715 ms, respectively). Data analysis was performed in MATLAB (The Mathworks, Natick, MA); autocorrelation sequences were calculated using built-in function autocorr().
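The ISI, ΔISI, and autocorrelation computations described above can be sketched as follows (a minimal Python analog of the MATLAB analysis; the normalization mirrors the autocorr() convention, and the spike times below are synthetic, not from the data sets):

```python
import numpy as np

def isi_series(spike_times):
    """Interspike intervals from a sorted array of spike times."""
    return np.diff(spike_times)

def delta_isi(isi):
    """Cycle-to-cycle change: dISI[n] = ISI[n] - ISI[n-1]."""
    return np.diff(isi)

def autocorr(x, max_lag):
    """Sample autocorrelation coefficients at lags 0..max_lag,
    normalized by the lag-0 variance (MATLAB autocorr convention)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

# toy example: a perfectly regular cell with a 1,000-ms period
spikes = np.arange(0.0, 10000.0, 1000.0)
isi = isi_series(spikes)     # all intervals 1,000 ms
disi = delta_isi(isi)        # all cycle-to-cycle changes 0 ms
```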

#### Neural model.

The model is based on that of Schwalger et al. (2010). Equations for the model are given in the text. Simulations were run in MATLAB. Differential equations were solved using the forward Euler method with a step size of 0.001 ms and a total simulation time of 7,000 ms to generate at least 500 ISIs. The forward Euler solver was used because stochastic elements were present, as in Higham (2001) and Ermentrout and Rinzel (2010). Other parameters were set as follows: *V*_{th} = 1, *V*_{reset} = 0, β = 3 *V*_{th}/ms, μ = 0.4 *V*_{th}/ms, τ_{AP} = 1 ms. Parameters *D* (*V*_{th}^{2}/ms), *N*_{a} (channels), and τ_{w} (ms) were varied.

#### Autoregressive integrated moving average models.

Autoregressive integrated moving average (ARIMA) filters can provide a parsimonious representation of time series data. There are several classes of models denoted as ARIMA (*p*, *d*, *q*), where *p* denotes the number of autoregressive coefficients, *d* denotes the number of integrators (also, the number of times the sequence is differenced), and *q* denotes the number of moving average coefficients (Box et al. 2008). A block diagram of an ARIMA (*p*, *d*, *q*) model is given in Fig. 2, where *p* = *d* = *q* = 1. The output of the autoregression block is interpreted differently for the case of *d* = 0, in which the model data are not integrated (equivalently, the ISI data are not differenced before the ARIMA coefficients are fitted); in that case, the output is interpreted as the estimated ISI series rather than the estimated ΔISI series. For the simulated data, the first 100 ISIs were discarded as transient and the following 400 were used to calculate autocorrelation coefficients and ARIMA model coefficients as described above. Five simulations were computed for each parameter set. Parameters for the ARIMA models were defined, estimated, and residuals inferred using the built-in MATLAB functions arima(), estimate(), and infer().
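As a concrete illustration of the block diagram, the following Python sketch (with arbitrary illustrative parameters) generates a synthetic ISI series from an ARIMA (1, 1, 1) model: white-noise innovations pass through the moving average and autoregressive stages to produce the ΔISI series, and a single integration (*d* = 1) yields the ISI series; with *d* = 0 the cumulative sum would simply be omitted:

```python
import numpy as np

def simulate_arima_111(phi, theta, n, isi0=1000.0, sigma=10.0, seed=0):
    """Synthetic ISI series from an ARIMA(1, 1, 1) model:
    dISI[k] = phi*dISI[k-1] + a[k] + theta*a[k-1], then ISI = ISI0 + cumsum(dISI)."""
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, sigma, n)          # white-noise innovations a[k]
    disi = np.zeros(n)
    for k in range(1, n):
        disi[k] = phi * disi[k - 1] + a[k] + theta * a[k - 1]
    return isi0 + np.cumsum(disi)          # d = 1: one integration stage

# with phi = 0 this reduces to the ARIMA(0, 1, 1) model discussed in RESULTS
isi = simulate_arima_111(phi=0.0, theta=-0.5, n=500)
```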

## RESULTS

#### Autocorrelation structure of the data.

Data were collected from free-running, pacemaking neurons in the abdominal ganglion of *Aplysia*. The ISIs plotted as a function of time (Fig. 3*A*) showed clear long-term trends indicating nonstationarity. The first difference of the ISI series did not exhibit trends but rather appeared to be stationary with zero mean (Fig. 3*B*). Because of the trends, the autocorrelation of the ISI series remained high for many lag values (Fig. 4*A*). Both the time series in Fig. 3*A* and the autocorrelation coefficients in Fig. 4*A* are consistent with a model of the ISI, or instantaneous period, as a random walk, also called Brownian motion, in a single dimension. The appearance of a random walk could result from adding white noise to the ISI value on every cycle; the accumulation of the noise endows the ISI series with a type of memory because values that are close in time tend to resemble each other more than those separated by longer time intervals. In mathematical terms, such a random walk can be written as: ISI[*n*] = ISI[*n* − 1] + *a*[*n*] = ISI[0] + Σ_{*i* = 1}^{*n*} *a*[*i*], where *a*[*i*] is a white noise process. Thus taking the first difference of the ISI series is expected to yield a white noise process if the ISI series indeed exhibits a random walk.
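This random-walk interpretation can be checked numerically. In the Python sketch below (with an arbitrary starting period and noise amplitude), cumulatively summed white noise yields an ISI series whose autocorrelation stays high over many lags, while the first difference recovers the uncorrelated noise:

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 10.0, 2000)    # white noise a[i] (ms)
isi = 1000.0 + np.cumsum(noise)        # random-walk ISI series, ISI[0] = 1,000 ms

def acf(x, k):
    """Sample autocorrelation of x at lag k."""
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

high_lag = acf(isi, 10)            # remains close to 1: slow trends persist
diff_lag1 = acf(np.diff(isi), 1)   # near 0: differencing recovers white noise
```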

In contrast to the ISI series, the autocorrelation of the ΔISI series (Fig. 4*B*) had only a single prominent coefficient, which occurred at a time lag of one ISI and took on a negative value of about −0.5. A single significant term in the autocorrelation series is typical for a moving average process (Box et al. 2008). A moving average on the ΔISI series is defined as follows:
ΔISI[*n* + 1] = *c* + θ*a*[*n*] + *a*[*n* + 1] (1)
Here θ is the coefficient for the moving average filter. Setting the constant drift term *c* to zero, we rearrange to obtain
*a*[*n* + 1] = ΔISI[*n* + 1] − θ*a*[*n*]
The autocorrelation coefficient for a moving average process with one coefficient is only nonzero at a lag of one (Box et al. 2008), and the value of the coefficient at lag one is *R*[1] = θ/(1 + θ^{2}). The negative autocorrelation coefficient at a lag of one in Fig. 4*B* means that successive ΔISI tend to have opposite signs. This therefore suggests a moving average with a negative coefficient. Taking the expectation value of the terms in *Eq. 1* and noting that *E*[*a*[*n*]] = 0 yields *E*[ΔISI[*n* + 1]] = θ*E*[ΔISI[*n*]]. Thus, if a moving average filter is present, we would expect a linear dependence of the ΔISI on one cycle on the one that preceded it. To obtain further evidence that a moving average is implemented by the oscillatory neurons shown in Figs. 3 and 4, we made a Poincaré plot (Fig. 5) of the ΔISI on one cycle vs. that on the one that preceded it. This plot is indeed approximately linear with a negative slope, consistent with a moving average filter with a negative value of θ.
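A quick numerical check of the lag-one formula (Python sketch; θ = −0.5 is an arbitrary illustrative value):

```python
import numpy as np

theta = -0.5
rng = np.random.default_rng(2)
a = rng.normal(size=200001)                # white-noise innovations
disi = a[1:] + theta * a[:-1]              # MA(1) process on the dISI series (c = 0)

x = disi - disi.mean()
r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)  # sample lag-1 autocorrelation
predicted = theta / (1.0 + theta ** 2)     # R[1] = theta/(1 + theta^2) = -0.4
```

A regression of successive ΔISI values, as in the Poincaré plot, has slope equal to this same lag-one autocorrelation, so a negative θ produces the negative slope described above.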

#### Neural model.

To examine plausible biophysical mechanisms underlying the spike train autocorrelation structure described above, we implemented a nonleaky integrate and fire (IF) model pacemaker neuron as described in Schwalger et al. (2010). This model incorporates a static driving current, stochastic channel activity in an adaptation current, and random white noise current fluctuations of the membrane voltage that produce cycle-to-cycle jitter in the oscillatory period. Differential equations representing the system are as follows.
d*v*/d*t* = μ − β(*w* + η) + √(2*D*)ξ(*t*) (2)
τ_{w}(d*w*/d*t*) = *w*_{∞} − *w* (3)
τ_{w}(dη/d*t*) = −η + √(2σ^{2}τ_{w}/*N*_{a})ξ_{n}(*t*) (4)
The first term μ of the differential *Eq. 2* for membrane potential corresponds to a baseline driving current; this term sets the intrinsic frequency of the oscillation. The second term represents a diffusion approximation of stochastic channels added to an adapting current; the fraction of open channels is the sum *w* + η, where *w* governs channel activation due to recent spikes and η represents the stochastic activity of those channels. The third term in the voltage equation is a white noise component, modeled as a zero mean Gaussian white noise process ξ(*t*) corresponding to stochastic activity of other, faster channels or fast membrane fluctuations due to other fast noise sources. *D* is a measure of the amplitude of this fast noise and scales the resultant jitter in the cycle period. As in typical IF models, *v* is reset to zero upon crossing a threshold *V*_{th}, which in this model is normalized to one. In the differential *Eq. 3* for the deterministic adaptation variable *w*, the value *w*_{∞} towards which *w* relaxes is set to 1 for τ_{AP} milliseconds when a spike occurs and then is set back to zero. The dynamics of both the stochastic and deterministic components of the current are governed by the time constant τ_{w}. The differential *Eq. 4* for the stochastic component η of the slowly adapting population of channels is driven by ξ_{n}, another Gaussian white noise process with mean zero. Just as *D* scales the amplitude of the jitter, the number of channels *N*_{a} in the slowly adapting population scales the slow noise; however, in the latter case the relationship is inverse: larger numbers of channels reduce the observable stochastic fluctuations. The variance σ^{2} was approximated using the values of the strength of adaptation β, τ_{AP}, *V*_{th}, and μ as in Schwalger et al. (2010).
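A minimal Euler-Maruyama sketch of *Eqs. 2–4* (in Python rather than MATLAB; the reset handling, the placeholder value for σ^{2}, and the choices of *D*, *N*_{a}, and τ_{w} are illustrative assumptions, with the fixed parameters taken from METHODS):

```python
import numpy as np

def simulate_if(mu=0.4, beta=3.0, D=1e-3, Na=1e4, tau_w=100.0, tau_ap=1.0,
                v_th=1.0, v_reset=0.0, sigma2=0.01, t_end=7000.0, dt=0.01, seed=0):
    """Euler-Maruyama integration of the nonleaky adapting IF model:
      dv/dt         = mu - beta*(w + eta) + sqrt(2*D)*xi(t)
      tau_w*dw/dt   = w_inf - w   (w_inf = 1 for tau_ap ms after a spike, else 0)
      tau_w*deta/dt = -eta + sqrt(2*sigma2*tau_w/Na)*xi_n(t)
    sigma2 is a placeholder for the variance approximation; dt is coarser than
    the 0.001 ms of METHODS to keep the pure-Python loop fast.
    Returns the ISI sequence in ms."""
    rng = np.random.default_rng(seed)
    v, w, eta = 0.0, 0.0, 0.0
    w_inf_until = -1.0                     # time until which w_inf = 1
    spikes = []
    sqdt = np.sqrt(dt)
    for k in range(int(t_end / dt)):
        t = k * dt
        w_inf = 1.0 if t < w_inf_until else 0.0
        v += dt * (mu - beta * (w + eta)) + np.sqrt(2.0 * D) * sqdt * rng.normal()
        w += dt * (w_inf - w) / tau_w
        eta += (-eta * dt + np.sqrt(2.0 * sigma2 * tau_w / Na) * sqdt * rng.normal()) / tau_w
        if v >= v_th:                      # threshold crossing: spike and reset
            spikes.append(t)
            v = v_reset
            w_inf_until = t + tau_ap
    return np.diff(np.array(spikes))
```

Setting *N*_{a} very large silences the slow noise (as in the rightmost column of Fig. 6), whereas *D* = 0 removes the fast jitter (as in the bottom row).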

The fluctuations generated by the process ξ(*t*) are memoryless, whereas the stochastic activity η depends on the history of the noise process and so has memory. As stated above, the memoryless white noise component is scaled by the parameter *D* and the history-dependent noise is scaled by 1/*N*_{a}. Figure 6 shows how the structures of the autocorrelation sequences for the ISI and ΔISI time series change as *D* and *N*_{a} are varied. Clearly, a small value of *N*_{a} is required to get the history dependence that underlies the slowly decaying autocorrelation in ISI, because this feature is missing from the rightmost column of the grids in Fig. 6, for example *A3* and *A6*, which do not include noise with memory. In Fig. 6, *A3* and *A6*, *N*_{a} is set to infinity, so there are no stochastic fluctuations in the population of slowly adapting channels. Conversely, the single, prominent negative autocorrelation value at lag one in the autocorrelation of the ΔISI sequence is clearly evident in the rightmost column. Therefore, this feature requires memoryless noise, but not noise with memory. Moreover, the single, prominent negative autocorrelation value does not appear along the bottom row in Fig. 6, for example in *A7* and *A8*, nor is it prominent in the leftmost column in Fig. 6, for example in *A1*, *A4*, and *A7*. Therefore, to get both features of the autocorrelation structure shown in Fig. 4, the parameter space away from the bottom and right edges of the grid is required. Moreover, decreasing *N*_{a} to increase the contribution of noise with memory obscures the single, prominent negative autocorrelation value at lag one in the autocorrelation of the ΔISI sequence, as evidenced by its absence from the leftmost column. 
Only simulations that include both a history dependent and a history independent component capture the correlational features observed in the experimental data, where the ISI autocorrelation remains high for many lags while one prominent autocorrelation value appears in the ΔISI sequence. This ability of the model to replicate the autocorrelation structure of the experimental data suggests that stochastic activity of slowly adapting channels could be responsible for the observed correlational structure in the invertebrate pacemaker neurons.

A more complete characterization of the ability of the IF model to replicate our experimental data is shown in Fig. 6*B*. Color indicates the normalized mean squared error between the average ISI (Fig. 6*B1*) or ΔISI (Fig. 6*B2*) autocorrelation across the eight experimental data sets (Fig. 4*A*, shown in the same order as in Fig. 3) and the average of five IF model runs at the same parameter values with different instantiations of the white noise. Darker shades indicate lower error, with black indicating minimal error and white maximal error. For the ISI series, the error in Fig. 6*B1* is lowest in the left triangular region in which *D* and *N*_{a} are both small, corresponding to low memoryless noise and high noise with memory. In contrast, this same region exhibits some of the highest error for the ΔISI case in Fig. 6*B2*; thus the optimal region to fit both data sets is a dark diagonal in Fig. 6*B3*, which shows the combined error for both measures. Therefore, both noise components are necessary to best match the model data to experimental data.

The short-term spike-to-spike variability requires a different mechanism from the slow trend, but Fig. 6 shows that both phenomena can be explained using the same population of channels. The fast, short-term jitter in ISIs in the model is due to fluctuations in the oscillator period driven by the white noise current term ξ(*t*) in *Eq. 2*, and the deterministic effects of adaptation acting on this jitter are responsible for the negative autocorrelation at a lag of 1 in the ΔISI series. The long-term trend, or wander, is due to slow random fluctuations in the population of open channels mediating adaptation: thus the slow trend could most parsimoniously be mediated by the same population of channels responsible for adaptation and the jitter. However, it is possible that the fluctuations on different time scales are mediated by separate channel populations, because only the negative autocorrelation at a lag of 1 in the ΔISI series requires an adaptive mechanism.

#### ARIMA model of experimental and simulated data.

A typical approach (Truccolo et al. 2005) to neuronal statistics uses conditional intensity models in a point process framework, often using generalized linear models (GLM). In the GLM framework, terms for dependence on the past firing history of a neuron at different time points in the past can capture the rhythmic tendencies of an oscillatory neuron. Here we instead use a very simple point process model to reveal the dependence of neural firing on intrinsic dynamics (i.e., pacemaking) and noise. ARIMA models of time series data provide insight into how noise and/or inputs are processed by a system. We apply the ARIMA models to the sequence of ISIs; thus in our application, as in many others, a point process model of interevent times is generated. The model defined in *Eq. 1* is equivalent to an ARIMA (0, 0, 1) model on the ΔISI series but is equivalent to an ARIMA (0, 1, 1) model on the ISI series (see Fig. 2). An ARIMA (0, 1, 1) has one integral component and one moving average component. Based on the evidence presented in *Autocorrelation structure of the data*, we hypothesized that the model given in *Eq. 1* is a parsimonious representation of both our experimental data on free-running pacemakers and the nonleaky IF model given in *Eqs. 2–4* with adaptation and both history-dependent and history-independent noise components. To test this hypothesis, we compared the performance of the model given in *Eq. 1* with two closely related ARIMA models that, with the right parameter selection, can reduce to the model in *Eq. 1*. These models are
ISI[*n* + 1] = *c* + ϕISI[*n*] + θ*a*[*n*] + *a*[*n* + 1] (5)
ISI[*n* + 1] = *c* + (1 + ϕ)ISI[*n*] − ϕISI[*n* − 1] + θ*a*[*n*] + *a*[*n* + 1] (6)
*Equation 5* represents an ARIMA(1, 0, 1) model with a moving average of the noise combined with an autoregressive Ornstein-Uhlenbeck (OU) process. An OU process (Uhlenbeck and Ornstein 1930) is equivalent to Brownian motion plus a mean-reverting term with a time constant τ, which is determined by the autoregression coefficient ϕ by the following relationship: ϕ = 1 − 1/τ. *Equation 5* reduces to *Eq. 1* as ϕ approaches 1, implying that the τ for mean reversion approaches infinity, effectively removing the mean reverting component, resulting in a moving average plus an integrator to produce the Brownian motion as in *Eq. 1*. *Equation 6* is an ARIMA(1, 1, 1) model that adds an integrator term to *Eq. 5*, but reduces to *Eq. 1* as the autoregressive coefficient ϕ approaches 0. The equivalences are more evident if Eq. 6 is written in terms of the first difference terms ΔISI[*n*] = ISI[*n*] − ISI[*n* − 1] and ΔISI[*n* − 1] = ISI[*n* − 1] − ISI[*n* − 2].
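The mapping ϕ = 1 − 1/τ can be checked numerically: the lag-*k* autocorrelation of an AR(1) process is ϕ^{*k*}, which for large τ closely approximates the exponential decay e^{−*k*/τ} of the continuous OU process (a Python sketch with an arbitrary illustrative τ):

```python
import math

tau = 200.0                # mean-reversion time constant, in units of ISIs
phi = 1.0 - 1.0 / tau      # equivalent AR(1) (autoregressive) coefficient

# theoretical autocorrelation at lag k: the AR(1) process gives phi**k,
# the OU process gives exp(-k/tau); the two agree closely for large tau
mismatch = max(abs(phi ** k - math.exp(-k / tau)) for k in (1, 10, 100))
```

As τ grows, ϕ approaches 1 and the mean-reverting pull vanishes on the time scales observed, which is the sense in which the ARIMA (1, 0, 1) model collapses to the ARIMA (0, 1, 1) model.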

We applied the estimate() function in MATLAB to find ARIMA parameters using maximum likelihood for the data for eight neurons and also for data from the simulations whose autocorrelation structure most closely resembled the average of the eight experimental data sets. The Ljung-Box Q-test for residual autocorrelation beyond that which the ARIMA model can explain was applied using the MATLAB function lbqtest(). The null hypothesis that the residual series is uncorrelated was not rejected for the windows of the ISI series marked in gray (the ends of the ISI series were not analyzed) in Fig. 7, *C1*-*C3*. The preponderance of gray (except for *data set 4*) indicates that the residuals were in general uncorrelated, supporting the goodness of fit of the ARIMA models to the data. Figure 7*A* clearly shows a strong tendency for the moving average coefficient θ to take on negative values, consistent with the presence of adaptation, both in the invertebrate pacemaker data (Fig. 7*A1*) and in the simulations (Fig. 7*A2*). Moreover, Fig. 7*B* shows a very strong tendency for the autoregressive coefficient ϕ in both the experimental data (Fig. 7*B1*) and the simulated data (Fig. 7*B2*) to approach one for the ARIMA (1, 0, 1) model, which reduces it to the ARIMA (0, 1, 1) model. The same figure shows a weaker, but still pronounced, tendency for ϕ in both the experimental data (Fig. 7*B1*) and the simulated data (Fig. 7*B2*) to approach zero for the ARIMA (1, 1, 1) model, which also reduces that model to the ARIMA (0, 1, 1) model. The striking convergence of both the experimental and model results to an ARIMA (0, 1, 1) model leads us to conclude that this particular variant of an IF model captures the two essential aspects of the autocorrelation structure by implementing an integrator and a moving average in a biophysically realistic way. The mapping of the ARIMA components onto the biology is elaborated upon in the discussion.

To determine whether the nearly equivalent, optimal versions of the 011, 101, and 111 models were better than the other possible first-order ARIMA models (namely, the 001, 010, 100, and 110 models), Akaike's information criterion (AIC) was calculated for the residuals of each estimated model and ranked 1–7, lowest to highest, for each data set. The 011, 101, and 111 models had the lowest AIC values for all data sets except *data set 1*, meaning these models outperform the others in those cases. For *data set 1*, the 100 model was slightly better than the 011, but the results were otherwise similar.

## DISCUSSION

#### Comparison with our previous study.

The work presented here is a logical extension of that in Thounaojam et al. (2014). Thounaojam et al. showed that hybrid circuits of one biological neuron coupled to one computational neuron could show phase-locked or phase-slipping activity, with switches between these two modes over the course of minutes. They used three models to replicate the biological randomness in one neuron of a simulated two-neuron circuit: white noise added to the ISI, white noise added to the phase resetting curve (PRC), or modeling the ISI as an OU process. When the activity of the coupled neurons was strongly phase-locked, white noise added to the period or PRC of a model neuron was able to replicate the number of activity changes, fraction of time locked, and *R*^{2} (a measure of the strength of phase locking; Batschelet 1981). On the other hand, when the coupled neurons were less tightly locked and exhibiting transitions between modes, the OU model was better able to model circuit activity. However, that study clearly noted that the mean-reverting aspect of the OU process, which is an ARIMA (1, 0, 0) process, was generally unimportant in matching the data: the time constants were very long relative to the ISIs, specifically 10 to 1,000 times as long. As stated above, the ARIMA (1, 0, 0) process becomes equivalent to an ARIMA (0, 1, 0) process in the limit as the autoregressive coefficient ϕ approaches one.

The variability of ISI in a single neuron is directly measured in this study, as opposed to inferred from activity metrics in a coupled circuit. This allowed the observation of an additional attribute of the data that was not known in the previous study, namely the single prominent value at lag one in the autocorrelation of the ΔISI sequence. Therefore, we added the moving average component to the model to convert it from ARIMA (0, 1, 0) to ARIMA (0, 1, 1). This is a crucial new insight from the current study. Moreover, the IF modeling work shows that the biophysical instantiation of the moving average filter is likely the deterministic component of the adaptation currents. The biophysical instantiation of the history dependence of the ISI could be the stochastic component of the same currents as modeled. Alternatively, there could be contributions from other slow processes like second messenger cascades. There are some neural modeling precedents for incorporating slow fluctuations in the oscillatory period (Tikidji-Hamburyan et al. 2014) and for incorporating a moving average filter on white noise terms (McCarthy et al. 2008).

#### Biophysical instantiation of the ARIMA (0, 1, 1) model.

Since the best fit was obtained at a moving average parameter of θ = −0.7, this implies (see above) that *R*[1] ≈ −1/2, which is in very good agreement with the average value of about −0.4 for the negative correlation observed for the ΔISI series in Fig. 4*B*. This is strong evidence for a biological instantiation of a moving average filter, just as the long-lasting autocorrelations in Fig. 4*A* argue for a noise integrator to produce a random walk with trends. The IF model, with both deterministic and stochastic adaptation currents, provides a plausible mechanism for a biophysical instantiation of these essential features captured by the ARIMA (0, 1, 1) model.
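The relation *R*[1] = θ/(1 + θ^{2}) can also be inverted to obtain a quick method-of-moments estimate of θ from an observed lag-one autocorrelation (choosing the invertible root, |θ| < 1); this Python sketch is an illustrative shortcut, not the maximum likelihood fit used in RESULTS:

```python
import math

def theta_from_r1(r1):
    """Invertible MA(1) coefficient matching lag-1 autocorrelation r1.
    Solves r1 = theta/(1 + theta**2); an MA(1) requires |r1| <= 0.5."""
    if abs(r1) > 0.5:
        raise ValueError("an MA(1) process cannot produce |R[1]| > 0.5")
    if r1 == 0.0:
        return 0.0
    return (1.0 - math.sqrt(1.0 - 4.0 * r1 * r1)) / (2.0 * r1)

# theta = -0.7 predicts R[1] = -0.7/1.49, about -0.47 (close to -1/2);
# inverting an observed R[1] of -0.4 gives theta = -0.5
r1_pred = -0.7 / (1.0 + 0.7 ** 2)
```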

The integrator is clearly implemented in *Eq. 2*; after all, this is an IF model. The implementation of the moving average filter is more complex and depends on the fact that two types of noise were shown to be necessary in the IF model to replicate features of experimental data. Fast noise, or jitter, is necessary to impart some variability to the cells, because otherwise, unlike real neurons, they are perfect, regular pacemakers. With no noise (*N*_{a} = ∞, *D* = 0), the ISIs in the model described in *Eqs. 2–4* will converge to a constant value at which the increment in *w* produced by a spike is exactly balanced by the decay of this quantity during the cycle. In the presence of noise, a cycle that is longer than the steady value will cause *w* to attain a lower value than usual; the presence of −*w* in *Eq. 2* will then cause the instantaneous frequency in the next cycle to be reduced less by the adaptation term, and the resulting greater instantaneous frequency on average biases a shorter cycle to follow a longer one. The converse is true for a shorter than usual cycle. Therefore, noise is required for the negative autocorrelation of ΔISI at lag one. However, the noise needs to be memoryless; otherwise, the history dependence of the noise will obscure the cycle-to-cycle variability needed in the adaptation variable to achieve the self-correcting tendency that adaptation confers. In other words, in the absence of fast memoryless noise, the model would show little cycle-to-cycle variability despite the presence of an adaptation current, because that current would equilibrate and change only very slowly from cycle to cycle, tracking the slow long-term trends in the period. These trends cause the observed high ISI autocorrelation values seen in Fig. 4.
The need for prominent cycle-to-cycle variability caused by memoryless jitter explains why low values of *N*_{a}, which increase the contribution of noise with memory, obscure this cycle-to-cycle jitter and therefore also obscure the negative autocorrelation of ΔISI at lag one that is otherwise obtained at nonzero values of *D* (compare Fig. 4*A1* to Fig. 6, *A2* and *A3*). The role of slow adaptation in establishing the autocorrelation structure in the experimental data shown in Fig. 4 is strongly supported by the presence of a prominent slow adaptation current in these invertebrate pacemaker neurons (Cui et al. 2009).

The ARIMA (0, 1, 1) model, in contrast to the IF model, requires a single source of noise fed directly into the moving average, whose output is then passed to an integrator. The ARIMA model, unlike the IF model, is not bound by the constraint of using components that map directly onto the biophysics of a cell. Hence, noise filtered by a moving average filter can be added to the model directly, without considering how the cell actually implements such a filter using the complement of ion channels available to it. This explains why the biologically inspired model requires two types of noise (Fisch et al. 2012), whereas the ARIMA (0, 1, 1) model can emulate the output of both the IF model and the ISI series from the real pacemaking neurons in our study with a single noise source. The real neural pacemakers in all likelihood also require two sources of noise, because they are clearly subject to the constraint of using only the available biological components.
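The ARIMA (0, 1, 1) generating process is simple enough to write directly. The following sketch (the coefficient θ and noise amplitude are illustrative, not values fitted to our data) feeds a single white-noise source through an MA(1) filter with a negative coefficient and then integrates the result:

```python
import numpy as np

def arima_011_isi(n=2000, theta=-0.6, sigma=0.01, isi0=1.0, seed=1):
    """Generate an ISI series from an ARIMA(0, 1, 1) process:
    ISI_t - ISI_{t-1} = e_t + theta * e_{t-1}, with theta < 0.
    A single white-noise source e drives both the MA(1) filter and the
    integrator (cumulative sum). Parameter values are illustrative, and
    the process is not constrained to remain positive."""
    rng = np.random.default_rng(seed)
    e = sigma * rng.standard_normal(n + 1)
    d_isi = e[1:] + theta * e[:-1]       # MA(1): the differenced series
    return isi0 + np.cumsum(d_isi)       # I(1): integrate the steps
```

The lag-one autocorrelation of the differenced series is θ/(1 + θ²) ≈ −0.44 for θ = −0.6, while the integrated ISI series itself remains highly autocorrelated over many lags, matching the structure in Fig. 4.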

#### Implications for network models.

The Kuramoto model (Kuramoto 1984; Strogatz 2000) in *Eq. 7* is widely used to describe synchronization of coupled oscillators. The model assumes that the phase (θ_{i}) of each oscillator has a constant angular velocity (ω_{i}) that is jittered by a noise process (ξ_{i}) and is coupled to *N* other oscillators through the sine of their phase differences weighted by *K*; other formulations of the interaction term are often used (Winfree 1967; Izhikevich 2007):
dθ_{i}/dt = ω_{i} + ξ_{i}(t) + (*K*/*N*) Σ_{j=1}^{*N*} sin(θ_{j} − θ_{i}) (7)
The noise process (ξ_{i}) represents noise that causes fast jitter in the spike times. To incorporate slow wander into the Kuramoto formalism, a second noise process (η_{i}), integrated with time constant τ, is required for each oscillator, as in *Eq. 8*.
dθ_{i}/dt = ω_{i} + ξ_{i}(t) + η_{i}(t) + (*K*/*N*) Σ_{j=1}^{*N*} sin(θ_{j} − θ_{i}), τ dη_{i}/dt = −η_{i} + ζ_{i}(t) (8)
Controlling the amplitude of this noise process may be important in pathological synchrony (Wilson 2013), since a decrease in the magnitude of the noise process would favor synchronization. Although our ARMA analysis did not detect a mean-reverting term for the frequency, it is nonetheless possible that mean reversion operates on time scales greater than the ones that we studied.
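A minimal numerical sketch in the spirit of *Eq. 8* illustrates this point. The Ornstein-Uhlenbeck form of the slow process and all parameter values here are assumptions for illustration, not fitted quantities:

```python
import numpy as np

def kuramoto_order(N=50, K=2.0, D_fast=0.1, sigma_slow=0.0,
                   tau=50.0, T=200.0, dt=0.01, seed=2):
    """Euler-Maruyama sketch of noisy Kuramoto oscillators with fast
    phase jitter (xi) and, optionally, a slow Ornstein-Uhlenbeck
    frequency wander (eta, time constant tau). Returns the
    time-averaged order parameter r over the second half of the run.
    Parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    omega = rng.normal(0.0, 0.1, N)       # intrinsic frequencies
    eta = np.zeros(N)                     # slow wander in frequency
    steps = int(T / dt)
    rs = []
    for step in range(steps):
        z = np.exp(1j * theta).mean()     # complex order parameter
        # mean-field identity: K|z|sin(psi - theta_i) = (K/N) sum_j sin(theta_j - theta_i)
        coupling = K * np.abs(z) * np.sin(np.angle(z) - theta)
        theta += dt * (omega + eta + coupling) \
            + np.sqrt(dt) * D_fast * rng.standard_normal(N)
        eta += dt * (-eta / tau) + np.sqrt(dt) * sigma_slow * rng.standard_normal(N)
        if step >= steps // 2:
            rs.append(np.abs(np.exp(1j * theta).mean()))
    return float(np.mean(rs))
```

With sigma_slow = 0 the population synchronizes strongly; a modest sigma_slow spreads the effective frequencies (stationary SD of η is sigma_slow·√(τ/2)) and degrades synchrony, consistent with wander limiting synchronized episodes.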

#### Integrators vs. resonators.

Others (Schwalger and Lindner 2013) have shown that adapting neurons that behave like integrators (their spike times are always advanced by a small excitation) possess negative correlations between adjacent ISIs. The spike times of the neurons in our study are always advanced by a small excitation (Preyer and Butera 2005), but these neurons exhibit slow trends that presumably obliterate the negative correlation that would otherwise be observed between adjacent ISIs. However, the negative correlation in ΔISI, which has not previously been noted, is manifested instead. Some neurons do not behave like integrators but instead are resonators (Izhikevich 2007), whose spike times can be delayed by small excitations and which cannot fire arbitrarily slowly. It is possible that neurons that are resonators rather than integrators will synchronize more robustly, despite the presence of fluctuations in their periods (Tikidji-Hamburyan et al. 2015).

#### Conclusions.

We described the correlational structure of the sequence of ISIs in a population of spontaneously pacemaking invertebrate neurons. We found a succinct ARIMA representation for how the ISI of endogenously spiking neurons varies as a function of time. Although the identity of the channel populations mediating fast jitter and slow wander has not yet been experimentally verified, our simulations suggest that a single stochastic channel population mediating slowly adapting currents can account for the correlational structure of our ISI process. Together, these data support the notion that oscillatory neurons process information differently than noise-driven neurons, which are frequently modeled as Poisson point processes under the assumption that sequential ISIs are independent. We have begun to lay the groundwork for point process modeling of nonstationary oscillatory neurons in which sequential ISIs are not independent. Oscillatory neurons are ubiquitous in the basal ganglia and include dopaminergic neurons in the mesencephalon, tonically active cholinergic neurons in the striatum, as well as the principal neurons in the output nuclei, including the globus pallidus and the substantia nigra pars reticulata (Surmeier et al. 2005). Pacemakers have also been identified in the inferior olive (Lang 2001), the pons (Williams and Marshall 1987), and the neonatal spinal cord (Li et al. 2015). It has been suggested that hippocampal pyramidal neurons may act as autonomous pacemakers under certain conditions in vivo (Yamada-Hanff and Bean 2013). The mechanism we present here for nonstationary ISIs may serve to prevent excessive and potentially pathological synchrony, both in terms of the fraction of the population that is synchronized and the duration of synchronized episodes.
Moreover, these results address an important problem in neuroscience, because they allow the detection of adaptation currents and mechanisms for integration of noisy fluctuations using only spike times, which can be obtained from baseline extracellular recordings.

## GRANTS

This work was funded by National Institute of Neurological Disorders and Stroke Grant R01 NS-054281 under the Collaborative Research in Computational Neuroscience Program and by the National Institute of Biomedical Imaging and Bioengineering Grant R01 EB-016407.

## DISCLOSURES

No conflicts of interest, financial or otherwise, are declared by the author(s).

## AUTHOR CONTRIBUTIONS

S.E.N. performed experiments; S.E.N. analyzed data; S.E.N., R.J.B., and C.C.C. interpreted results of experiments; S.E.N. and C.C.C. prepared figures; S.E.N. and C.C.C. drafted manuscript; S.E.N., R.J.B., and C.C.C. edited and revised manuscript; S.E.N., R.J.B., and C.C.C. approved final version of manuscript; R.J.B. and C.C.C. conception and design of research.

- Copyright © 2016 the American Physiological Society