## Abstract

In this study, we developed a general description of parameter combinations for which specified characteristics of neuronal or network activity are constant. Our approach is based on the implicit function theorem and is applicable to activity characteristics that depend smoothly on parameters. Such smoothness is often intrinsic to neuronal systems when they are in stable functional states. The conclusions about how parameters compensate each other, developed in this study, can thus be used even without regard to the specific mathematical model describing a particular neuron or neuronal network. We showed that near a generic point in the parameter space there are infinitely many other points, or parameter combinations, for which specified characteristics of activity are the same as at the original point. These parameter combinations form a smooth manifold. This manifold can be extended as long as the gradients of the characteristics are defined and independent. All possible variations of parameters compensating each other are simply all possible charts of the same manifold. The number of compensating parameters (but not the parameters themselves) is fixed and equal to the number of independent characteristics maintained. The algorithm that we developed shows how to find compensatory functional dependencies between parameters numerically. Our method can be used in the analysis of homeostatic regulation, neuronal database searches, model tuning, and other applications.

## INTRODUCTION

Numerous studies, both experimental and theoretical, have revealed myriad factors underlying neuronal and network activity. These factors range from neuronal morphology and the distribution of ion channels to structural properties of networks such as the average number of synaptic connections per neuron. Experiments show that despite the variability in these factors and the complexity of their interactions, some characteristics of neuronal activity, in particular those related to appropriate function, are maintained (Buzsaki et al. 2002; MacLean et al. 2003; Marder and Goaillard 2006; Swensen and Bean 2005). Such maintained functional characteristics might include the period of a network producing rhythmic output, the time interval between stimulus onset and the response of a neuron, and many others.

Various factors compensate each other to maintain function. An example of such compensation has been discovered recently in neurons of the lobster stomatogastric ganglion (MacLean et al. 2003; Schulz et al. 2006). In neurons with similar firing properties, there was a linear correlation between the conductances of transient potassium (*I*_{A}) and hyperpolarization-activated inward (*I*_{h}) currents. This correlation accounted for the commonly observed variability of these two currents across preparations. Importantly, the correlation appears to reflect a homeostatic mechanism that regulates *I*_{A} and *I*_{h}. Experimental (MacLean et al. 2003; Swensen and Bean 2005) and modeling (Achard and De Schutter 2006; Goldman et al. 2001; MacLean et al. 2005; Prinz et al. 2004) studies in vertebrates and invertebrates show that multiple combinations of conductances can underlie similar neuronal behavior, suggesting that multiple complex compensatory mechanisms are possible. A traditional approach for finding compensatory mechanisms is based on sensitivity analysis (MacLean et al. 2005; Nygren et al. 1998; Olsen et al. 1995; Paulsen et al. 1982; Weaver et al. 2007). In this analysis, the sensitivity of a specified characteristic to a certain parameter is the ratio of the normalized change in the characteristic to the underlying normalized change in the parameter. Sensitivities can be used to determine approximately how much one parameter must change from its original value to compensate for a change in another parameter. That multiple compensatory covariations exist is well documented (Marder and Goaillard 2006; Rich and Wenner 2007; Ward 2006); what is lacking are systematic methods for determining and understanding these covariations.

In this study, we introduce a general method for describing all putative compensatory covariations of neuronal and network parameters given specified characteristics of neuronal activity to be maintained. The method rigorously specifies exact covariations and linear approximations to them. It is based on a fundamental and powerful mathematical tool, the implicit function theorem. We show that compensatory covariations are possible charts of manifolds composed of parameter sets for which specified characteristics are constant. In particular, we show that to maintain *N* independent characteristics of neuronal activity affected by a change of one or more parameters, at least *N* other parameters should change appropriately.

To illustrate the method, we apply it to a model of the smallest functional network of the leech heartbeat central pattern generator (CPG) consisting of two mutually inhibitory neurons (Cymbalyuk et al. 2002; Hill et al. 2001; Olypher et al. 2006; Sorensen et al. 2004) (Fig. 1*A*). In particular, we show how certain parameters must covary to maintain the period of the network activity and the average spike frequency in bursts. Finally, we discuss applications of our method to the analysis of homeostatic regulation, model tuning, and other problems.

Parts of this paper have been published previously in abstract form (Olypher and Calabrese 2006, 2007).

## METHODS

We used a modified version of a model by Hill et al. (2001) of a half-center oscillator from the leech heartbeat CPG, consisting of two identical neurons that mutually inhibit one another. The model, like the living network, produces a periodic pattern of activity where the two neurons burst alternately (Fig. 2). The neurons were modeled according to the Hodgkin-Huxley formalism with five inward and three outward voltage-dependent currents. The inhibitory interactions of the model neurons, as in the living system, have a graded and a spike-mediated component (Angstadt and Calabrese 1991; Ivanov and Calabrese 2000, 2003, 2006a,b; Lu et al. 1997; Olsen and Calabrese 1996). In this study, by the synaptic strength we mean the maximal conductance, *ḡ*_{SynS}, of the spike-mediated current. In the examples considered, synaptic strength was always one of the parameters varied. The other parameters were the maximal conductance, *ḡ*_{h}, of a hyperpolarization-activated current, *I*_{h}, and a scaling factor, η, for the inactivation time constant, τ_{h,CaS}, of a slowly inactivating low-threshold calcium current, *I*_{CaS}. Here *h*_{∞,CaS}(*V*) = 1/(1 + *e*^{360(V+0.055)}), τ_{h,CaS}(*V*) = 0.2 + 5.25/(1 + *e*^{−250(V+0.043)}), *V* was the membrane potential (in V), and *t* was time (in s) (Hill et al. 2001). By definition, η greater (smaller) than one slows down (speeds up) the inactivation. When a parameter of the neuronal model was varied, it was varied in both neurons simultaneously to comply with the simplifying assumption that the two neurons in the half-center oscillator model are identical. However, our method can be similarly applied to unilateral variations of parameters.

In our study, we used model neurons with endogenous bursting by setting the leak current's maximal conductance *ḡ*_{L} = 9.9 nS and the reversal potential *E*_{L} = −63.5 mV instead of *ḡ*_{L} = 8 nS, *E*_{L} = −60 mV as in the Hill et al. (2001) model (cf. Cymbalyuk et al. 2002). Other parameters of the Hill et al. (2001) model were not changed. The model and the set of chosen parameters are called canonical here.

Another change in the Hill et al. (2001) model was aimed at making it smooth with respect to the variables and parameters. By smoothness here and in the following text, we mean at least first-order differentiability. In other words, the right-hand sides of the model system must change continuously with changes of the variables and parameters, and the velocity of these changes must also be continuous. We consider this requirement quite natural and in keeping with physiological intuition. The original model has only one place that required a change to meet this requirement: the expression for the total calcium current *I*_{Ca} = max(0, −*I*_{CaF} − *I*_{CaS} − *A*), where *I*_{CaF} and *I*_{CaS} are the rapidly inactivating and slowly inactivating low-threshold calcium currents, respectively, and *A* is a dynamic threshold (see details in Hill et al. 2001). To make the system smooth, this original nonsmooth function for *I*_{Ca} was replaced here by a smooth sigmoid function. This substitution, as expected, had a negligible effect on the solutions of the system.
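The exact sigmoid substitute used in the model is not reproduced here; the Python sketch below only illustrates the idea with one common smooth approximation of max(0, *x*), namely *x*·σ(*x*/*k*). The functional form and the smoothing width *k* are assumptions for illustration, not the function used in the model.

```python
import math

def smooth_max0(x, k=1e-3):
    """Smooth approximation of max(0, x): x times a logistic sigmoid.
    As the (assumed) smoothing width k -> 0, this converges to max(0, x)."""
    z = x / k
    # Guard against overflow in exp for large |x|/k
    if z > 500:
        return x
    if z < -500:
        return 0.0
    return x / (1.0 + math.exp(-z))

# Away from the kink at 0, the smooth version matches max(0, x) closely
print(round(smooth_max0(1.0), 6))   # ~1.0
print(round(smooth_max0(-1.0), 6))  # ~0.0
```

Unlike max(0, *x*), this function is differentiable everywhere, which is the property the substitution is meant to restore.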

To quantify the system's activity for a particular set of parameters, the model was simulated for 110 s. Measured characteristics were the burst period, *T*, defined as in Hill et al. (2001) as the time between the middle spike of one burst and the middle spike of the next burst, and the average intraburst spike frequency, *F*, defined as the number of spikes in a burst divided by the period. The period was calculated with the middle spikes rather than the first or last spikes because it yielded the smallest coefficient of variation. These two characteristics were chosen because in the living system, the period of these oscillations sets the heartbeat period, and the intraburst spike frequency determines the level of inhibition both within the half-center oscillators and between the oscillator interneurons and their motor neuron targets (Hill et al. 2001). The first 30 s of the simulation were considered an interval sufficient for stabilizing the pattern and were discarded from analysis. Convergence to a stable regime was confirmed by the low variation of the period over the next 80 s of simulation. The initial conditions for all simulations were the same and taken from the stable regime corresponding to the canonical parameters. Simulations were performed with the Matlab (MathWorks, Natick, MA) solver ode15s with absolute and relative tolerances of 10^{−9} and 10^{−8}, respectively. Matlab code is available from http://calabreselx.biology.emory.edu/pub/HC.zip.
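The two measured characteristics follow directly from detected spike times. The Python sketch below implements the definitions above; the interspike-interval threshold used to delimit bursts is an assumption for illustration, not a value taken from the study.

```python
def split_into_bursts(spike_times, max_isi=0.3):
    """Group spike times (s) into bursts: a gap longer than max_isi
    (an assumed threshold) starts a new burst."""
    bursts = [[spike_times[0]]]
    for t0, t1 in zip(spike_times, spike_times[1:]):
        if t1 - t0 > max_isi:
            bursts.append([])
        bursts[-1].append(t1)
    return bursts

def period_and_frequency(bursts):
    """Burst period T: middle spike of one burst to middle spike of the
    next, averaged over consecutive burst pairs. Intraburst spike
    frequency F: mean number of spikes per burst divided by T."""
    middles = [b[len(b) // 2] for b in bursts]
    periods = [m1 - m0 for m0, m1 in zip(middles, middles[1:])]
    T = sum(periods) / len(periods)
    n_spikes = sum(len(b) for b in bursts) / len(bursts)
    return T, n_spikes / T

# Toy spike train: 3 bursts of 4 spikes, one burst starting every 2 s
spikes = [b * 2.0 + i * 0.1 for b in range(3) for i in range(4)]
bursts = split_into_bursts(spikes)
T, F = period_and_frequency(bursts)
print(T, F)  # T = 2.0 s, F = 2.0 spikes/s
```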

Partial derivatives of activity characteristics with respect to parameters were first assessed with standard numeric formulas: by varying a parameter and finding the ratio of the characteristic's change over the parameter's change, repeating this for doubled variations of the parameters, and using Richardson extrapolation to improve the accuracy of the result (Press et al. 1993). We also estimated the partial derivatives by calculating the characteristics for the original values of the parameters and ±8, ±6, ±4, and ±2% deviations from the original values and then applying a linear fit to these values of the characteristics. Both methods gave similar estimates of the partial derivatives. A complication arose when considering the partial derivative of the average intraburst spike frequency, *F*, with respect to *ḡ*_{SynS}. The changes of *F* for ≤8% changes of *ḡ*_{SynS} were too small to obtain a reliable linear fit. Thus we used a linear fit of the values of *F* corresponding to ±20 and 0% changes of *ḡ*_{SynS} to estimate the derivative and obtained a number with an absolute value less than 0.005 (see following text).
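The first of these finite-difference schemes can be sketched as follows: central differences at steps *h* and 2*h* combined by one Richardson extrapolation step. The sketch is in Python (the study itself used Matlab), and the toy cubic characteristic stands in for a simulated burst period.

```python
def richardson_derivative(C, p, h):
    """Estimate dC/dp: central differences at steps h and 2h,
    combined by one Richardson extrapolation step (error O(h^4))."""
    d_h = (C(p + h) - C(p - h)) / (2 * h)
    d_2h = (C(p + 2 * h) - C(p - 2 * h)) / (4 * h)
    return (4 * d_h - d_2h) / 3

# Toy characteristic standing in for a simulated one; dC/dp at p = 2 is 12
C = lambda p: p ** 3
print(richardson_derivative(C, 2.0, 0.1))
```

For a cubic, the single extrapolation step cancels the leading error term entirely, so the estimate is essentially exact; for simulated characteristics the benefit is a higher-order reduction of truncation error.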

### Implicit function theorem

Our main analytical tool was the implicit function theorem (Rudin 1976). The theorem states that under certain conditions, equalities involving several variables imply functional dependencies between the variables (appendix a). For example, if the conditions of the theorem are met, an equality *C*(*p*_{1}, *p*_{2}) = 0, true for particular values *p*_{1} = *p*_{1}*, *p*_{2} = *p*_{2}*, implies that near the point (*p*_{1}*, *p*_{2}*) there exists a unique function *p*_{2} = *f*(*p*_{1}) such that *p*_{2}* = *f*(*p*_{1}*), *C*(*p*_{1}, *f*(*p*_{1})) = 0, and *f*′(*p*_{1}*) = −[∂*C*(*p*_{1}*, *p*_{2}*)/∂*p*_{2}]^{−1}·∂*C*(*p*_{1}*, *p*_{2}*)/∂*p*_{1}. The first condition of the theorem is the differentiability of characteristics with respect to parameters, which in the example considered means that the partial derivatives ∂*C*(*p*_{1}*, *p*_{2}*)/∂*p*_{1} and ∂*C*(*p*_{1}*, *p*_{2}*)/∂*p*_{2} must exist. The second condition of the theorem is the nondegenerate dependency of characteristics on the parameters that are supposed to be functions of the other parameters. In the current example, this condition is fulfilled if ∂*C*(*p*_{1}*, *p*_{2}*)/∂*p*_{2} ≠ 0.
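As a concrete check of the two-parameter case, take the toy characteristic *C*(*p*_{1}, *p*_{2}) = *p*_{1}·*p*_{2} − 1 (an assumption for illustration), which is zero at (1, 1). The explicit solution *p*_{2} = *f*(*p*_{1}) = 1/*p*_{1} has slope −1 at *p*_{1} = 1, which the slope formula above reproduces:

```python
# Toy characteristic: C(p1, p2) = p1*p2 - 1, zero at (p1*, p2*) = (1, 1)
dC_dp1 = 1.0  # dC/dp1 at (1, 1) is p2* = 1
dC_dp2 = 1.0  # dC/dp2 at (1, 1) is p1* = 1 (nonzero: theorem applies)

# Slope of the implicit function predicted by the theorem
slope = -dC_dp1 / dC_dp2

# The explicit solution p2 = f(p1) = 1/p1 has slope -1/p1**2 = -1 at p1 = 1
print(slope)  # -1.0
```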

### Algorithm for specifying the variations of model parameters that maintain activity characteristics

The following algorithm is a general one. It can be applied to an arbitrary neuronal or network model to determine coordinated changes of *n* parameters that do not affect *m* characteristics. The main assumption of the algorithm is a smooth dependence of characteristics on parameters.

First, select parameters *p*_{1}, *p*_{2}, …, *p*_{n} of the model to be varied near some point *p** = (*p*_{1}*, *p*_{2}*, …, *p*_{n}*) in the parameter space.

Second, select a set of activity characteristics *C*(*p*) = (*C*_{1}(*p*), *C*_{2}(*p*), …, *C*_{m}(*p*)) to be maintained. By subtracting target values from *C*_{i}, we can assume that *C*(*p**) = 0, and the goal is to find *p* near *p** such that *C*(*p*) = 0. Let partial derivatives ∂*C*_{i}(*p**)/∂*p*_{j}, *i* = 1, …, *m*, *j* = 1, …, *n*, exist.

Third, assume that for *y* = (*p*_{1}, …, *p*_{m}), the determinant of the *m* × *m* matrix *C*_{y}′(*p**) = [∂*C*_{i}(*p**)/∂*p*_{j}], *i*, *j* = 1, …, *m*, is nonzero: det[*C*_{y}′(*p**)] ≠ 0.

Fourth, then, according to the implicit function theorem, near *p** there exists a unique function *y* = *f*(*x*), where *x* = (*p*_{m+1}, …, *p*_{n}), such that *y** = *f*(*x**), *C*(*f*(*x*), *x*) = 0, and in linear approximation, Δ*y* = −[*C*_{y}′(*p**)]^{−1}·*C*_{x}′(*p**)·Δ*x*, where Δ*y* = (*p*_{1} − *p*_{1}*, …, *p*_{m} − *p*_{m}*), Δ*x* = (*p*_{m+1} − *p*_{m+1}*, …, *p*_{n} − *p*_{n}*), and *C*_{x}′(*p**) = [∂*C*_{i}(*p**)/∂*p*_{j}], *i* = 1, …, *m*, *j* = *m* + 1, …, *n*.

Fifth, perform simulations to estimate the range of (*p*_{m+1}, …, *p*_{n}) variations over which characteristics *C*_{i} are well maintained according to the linear approximation. Refine that approximation to get exact compensatory covariations between parameters (*p*_{1}, …, *p*_{m}) and (*p*_{m+1}, …, *p*_{n}). Repeat the algorithm at another point to extend the functional dependency between *y* and *x* into a further region of the parameter space.

Definitions: we shall call *p*_{1}, …, *p*_{m} *compensating* parameters, *p*_{m+1}, …, *p*_{n} *compensated* parameters, and *p*_{i} = *f*_{i}(*p*_{m+1}, …, *p*_{n}), 1 ≤ *i* ≤ *m*, *compensatory* functions.

### Notes on the algorithm

The calculation of the gradients of the selected characteristics is a key computational step. This step is in common with multiple computational procedures, such as for example the steepest descent method. The difference here is that we are not interested in the direction determined by the gradient, but in the direction, or directions in a multidimensional space, orthogonal to the gradient.

The condition in *step 3* of the algorithm means that there is no characteristic for which the gradient in *p** is a linear combination of the gradients of other characteristics. This condition does not hold if there is a linear dependence among characteristics, in the simplest case when *C*_{j}(*p*) = *k*·*C*_{l}(*p*) for some 1 ≤ *j*, *l* ≤ *m* and *k* ≠ 0. Only independent characteristics can be used for the analysis. Another case where the condition does not hold is when there is a pair of linearly correlated parameters, e.g., *p*_{1} = α·*p*_{2} for some α ≠ 0. Only one parameter from such a pair must be kept in the set of compensating parameters.

The condition in *step 3* may not be satisfied at critical (bifurcational) points of the parameter space. Indeed at such points, a solution to the system, for which the dynamics (characteristics) are specified, either disappears or changes abruptly. These points limit areas where functional dependencies between parameters that maintain specified characteristics can be defined with the help of the implicit function theorem. For the period of the activity, a corresponding result is well known (appendix b).

*Step 5* does not specify a method for refining the isomanifold. In this study, we used standard Matlab routines. To find the isomanifold on which the period *T* of the model is equal to the target value *T**, we used the function *fzero*. *fzero* finds a zero of a function, in our case *T* − *T**, by a combination of bisection, secant, and inverse quadratic interpolation methods. To find the isomanifold on which the period *T* of the model is equal to the target value *T** and the spike frequency *F* is equal to the target value *F**, we used the function *fminsearch*. *fminsearch* minimizes a function, in our case (*T* − *T**)^{2} + (*F* − *F**)^{2}, by the simplex search method.
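The refinement itself is ordinary root finding. Where we used Matlab's *fzero*, the Python sketch below does the analogous job with plain bisection on a toy, monotonically decreasing period function; the toy function, its coefficients, and the bracketing interval are assumptions for illustration.

```python
def bisect_zero(g, lo, hi, tol=1e-10):
    """Find a zero of g on [lo, hi] by bisection (g must change sign)."""
    assert g(lo) * g(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy period function standing in for a simulation: T decreases with g_h
T = lambda g_h: 10.0 - 0.63 * g_h
T_target = 7.91

# Refine: find the g_h at which T(g_h) equals the target value
g_h = bisect_zero(lambda g: T(g) - T_target, 2.0, 5.0)
print(round(g_h, 4))  # 3.3175
```

In practice each evaluation of *T* − *T** is a full simulation, so the linear approximation from *step 4* matters: it supplies a starting bracket close to the root and keeps the number of simulations small.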

The implicit function theorem provides a clear geometric description of the set of parameter combinations for which specified characteristics of activity are constant. The theorem shows that these sets form smooth, (*n* − *m*)-dimensional manifolds in the parameter space. These manifolds are intersections of *m* (*n* − 1)-dimensional manifolds, each of which is a level set of a particular characteristic. Because the latter manifolds can be called isosurfaces, we will refer to their intersection as an *isomanifold*. All possible compensatory covariations of parameters, with some *m* parameters being functions of the other *n* − *m* parameters, are possible charts (Arnold 1978) of this isomanifold. These charts are well defined according to the implicit function theorem.

## RESULTS

### Maintaining the period of the network in the face of parameter variation

To illustrate our method for determining compensations of functional characteristics through parameter variation, we consider examples from our own work. In a leech half-center oscillator from the heartbeat CPG, the strength of the mutually inhibitory synapses varies over a more than twofold range, yet cycle period, spike frequency, and other functionally important output characteristics can remain very similar (Tobin and Calabrese 2005) (Fig. 1). Similar variations of the synaptic strength caused significant changes (10%) in the period of our canonical model of this half-center oscillator (Fig. 2). We later consider another functionally important characteristic: the average spike frequency in a burst. Spike frequency determines the accumulated impact of the spike-mediated inhibition in the half-center oscillator (Hill et al. 2001). In the model, corresponding changes in synaptic strength also changed the spike frequency but less than the period and nonmonotonically (Fig. 2*A*, *right*).

What parameters of the model can co-vary with *ḡ*_{SynS} in such a way that the period will remain constant, as in the experiments of Tobin and Calabrese (2005)? Multiple parameters, in addition to *ḡ*_{SynS}, strongly affect the period in the model (appendix c, Table 1). Among these parameters are the maximal conductances of the low-threshold slowly inactivating calcium current, persistent potassium current and other currents (Cymbalyuk et al. 2002; Hill et al. 2001; Olypher et al. 2006; Sorensen et al. 2004). According to the general picture described in methods, variations of all these parameters compensating variations of *ḡ*_{SynS} are possible parameterizations of the manifold in the parameter space where the period is constant. For the current analysis, we chose the maximal conductance, *ḡ*_{h}, of the hyperpolarization-activated current; any other parameter can be analyzed similarly. Next, we just followed the steps of the general algorithm.

First, selected parameters were *p* = (*p*_{1}, *p*_{2}) where *p*_{1} = *ḡ*_{h}, *p*_{2} = *ḡ*_{SynS}; *p*_{1}* = 4 nS, *p*_{2}* = 150 nS.

Second, a specified characteristic was *C* = *T*(*p*) − *T*(*p**), where *T* was the cycle period, *T*(*p**) = 7.91 s, and *C*(*p**) = 0. A smooth dependence of *T*, and hence *C*, on *ḡ*_{SynS} and *ḡ*_{h} follows from the theorem described in methods and illustrated by Fig. 3*A*.

Third, let *y* = *ḡ*_{h}, *x* = *ḡ*_{SynS}. Then det[*C*_{y}′(*p**)] = ∂*T*(*p**)/∂*ḡ*_{h} = −0.63 s/nS ≠ 0 (Fig. 3*A*).

Fourth, according to the implicit function theorem, near *ḡ*_{SynS}* = 150 nS and *ḡ*_{h}* = 4 nS there exists a unique function *ḡ*_{h} = *f*(*ḡ*_{SynS}) such that for each pair (*f*(*ḡ*_{SynS}), *ḡ*_{SynS}), the cycle period *T* is the same as at the point *ḡ*_{SynS} = *ḡ*_{SynS}*, *ḡ*_{h} = *ḡ*_{h}*. The function *f* is smooth and has a linear approximation (the tangent)

*ḡ*_{h} = *ḡ*_{h}* + χ·(*ḡ*_{SynS} − *ḡ*_{SynS}*) (1)

where χ = −[∂*T*(*p**)/∂*ḡ*_{h}]^{−1}·[∂*T*(*p**)/∂*ḡ*_{SynS}] = −0.0077/(−0.63) = 0.012.
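Evaluating the tangent is straightforward arithmetic; the Python sketch below uses the values quoted above (χ = 0.012, *ḡ*_{h}* = 4 nS, *ḡ*_{SynS}* = 150 nS) to predict the compensating *ḡ*_{h} for a perturbed *ḡ*_{SynS}. The example perturbation of 50 nS is an assumption for illustration.

```python
# Values quoted in the text: chi = 0.012 nS/nS, canonical point (150 nS, 4 nS)
chi = 0.012
g_syns_star, g_h_star = 150.0, 4.0  # nS

def g_h_tangent(g_syns):
    """Linear approximation (the tangent) to the compensatory function f."""
    return g_h_star + chi * (g_syns - g_syns_star)

# Predicted compensating g_h when g_SynS is raised by a third, to 200 nS
print(round(g_h_tangent(200.0), 3))  # 4.6 nS
```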

Fifth, we used linear approximation (*Eq. 1*) to calculate the exact compensatory functional dependency *f* between *ḡ*_{h} and *ḡ*_{SynS} (Fig. 3*B*) near *ḡ*_{SynS}* = 150 nS, and *ḡ*_{h}* = 4 nS. For larger values of *ḡ*_{SynS}, we used the parabolic approximation of the function *ḡ*_{h} = *f*(*ḡ*_{SynS}) based on its values near *ḡ*_{SynS}* = 150 nS, and *ḡ*_{h}* = 4 nS. Figure 4*A* shows that the linear approximation (*Eq. 1*) provided compensation of the period within 2% over a wide range of varied *ḡ*_{SynS}, which is a consequence of the fair approximation of *f* by its tangent over the same range. When *ḡ*_{h} was determined by the functional dependency shown in Fig. 3*B*, the compensation of the period was nearly perfect (Fig. 4*B1*). Both approximate and exact compensatory covariations of *ḡ*_{h} and *ḡ*_{SynS} affected the spike frequency much more than corresponding variation of *ḡ*_{SynS} alone (Fig. 4, *A2* and *B2)*.

Geometrically, all sets of parameters for which the period of the system is constant form an (*n* − 1)-dimensional manifold in the *n*-dimensional space of all model parameters. In this case, we considered the section of the manifold determined by the condition that all the parameters except *ḡ*_{SynS} and *ḡ*_{h} have their canonical values. Near the point (*ḡ*_{h}*, *ḡ*_{SynS}*), this section is a smooth one-dimensional manifold, which is simply a smooth curve (cf. Fig. 3*B*). Obviously, we could parameterize this curve not by *ḡ*_{SynS} but by *ḡ*_{h} and interpret the resulting function, inverse to *f*, as one determining the variations of *ḡ*_{SynS} that compensate variations of *ḡ*_{h}.

### Maintaining both period and spike frequency of the network in the face of parameter variation

According to the theory developed, to maintain both the period, *T*, and the spike frequency, *F*, when *ḡ*_{SynS} is varied, compensatory covariations of at least two parameters are necessary and sufficient. As one of the many potential pairs of parameters, we chose *ḡ*_{h} and the scaling factor, η, of the inactivation time constant, τ_{h,CaS}, of the slowly inactivating low-threshold calcium current (see methods). As we have shown recently, η strongly correlates with the burst duration of the neuron (Olypher et al. 2006). In this case, the application of the general algorithm was as follows.

First, selected parameters were *p* = (*p*_{1}, *p*_{2}, *p*_{3}) with *p*_{1} = *ḡ*_{h}, *p*_{2} = η, *p*_{3} = *ḡ*_{SynS}; *p** = (*p*_{1}*, *p*_{2}*, *p*_{3}*), *p*_{1}* = 4 nS, *p*_{2}* = 1, *p*_{3}* = 150 nS. Note that *p*_{2} is dimensionless, whereas *p*_{1} and *p*_{3} are in nanosiemens. It is easy to check that the algorithm is not sensitive to differences in units among parameters and/or characteristics.

Second, specified characteristics were *C*_{1} = *T*(*p*) − *T*(*p**), where *T*(*p**) = 7.91 s, and *C*_{2} = *F*(*p*) − *F*(*p**), where *F*(*p**) = 8.82 Hz. As earlier, a smooth dependence of *T*, and hence *C*_{1}, on *ḡ*_{SynS}, *ḡ*_{h}, and η follows from the general theory. With regard to the frequency, we approximated it by a smooth function (Fig. 5*A2*).

Third, let *y* = (*y*_{1}, *y*_{2}), *y*_{1} = *ḡ*_{h}, *y*_{2} = η, *x* = *ḡ*_{SynS}. Then det[*C*_{y}′(*p**)] ≠ 0, where *C*_{y}′(*p**) is the 2 × 2 matrix of the partial derivatives of *T* and *F* with respect to *ḡ*_{h} and η at *p**.

Fourth, according to the implicit function theorem, near *p** there exists a unique (vector) function *ḡ*_{h} = φ(*ḡ*_{SynS}), η = ψ(*ḡ*_{SynS}) such that for each triple [φ(*ḡ*_{SynS}), ψ(*ḡ*_{SynS}), *ḡ*_{SynS}], the cycle period *T* and the spike frequency *F* are the same as at *p**. The functions φ and ψ are smooth. Their linear approximations follow from the system *C*_{y}′(*p**)·Δ*y* + *C*_{x}′(*p**)·Δ*x* = 0, from where

Δ*ḡ*_{h} = 0.00049·Δ*ḡ*_{SynS}, Δη = −0.00080·Δ*ḡ*_{SynS} (2)

Fifth, Fig. 5*A* shows that the linear approximation (*Eq. 2*) provided a decent compensation for the period and spike frequency in the interval [0.5 *ḡ*_{SynS}*, 1.5 *ḡ*_{SynS}*]. There were no difficulties in refining this approximation for the values of *ḡ*_{SynS} close to 150 nS (Fig. 6). To calculate the points on the isomanifold for the values of *ḡ*_{SynS} >250 nS, we used linear approximations based on points already found on the manifold. Note that the compensating dependency *ḡ*_{h} = *ḡ*_{h}(*ḡ*_{SynS}) is not monotonic (Fig. 6*B*), unlike the case when only the period was maintained (Fig. 3*B*). This nonmonotonicity, to be considered in detail elsewhere, is in contrast to the monotonic and almost linear compensating dependency η = η(*ḡ*_{SynS}) (Fig. 6*C*). Figure 7 shows that the activity in the compensated model is very close to the activity of the canonical model.

For two characteristics, the period and the spike frequency, the isomanifold in the *n*-dimensional parameter space has dimension *n* − 2. We showed that the section of this isomanifold, determined by the condition that all parameters except *ḡ*_{SynS}, *ḡ*_{h}, and η had their canonical values, was a smooth curve. We parameterized this curve by *ḡ*_{SynS} and interpreted the resulting parameterization as variations of *ḡ*_{h} and η compensating variations of *ḡ*_{SynS}. *Equation 2* is a linear approximation to that parameterization. We could of course consider *ḡ*_{h} as the free parameter and parameterize the same smooth curve with *ḡ*_{h}. That would give us variations of *ḡ*_{SynS} and η compensating variations of *ḡ*_{h}.

To explore the individual contributions of *ḡ*_{h} and η to the approximate, linear compensation, we first made simulations with *ḡ*_{h} varied according to Δ*ḡ*_{h} = 0.00049·Δ*ḡ*_{SynS} and fixed η = 1. Figure 5*B* shows that *ḡ*_{h} compensated both the period and the spike frequency. Indeed, from the partial derivatives ∂*T*(*p**)/∂*ḡ*_{h} = −0.63 s/nS and ∂*F*(*p**)/∂*ḡ*_{h}, it followed that a 1% increase of *ḡ*_{h} should lead to a 0.32% decrease and a 0.39% increase of *T* and *F*, respectively. Next we made simulations with fixed *ḡ*_{h} = 4 nS and η varying according to Δη = −0.00080·Δ*ḡ*_{SynS} (Fig. 5*C*). The corresponding partial derivatives implied a notably greater effect of a 1% increase of η on *T* (a 2.9% increase) than on *F* (a 0.49% increase). For small deviations of *ḡ*_{SynS} from its canonical value, the contributions of *ḡ*_{h} and η are approximately independent and proportional to changes in *ḡ*_{SynS}, showing that the compensation works in linear approximation.

## DISCUSSION

In this study, we developed a general description of parameter combinations for which specified characteristics of neuronal or network activity are constant. We showed that near a generic point in the parameter space there are infinitely many such combinations and that they form a smooth manifold, which we have called the isomanifold. This isomanifold can be extended as long as the gradients of the characteristics are defined and independent. All possible variations of parameters compensating each other are simply all possible charts of the same isomanifold. The number of compensating parameters (but not the parameters themselves) is fixed and equal to the codimension of the isomanifold, which is in turn equal to the number of independent characteristics maintained. The algorithm that we developed and described here shows how to find compensatory functional dependencies between parameters numerically.

Our approach requires a smooth dependency of activity characteristics on parameters. Such smoothness is often intrinsic to neuronal systems when they are in stable functional states. The conclusions about how parameters compensate each other, developed in this study, can thus be used even without regard to the specific mathematical model describing a particular neuron or neuronal network.

The application of the implicit function theorem to finding level sets of a characteristic of neuronal activity is not new. In particular, it is the basis of the parameter continuation technique implemented in dynamical systems software packages like XPP-AUTO (Ermentrout 2002) and CONTENT (Kuznetsov et al. 1996). The theorem, when its conditions are met, guarantees the existence and uniqueness of a continuation and provides a linear approximation to compensatory functions, which can then be refined. In practice, parameter continuation is mostly used for finding isocurves, i.e., level sets of a characteristic in a plane of two parameters. Here, we consider a general case with multiple characteristics and parameters, focus on the geometrical picture provided by the theorem, and most importantly, connect the theory with the general problem of neuronal variability.

### Predictions

Our general approach allows us to formulate several predictions related to the analysis of homeostatic regulation, model tuning, and data analysis. Specifically, if a particular homeostatic mechanism maintains *m* independent characteristics of neuronal activity, then our approach predicts that at least *m* parameters must be changed in response to a perturbation in one or more other parameters of the system. Correspondingly, if during model tuning one parameter is updated, then to return to target values of *m* characteristics of the model activity, at least *m* other parameters have to be readjusted.

Our study shows that correlations between pairs of parameters in a database of simulated models with similar behavior are generically rare but nevertheless possible. They are rare because a correlation between two parameters adds an extra constraint on the set of parameter combinations for which characteristics remain constant. Geometrically, this constraint means that an isomanifold on which characteristics are constant must belong to a hypersurface orthogonal to the hyperplane of the two parameters. On the other hand, we have demonstrated an almost perfect linear correlation between the two parameters η and *ḡ*_{SynS} in the half-center oscillator model explored here (Fig. 6*C*) under the condition that the burst period and spike frequency in the model remain constant with variations of these two parameters and *ḡ*_{h}.

Our theory also implies that a biological control mechanism that coregulates (linear correlation) different ion channels (conductance parameters), regardless of their functional relation to one another, may effectively limit the number of activity characteristics that can be maintained. Effectively the parameters become nonindependent in regulating activity. In neurons of the lobster stomatogastric ganglion, linear correlation between the conductances of *I*_{A} and *I*_{h} have been observed (MacLean et al. 2003; Schulz et al. 2006). Overexpression of *I*_{A} by mRNA injection leads to corresponding increases in *I*_{h} expression and relative constancy of cell activity (burst period and spike frequency). One is tempted to conclude from our analysis that such correlation means that these currents exclusively control burst activity. We would argue that rather than indicating that other parameters do not influence burst activity, it is more likely that the epigenetic mechanism was selected by evolution as more efficient than other potential compensatory mechanisms involving other parameters. At the same time such a mechanism can limit the system's flexibility. Perhaps such epigenetic homeostatic mechanisms are useful, however, for setting baseline activity. Modulation may break such linkages by affecting one current more than the other either by channel modification or by directly affecting gene expression, and thus alter this homeostatically set baseline activity (see, e.g., Khorkova and Golowasch 2007). Moreover, combinations of such epigenetic mechanisms that effect different correlations between the two parameters may restore flexibility (see, e.g., Brezina et al. 1996).

### Relation to previous work

Our approach extends sensitivity analysis (MacLean et al. 2005; Nygren et al. 1998; Olsen et al. 1995; Paulsen et al. 1982; Weaver et al. 2007), complements the “brute force” approach (Achard and De Schutter 2006; Goldman et al. 2001; MacLean et al. 2005; Prinz et al. 2004), and is “orthogonal” to bifurcational analysis (Izhikevich 2006). In sensitivity analysis, sensitivities determine approximately how much one parameter should change from its original value to compensate a deviation of another parameter from its original value. Sensitivities approximate partial derivatives of a characteristic at a point of interest in the parameter space. When this approximation is accurate, sensitivity analysis provides a linear approximation to the actual compensatory covariations of parameters. Traditionally, the maintenance of only one activity characteristic by compensatory covariations of a pair of parameters has been considered. Here we consider multiple characteristics and parameters and show how to reveal exact compensatory functional dependencies between parameters.
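To make the linear (sensitivity-based) picture concrete, the compensation of one parameter by another can be sketched in a few lines. The characteristic below is a toy smooth function of our own choosing, standing in for, e.g., a burst period as a function of two conductances; it is not a quantity from the model analyzed in this study.

```python
import numpy as np

def characteristic(p):
    """Toy smooth activity characteristic of two parameters
    (a stand-in for, e.g., burst period as a function of conductances)."""
    g1, g2 = p
    return g1 * np.exp(-g2) + g2 ** 2

def sensitivities(f, p, h=1e-5):
    """Central-difference estimates of the partial derivatives of f at p."""
    p = np.asarray(p, dtype=float)
    grad = np.empty_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        grad[i] = (f(p + e) - f(p - e)) / (2 * h)
    return grad

p0 = np.array([2.0, 1.0])
s = sensitivities(characteristic, p0)

# Linear compensation: a perturbation dg1 of the first parameter is offset
# by dg2 = -(dC/dg1)/(dC/dg2) * dg1 of the second, keeping the
# characteristic C constant to first order.
dg1 = 0.05
dg2 = -(s[0] / s[1]) * dg1

uncompensated = characteristic(p0 + np.array([dg1, 0.0]))
compensated = characteristic(p0 + np.array([dg1, dg2]))
```

The residual change of the compensated characteristic is second order in the perturbation, which is why, as discussed above, the linear approximation is only the starting point for revealing the exact compensatory dependencies.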

The brute force approach was critical in revealing the fact that neuronal and network models with multiple sets of parameters can have similar behavior (Goldman et al. 2001; Prinz et al. 2004). In particular, the sampling of a large volume of parameter space proved that these parameter sets can be very different. Our approach is local: we start with one point in the parameter space and explore its vicinity. The payoff is that we get complete information on these parameter sets near the point. Our simulations, described here, show that the vicinity in which our analysis is applicable can be reasonably large (cf. Fig. 4*B1*). Furthermore, the isomanifolds, whose existence and uniqueness are established locally, can be extended out of that vicinity.

Sampling the parameter space of models has also been used for analyses of the structure of parameter sets for which a system's behavior is the same. For example, Goldman et al. (2001), MacLean et al. (2005), and Taylor et al. (2006) found that these sets often have a layered structure. This observation is in accord with the general picture developed here. Indeed, manifolds corresponding to close characteristic values should not intersect because an intersection would mean that for the same set of parameters, activity characteristics would be different. From the layered structure, Goldman et al. (2001) deduced that there are sensitive and insensitive directions in the parameter space: in a sensitive direction a characteristic changes the most, whereas in an insensitive direction it changes the least. Our analysis puts these observations into a rigorous mathematical framework. Namely, if a single smooth characteristic is maintained, then at every point of the parameter space it has a gradient, a vector whose components are the partial derivatives of the characteristic with respect to the parameters. By definition, the gradient points in the direction in which the characteristic changes the most, that is, the most sensitive direction. Directions orthogonal to the gradient are then insensitive directions.
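The sensitive/insensitive-direction picture can be verified numerically on any smooth characteristic. The two-parameter function below is an arbitrary illustration of ours: stepping along the gradient changes the characteristic to first order, whereas stepping orthogonally changes it only to second order.

```python
import numpy as np

def characteristic(p):
    # Arbitrary smooth characteristic of two parameters (our illustration).
    return np.sin(p[0]) + 0.5 * p[1] ** 2

def gradient(f, p, h=1e-5):
    """Central-difference gradient of f at the point p."""
    g = np.empty(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

p0 = np.array([0.8, 1.2])
g = gradient(characteristic, p0)
u = g / np.linalg.norm(g)        # most sensitive direction (along the gradient)
v = np.array([-u[1], u[0]])      # insensitive direction (orthogonal to it)

eps = 1e-2
d_sensitive = characteristic(p0 + eps * u) - characteristic(p0)
d_insensitive = characteristic(p0 + eps * v) - characteristic(p0)
```

For a small step `eps`, `d_insensitive` is orders of magnitude smaller than `d_sensitive`, in line with the layered structure observed in the sampled databases.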

Achard and De Schutter (2006), who simulated a multicompartmental model, showed that if there is a point in the parameter space with values of the specified characteristics close to their target values, then in the vicinity of this point there are other points where the model's characteristics are also close to those target values. From our analysis, it follows that near a generic point there must be a whole manifold of parameters for which the characteristics are the same.

Our approach is “orthogonal” to bifurcational analysis, which concerns small parameter changes that lead to significant changes in dynamics (activity). The focus of our approach is to seek variations of parameters that have no or small effect on model dynamics. Given that characteristics of the dynamics change drastically when a system experiences a transition to another state, linear approximations to compensatory covariations of parameters work better the further the parameters are from such critical values. Figure 8*A* illustrates such a transition from alternating bursting to spiking in the model considered here.

### Half-center oscillator from the leech heartbeat CPG

To illustrate the method, we applied it to a model of the smallest functional network of the leech heartbeat CPG consisting of two mutually inhibitory neurons. Motivated by the experimental finding by Tobin and Calabrese (2005) (Fig. 1), we varied the maximal conductance, *ḡ*_{SynS}, of the spike-mediated inhibition between the neurons and explored how other parameters might compensate variations of *ḡ*_{SynS}. Our choice of the maximal conductance of the hyperpolarization-activated current, *ḡ*_{h}, and the scaling factor, η, of the inactivation time constant of the slow low-threshold calcium current as potential compensating parameters was influenced by our recent studies of the role of these parameters in shaping the pattern of activity in this particular network (Olypher et al. 2006; Sorensen et al. 2004) and by the analysis of covariations of *ḡ*_{h} and *ḡ*_{SynS} in hybrid half-center oscillators formed from stomatogastric neurons (Grashow and Marder 2006).

Although extensive model simulations and physiological intuition strongly suggested that these three parameters might change in a coordinated way to maintain the burst period and spike frequency, it was not clear that such maintenance was possible in the model, given the complex nonlinear interactions between the parameters and characteristics. The theory presented in this study firmly establishes the existence and uniqueness of such compensatory functional dependencies.

We generated here a testable prediction on concordant changes of *ḡ*_{h}, *ḡ*_{SynS}, and η. This prediction can be verified in a hybrid system consisting of a living, pharmacologically isolated heart interneuron connected with artificial synapses to a model heart interneuron running in real time (Olypher et al. 2006; Sorensen et al. 2004). In this setting, the endogenous hyperpolarization-activated current and calcium current of the living interneuron can be pharmacologically blocked and substituted by corresponding artificial conductances with regulated *ḡ*_{h}, *ḡ*_{SynS}, and η.

### Applicability and perspectives of the method

In the method developed here, the characteristics and the parameters maintaining them can be of arbitrary nature with regard to units and origin. In particular, a close analysis of the algorithm shows that at every step only terms with similar units are summed. Both parameters and characteristics can originate in ordinary differential equations, partial differential and integral equations, mappings, multicompartmental models, or continuous medium network models. The main requirement is that the characteristics considered must depend smoothly on the parameters. However, even this requirement can be lifted if, instead of the characteristic itself, a smooth approximation of it is considered and this approximation depends on parameters smoothly. For example, in the interval [0.5*ḡ*_{SynS}, 1.5*ḡ*_{SynS}], the number of spikes in the noncompensated model explored here increased from 33 to 36, meaning that there were two points where the partial derivative of the spike frequency with respect to *ḡ*_{SynS} lost its continuity. However, in that interval the compensated spike frequency differed by <2% from the canonical value, indicating that the compensation worked.

The accuracy of the linear approximation of the isomanifold depends on the accuracy of the estimates of partial derivatives of characteristics with respect to parameters. Figure 8*B* illustrates a case where the period in the model for a particular choice of the leak current parameters is so variable that a reliable estimate of the period's partial derivative with respect to the reversal potential of the leak current, *E*_{L}, is impossible.

Accurate estimates of partial derivatives of characteristics do not guarantee that specific parameters can be considered as compensated or compensating. The critical condition is formulated in step 3 of the algorithm. If the corresponding determinant is equal to zero within the numerical accuracy employed, then the set of compensated and compensating parameters must be rearranged or reduced.
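A step-3-style determinant check can be carried out numerically before a set of compensating parameters is accepted. The two characteristics and three parameters below are hypothetical, chosen only to show one parameter pair failing the check and another passing it.

```python
import numpy as np

def jacobian(F, p, idx, h=1e-5):
    """Finite-difference Jacobian of the vector of characteristics F
    with respect to the parameters selected by the index list idx."""
    p = np.asarray(p, dtype=float)
    F0 = np.asarray(F(p))
    J = np.empty((F0.size, len(idx)))
    for j, i in enumerate(idx):
        q = p.copy()
        q[i] += h
        J[:, j] = (np.asarray(F(q)) - F0) / h
    return J

def usable_as_compensating(F, p, idx, tol=1e-6):
    """Check in the spirit of step 3 of the algorithm: the square Jacobian
    block for the chosen parameters must be invertible within the numerical
    accuracy employed; otherwise the parameter set must be rearranged."""
    J = jacobian(F, p, idx)
    return abs(np.linalg.det(J)) > tol

# Two hypothetical characteristics of three parameters. Both depend on the
# first two parameters only through the product p[0]*p[1], so that pair
# gives a singular Jacobian block; the pair (p[1], p[2]) does not.
F = lambda p: np.array([p[0] * p[1], p[0] * p[1] + p[2]])
p0 = [1.0, 2.0, 3.0]
```

Here `usable_as_compensating(F, p0, [0, 1])` is `False` and `usable_as_compensating(F, p0, [1, 2])` is `True`, so the first pair would have to be rearranged exactly as the text prescribes.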

A particular numeric method for finding an isomanifold must be chosen depending on the problem at hand. For example, for finding isosurfaces there are effective algorithms developed for data visualization, such as the Marching Cubes algorithm and its generalizations (Bhaniramka et al. 2004). In the case of multiple characteristics, the intersection of the isosurfaces must be considered. Specific and effective algorithms may exist for this case as well, but we are not aware of any.
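In the simplest case of one characteristic and two parameters, an isocurve can be traced by a generic predictor-corrector scheme: a tangent step (orthogonal to the gradient) followed by a Newton correction back to the level set. The sketch below uses a toy characteristic of our own; it is an illustration of the numerical idea, not the specific algorithm of this study.

```python
import numpy as np

def trace_isocurve(C, p0, target, step=0.05, n_steps=40, h=1e-6):
    """Predictor-corrector sketch: follow the level set C(p) = target
    of a single characteristic in a two-parameter plane, starting at p0."""
    def grad(p):
        g = np.empty(2)
        for i in range(2):
            e = np.zeros(2)
            e[i] = h
            g[i] = (C(p + e) - C(p - e)) / (2 * h)
        return g

    path = [np.array(p0, dtype=float)]
    for _ in range(n_steps):
        p = path[-1]
        g = grad(p)
        t = np.array([-g[1], g[0]])        # tangent to the isocurve
        t /= np.linalg.norm(t)
        q = p + step * t                   # predictor step along the tangent
        for _ in range(5):                 # Newton correction back to the level set
            gq = grad(q)
            q = q - (C(q) - target) * gq / gq.dot(gq)
        path.append(q)
    return np.array(path)

# Toy characteristic whose iso-sets are circles; any smooth C works the same way.
C = lambda p: p[0] ** 2 + p[1] ** 2
path = trace_isocurve(C, [1.0, 0.0], target=1.0)
deviation = max(abs(C(p) - 1.0) for p in path)
```

With multiple characteristics, the same predictor-corrector idea applies, with the Newton correction solving the full system of level-set equations at each step.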

In this study, we used the implicit function theorem for revealing functional relations between finite-dimensional vectors of parameters. The theorem has a more general form, defined for functional spaces (Wouk 1979), with the premises and statements of the theorem reformulated correspondingly. In that form, the theorem states the local existence and uniqueness of a (nonlinear) operator for compensatory changes not of parameters but of functions affecting specified (functional) characteristics. In theoretical analyses, this general form of the implicit function theorem can be useful in proving the existence and uniqueness of implicitly defined connections between various functions affecting neuronal and network activity.

## APPENDIX A: IMPLICIT FUNCTION THEOREM

With two or more characteristics to be maintained, the intersection of the isosurfaces of these characteristics has to be considered. The existence, uniqueness, dimension, and structure of this intersection in a high-dimensional space are not obvious, and the Implicit Function Theorem provides the means of determining them. When the theorem holds, it states, in particular, that locally the intersection is a manifold of a specific dimension. Technically, this means that the intersection can be covered by mutually consistent local charts, each of which is isomorphic to a Euclidean space.

Theorem (Rudin 1976). Consider a mapping *F*:*U*→*R*^{n} defined in a neighborhood *U* of the point (*x*_{0}, *y*_{0}) ∈ *R*^{m+n}, where *m* and *n* are natural numbers. Let *F* ∈ *C*^{(p)}(*U*) with *p* ≥ 1, *F*(*x*_{0}, *y*_{0}) = 0, and let the Jacobian of *F* with respect to *y* be an invertible matrix, det[*F*_{y}′(*x*_{0}, *y*_{0})] ≠ 0. Then there exist an (*m* + *n*)-dimensional box *I* = *I*_{x}^{m} × *I*_{y}^{n} ⊂ *U*, where *I*_{x}^{m} = {*x* ∈ *R*^{m}:|*x* − *x*_{0}| < α}, *I*_{y}^{n} = {*y* ∈ *R*^{n}:|*y* − *y*_{0}| < β} for some positive numbers α and β, and a mapping *f* ∈ *C*^{(p)}(*I*_{x}^{m}), *f*:*I*_{x}^{m} → *I*_{y}^{n}, such that *F*(*x, y*) = 0 ⇔ *y* = *f*(*x*) for each point (*x, y*) ∈ *I*_{x}^{m} × *I*_{y}^{n}, and *f′*(*x*_{0}) = −[*F*_{y}′(*x*_{0}, *y*_{0})]^{−1}[*F*_{x}′(*x*_{0}, *y*_{0})].

Here, *C*^{(p)}(*U*) is the standard notation for the space of functions that are continuous and have continuous partial derivatives up to order *p* in *U*; *F*_{y}′(*x*_{0}, *y*_{0}) and *F*_{x}′(*x*_{0}, *y*_{0}) are the matrices of partial derivatives of the vector function *F* with respect to the components of the vectors *y* and *x* at (*x*_{0}, *y*_{0}).
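The concluding derivative formula can be checked numerically on the simplest example, the unit circle *F*(*x*, *y*) = *x*² + *y*² − 1 with *m* = *n* = 1 (our illustration, not an example from the study):

```python
import numpy as np

# F(x, y) = x^2 + y^2 - 1 near (x0, y0) = (0.6, 0.8): F(x0, y0) = 0 and
# F_y = 2*y0 != 0, so the theorem gives y = f(x) locally, with
# f'(x0) = -[F_y]^{-1} F_x = -(2*x0)/(2*y0) = -x0/y0.
x0, y0 = 0.6, 0.8
fprime_theorem = -x0 / y0

def f(x):
    # The implicit function, known explicitly on this branch of the circle.
    return np.sqrt(1.0 - x ** 2)

h = 1e-6
fprime_numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
```

The central-difference estimate of *f′*(*x*_{0}) agrees with the theorem's value of −0.75 to within the discretization error.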

## APPENDIX B: SMOOTHNESS OF THE PERIOD

Theorem (Hartman 1964). Let *x* and *F* be real vectors. Let *F*(*x*, μ) be continuous for small |μ| and for *x* on some *d*-dimensional domain and have continuous partial derivatives with respect to the components of *x*. When μ = 0, let *x′* = *F*(*x*, μ) have a nonconstant solution *x* = *g*_{0}(*t*) of period *p*_{0} > 0 such that the linearized system *y′* = *A*(*t*)*y*, where *A*(*t*) ≡ ∂_{x}*F*[*g*_{0}(*t*), 0], has exactly one characteristic root equal to 1. Then, for small |μ|, the system *x′* = *F*(*x*, μ) has a unique periodic solution *x* = *x*(*t*, μ) with a period *p*(μ), depending on μ, such that *x*(*t*, μ) is near *g*_{0}(*t*) and the period *p*(μ) is near *p*_{0}; furthermore, *x*(*t*, μ) and *p*(μ) are continuous, *x*(*t*, 0) = *g*_{0}(*t*), and *p*(0) = *p*_{0}.

Further analysis shows that when *F* (*x,* μ) is differentiable, *x*(*t*, μ), *p*(μ) are also differentiable (Hartman 1964).

## APPENDIX C: PERIOD SENSITIVITIES

## GRANTS

This work was supported by National Institute of Neurological Disorders and Stroke Grant NS-24072.

## Acknowledgments

The authors thank G. Medvedev and J. Rubin for helpful discussions.

## Footnotes

The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked “*advertisement*” in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

- Copyright © 2007 by the American Physiological Society