Rapidly adapting (RA), stretch-sensitive neurons were recorded in vitro, using an isolated preparation of skin and nerve from mouse hindlimb. The skin was stretched uniaxially using a pseudo-Gaussian noise stimulus. Loads and displacements were recorded as were spike responses of single RA afferent neurons. The goal was to determine what components of the mechanical stimulus were associated with spike responses. The association between stimuli and spike responses was measured using multiple logistic regression. Spike responses were strongly associated with the rate of change of stress and weakly associated with the rate of change of strain and with stress. There was no association between spike responses and strain. There were significant memory effects associated with each variable, and memory effects differed for each variable. The maximal effect of the rate of change of stress was observed 8–12 ms prior to a spike.
Mechanical sensing by cells is a widespread phenomenon. Mechanical stimuli can evoke cellular responses that range from channel opening (Bostel et al. 2002; Brakemeier et al. 2002; Zou et al. 2002) to changes in gene expression (Ohki et al. 2002). When mechanical stimuli are applied to tissues they can produce very complex internal states. For example, stretching skin can simultaneously produce tensile, compressive, and shear stresses and strains. This makes it difficult to identify which components of stress or strain actually stimulate a cell. Furthermore, mechanical variables such as tensile stress and strain along a particular direction are confounded; i.e., they are related to each other through the constitutive properties of the tissue. To determine whether a neuron is driven by one variable versus another, it is necessary to decouple them, i.e., to cause one variable to change independently of the other. Two strategies have been developed to accomplish this, and both have shown that responses of mechanoreceptor neurons are associated with stress variables rather than strain. One strategy involved the use of biaxial stimuli to study cutaneous SA2 neurons that are highly directionally selective (Grigg 1996). In that experiment, the skin was stretched along the preferred direction of an SA2 afferent, and stress and strain along that direction were decoupled by simultaneously stretching along the orthogonal direction. Multiple correlation analysis was used to show that the neurons were preferentially sensitive to tensile stress rather than strain along the neuron's preferred direction. Similar experiments on receptors found in the joint capsule have yielded similar findings (Khalsa et al. 1996). A second approach (Fuller et al. 1991) utilized the viscoelastic properties of soft tissues to decouple stress and strain variables. Dynamic stretch of viscoelastic materials causes a phase shift between stress and strain variables, thus decoupling those variables. 
Joint capsule tissue was stretched dynamically, and conditional probability distribution (transinformation) analysis was used to test for associations between stimulus patterns and spike responses. It was found that mechanoreceptor responses were more closely related to stress than to strain.
Other studies (Ge and Khalsa 2002; Khalsa et al. 1997; Looft 1994) have applied mechanical stimuli to mechanoreceptor neurons such as SA1 and mechanical nociceptor neurons. They recorded stress and strain variables along with neuronal responses and showed that neural responses had a stronger linear relationship with stress than with strain variables. In contrast, Pubols (1990) found a stronger relationship between neuronal responses and strain than stress.
In an earlier study from this laboratory (Del Prete and Grigg 1998), the responses of rat cutaneous rapidly adapting (RA) afferents were studied using dynamic stretch stimuli. Associations between spikes and tensile stress and strain were sought using cross-correlation analysis. In that experiment, stress and strain were confounded, and while cross-correlation showed a stronger association between spikes and tensile stress than with tensile strain, it was not possible to determine the individual effect of either variable.
The present study, similar to our earlier study in rats, was designed to determine the mechanical sensitivity of stretch activated, RA mechanoreceptors in mouse skin. In the present experiment, we have applied uniaxial stretch stimuli to isolated specimens of skin harvested from control mice, and we have recorded spike responses from single RA afferent neurons. In this experiment, stress and strain are decoupled from each other on a very short time scale (i.e., on the order of milliseconds). On that time scale it was not possible to characterize neural responses in terms of a continuous frequency measure. Therefore standard forms of correlation analysis were not suitable for use (Sanger 2002). When the output variable is binary, then the multiple correlation method of choice is multiple logistic regression (MLR), which allows for the modeling of a binary (dichotomous) outcome as a function of a set of candidate stimulus variables. This method allows the determination of the independent effect of a variable by controlling for the other variables in the model (Hosmer and Lemeshow 1989). Logistic regression is widely used in clinical trials and epidemiology, applications in which it is used to model the binary dependent variable (incidence of disease or death) as a function of categorical and continuous variables (risk factors). Here, we describe its use in analyzing the binary dependent variable (spikes) as a function of a set of experimentally controlled continuous independent variables (tensile stress and strain and their rates of change).
Cutaneous afferent neurons were recorded in an in vitro preparation of isolated skin and nerve that was harvested from the mouse hindlimb. Individual afferent neurons were recorded while the skin was subjected to dynamic, uniaxial stretch stimuli using a linear actuator.
Adult C57BL6 mice, of either sex, with a body mass of 20–22 g, were anesthetized with 35 mg/kg ip Nembutal. A skin specimen of approximately 10 × 6 mm was excised from the hindlimb along with a length of its associated cutaneous nerve. The geometry of the specimen and its location on the hindlimb are shown in Fig. 1A.
Marks were placed on the skin while it was in situ, and the distances between the marks were noted. The specimen was removed from the leg and maintained in vitro in an apparatus. The apparatus (Fig. 1, B and C) consisted of a Lucite bath filled with HEPES-buffered artificial interstitial fluid (Koltzenburg et al. 1997) at pH 7.40 at room temperature (20°C).
The goal was to subject the skin to uniaxial loads, while maintaining a geometry that was as close as possible to its natural (i.e., in situ) geometry. The sample was fastened in the apparatus by its edges. One end of the skin specimen was gripped in a clamp (Fig. 1, B and C) that was anchored at one end of the bath. The gripping area of the clamp extended across the width of the sample. The opposite end of the specimen was coupled to a linear actuator. The actuator was coupled to the skin by a segment of suture material with a small hook (H) at its end. The hook engaged a plastic tab (P) (5 mm × 18 mm × 0.24 mm thick) that was glued to the edge of the skin. The plastic tab served to uniformly distribute the load applied by the hook over the width of the sample, thus avoiding stress concentrations. The string, hook, and plastic tab had a total mass of 22.5 mg; minimizing the mass attached to the actuator optimized its frequency response. The sides of the skin were anchored to the sides of the chamber using two short gold chains (GC) with hooks at their ends. The hooks were engaged in holes punched in the edges of the skin. The purpose of anchoring the edges of the sample was simply to maintain the width of the sample close to the in vivo dimensions. We extended the edges until the 5 × 10 mm strip between the fixed clamp and the plastic tab (between the dashed lines in Fig. 1B) had a geometry similar to that observed in situ. Neurons were sampled only in that strip.
It was assumed that stress and strain were both uniformly distributed throughout the part of the skin sample within which neurons were sampled. In support of this assumption, both the clamp and the plastic tab extended across the entire width of the sample, to avoid stress concentrations. Neurons were not sampled within 2 mm of either the clamp or the tab. The hooks applied to the sides of the sample were ≥3 mm from the area within which neurons were sampled and gave no evidence of sustaining any significant loads.
The actuator was an Aurora Scientific 305B lever system. This is a rotary servomotor that can be operated in force control mode. An aluminum arm, 28 mm long, was clamped on the motor shaft. Operating the motor through small angular displacements resulted in an approximately linear displacement of the tip of the arm. When operated in force control mode, the actuator caused the force applied to the tissue to follow a command (input) voltage. The Aurora controller outputted voltages that were proportional to actuator displacements and to the loads applied to the specimen. Length signal resolution was 1 μm and length signal linearity was ±0.5% over a range of 20 mm. Force resolution was 1 mN (100 mg) and force linearity was 0.2% of force change. Length and force step response time was 2 ms (1 to 99% critically damped).
The experimental design was based on the hypothesis that the neurons we studied would respond to force (stress)-related variables rather than displacement (strain)-related variables. Thus the actuator was run in a force-controlled mode, and tensile load was the controlled variable. The command signal to the actuator was a band-limited (0–100 Hz) pseudo-Gaussian noise (PGN) signal, and the system response showed significant power at frequencies ≤ 75 Hz. There were two reasons for using a PGN signal. First, the neurons we studied were all rapidly adapting, so a dynamic stimulus was required to activate them. Second, our method of analysis was based on an approach similar to reverse correlation (Eggermont et al. 1983), in which associations are sought between the spikes in a data record and the ensemble of stimuli that precedes them. In order for this approach to work it is necessary that the stimuli before the spike have a random structure (Eggermont et al. 1983).
The experimental design sought to decouple tensile stress and strain. Since skin is viscoelastic (Silver et al. 2001), dynamic stretching induces a phase shift between stress and strain variables. We quantified the phase shift between stress and strain using a method (Hoffman and Grigg 2002) in which a nonlinear model of the skin is determined as a set of second-order Wiener kernels that are calculated from experimental stress and strain data. The model was used to predict the phase relationship between stress and strain and revealed that, at frequencies ≤ 50 Hz, stress led strain with a phase shift of ≤15°.
The cutaneous nerve was pulled into a small oil-filled chamber (Fig. 1B) and treated briefly with a solution of collagenase (Worthington, CLS-1). It was subsequently dissected into small filaments using sharpened forceps. Filaments were placed on a gold wire recording electrode. The indifferent electrode was placed in the bath. Neuronal signals were amplified with an EG&G 113 amplifier. The output of the amplifier was filtered using a Riverbend Learning Filter, which digitally subtracts noise that is in phase with line frequency. Filaments were recorded while a search stimulus (a manual stretch) was applied to the skin. Dissection of filaments was continued until a clearly identifiable single unit was recorded. Criteria for single units were the consistency of the shape and amplitude of the waveform. A software template-matching system (Signal Processing Systems, Prospect, S. Australia) was used to detect spike responses during experimental runs. This system output a voltage pulse when a spike matched an appropriate template. The pulse triggered a counter on a National Instruments MIO-16E-4 data acquisition board. Following a spike, the counter output was set to 1.0 V for the duration of the sampling period. The counter output was sampled using the A/D converter, and the counter output was reset to 0 after each sampling period. Thus a “1” was collected in each sampling period in which a spike occurred. In sampling periods during which no spikes were observed, a “0” was collected. This method assumed that the neuron never fired more than 1 spike within any 2-ms period. We verified this by separately analyzing interspike intervals (e.g., see Fig. 2). All data (the analog load and displacement signals from the Aurora controller and the output of the counter, indicating spike/no spike) were sampled at 500 Hz using the MIO-16E-4 data acquisition board. Data were written into files as they were collected.
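The counter-based binning scheme can be sketched in code. The following is a minimal illustration, not the acquisition software itself; the function and parameter names are ours, assuming spike times in seconds and the 500-Hz (2-ms period) sampling rate described above:

```python
import numpy as np

def binarize_spikes(spike_times_s, duration_s, fs=500.0):
    """Convert spike times (seconds) into a 0/1 series sampled at fs Hz.

    Each 2-ms sampling period receives a 1 if it contains at least one
    spike, else 0 -- mirroring the counter/ADC scheme described above.
    Assumes no more than 1 spike per sampling period.
    """
    n = int(round(duration_s * fs))
    out = np.zeros(n, dtype=int)
    # map each spike time to the index of the sampling period it falls in
    idx = np.floor(np.asarray(spike_times_s) * fs).astype(int)
    idx = idx[(idx >= 0) & (idx < n)]
    out[idx] = 1
    return out
```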
Data collection runs were typically 60 s in duration, but occasionally were as long as 300 s. Intertrial intervals were typically 5 min but were longer after long-duration runs.
We attempted to determine the association between mechanical variables measured in the skin and the spike responses of single neurons. The loads applied to the skin sample and the resulting displacements were directly measured. Under the assumption that stresses and strains were uniformly distributed in the skin samples, loads (F(t)) were used to calculate tensile stress (ς = F(t)/area), using measures of the cross-sectional area of the skin. Cross-sectional area was measured at the completion of experiments by polarizing microscopy of cross sections cut with a cryostat, as described in Hoffman and Grigg (2002). Displacements (l(t)) were used to calculate the Lagrangian strain (ɛ = [l(t) − l0]/l0). The resulting records of stress and strain were numerically differentiated to obtain their rates of change, dx/dt (Eq. 1), where x indicates ɛ or ς, respectively.
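These calculations can be summarized in a short sketch. The central-difference differentiation used here is our assumption, standing in for the finite-difference expression of the paper's Eq. 1:

```python
import numpy as np

def stress_strain_rates(force_N, length_m, area_m2, l0_m, fs=500.0):
    """Tensile stress, Lagrangian strain, and their rates of change.

    sigma = F(t)/area; eps = (l(t) - l0)/l0.  Rates are obtained by
    numerical differentiation (central differences via np.gradient;
    the paper's Eq. 1 may differ in detail).  fs is the 500-Hz
    sampling rate, so the sample spacing is 1/fs = 2 ms.
    """
    sigma = np.asarray(force_N, dtype=float) / area_m2
    eps = (np.asarray(length_m, dtype=float) - l0_m) / l0_m
    dsigma = np.gradient(sigma, 1.0 / fs)
    deps = np.gradient(eps, 1.0 / fs)
    return sigma, eps, dsigma, deps
```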
The four variables (ɛ, dɛ/dt, ς, and dς/dt), as well as all of the first-order interactions between them (ɛ × ς, ɛ × dɛ/dt, ς × dς/dt, ς × dɛ/dt, ɛ × dς/dt, and dς/dt × dɛ/dt), were the independent variables (predictors) in the MLR data analysis. Our strategy in selecting the terms in the model was to keep it parsimonious, i.e., to include all the terms of direct interest while excluding terms that were either not significant or not physically interpretable. Higher-order interactions were excluded for these reasons.
Relationships between the predictors and the dichotomous spike outcome were sought using MLR. In the section below, we describe the considerations that are relevant to using MLR in this application; the accompanying appendix presents a formal, detailed treatment of the use of logistic regression. Basically, MLR works by fitting a regression model on the probability of the event of interest, i.e., the dichotomous outcome. This produces a slope (β) for each predictor variable, the antilog of which is the estimate of the effect of that predictor after controlling for the other predictors. In this application, the independent variables are recorded values of stress, strain, and their rates of change, and the outcome is binary (dichotomous): a set of sampling intervals that either contain a spike (represented by ones) or do not (represented by zeros). A segment of a typical data file is shown for reference in Table 1. MLR estimates the effect of the predictors in the left-hand columns of Table 1 on the probability of a spike.
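As an illustration of the fitting step, the following is a minimal Newton-Raphson implementation of logistic regression. This is our own sketch, not the SPSS routine used in the study; it returns the β coefficients and the odds ratios e^β but omits standard errors and significance tests:

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Multiple logistic regression by Newton-Raphson maximum likelihood.

    X: (n, p) matrix of predictors; y: (n,) array of 0/1 outcomes.
    Returns beta (intercept first) and the odds ratios exp(beta) for
    the p predictors.  Minimal sketch of what a statistics package
    computes; no inference statistics are produced.
    """
    Xd = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))     # predicted P(spike)
        W = p * (1.0 - p)                        # IRLS weights
        H = Xd.T @ (Xd * W[:, None])             # observed information
        g = Xd.T @ (y - p)                       # score (gradient)
        beta += np.linalg.solve(H, g)            # Newton step
    return beta, np.exp(beta[1:])
```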
As with all multivariate modeling methods, MLR does not control for, nor can it estimate, the effects of variables not included in the model. In particular, this experiment contains factors that can influence the outcome but are not included in the MLR model. Specifically, after each spike, the neuron has a recovery process that includes refractory periods, which influence the likelihood of a spike in subsequent sampling periods. This means that sampling periods following a spike are not independent.
We dealt with nonindependence caused by refractory behavior by using two separate strategies.
In strategy 1 we estimated the duration of the recovery period of the neuron and excluded from the analysis all the data collected during it. This method ensured that the data subjected to analysis were recorded from a fully recovered neuron and hence were independent of whether a preceding sampling period contained a spike. The details of this method are described in the following text.
Strategy 2 relied on the fact that the effect of nonindependence resulting from refractory behavior is to bias estimates of the effects of independent variables toward the null, i.e., toward finding no relationship between independent variables and spikes. Logistic regression estimates the effect of the predictors on the binary (0/1) outcome measure, and some of the zeros that are recorded might have been ones were it not for the effect of the neuronal refractory period. This reduces the strength of association between the predictors and the outcome and thus biases the result of the analysis against finding associations. Our strategy in this case was therefore to take the data, as collected, and subject them to analysis using MLR.
To analyze results collected when the neurons were in a fully recovered state (i.e., strategy 1), we used published values (100 ms) for the duration of the recovery period, including the subnormal and supernormal periods, that follows each spike (McIntyre et al. 2002). This is an extremely conservative estimate of neuronal recovery, since the excitability changes in the sub- and supernormal periods are small (McIntyre et al. 2002). We selected for analysis only those spikes that were preceded by 100 ms free of spike responses. This ensured that the neuron was fully recovered when a spike occurred.
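Strategy 1 can be expressed as a small filter over the binary spike series. This is an illustrative sketch (function and parameter names are ours):

```python
import numpy as np

def fully_recovered_spikes(y, fs=500.0, quiet_ms=100.0):
    """Return indices of spikes preceded by >= quiet_ms without a spike.

    Implements strategy 1: only spikes from a fully recovered neuron
    (100 ms of silence at fs = 500 Hz, i.e., 50 empty sampling periods)
    are retained for analysis.
    """
    y = np.asarray(y)
    k = int(round(quiet_ms / 1000.0 * fs))       # periods of required silence
    spike_idx = np.flatnonzero(y == 1)
    keep = [i for i in spike_idx
            if i >= k and y[i - k:i].sum() == 0]
    return np.array(keep, dtype=int)
```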
Another strategy was to account for the effects of refractory periods but not the supernormal and subnormal periods. To this end, we directly measured the duration of refractory periods and excluded from analysis all data collected when the neuron was refractory. The duration of the refractory period was measured using a “hazard function” (Gray 1967). To create a hazard function, cutaneous RA afferents were recorded while the skin was mechanically stimulated by indenting it using a continuous, random input with a bandwidth of 0–100 Hz. The indentation stimulus activated cutaneous afferent neurons that were typical of those that we describe under results. It is assumed that the stimulus will elicit spike responses at all possible interspike intervals. The hazard function (Fig. 2) is the conditional probability of observing a response at different intervals between spikes. Six neurons were studied in this fashion. The conditional probability of observing spikes reached a plateau after 18 ms and remained constant thereafter (Fig. 2). Thus, after 18 ms, a second spike is independent of the effects of a prior spike.
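A hazard function of this kind can be computed from interspike intervals roughly as follows. This is our sketch, with illustrative binning choices, not the procedure of Gray (1967) verbatim:

```python
import numpy as np

def hazard_function(spike_times_s, bin_ms=2.0, max_ms=40.0):
    """Discrete hazard: conditional probability that the next spike falls
    in interval bin t, given that it has not occurred in an earlier bin.

    h(t) = (intervals ending in bin t) / (intervals surviving to bin t).
    A plateau in h(t) (beyond ~18 ms in the recordings described above)
    indicates that a second spike is independent of the prior spike.
    """
    isi_ms = np.diff(np.sort(spike_times_s)) * 1000.0
    edges = np.arange(0.0, max_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(isi_ms, edges)
    # intervals that have not yet ended at the start of each bin
    surviving = len(isi_ms) - np.concatenate([[0], np.cumsum(counts)[:-1]])
    h = np.zeros(len(counts), dtype=float)
    mask = surviving > 0
    h[mask] = counts[mask] / surviving[mask]
    return edges[:-1], h
```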
In summary, in those analyses in which we desired to obtain a totally unbiased estimate of the dependencies between predictors and spikes, we analyzed only those spikes that were preceded by 100 ms of zeros (i.e., no spikes). In analyses in which we wished to obtain a largely unbiased estimate of stimulus–response dependencies, we excluded data collected during the refractory period, i.e., for 18 ms after each spike. This latter strategy ensured that the outcomes that we analyzed represented the properties of a neuron in a largely recovered state; i.e., it ignored only the super- and subnormal periods. In other analyses we ignored the effects of recovery processes and included all data. The advantages and disadvantages of these analysis strategies are described in detail under discussion.
A reverse correlation design allows for the determination of system memory. In the present experimental design, where sampling periods make up a regular sequence (i.e., a time series), a stimulus presented during a particular sampling period can influence the likelihood of a response in a subsequent sampling period. This is an important phenomenon known as system memory, in which an input variable at one point in time has an effect on the probability of a spike occurring at some point later in time. This can be determined for some memory time t by subtracting the constant time t from the time of occurrence of each spike in the data record, so as to create a dummy output variable, which is termed “Lag” (Table 1). Shifting the spikes up by one sampling period (i.e., Lag 2, Table 1) and then performing MLR analysis on the resulting file effectively determines whether there are any dependencies between stimuli in one sampling period and a response in a subsequent sampling period. We used this approach, known as lag analysis, to quantify memory. Dependencies were determined between outcome responses and stimuli that preceded them by ≤50 ms.
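The lag construction amounts to pairing each spike/no-spike outcome with the stimulus row that preceded it by a fixed number of sampling periods; the lagged pairs can then be passed to any logistic regression routine. A minimal sketch (names ours):

```python
import numpy as np

def lagged_pairs(X, y, lag_periods):
    """Pair each outcome y[i] with the stimulus row X[i - lag_periods].

    With 2-ms sampling periods (500 Hz), lag_periods = 5 probes a 10-ms
    memory time.  Rows that lose their partner at either end of the
    record are dropped.
    """
    X = np.asarray(X)
    y = np.asarray(y)
    if lag_periods == 0:
        return X, y
    return X[:-lag_periods], y[lag_periods:]
```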
In logistic regression, the strength of association between a stimulus variable and the output variable is expressed in terms of an “odds ratio.” Odds ratios are tied to the units of the input variable and represent the probability of the outcome event (i.e., the probability of a spike response) that is associated with a one-unit change in a predictor variable. In our experimental design, the input variables (stress, strain, and their time derivatives) all have different units. For example, stress is expressed in units of kPa, while dς/dt has units of kPa/s. Similarly, the units of stress and strain are different. For this reason, the odds ratios of our predictor variables are not comparable in a direct and intuitive way. We dealt with this problem by expressing magnitudes of stimuli as normalized variables (i.e., standardized variables) (Steel and Torrie 1981), by subtracting the mean and dividing by the SD. All predictors were approximately Gaussian in distribution, and the resulting normalized variables had means = 0 and SD = 1. Values of odds ratios, calculated from these normalized data, were then comparable inasmuch as they represented the probability of a spike, associated with a change of 1 SD in the magnitude of the particular variable. This allowed all four normalized variables to be represented on the same nondimensional x-axis. Furthermore, interaction terms, which are functions of two or more variables, are all dimensionally reduced to functions of one variable, the SD.
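The normalization step is the standard z-score transform; as a sketch:

```python
import numpy as np

def standardize(X):
    """Z-score each predictor column: subtract the mean, divide by the SD.

    After this transform, a fitted odds ratio exp(beta_j) expresses the
    change in the odds of a spike per 1-SD change in predictor j, making
    odds ratios comparable across stress (kPa), strain (dimensionless),
    and their rates of change.
    """
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)
```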
All logistic regression analyses were run using SPSS version 9.0.
Thirty neurons were recorded in seven successful experiments. Neurons were studied using PGN stimuli whose magnitude ranged from 1 to 120 kPa and had mean values of ≤40 kPa. These stresses resulted in strains that ranged from 0.01 to 0.37. Typical stimuli, used for studying all neurons, had magnitudes that ranged from 1 to 40 kPa, with a mean value of about 20 kPa. In these runs, strains ranged from 0.01 to 0.13 with a mean value of 0.07. Figure 3 shows an example of raw (Fig. 3A) and normalized (Fig. 3, B and C) data recorded from one neuron. As can be seen, spikes were relatively sparse in these records. The number of spikes recorded in a 120-s collection run ranged from 100 to as many as 1,000. The number of spikes evoked by a particular PGN stimulus sequence increased with the magnitude and bandwidth of the stimulus.
Figure 4 shows histograms of the spike responses (black histograms) observed in a single run plotted versus the distributions of the four main factors in the experiment (colored histograms). Spikes were associated with a wide range of values of ς and ɛ and were associated primarily with large values of dς/dt and dɛ/dt. It is important to note that the histograms in Fig. 4 illustrate raw relationships, inasmuch as they do not account for covariance between inputs. For example, similar relationships are observed between ς and ɛ and between dς/dt and dɛ/dt. These variables covary in this experiment, and the representation in Fig. 4 does not allow for the determination of the relative contributions made by (for example) dς/dt and dɛ/dt. This question requires the use of a multivariate method of analysis, for which we used MLR.
Normalized data were subjected to analysis with MLR. The logistic regression analysis yielded a β value for each factor in the model (see appendix). The antilog of the β values, ψ = eβ, was used to calculate an estimate of the odds ratio for each factor. Table 2 is an example of SPSS output, including odds ratios determined in a typical data run for one neuron.
It should be noted that the results in Table 2 reflect the association between the spike responses and the input variables measured in the same sampling period, i.e., when there is no memory. Memory effects were explored by performing logistic regression analysis using the offset (lag) spikes. We systematically varied the lag time from 0 to 50 ms in 2-ms increments and performed logistic regression analysis for each lag time. The resulting β values were used to calculate odds ratios for all the factors at each memory time. These are shown for one neuron, based on analysis using strategy 1 (see methods) in Fig. 5.
In the analysis depicted in Fig. 5, there were four variables with significant effects on spikes: dς/dt, ς, dɛ/dt, and the interaction dɛ/dt × ς. These variables had their effects at different memory times. There was no effect of ɛ in this (or for that matter in any) neuron. The variable with the largest odds ratio was dς/dt, at a memory time of 10 ms. The time at which the peak odds ratio was observed for dς/dt varied between 8 and 12 ms between different neurons and even between different runs for a single neuron. The odds ratios for the interaction terms, while significant, were always small. None of the other interaction terms was significant. Odds ratios were somewhat variable between different runs for a particular neuron and between different neurons. This neuron was typical of the whole sample in that dς/dt at a memory time of 10 ms was most strongly associated with spikes.
Logistic regression analysis makes the assumption that outcomes are independent of each other. We ensured that this assumption was satisfied in the analysis of Fig. 5, by only including data in which spikes were preceded by 100 ms of silence. However, this strategy necessitated the exclusion of data from the analysis. To determine how great an effect was caused by the exclusion of data, we used data from a sample neuron and analyzed it using three strategies (outlined under methods). Two of three strategies involved eliminating different amounts of data, and one strategy was based on using all data. Results of these analyses are shown in Fig. 6. The results in Fig. 6 A are based on analysis similar to that used for Fig. 5; namely, the only spikes that were included were those preceded by 100 ms with no spike activity. Figure 6 B is based on analysis of all data except for data recorded when the neuron was refractory. To perform this analysis we excluded from analysis all data collected for 18 ms after each spike. The final strategy was to include all data (i.e., exclude nothing) in the analysis (Fig. 6 C). Analysis based on this strategy yielded lower values for odds ratios.
To demonstrate the degree of consistency of response within one neuron, the neuron depicted in Fig. 6 was studied in nine different runs employing a variety of frequency bandwidths and stimulus amplitudes. All analyses were done following strategy 2, after removing refractory data (as in Fig. 6 B). The odds ratios from all these runs were averaged together (Fig. 7); the error bars indicate the degree of consistency of the response within a single neuron.
To summarize the properties of the population of afferents and to show the consistency of response between afferents, we combined data from all neurons that were sampled (Fig. 8). When there were multiple runs for a single neuron, the odds ratios for those runs were averaged together as in Fig. 7. This provided the best estimate of the properties of each neuron. The resulting odds ratios for all (n = 30) individual neurons were then averaged together to create Fig. 8. It is important to note that the data that make up Fig. 8 were collected in different neurons, using stimuli of different magnitudes and with different bandwidths. All odds ratios were calculated using data that excluded refractory data. Thus this figure yields the best estimate of the relationship between stimuli and spike responses in these experiments.
To demonstrate the validity of the logistic regression analysis we used the following two methods:
1) Randomizing spikes. We wished to demonstrate that the effects seen in the preceding figures (Figs. 5–7) represented the association between the input variables and the spikes, rather than being due to some unspecified process. Therefore we performed logistic regression analysis on the data set of Fig. 6 after randomizing (i.e., shuffling) the locations of spikes in the data file (Hung et al. 2002). Results of this analysis are shown in Fig. 9. The analysis of randomized spikes yielded an odds ratio for each factor in the analysis that was not different from 1.0, which is the value expected when there is no relationship between the input and output variables.
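The shuffling control can be sketched as follows (an illustration; the randomization procedure of Hung et al. 2002 may differ in detail). Refitting the regression on the predictors paired with the shuffled outcome should yield odds ratios near 1.0:

```python
import numpy as np

def shuffled_outcome(y, seed=0):
    """Randomly permute spike locations in the 0/1 outcome series.

    This destroys any stimulus-spike relationship while preserving the
    total number of spikes, providing a null control for the regression.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y).copy()
    rng.shuffle(y)
    return y
```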
2) Predicting spikes based on the logistic regression model. One way of validating a model is testing whether it can predict the occurrence of spikes in other sets of data collected under similar conditions. Since the odds ratios represent the change in the probability of a spike caused by a particular stimulus, it was possible to estimate the likelihood of observing a spike associated with particular stimulus values. We made a prediction model based on the odds ratios for a particular data set and used that model to predict the occurrence of spikes in three different cases: predicting the original data set, predicting randomized (shuffled) spikes, and predicting spikes in another data set from another neuron. We used the receiver operating characteristic (ROC) curve (Hanley 1989) to quantify the goodness of the prediction in these three cases (Fig. 10). The ROC curve shows how the true prediction rate (the sensitivity) of a model relates to the false-positive rate (100 − specificity) predicted by the model, as the prediction criterion value (cut point) is changed. If a model has no predictive value, then, as the prediction criterion is changed, correct predictions and false predictions are equally likely. Hence a model with no predictive value is represented by the diagonal line in the ROC curve.
The goodness with which a model makes predictions is quantified by the area under the ROC curve. The lack of any relationship is indicated by an area of 0.5, perfect predictions by an area of 1.0, and perfect incorrect prediction by an area of 0. The results of the ROC analyses are shown in Table 3.
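The area under the ROC curve can be computed directly from predicted probabilities via the rank-based (Mann-Whitney) identity. This sketch is ours, not the procedure of Hanley (1989) verbatim:

```python
import numpy as np

def roc_auc(y_true, p_hat):
    """Area under the ROC curve.

    AUC = P(predicted score for a random spike bin exceeds that of a
    random no-spike bin); ties count 1/2.  0.5 indicates no predictive
    value, 1.0 perfect prediction, 0.0 perfectly incorrect prediction.
    """
    y_true = np.asarray(y_true)
    p_hat = np.asarray(p_hat, dtype=float)
    pos = p_hat[y_true == 1]
    neg = p_hat[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```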
The main goal of the experiment was to determine what components of the mechanical stimulus were responsible for eliciting spike responses. Mouse cutaneous RAs were shown to have a strong association with the rate at which stress varies, dς/dt. However, there were also significant influences of ς and dɛ/dt. That neural responses were mainly driven by dynamic components of the stimulus (i.e., dς/dt and dɛ/dt) was not unexpected since the sample population consisted of RA afferents. Of the two interaction terms that were significant, one (dɛ/dt × ς) is proportional to the incremental area under the stress–strain curve and thus reflects strain energy. The odds ratio for this term was always small, indicating that, unlike other systems (Grigg and Hoffman 1984; Khalsa et al. 1996; Srinivasan and Dandekar 1996), strain energy does not appear to be an important determinant of the response of these neurons. The fact that dς/dt × ς was significant means that, similar to other afferents (Houk et al. 1981; Rogers et al. 1996), the effect of dς/dt was dependent on the absolute value of ς.
There were strong memory effects for all of the variables that were associated with spike responses. The variable dς/dt, which had the strongest association with neural responses, had its maximal association with spike responses at a memory time of 8–12 ms; i.e., there was a delay of 8–12 ms between the stimulus and the response it evoked. Memory associated with other variables had different time courses.
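Memory effects of this kind can be probed by entering time-lagged copies of each stimulus variable as separate predictors in the regression. A sketch of constructing such a lagged design matrix; the lag values, sampling step, and names are illustrative assumptions, not the authors' analysis code:

```python
import numpy as np

def lagged_matrix(x, lags_ms, dt_ms=1.0):
    """Stack copies of stimulus x shifted back in time by each lag, so a
    logistic model can estimate a separate effect at each memory time."""
    lags = [int(round(lag / dt_ms)) for lag in lags_ms]
    max_lag = max(lags)
    n = len(x) - max_lag
    cols = [x[max_lag - lag : max_lag - lag + n] for lag in lags]
    return np.column_stack(cols)

x = np.arange(20, dtype=float)            # stand-in for sampled dς/dt values
X = lagged_matrix(x, lags_ms=[0, 4, 8, 12])
print(X.shape)   # (8, 4): rows are time points, one column per memory time
print(X[0])      # [12. 8. 4. 0.]: the stimulus at t, t-4, t-8, and t-12 ms
```

Each row then pairs the spike/no-spike outcome at time t with the stimulus values at the candidate memory times, which is how a peak effect at, e.g., 8–12 ms could be resolved.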
The presence of memory effects raises the question as to their origin. Part of memory reflects time required for generator currents to reach threshold and for sufficient current to accumulate for the initiation of a spike. However, accumulation time has been estimated to be 0 ms in muscle spindles (Schafer et al. 1999) and would in any event be expected to be very small in RA afferents. Conduction time along the axon is also included in memory time. However, conduction time is relatively short in these small preparations, in which the distance from skin to electrode is usually 10 mm or less. Even at the low (20°C) temperature at which the experiments were conducted, the conduction time was still on the order of 1 ms. Since skin is a fiber-reinforced composite material, memory time may be required for reorientation of tissue constituents with load. However, it is still unclear why different variables such as dς/dt, ς, and dɛ/dt would have their effects at different memory times. Memory effects are an example of phenomena that may be illuminated by experiments using mutant and gene-targeted mice that have altered skin mechanical phenotypes.
While we have described the use of MLR for the analysis of responses of RA afferents, we note that it can be used in the analysis of any system in which spikes are the output. We (unpublished observations) have applied MLR to previously published data (Fuller et al. 1991) in which slowly adapting mechanoreceptors in cat knee joint were studied with a PGN stretch stimulus. MLR revealed a significant association with stress, similar to the results reported by Fuller et al. (1991) using transinformation analysis.
There are many aspects of these findings that remain unclear. It is widely but uncritically accepted that mechanically sensitive channels are activated by molecular deformations (i.e., strains). Yet it is not clear through what mechanism macroscopic stress can be coupled to neuronal activation, to the exclusion of macroscopic strain. It is hoped that further studies on mechanoreceptor neurons can elucidate these problems.
Logistic regression analysis presents both some potential problems and some palpable strengths in the analysis of data from this experiment. Neuronal recovery periods pose a potential problem. A spike in a neuron is followed by a recovery period during which the neuron's excitability is altered: it is depressed during the refractory and subnormal periods and enhanced during the supernormal period. This means that there are times when outcome events are not independent of each other, which violates the assumption, commonly made in MLR, that outcome events are independent.
It is possible to avoid violating this assumption by selecting data for analysis. One way is to perform MLR analysis using only data that were recorded when the neuron was in a fully recovered state. Since the duration of the sub- and supernormal periods is on the order of 100 ms (McIntyre et al. 2002), this strategy involved including in the analysis only those spikes that were preceded by ≥100 ms with no spike responses. A typical data file contained large numbers of such intervals. The data of Fig. 3 illustrate one such interval, prior to the first spike in the figure. By analyzing only these data, we ensured that both the refractory periods and the sub- and supernormal periods from a preceding spike were completely finished when the next spike was elicited. The strength of this method is that it yields an unbiased estimate of the association between predictors and spikes. The downside is that it is necessary to exclude data from analysis, and it limits the characterization of neurons to effective frequencies of 10 Hz or less. Analyses done in this fashion revealed a strong relationship between dς/dt and spike responses, with the maximal memory effect for dς/dt at around 10 ms.
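The spike-selection rule described here is simple to state in code: keep only spikes preceded by at least 100 ms of silence. A minimal sketch; the function name and the handling of the first spike in a record are our illustrative choices:

```python
import numpy as np

def fully_recovered_spikes(spike_times_ms, quiet_ms=100.0):
    """Keep only spikes preceded by >= quiet_ms with no spike, so that
    refractory, subnormal, and supernormal periods have fully ended."""
    t = np.sort(np.asarray(spike_times_ms, dtype=float))
    if len(t) == 0:
        return t
    isi = np.diff(t)
    # The first spike in a record has no preceding spike; we keep it here.
    keep = np.concatenate(([True], isi >= quiet_ms))
    return t[keep]

spikes = [5.0, 120.0, 150.0, 400.0]
print(fully_recovered_spikes(spikes))  # [  5. 120. 400.]; 150 ms follows by only 30 ms
```

As the text notes, this restriction caps the effective firing rate represented in the analysis at 1/(100 ms) = 10 Hz.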
A second way of analyzing the data involved excluding only the data collected during the refractory periods (i.e., for 18 ms following each spike). The rationale was that 1) we were able to measure the duration of the refractory period, and 2) excitability changes during the sub- and supernormal periods are relatively small, so that their effect on spike responses would be correspondingly small. These analyses included data sampled when the recovery of the neuron was complete except for any effect of subnormal and supernormal periods. With this analysis strategy, the trade-off described in the prior paragraph is somewhat different. Any effect of the subnormal and supernormal periods is not accounted for, but, on the other hand, the characterization of the neuron is now based on effective frequencies of ≤55 Hz. Importantly, these analyses yielded results that were identical to those excluding sub- and supernormal periods, i.e., the strongest association was with dς/dt, at a memory time of about 10 ms.
A third way of analyzing the data was to include all data, including those collected immediately following spike responses, in the analysis. This means that there are instances in the data in which a suprathreshold stimulus fails to elicit a spike because the neuron is refractory. The stimulus will be associated with a "0" rather than a "1" response, and this will lower the odds ratios for the predictors. As above, a trade-off is still involved. In this case, an additional factor, refractory behavior, is unaccounted for in the analysis, while the characterization of the neuron is based on data that impose no limitations on the frequency of neuronal response. These analyses revealed that the association between stimuli and spikes was highly similar to those seen in the analyses described above: the strongest association was still with dς/dt, and the maximal effect was seen at a memory time of 8–10 ms. The odds ratios obtained in these analyses were smaller than those from the previous analyses but were nonetheless highly significant. It should also be noted that, while the effective neuronal frequency differs in these strategies, the stimulus bandwidth is the same in all cases, since this refers to the stimuli during the period prior to the spikes.
Since there were differences in odds ratios measured with these strategies, it is of interest to assess the validity of the odds ratios. The MLR method was validated with a prediction model that was based on the odds ratios from the all-data analyses, which yielded the smallest estimates of odds ratios. The prediction model was used to predict spikes in either the same or different data sets. The prediction model was very successful in predicting the actual spikes in experimental data files, as evidenced by the ROC analyses. Thus, while our method of analysis may violate an assumption that is commonly made in MLR, it does not appear to substantially alter the effectiveness of the method in our application. Furthermore, other studies have routinely used nonrecovered neurons in experiments in which stimulus–response relationships are characterized. Any study in which the responses of cutaneous RA afferents are characterized using stimuli with a frequency content > 55 Hz (Gescheider et al. 2001) is using nonrecovered neurons.
While there are some limitations (discussed above) on the use of MLR in this application, there are some important advantages to its use. It is a powerful tool for dealing with data such as ours, in which binary spike events are related to continuous predictor variables. By using logistic regression, we avoid longstanding problems associated with attempts to analyze spike train data using standard correlation methods. In particular, logistic regression allows us to treat spikes as discrete events (Victor and Purpura 1997) rather than transforming a series of discrete events into frequency values (see Sanger 2002). It is not necessary to assume that there is a linear relationship between predictors and outcomes. In addition, as a multivariate method, it allows for determining the effect of individual variables that may be confounded, as well as of interactions between them.
MLR quantifies the strength of association between an input variable and the outcome variable in terms of odds ratios. An odds ratio relates the likelihood of observing the outcome variable to the units of the predictor variable. A conceptually simple example involves the effect of smoking, in packs per day, on heart disease: an odds ratio of 1.5 means that smoking 2 packs per day makes heart disease 1.5 times more likely than smoking 1 pack per day. In the present experiment, stimulus magnitudes were normalized to have a mean of 0 and an SD of 1.0. Thus an odds ratio of 30 for dς/dt means that a stimulus that is 1 SD unit greater than the mean is 30 times more likely to elicit a spike than a stimulus whose magnitude is equal to the mean. The absolute values of odds ratios can vary for several reasons, including the amount of data in the data set, the ratio of spikes to no spikes, and the presence of any unmeasured or uncontrolled predictors. Differences in odds ratios can be quite important when, as in Fig. 5, components of a particular data set are included in or excluded from the analysis. But, in general, the magnitudes of odds ratios should be interpreted with some caution.
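With predictors standardized to SD units, this interpretation reduces to simple arithmetic on the fitted coefficients. A minimal sketch; the intercept and coefficient values are illustrative, not fitted values from this study:

```python
import math

beta0 = -5.0             # illustrative intercept
beta1 = math.log(30.0)   # illustrative coefficient giving an odds ratio of 30

def odds(linear_predictor):
    """Odds of a spike given the linear predictor beta0 + beta1*x."""
    return math.exp(linear_predictor)

# Odds of a spike one SD above the mean stimulus vs. at the mean (x = 0):
ratio = odds(beta0 + beta1 * 1.0) / odds(beta0 + beta1 * 0.0)
print(ratio)  # 30: the intercept cancels; the odds ratio is exp(beta1)
```

Note that the intercept divides out, which is why the odds ratio, rather than the raw probability, is the natural per-unit summary of a predictor's effect.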
There are important guidelines for using MLR in applications such as this. Most important is that the stimulus must have a random structure. This is a requirement for any experimental design using reverse correlation (Eggermont et al. 1983). A random structure allows for all possible combinations of predictors to occur prior to a spike. We accomplished this by using a PGN stimulus. The use of a PGN input was also useful in that it allowed for expressing the magnitude of stimuli in units of SDs. This made it possible to make comparisons of the odds ratios for different predictors and interaction terms.
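A band-limited pseudo-Gaussian stimulus and its normalization to SD units can be sketched as follows. The moving-average smoothing, record length, and seed are illustrative assumptions; the published stimulus parameters are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Band-limit white Gaussian noise with a simple moving average
# (an illustrative stand-in for the actual stimulus filtering).
raw = rng.standard_normal(5000)
kernel = np.ones(10) / 10.0
stim = np.convolve(raw, kernel, mode="same")

# Normalize so stimulus magnitudes are in SD units (mean 0, SD 1),
# which lets odds ratios for different predictors be compared directly.
stim = (stim - stim.mean()) / stim.std()
print(stim.mean(), stim.std())  # mean is ~0 and SD is 1 after z-scoring
```

Because the stimulus is random, all combinations of the predictors (and their lagged copies) occur ahead of spikes, which is what makes the reverse-correlation design identifiable.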
We thank S. Liberman for writing programs and P. Tilander and D. Robichaud for technical assistance.
This work was supported by National Institute of Neurological Disorders and Stroke Grant NS-10783.
Address for reprint requests: P. Grigg, Department of Physiology, Room S4–245, University of Massachusetts Medical School, 55 Lake Avenue, Worcester, MA 01655.
- Copyright © 2003 The American Physiological Society
In linear regression, the outcome y is modeled as a function of one or more independent variables, sometimes called predictors. Such models produce predicted values for the outcome, given a set of values for the predictors. If ŷ is the predicted value for the outcome and x1, … , xk are values for k different predictors, a linear regression model for y for subject i might be

ŷi = β0 + β1x1 + … + βkxk

where β0 is the intercept and β1, β2, … , βk are the effects of the k different predictor variables for one unit of change in the predictor. Other ways to describe this are that ŷ is the conditional mean of y conditioning on x1, … , xk, or the expected value of y given x1, … , xk; in mathematical notation, ŷ = E(y|x⃗), where x⃗ is the vector (x1, … , xk). When outcomes are dichotomous it is convenient to model them with dummy values of 1 when the outcome of interest occurs and 0 otherwise. Conditional means of dichotomous variables coded this way are then constrained to be between 0 and 1 in a way similar to cumulative probability distributions. One cumulative probability distribution, the logistic distribution, lends itself particularly well to the modeling of dichotomous variables because it is flexible and easy to use and also because its use lends itself to biological interpretation (Hosmer and Lemeshow 1989).
In this application, in which the input variable was a controlled stretch stimulus and the output was the action potentials evoked by that stretch, the probability of a positive outcome (a spike) is the outcome measure and β0, β1 are the regression coefficients (Hosmer and Lemeshow 1989). This model is clearly a nonlinear, monotonically increasing function of the magnitude of the controlled variable. The relationship was modeled using the logistic regression function

π(x) = exp(β0 + β1x) / [1 + exp(β0 + β1x)]  (A1)

where π(x) represents the expected value for y constrained between 0 and 1 or, equivalently, the conditional mean of y given x, and where x is the predictor or factor. However, the "logit" transformation

g(x) = ln[π(x) / (1 − π(x))] = β0 + β1x  (A2)

is linear in its parameters, continuous, and may range from −∞ to +∞, properties that lend themselves to regression models.
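In code, the logistic function and its logit inverse look like the following; the symbol names mirror the text:

```python
import math

def pi(x, b0, b1):
    """Logistic function: the conditional probability of a spike,
    constrained between 0 and 1."""
    z = b0 + b1 * x
    return math.exp(z) / (1.0 + math.exp(z))

def logit(p):
    """Logit transformation: maps a probability in (0, 1) back to the
    linear predictor, which can range over the whole real line."""
    return math.log(p / (1.0 - p))

b0, b1, x = -2.0, 0.8, 1.5
p = pi(x, b0, b1)
print(0.0 < p < 1.0)                        # True: probability is bounded
print(math.isclose(logit(p), b0 + b1 * x))  # True: logit recovers the linear predictor
```

The round trip in the last line is the point of the transformation: the model is nonlinear in probability but exactly linear on the logit scale.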
Estimates of the coefficients β0 and β1 are made by fitting the model to the experimental data using maximum likelihood estimation (Kleinbaum et al. 1988). The significance of the coefficients can be evaluated using Wald tests or likelihood ratio tests, which are provided as ancillary output from most statistical software packages.
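Maximum likelihood fitting of the coefficients can be carried out by Newton-Raphson iteration (iteratively reweighted least squares). A self-contained sketch on synthetic data; this is a generic illustration of the estimation method, not the authors' software:

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Maximum likelihood logistic regression via Newton-Raphson (IRLS).
    X must include a leading column of ones for the intercept."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        w = p * (1.0 - p)                     # IRLS weights
        grad = X.T @ (y - p)                  # score (gradient of log-likelihood)
        hess = X.T @ (X * w[:, None])         # observed information matrix
        beta += np.linalg.solve(hess, grad)   # Newton step
    return beta

# Synthetic data with known coefficients.
rng = np.random.default_rng(1)
x = rng.standard_normal(20000)
true_b0, true_b1 = -1.0, 2.0
p = 1.0 / (1.0 + np.exp(-(true_b0 + true_b1 * x)))
y = (rng.random(20000) < p).astype(float)

X = np.column_stack([np.ones_like(x), x])
b0_hat, b1_hat = fit_logistic(X, y)
print(abs(b0_hat - true_b0) < 0.2, abs(b1_hat - true_b1) < 0.2)  # True True
```

The inverse of the information matrix at the solution also yields the standard errors used by the Wald tests mentioned above.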
While the goal of modeling is to estimate the model coefficients, a variable of interest in logistic regression is the anti-log ψ(x) of these coefficients, which is interpretable as an estimate of the "odds ratio" for the factor x

ψ = exp(β1)  (A3)

The magnitude of the odds ratio represents the change in the odds of a positive outcome (i.e., a neuronal spike) given a one-unit change in the input (stimulus) variable, relative to the odds of the outcome without the change in the stimulus. The odds ratio is thus a measure of the strength of association between an input variable x and the outcome variable.
When there are multiple factors in an experimental design, the experimenter is confronted with the problem of selecting for analysis one model from all the possible models that can be defined from the different combinations of factors. The process of arriving at a satisfactory model is referred to as model building. The issue in model building is to establish rational criteria for including or excluding factors, with the goal of arriving at a model that is parsimonious (has as few terms as possible) while including as many independently predictive variables as possible. One strategy, with regard to a particular factor, is to fit nested models, which vary by the inclusion of a single variable at a time, so as to determine its relative contribution; the significance of the incremental improvement over the random improvement expected from increasing model complexity can be evaluated with a likelihood ratio test (Kleinbaum et al. 1988). A second strategy is to use a model that includes all factors that are of interest in an experimental design. This is perhaps the more conventional method in biological experiments: include the factors of interest and use a statistical instrument to determine whether they are significant predictors. Finally, by means of "interaction" factors, MLR allows one to test whether the relation between each single factor and the dichotomous outcome depends on the level of some other predictor. Our strategy was to include all the measured experimental variables and all the first-order interactions between those variables in the model. Thus the final multivariate model was

g(x⃗) = β0 + Σi βixi + Σi<j βij xixj  (A4)

where x⃗ is the vector of all the physiologically meaningful factors listed above and the second sum runs over the first-order interactions between them.
Since we had six interaction terms in our model, the expressions used for calculating the odds ratios were

ψς = exp(βς + βς×ɛ ɛ + βς×ς˙ ς˙ + βς×ɛ˙ ɛ˙)  (A5)

ψɛ = exp(βɛ + βς×ɛ ς + βɛ×ς˙ ς˙ + βɛ×ɛ˙ ɛ˙)  (A6)

ψς˙ = exp(βς˙ + βς×ς˙ ς + βɛ×ς˙ ɛ + βς˙×ɛ˙ ɛ˙)  (A7)

ψɛ˙ = exp(βɛ˙ + βς×ɛ˙ ς + βɛ×ɛ˙ ɛ + βς˙×ɛ˙ ς˙)  (A8)

Note that the odds ratios are functions of the interacting variables. For example, the odds ratio for strain, ψɛ, is a function of three variables: ς, ɛ˙, and ς˙. The magnitudes of all odds ratios are expressed in units of SD, so that each odds ratio function can be represented in a one-dimensional domain. Before applying the functions in Eqs. A5–A8 we tested the relative strength of each interaction term compared with the contribution of the main factors. If the strength of an interaction term was at least one order of magnitude lower than that of the main factor, it was disregarded.
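Evaluating an interaction-dependent odds ratio of this form is straightforward; a sketch for the strain odds ratio, with ASCII names standing in for the Greek symbols. All coefficient values are illustrative, not fitted values from this study:

```python
import math

# Illustrative coefficients (SD units): the main effect of strain and its
# three first-order interaction coefficients with stress, strain rate,
# and stress rate.
b_strain = 0.05
b_strain_x_stress = 0.02
b_strain_x_dstrain = 0.01
b_strain_x_dstress = 0.03

def psi_strain(stress, dstrain, dstress):
    """Odds ratio for a one-SD change in strain, which depends on the
    current values of the three variables strain interacts with."""
    return math.exp(b_strain
                    + b_strain_x_stress * stress
                    + b_strain_x_dstrain * dstrain
                    + b_strain_x_dstress * dstress)

# With all interacting variables at their means (0 in SD units), the odds
# ratio reduces to the exponential of the main-effect coefficient alone.
print(math.isclose(psi_strain(0.0, 0.0, 0.0), math.exp(b_strain)))  # True
```

This also shows why a weak interaction coefficient can safely be dropped: its contribution to the exponent, and hence to the odds ratio, is negligible over the ±1 SD range of the interacting variable.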