## Abstract

We recorded neural activity from ensembles of neurons in areas 5 and 2 of parietal cortex while two monkeys copied triangles, squares, trapezoids, and inverted triangles, and we used both linear and nonlinear models to predict the hand velocity from the neural activity of the ensembles. The linear model generally outperformed the nonlinear model, suggesting a reasonably linear relation between the neural activity and the hand velocity. We also found that the average transfer function of the linear model fit to individual cells was a low-pass filter, because the neural response had considerable high-frequency power, whereas the hand velocity only had power at frequencies below ∼5 Hz. Increasing the width of the transfer function, up to a width of 700–800 ms, improved the fit of the model. Furthermore, the Rsqr of the linear model improved monotonically with the number of cells in the ensemble, saturating at 60–80% for a filter width of 700 ms. Finally, we found that including an interaction term, which allowed the transfer function to shift with the eye position, did not improve the fit of the model. Thus ensemble neural responses in superior parietal cortex provide a high-fidelity, linear representation of hand kinematics within our task.

## INTRODUCTION

The superior parietal cortex (SPC, Brodmann's areas 5 and 2) contains a sensorimotor representation of the arm. Early studies of SPC investigated its sensory representation (Duffy and Burchfiel 1971; Sakata et al. 1973), following its known anatomical connections with primary sensory cortex (Jones and Powell 1970). Later experiments explored its relation to movements (Kalaska et al. 1983; Mountcastle et al. 1975), and arm position (Georgopoulos and Massey 1985; Georgopoulos et al. 1984) using more rigorously controlled paradigms. These experiments as well as others (Chapman et al. 1984; Seal and Commenges 1985) established the presence of both a sensory representation in SPC, which could be eliminated by cutting the dorsal roots (Seal et al. 1982), and a motor representation, which could precede the earliest electromyographic (EMG) activity (Kalaska et al. 1983).

More recent experiments in the SPC have extended these findings by characterizing the neural activity with respect to task factors including delay periods (Crammond and Kalaska 1989), dynamics (Kalaska et al. 1990), different periods of a delayed reach task (Johnson et al. 1996), and movements made in a similar direction but within different parts of space (Ferraina and Bianchi 1994; Lacquaniti et al. 1995). Several of these experiments have shown that neural activity in the SPC can be modulated by task factors other than movements, including the planned direction of a movement (Crammond and Kalaska 1989) and the onset of a target that indicates the direction of an upcoming movement (Johnson et al. 1996). Finally, work on the coordinate system in which reaches are represented in parietal cortex has shown that the preferred reach directions of cells in medial area 5 can be dependent on eye position (Buneo et al. 2002).

We trained monkeys to copy triangles, squares, trapezoids, and inverted triangles (Averbeck et al. 2003) and recorded the activity of small ensembles of neurons while the monkeys performed the task. Previous experiments have shown that the neural activity in primary motor cortex can be used to predict the hand velocity during the production of complex trajectories in a tracing task (Schwartz 1992–1994; Schwartz and Moran 1999). Our analyses extend these results to parietal cortex. Although the studies cited in the preceding text have shown that there is a sensory-motor representation of the arm in parietal cortex, the accuracy of that representation has not been examined. The use of simultaneously recorded neurons is important because noise correlations measured in most systems may decrease the amount of information contained in neural ensembles (Sompolinsky et al. 2001). Thus if sequentially recorded neurons are used to estimate hand velocity, the accuracy of the reconstruction will be overestimated. Although simpler tasks could have been used to study the parietal representation of hand velocity, the complex response properties of cortical neurons (Muir and Lemon 1983) would not guarantee that the representation of movements in a simple task would correlate with the representation of movements in a more complex task. Finally, the diversity of movements generated in the execution of the trajectories in our task generated a rich dataset on which to test our decoding models.

For our analyses, we applied both a linear model and a nonlinear model to predict the hand velocity in 25-ms bins, using the neural responses of the small ensembles. These analyses allowed us to establish how well the hand movements were represented in small ensembles of parietal cortex neurons, an issue relevant to the neural control of prosthetic devices (Taylor et al. 2002; Wessberg et al. 2000) as well as the neural coding of complex arm movements. Furthermore, by comparing linear and nonlinear models, we were able to examine the relative linearity of the representation in parietal cortex. If the hand velocity were a strongly nonlinear function of the neural responses, a nonlinear model should outperform a linear model. We found that the linear model generally outperformed the nonlinear model, suggesting a relatively linear relationship between neural activity and hand velocity. The estimate of hand velocity by the linear model was highly accurate, accounting for 60–80% of the variance of the hand velocity with ensembles of 6–18 cells. We also found that the accuracy (Rsqr) of the reconstruction scaled monotonically with the number of cells in the ensemble, saturating for larger ensembles. In the frequency domain, the average transfer function was a low-pass filter, reflecting the fact that the movement was band limited to frequencies below ∼5 Hz, whereas the neural activity contained power at higher frequencies. Finally, we found no effect of eye position: an interaction term between the neural activity and the eye position did not increase the accuracy of the prediction of the hand velocity, suggesting that in the part of the cortex from which we were recording, the representation was in a hand-centered coordinate system.

## METHODS

The task and recording protocols have been reported in detail previously (Averbeck et al. 2002).

### Animals and task

Two male rhesus macaques, *Macaca mulatta* (*M157* and *M555*, 8–10 kg body wt) were used in the experiments. Care and treatment of the animals during all stages of the experiments conformed to the Principles of Laboratory Animal Care (National Institutes of Health Publication No. 86–23, revised 1995). All experimental protocols were approved by the appropriate institutional review boards.

During the experiments, monkeys were seated in a primate chair facing a rear projection screen. They used a joystick (model 541 FP, Measurement Systems, Norwalk, CT) to control a cursor on the screen. A 26-mm excursion of the joystick (∼1 side of the square) resulted in a 113-mm (13.4° of visual angle) excursion of the cursor on the screen. The monkeys began a trial by moving the cursor into a start hold circle presented on the left half of the screen. This initiated a wait time (WT, 1 s for *M157*, 2 s for *M555*). At the end of the WT, a shape appeared on the right half of the screen, and the monkey drew the shape presented in a copy space on the left half of the screen. *Monkey M555* drew triangles and squares and *monkey M157* drew triangles, squares, trapezoids, and inverted triangles. To complete a trial, the monkey had to execute a trajectory while remaining within invisible corridors. The corridors were small channels, essentially outline versions of the shapes, that constrained the trajectory appropriately. A trajectory was completed when the monkey returned to the start hold circle. If at any time during the trial the cursor went beyond the allowed corridors, the trial was terminated, and the monkey was not given a reward. Animals were allowed as much time as they needed to complete the trajectory, although they normally drew continuously and rapidly. *M157* drew 30 correct trials of each shape in five blocks of six trials each. The average percent correct performance for *M157* was 72.4 ± 0.12, 63.2 ± 0.14, 56.4 ± 0.10, and 62.6 ± 0.13% (means ± SD) for the triangle, square, trapezoid, and inverted triangle, respectively. *M555* was given 60 trials of each shape, in one block of 60 trials each. The average percent correct performance for *M555* was 55.6 ± 0.19 and 34.1 ± 0.15% for the triangle and the square, respectively.

### Data collection and preprocessing

Eye position was monitored using a scleral search coil technique (CNC Engineering, Seattle, WA) (Fuchs and Robinson 1966; Judge et al. 1980). The coil and a halo for holding the head (Nakasawa Works, Tokyo, Japan) were implanted during an aseptic surgery performed under general gas (halothane) anesthesia. We recorded the activity of multiple single cells simultaneously using a 16-microelectrode recording matrix (UWE THOMAS Recording, Marburg, Germany) (see Lee et al. 1998; Mountcastle et al. 1991). The joystick and eye position were sampled at 200 Hz. The hand position, eye position, and spike times were prefiltered using an FIR low-pass filter with a cutoff frequency of 15 Hz (Oppenheim and Schafer 1989) and subsampled at 40 Hz (similar to using a bin width of 25 ms) to reduce the computational load and eliminate noise. The FIR filter order was sufficient to minimize aliasing.
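The prefiltering and subsampling step can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the filter order (`numtaps`) and the use of SciPy's `firwin`/`filtfilt` are our assumptions.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS_IN = 200    # original sampling rate of joystick/eye/spike signals (Hz)
FS_OUT = 40    # target rate, i.e., 25-ms bins
CUTOFF = 15.0  # low-pass cutoff frequency (Hz)

def prefilter_and_subsample(x, numtaps=101):
    """Zero-phase FIR low-pass at 15 Hz, then keep every 5th sample (200 -> 40 Hz).

    numtaps is a hypothetical filter order chosen for illustration.
    """
    b = firwin(numtaps, CUTOFF, fs=FS_IN)  # linear-phase FIR low-pass design
    y = filtfilt(b, [1.0], x)              # filter forward and backward (no phase lag)
    return y[:: FS_IN // FS_OUT]

# a 2-Hz movement-band component plus 60-Hz noise
t = np.arange(0, 2, 1 / FS_IN)
raw = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
clean = prefilter_and_subsample(raw)
```

Because the low-pass filter removes energy above 15 Hz before decimation, subsampling to 40 Hz introduces negligible aliasing, consistent with the filter-order remark above.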

Frequency domain analyses presented in the results were carried out using the fast Fourier transform (FFT) algorithm of Matlab (The Mathworks, Natick, MA). The periodogram for the neural signal (see Fig. 5) was estimated by averaging across trials and across the cells from the ensemble, using the spike times for each cell before they were prefiltered. The periodogram for the transfer function was found by computing the periodogram for each cell's transfer function, and then averaging these across cells. Finally, the periodogram for the predicted and measured velocity were found by averaging across trials, and across the *x* and *y* components of the velocity.
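Trial-averaged periodograms of the kind described can be computed as below (a minimal sketch; the normalization and the use of NumPy's real FFT are our assumptions):

```python
import numpy as np

def avg_periodogram(trials, fs):
    """Average the single-trial periodograms of a set of equal-length segments.

    trials : (n_trials, T) array of signal segments
    fs     : sampling rate in Hz
    Returns the frequency axis and the trial-averaged power spectrum.
    """
    n_trials, T = trials.shape
    power = np.abs(np.fft.rfft(trials, axis=1)) ** 2 / T  # per-trial periodograms
    freqs = np.fft.rfftfreq(T, d=1.0 / fs)
    return freqs, power.mean(axis=0)

# e.g., forty 1-s trials of a noisy 2-Hz oscillation sampled at 40 Hz
rng = np.random.default_rng(1)
t = np.arange(40) / 40.0
trials = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal((40, 40))
freqs, pxx = avg_periodogram(trials, fs=40)
```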

We decoded velocity as an estimate of the accuracy of the kinematic signal in parietal cortex. Given the initial position, we estimated position by integrating the velocity trace. Because velocity is linearly related to position, either position or velocity can be decoded with a linear model. Because velocity is the difference in position, a position filter can always be turned into a velocity filter by first taking the difference in the neural response or by incorporating the differencing operation into the filter. Thus distinguishing carefully between position and velocity representations would require a different experimental paradigm. In the present work, we are primarily concerned with the fidelity of the representation of velocity as well as whether this representation is linear or nonlinear.
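Given the initial position, integrating a decoded velocity trace back to position amounts to a cumulative sum over the 25-ms bins (a sketch under our bin-width assumption):

```python
import numpy as np

DT = 0.025  # bin width in seconds (40-Hz bins)

def position_from_velocity(v, p0):
    """Integrate a (T, 2) velocity trace from initial position p0 (Euler integration)."""
    return p0 + np.cumsum(v * DT, axis=0)

# a constant velocity of (40, 0) mm/s held for 1 s moves the hand 40 mm in x
v = np.tile([40.0, 0.0], (40, 1))
p = position_from_velocity(v, np.array([0.0, 0.0]))
```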

### Nonlinear model

We carried out a nonlinear regression analysis (Chan and Tong 2001) by numerically computing the expected value of the velocity given the neural activity. Theoretically such a model represents the best estimate of a variable in a least squares sense (Papoulis 1991). The expected value can be written as

$$E[\nu(t)\,|\,\vec{s}(t)] = \int \nu\, f(\nu\,|\,\vec{s}(t))\, d\nu \tag{1}$$

where *E* is expected value, ν(*t*) is the velocity to be predicted at time *t*, and *s⃗*(*t*) is the neural activity vector at time *t*. This integral can be estimated numerically, using the formula

$$\hat{\nu}(t) = \frac{\sum_{t'} \nu(t')\, f\!\left(\left|\vec{s}(t) - \vec{s}(t')\right| / b\right)}{\sum_{t'} f\!\left(\left|\vec{s}(t) - \vec{s}(t')\right| / b\right)} \tag{2}$$

where the sum is over all data points *t′*, *f*( ) is a probability density function, in our case a Gaussian, *s⃗*(*t*) is a neural activity vector defined below, and *b* is a bandwidth parameter, which was optimized separately for each model. The vertical bars indicate Euclidean distance.

Estimates were constructed using *Eq. 2* by first splitting the dataset in half. The current neural activity vector, *s⃗*(*t*), was then compared with all the neural activity vectors *s⃗*(*t′*) in the other half of the dataset, i.e., the sum over *t′* is taken across the half of the dataset that did not contain *s⃗*(*t*). Thus to predict the hand velocity at a point in time, the Euclidean distance was calculated between the neural activity vector *s⃗*(*t*) and all the neural activity vectors in the other half of the data, denoted by *s⃗*(*t′*). These distances were then scaled by the bandwidth parameter *b* and used as arguments to *f*, which was a zero mean, unit variance Gaussian distribution. The velocity, ν(*t′*), which corresponded to each neural activity vector *s⃗*(*t′*), was then weighted by *f*. This regression is similar to a *k*-nearest neighbors analysis, in which the estimated velocity would be the average of the *k* nearest neighbors, except that in our estimate all the data points were used, and our average was weighted by the distance of each data point. In practice, however, *b* was quite small, such that the mean was heavily weighted toward the nearby data points, making it a local average.
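The kernel-regression estimate of *Eq. 2* can be sketched as follows (an illustrative NumPy implementation; the variable names and toy data are ours):

```python
import numpy as np

def kernel_regression(s_query, S_other, V_other, b):
    """Nadaraya-Watson estimate of hand velocity from a neural activity vector.

    s_query : (d,) neural activity vector s(t) to decode
    S_other : (T, d) activity vectors from the other half of the dataset
    V_other : (T, 2) x/y hand velocities paired with S_other
    b       : bandwidth of the zero-mean, unit-variance Gaussian kernel
    """
    dist = np.linalg.norm(S_other - s_query, axis=1)  # Euclidean distances
    w = np.exp(-0.5 * (dist / b) ** 2)                # Gaussian kernel weights
    return (w / w.sum()) @ V_other                    # distance-weighted mean velocity

# toy check: with a small bandwidth, a query identical to one training vector
# recovers that vector's velocity -- the "local average" described in the text
S = np.array([[0.0, 0.0], [5.0, 5.0]])
V = np.array([[1.0, 2.0], [-3.0, 4.0]])
v_hat = kernel_regression(np.array([0.0, 0.0]), S, V, b=0.1)
```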

Because we were analyzing data from sensorimotor cortex, in which neural activity both precedes (i.e., motor) and follows (i.e., sensory) the movement in time, the window of neural responses was centered on the hand velocity. Thus when predicting the velocity ν at time *t*, the corresponding neural activity vector was given by

$$\vec{s}(t) = \left[ s_1(t-k), \ldots, s_1(t+k),\; s_2(t-k), \ldots, s_2(t+k),\; \ldots,\; s_n(t-k), \ldots, s_n(t+k) \right]$$

where *k* = *w*/2, *w* is the total number of bins, and the subscript on the *s* indicates the cell index. For the data plotted in Fig. 2, *k* was 8 bins (200 ms). The smaller window was used for the analyses shown in Fig. 2 to decrease the computational load of the nonlinear model. In analyses that we do not present, we also used a smaller window and adapted it to each cell to optimize the variance explained by the model. However, the maximum variance was often explained by lags at the extremes of the allowable range. Therefore we used the fixed range, because it was consistent with previously reported physiological delays. Because we used neural activity that followed the hand velocity in time, we had to truncate the prediction of the hand velocity 200 ms before the end of the trial, because we did not collect neural activity past the end of the trial.

### Linear model

We also used a linear model to predict the hand velocity from the neural activity (Ljung 1999). For the linear model, we estimated the optimal transfer function *h*, which, when convolved with a spike train *s*(*t*), minimized the squared error of the estimate of hand velocity ν(*t*). In our case *h*(*k*) and ν(*t*) are two-dimensional vectors, one for the *x* and one for the *y* dimension. The convolution formula for one dimension is

$$\nu_x(t) = \alpha + \sum_{i=1}^{n} \sum_{k=0}^{w-1} h_{xi}(k)\, s_i(t - k + \tau) \tag{3}$$

where α is a constant, *w* is the width of the filter in bins, τ is the lag of the filter in bins, *n* is the number of neurons, ν_{x} is the *x* dimension of the velocity, the *h*_{xi}(*k*) are the coefficients of the filter for the *x* dimension of the hand velocity for *cell i*, and *s*_{i}(*t*) is the neural response of *cell i* at time *t*. We fit models at a range of different widths, *w*, and lags, τ. For the analyses presented in Figs. 2 and 10, *w* was 8 bins. We used a smaller window in Fig. 2 to allow direct comparison with the nonlinear model and in Fig. 10 to control the complexity of the model. For the other cases (except Fig. 8*A*), we present results with 28 bins (700 ms) because filter widths between 700 and 800 ms extracted the most information. For the analyses presented in Figs. 2 and 10, τ was 4 bins. For the other noncausal filters, τ was 8 bins, and for the causal filters, τ was 0.
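Fitting the filter of *Eq. 3* reduces to ordinary least squares on a design matrix of lagged spike counts. The sketch below is our reconstruction, not the authors' code; for brevity it uses a purely causal filter (τ = 0) and hypothetical toy data:

```python
import numpy as np

def lagged_design(S, w):
    """Design matrix of causal lags for Eq. 3 with tau = 0.

    S : (T, n) binned spike counts for n simultaneously recorded cells
    w : filter width in bins
    Columns are s_i(t - k) for each cell i and lag k = 0..w-1, plus a
    constant column for the offset alpha.
    """
    T, n = S.shape
    cols = []
    for i in range(n):
        for k in range(w):
            col = np.zeros(T)
            col[k:] = S[: T - k, i]  # spike counts shifted k bins into the past
            cols.append(col)
    cols.append(np.ones(T))          # constant term
    return np.column_stack(cols)

# toy data: velocity is each cell's rate delayed by 2 bins, so the fitted
# filter should place its weight at lag k = 2
rng = np.random.default_rng(0)
S = rng.poisson(2.0, size=(500, 2)).astype(float)
V = np.column_stack([np.roll(S[:, 0], 2), np.roll(S[:, 1], 2)])
X = lagged_design(S, w=8)
H, *_ = np.linalg.lstsq(X, V, rcond=None)  # least-squares filter coefficients
V_hat = X @ H
r2 = 1 - np.sum((V - V_hat) ** 2) / np.sum((V - V.mean(axis=0)) ** 2)
```

A noncausal filter is obtained the same way by also including columns for negative lags (activity following the movement).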

### Eye-position model

To estimate the effect of eye position on the motor representation, we fit models that included interaction terms between the eye position and the neural response. Eye position was correlated with hand velocity in our task; therefore we had to test for the significance of an interaction between neural activity and eye position in the presence of linear terms for both members of the interaction. Therefore the first model was

$$\nu_x(t) = \alpha + \sum_{i=1}^{n} \sum_{k=0}^{w-1} h_{xi}(k)\, s_i(t - k + \tau) + h_x^{e_x} e_x(t) + h_x^{e_y} e_y(t) \tag{4}$$

where *e*_{x}(*t*) and *e*_{y}(*t*) are the *x* and *y* eye position, and *h*_{x}^{ex} and *h*_{x}^{ey} are the coefficients which control for the linear correlation between the eye position and the hand velocity, with the superscript indicating the eye coordinate and the subscript indicating the hand coordinate. To include the interaction effect, *Eq. 4* was extended to

$$\nu_x(t) = \alpha + \sum_{i=1}^{n} \sum_{k=0}^{w-1} h_{xi}(k)\, s_i(t - k + \tau) + h_x^{e_x} e_x(t) + h_x^{e_y} e_y(t) + \sum_{i=1}^{n} \sum_{k=0}^{w-1} \left[ h_{xi}^{e_x}(k)\, e_x(t) + h_{xi}^{e_y}(k)\, e_y(t) \right] s_i(t - k + \tau) \tag{5}$$

where *h*_{xi}^{ex} are the coefficients which account for the effect of the *x* dimension of the eye position on the filter, and *h*_{xi}^{ey} are the coefficients for the *y* dimension of the eye position. *Eq. 5* can be rewritten as

$$\nu_x(t) = \alpha + \sum_{i=1}^{n} \sum_{k=0}^{w-1} \left[ h_{xi}(k) + h_{xi}^{e_x}(k)\, e_x(t) + h_{xi}^{e_y}(k)\, e_y(t) \right] s_i(t - k + \tau) + h_x^{e_x} e_x(t) + h_x^{e_y} e_y(t) \tag{6}$$

which shows the explicit linear effect of the *x* and *y* eye position on the filter. If the linear filter coefficients, *h*_{xi}, are thought of as a series of preferred directions at each lag, *Eq. 6* can be interpreted as allowing for a linear shift in the preferred direction as a function of the eye position, with the size of the shift given by *h*_{xi}^{ex} and *h*_{xi}^{ey}, which are coefficients fit by the model to minimize the sum of the squared error. Although between 3,000 and 5,000 data points were available for each ensemble, the number of terms in *Eq. 6* could become quite large. Therefore, as mentioned in the preceding text, we used a window of only 200 ms (8 bins) when estimating this model.
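The interaction model of *Eqs. 5* and *6* can be expressed by augmenting the lagged neural design matrix with eye-position terms and neural-by-eye products. The sketch below is a schematic reconstruction; the matrix layout and names are ours:

```python
import numpy as np

def eye_interaction_design(X_neural, eye):
    """Augment a lagged spike-count design matrix with eye-position terms.

    X_neural : (T, p) lagged neural regressors, as in Eq. 3
    eye      : (T, 2) x/y eye position
    Adds the linear eye terms of Eq. 4 and the neural-by-eye products of
    Eq. 5, which let the filter coefficients (preferred directions at each
    lag) shift linearly with eye position.
    """
    ex, ey = eye[:, :1], eye[:, 1:]  # column vectors of x and y eye position
    return np.hstack([X_neural, ex, ey, X_neural * ex, X_neural * ey])

# placeholder regressors just to illustrate the layout
X = np.ones((10, 3))
eye = np.linspace(0.0, 1.0, 20).reshape(10, 2)
X_full = eye_interaction_design(X, eye)
```

Each neural column is tripled (the original term plus one product per eye coordinate), which is why the number of terms grows quickly and a short 200-ms window was used.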

### Model performance

For both the linear and the nonlinear analyses, a single model, fit across the data from all the shapes, was used to predict the hand velocity from movement onset until the end of the trial. Model performance was assessed by calculating an *R*^{2} statistic, in conjunction with cross validation (CV). However, CV, especially in the form of leave-one-out cross validation, tends to overfit models (Larsen and Goutte 1999). To minimize the effect of overfitting, we used twofold CV. The *R*^{2} results presented in the paper were estimated as follows. The dataset, which consisted of all correct trials for all the shapes, was split into two halves. The split was carried out such that both halves of the dataset contained half of the trials for each shape. The model parameters were estimated on the first half, and the sum of the squared error (sse) was estimated on the second half, according to

$$\mathrm{sse} = \sum_{t=1}^{T} \left[ \left( \nu_x(t) - \hat{\nu}_x(t) \right)^2 + \left( \nu_y(t) - \hat{\nu}_y(t) \right)^2 \right] \tag{7}$$

where ν̂_{x}(*t*) and ν̂_{y}(*t*) are the predicted velocities and *T* is the total number of relevant time bins across trials in the second half of the dataset. Then the model parameters were estimated on the second half, and the sse was estimated on the first half. The two sse estimates were then added to get an estimate of the sse for the whole dataset. The total sum of squares (sst) was correspondingly calculated as

$$\mathrm{sst} = \sum_{t=1}^{T} \left[ \left( \nu_x(t) - \bar{\nu}_x \right)^2 + \left( \nu_y(t) - \bar{\nu}_y \right)^2 \right] \tag{8}$$

where ν̄_{x} is the mean velocity in the *x* dimension and ν̄_{y} is the mean velocity in the *y* dimension. The *R*^{2} was then calculated as

$$R^2 = 1 - \frac{\mathrm{sse}}{\mathrm{sst}} \tag{9}$$

where the sst and sse are the pooled values across all correct trials of all shapes. Thus one *R*^{2} value was calculated for each ensemble. This statistic does not account for the fact that the *x* and *y* components of the hand velocity are correlated. Another approach would be to use the determinant of the error covariance matrix as an estimate of the variance (Johnson and Wichern 1998). Unless it is assumed that the correlations between the *x* and *y* components will change for different models, however, using the determinant should not produce different results, and it would be harder to interpret.
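The twofold cross-validation procedure of *Eqs. 7–9* can be sketched generically as follows (our helper; `fit` and `predict` stand in for either the linear or the nonlinear model, and the contiguous split is a simplification of the per-shape split described above):

```python
import numpy as np

def twofold_cv_r2(X, V, fit, predict):
    """Pooled twofold cross-validated R^2 over both velocity components.

    Each half is held out in turn: parameters are estimated on one half and
    the squared error is accumulated on the other (Eq. 7); sst is computed
    over the whole dataset (Eq. 8); R^2 = 1 - sse/sst (Eq. 9).
    """
    T = X.shape[0]
    a, b = slice(0, T // 2), slice(T // 2, T)
    sse = 0.0
    for train, test in [(a, b), (b, a)]:
        model = fit(X[train], V[train])
        sse += np.sum((V[test] - predict(model, X[test])) ** 2)
    sst = np.sum((V - V.mean(axis=0)) ** 2)
    return 1.0 - sse / sst

# toy check with an exactly linear system and a least-squares model
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 4))
W = rng.standard_normal((4, 2))
V = X @ W
r2 = twofold_cv_r2(
    X, V,
    fit=lambda Xt, Vt: np.linalg.lstsq(Xt, Vt, rcond=None)[0],
    predict=lambda H, Xt: Xt @ H,
)
```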

### Anatomical location of recordings

Figure 1 shows the electrode penetrations. All penetrations are shown, but data were only analyzed for penetrations anterior to the intraparietal sulcus. Most cells were recorded on the crown and upper bank of the gyrus. Although the data for *M157* likely came from area 2 as well as area 5, we were unable to find significant effects of the *x-y* position of the recording location on the Rsqr, and therefore all data were pooled.

## RESULTS

We examined the neural activity of 102 ensembles, composed of a total of 1,275 neurons, recorded from the SPC of two monkeys. Neurons with fewer than five spikes per trial on average were excluded from the analyses. Thus the results presented are based on the activity of 664 neurons. We included only correct trials in the analyses that follow. Because there are nonlinearities in the system that links neural responses in parietal cortex to hand velocities, we compared the effectiveness of linear and nonlinear models for predicting hand velocity. Figure 2 shows the results of this analysis. In most cases, the linear model was more effective at predicting the hand velocity than the nonlinear model, often accounting for as much as 20% more variance than the nonlinear model. Thus hand velocity appears to be a relatively linear function of the neural responses in parietal cortex in our task. We focus the remaining results on exploring the linear model in detail.

In Fig. 3, we plot histograms of the spike rate on the mean trajectory, as well as the corresponding single trial rasters, for a single session for four single cells. The width of the line which plots the trajectory is proportional to the mean spike rate in the corresponding time bin. These cells show activity typical of our population in that they respond similarly for segments with similar directions across the shapes. These four cells belong to an ensemble of 13 cells that we explore in detail in the following figures. The cell number in the triangle histogram plot corresponds to the cell number in Fig. 4, which shows the filter coefficients, *h* (see methods, *Eq. 3*), for all 13 cells from this ensemble. Figure 4*A* shows the filter coefficients for the cells, when considered as part of the ensemble, and *B* shows the filter coefficients estimated separately for each cell. When the coefficients for each cell were estimated as part of an ensemble, *Eq. 3* was estimated once for all cells simultaneously. However, when the filter coefficients were estimated for each cell separately, *Eq. 3* was fit separately for each cell. The coefficients in the two cases are generally different because the filter coefficients estimated by the linear analysis are modified to account for correlations between the neurons to optimize the prediction of the hand velocity (Zhang et al. 1998). This is similar to the difference between the population vector and the optimal linear estimator (Salinas and Abbott 1994). Therefore the transfer functions estimated when the cells are treated as an ensemble depend arbitrarily on the other cells that were simultaneously recorded. The main features of the transfer functions, especially of the transfer functions which have the largest amplitudes (for example, *cells 1, 8*, and *13*), are relatively unaffected by estimating them as part of the ensemble. 
However, as *cell 8* shows, even small differences in the velocity transfer functions can lead to position representations that look much different. These transfer functions are also closely related to the commonly estimated spike-triggered averages, in our case of hand velocity. However, our transfer functions are controlled for the autocorrelation of each neuron's response.

Figure 5 shows the frequency domain representation of the neural response, the average transfer function, and the predicted and estimated hand velocity, for the ensemble of 13 cells which were shown in Fig. 4. It can be seen that the neural response (Fig. 5*B1*), which has had the mean subtracted, has a large peak <1 Hz and that the average transfer function (Fig. 5*B2*) is approximately a low-pass filter. The transfer function is a low-pass filter because there is residual power in the neural signal at high frequencies, but there is no power in the trajectories (Fig. 5*B3*) at those frequencies. Thus for the linear model, the high-frequency power in the spike trains is noise. The two lines in Fig. 5*B2* correspond to the two different cases shown in Fig. 4, *A* and *B*. The single cell transfer functions had higher gains because the transfer functions calculated for the ensemble are controlled for the other cells in the ensembles. Therefore they are the partial contribution of a particular cell to the velocity. Figure 5*B3* shows the power in the measured and the predicted hand velocities (averaged across the *x* and *y* dimensions) as well as the power in the noise and the signal-to-noise ratio (SNR), which is defined as the ratio of the power in the predicted signal to the power in the noise. It can be seen that the power in the noise is relatively constant across the frequencies of the movements, whereas the SNR peaks at ∼1 Hz. This peak, however, is clearly due to the peak in the signal. Therefore the model appears to be able to reconstruct the power in the hand velocity relatively well, at all relevant frequencies. For comparison, Fig. 6, *A* and *B*, plots the population average of the measured and predicted hand velocity as well as the noise and the SNR. Figure 6*A* shows the average across the entire population; Fig. 6*B* shows the average using only ensembles with Rsqr >0.5. 
The SNR for the total population average has a peak <1 Hz, whereas the ensembles with better prediction performance have a peak SNR that is more similar to the measured hand velocity and more similar to our example ensemble given in Fig. 5. Thus Fig. 6*B* likely provides a better estimate of the noise in our analysis because the results in *A* are due to the lack of signal in the smaller ensembles.

We also explored the performance of our example ensemble on a number of individual trials. In Fig. 7*A*, we show the measured and predicted hand velocity for the first three trials in which the triangle was drawn, for the same example ensemble whose transfer functions are shown in Fig. 4. The trajectories are truncated because we did not record data past the end of the trial, and we are using sensory data to reconstruct the drawings. It can be seen that there is a reasonable match between the predicted and the measured trajectories. Figure 7*B* shows the integrated velocity traces for the same three trials that are shown in *A*. The rightmost column shows the average of all 30 trials in which the triangle was drawn for this ensemble. Figure 7, *C–E*, shows example position traces for the other three shapes. The individual trajectories were reconstructed somewhat noisily. However, the average trajectories show that the model captured the main features of each shape.

Figure 8*A* shows the average performance across ensembles of the linear model as a function of the width of the filter (*w* in *Eq. 3*), for both causal and noncausal filters. The causal filter estimated the hand velocity only with neural activity that preceded the movements in time (τ = 0 in *Eq. 3*), whereas the noncausal filter included neural activity that followed the movement in time by 200 ms (τ = 8 in *Eq. 3*). This plot suggests that sensory information contributes to the representation in parietal cortex because the noncausal filter is more accurate than the causal filter. Most of the information that precedes the movement is present in the interval from 500 ms to the time of the movement. Extending the filter past 500 ms before movement onset adds only a small amount of additional predictive capability. This has to be interpreted carefully with respect to the maximum lead at which neural activity predicts hand velocity: because the hand velocity is itself autocorrelated, a spike that predicts a feature of the trajectory 200 ms in the future will also predict features of the trajectory further into the future. The average differences in the Rsqr between the causal and noncausal filters are rather small, being <0.04 at their maximum. Figure 8*B* shows a plot of the Rsqr of the 700-ms noncausal filter versus the number of cells in the ensemble. This plot shows that for small ensemble sizes, the Rsqr increases approximately linearly with the number of neurons in the ensemble. However, for larger ensembles, the increase in the Rsqr saturates.

In Fig. 9, we show a plot of the eye position overlaid on the evolving copy trajectory for a single representative trial for each shape. Each column shows a snapshot of the eye-position trace superimposed on the drawing at a different point in the trial with the trial time increasing from left to right. The monkey normally made a saccade to the template, which is not shown, when it appeared, then made a saccade back to the drawing area as the trial began, and then made a sequence of small saccades to follow the progress of the drawing. Thus the eye position changes considerably through the course of a trial. Although there is a stereotyped relation between the eye and hand positions, multiple regression analyses showed that only ∼20% of the variance in either the *x* or the *y* hand position could be accounted for by the combined effects of the *x* and *y* components of the eye position and even less of the hand velocity could be predicted by the eye position. Thus the variability in the eye position was only weakly correlated with the hand kinematics. This regression was, however, carried out at a lag of 1 (i.e., 25 ms) between the eye and the hand position because this was the lag used in our decoding analyses (see following text and *Eqs. 5* and *6*). It is possible that other lags could have revealed a much larger correlation between the hand and the eye (Ariff et al. 2002), although we did not explore this possibility systematically.

We also fit a model which looked for an effect of eye position on the filter coefficients (*Eqs. 5* and *6*). The main goal of this analysis was to see if some of the noise, or unexplained variance in the hand velocity, was due to the fact that our basic model did not take into account eye position. To explore this possibility, we fit a model that included an interaction term between neural activity and eye position. If the filter coefficients are thought of as preferred directions at various lags, the interaction terms allow for a shift in the transfer function as a linear function of the eye position (see *Eq. 6*). Because we are using the extra sum of squares principle (Draper and Smith 1998) to test for the significance of the interaction term, we wanted to test for its effect in the presence of both linear terms. Therefore both models contain linear terms for eye position and neural activity (compare *Eqs. 4* and *5*). The interaction model includes an additional term for the interaction between the eye position and the neural activity. Figure 10*A* shows a comparison between models with and without interaction terms with both models including a linear term for eye position. Across the population, there appears to be little effect of eye position on the tuning functions. This result could be due to the fact that only a subset of the population shows the effect, and therefore carrying out this analysis at the ensemble level washes out the effect as well as the fact that the model becomes quite complex at the ensemble level. Therefore we also carried out the analysis at the single-cell level. The results are plotted in Fig. 10*B*. Although a few cells seem to show some effect, across the population, there is relatively little change in the variance explained.

## DISCUSSION

We have explored the ability of ensembles of neurons to predict the velocity of the hand in a copy task using linear and nonlinear models to decode the activity of the ensembles. The linear models outperformed the nonlinear models and were able to account for 60–80% of the variance of the movement trajectories. Each cell had associated with it a transfer function, which was the average partial trajectory through which the hand moved before and after a cell fired a spike. The filters for individual cells were low-pass, reflecting the fact that the hand movements were low frequency. We also found that an interaction between the neural activity and the eye position did not strongly affect the performance of the decoding algorithm, suggesting that the tuning functions of these cells were not in eye-centered coordinates or were only weakly affected by eye position.

The SPC is richly interconnected within the cortical network that subserves reaching movements (Caminiti et al. 1996). On the sensory side, it receives somatosensory afferents from area 2 of sensory cortex (Jones et al. 1978), which carry information about arm position and velocity (Mountcastle and Powell 1959), and visual afferents from area MIP in the anterior bank of the intraparietal sulcus (Caminiti et al. 1996), which in turn receives visual input from area 7m and area V6a. The SPC is thus capable of integrating information about hand position and potential targets for reaching movements. On the motor side, the SPC has connections with primary motor cortex (Caminiti et al. 1996; Jones and Powell 1970) and the supplementary motor area (Pandya and Seltzer 1982), as well as a weak, though consistently reported, connection with the caudal periprincipalis area of prefrontal cortex (Pandya and Kuypers 1969; Petrides and Pandya 1984; Yeterian and Pandya 1985). Subcortically, area 5 is connected primarily with the lateral posterior and anterior pulvinar nuclei of the thalamus; these connections distinguish area 5 from area 2, which is connected with the ventral basal nuclei (Jones et al. 1979; Yeterian and Pandya 1985). Relatively little is known about the thalamic nuclei with which area 5 connects (Steriade et al. 1997). Beyond its position in the parieto-frontal network involved in reaching movements, area 5 also projects directly to the spinal cord (Coulter and Jones 1977; Galea and Darian-Smith 1994; Murray and Coulter 1981), as do the other somatomotor areas (Dum and Strick 1996; Kuypers 1981). The corticospinal projections from parietal cortex differ from those from frontal cortex in that they tend to synapse in the dorsal horn of the spinal gray matter, whereas the frontal projections synapse in the ventral horn (Coulter and Jones 1977). This corticospinal projection gives the SPC the ability to directly modulate sensory responses, and indirectly modulate motor output, at the level of the spinal cord.

The anatomical projections, as well as previous neurophysiological results (Ashe and Georgopoulos 1994; Georgopoulos and Massey 1985; Mountcastle and Powell 1959; Prud'homme and Kalaska 1994; Tillery et al. 1996), suggest that the SPC contains both position and velocity information. In our analyses, we estimated hand velocity from the neural responses. We do not consider the problem of estimating the relative contributions of position and velocity to parietal cortex neural responses (Paninski et al. 2004) because we are principally interested in the accuracy of the representation. Given that SPC is highly interconnected with the cortical and subcortical reaching network, it is not surprising that it contains an accurate representation of hand velocity. We found that the linearly filtered activity of small ensembles of 6–18 neurons accounted for 60–80% of the variance in hand trajectories. Larger ensembles would likely allow better prediction of the hand velocity, although our performance may already have been approaching an asymptote with only 16–18 cells. Some theoretical analyses have suggested that nonlinear models would be required to extract all of the available information from larger ensembles (Shamir and Sompolinsky 2004). Further analyses, and the actual recording of larger ensembles, will be required to establish the limits of the effectiveness of linear or other models. We also fit both causal and noncausal models to predict the hand velocity. The causal models were less effective than the noncausal models, presumably because they were unable to use the sensory information in SPC; the average decrease in Rsqr, however, was <0.04 at a filter width of 700 ms. These results suggest that neural activity in parietal cortical areas 2 and 5 could be useful for driving prosthetic devices (Pesaran et al. 2002), assuming it is not strictly sensory in origin. It is also worth noting that task factors other than position and velocity, for example, the shape being drawn (Averbeck et al. 2003), may affect neural responses in area 5, and these influences will also reduce our ability to decode velocity.
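The difference between the causal and noncausal models comes down to which lags of neural activity the filter is allowed to use: a causal filter sees only past spikes, whereas a noncausal filter is centered on the spike and also sees the future. A minimal sketch of the corresponding design matrices follows; the bin width and window length in the example are our assumptions for illustration, not the authors' parameters.

```python
import numpy as np

def lagged_design(spikes, lags):
    """Stack time-shifted copies of a binned spike train; a linear fit
    over the columns then implements a temporal filter. A positive lag
    uses past activity (causal); a negative lag uses future activity,
    which only a noncausal filter may include."""
    n = len(spikes)
    cols = []
    for lag in lags:
        col = np.zeros(n)
        if lag >= 0:
            col[lag:] = spikes[:n - lag]   # shift spikes into the past
        else:
            col[:lag] = spikes[-lag:]      # shift spikes into the future
        cols.append(col)
    return np.column_stack(cols)

# With hypothetical 100-ms bins, a 700-ms causal filter uses lags 0-6,
# while a noncausal filter of the same width is centered on each bin.
causal_lags = list(range(0, 7))
noncausal_lags = list(range(-3, 4))
```

Dropping the negative lags (the noncausal half of the window) discards the sensory contribution to the fit, which is one way to read the <0.04 drop in Rsqr reported above.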

The effectiveness of any decoding algorithm depends on two factors: how well the variable of interest is represented in the neuronal population and how effectively the algorithm extracts the information from the neuronal signal. Linear models for decoding neural activity have proved successful in a number of different situations, including motion-sensitive neurons in the fly (Bialek et al. 1991), responses of LGN neurons to movies (Stanley et al. 1999), V1 neural responses to drifting gratings (Jones and Palmer 1987), the representation of reaching movements in motor cortex (Paninski et al. 2004; Wessberg et al. 2000), and the transfer of information from synaptic input to output (Eckhorn et al. 1976). Eckhorn et al. (1976) further showed that linear decoding was able to extract between 90 and 99% of the total information in the input signal to a cell. Furthermore, Bialek and his colleagues (1993) have suggested, based on empirical observations and a model of neural firing rates, that if mean neural firing rates are on the scale of the autocorrelation interval of the signal to be estimated, it will not be possible to do better than a linear model. Analyses of the position representation in S1 have shown that hand position is often encoded nonlinearly (Tillery et al. 1996). The movements made in our task, however, were relatively small (a few centimeters), so even if this system is generally nonlinear, the linear approximation may be valid over the range of movements tested.
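As a toy check of this intuition, one can simulate a cell whose linearly filtered spike train drives velocity and confirm that ordinary least squares recovers both the filter and most of the variance. Every number below (filter shape, firing rate, noise level) is invented for illustration and has no connection to the recorded data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_lags = 2000, 9
spikes = rng.poisson(4, n).astype(float)      # binned spike counts

# Build lagged copies of the spike train (past lags only, for brevity).
X = np.column_stack([np.roll(spikes, k) for k in range(n_lags)])
X[:n_lags] = 0.0                              # zero out wrapped-around bins

true_filter = np.exp(-np.arange(n_lags) / 3.0)  # assumed filter shape
velocity = X @ true_filter + rng.normal(scale=1.0, size=n)

# Fit the filter by ordinary least squares and score the fit.
beta, *_ = np.linalg.lstsq(X, velocity, rcond=None)
pred = X @ beta
r_sq = 1 - np.sum((velocity - pred) ** 2) / np.sum((velocity - velocity.mean()) ** 2)
```

When the generative relation really is linear, as in this simulation, the fitted coefficients converge on the true filter and the unexplained variance is just the injected noise; departures from that pattern in real data are one diagnostic for nonlinearity.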

We have also explored the effect of eye position on the transfer function of our neurons in the SPC. We found that allowing the transfer functions to shift with eye position, in general, did not improve the ability of our model to predict the hand velocity. This differs from results reported in the parietal reach region (Batista et al. 1999) as well as area 5 (Buneo et al. 2002). Several factors could account for this difference, including differences in the task, the analytical techniques employed, and the exact areas from which the respective studies recorded. Perhaps the largest difference is that our recording sites were relatively anterior and lateral, and published results (Buneo et al. 2002) suggest that the effect of eye position on the movement field of a neuron decreases at more anterior locations in the cortex.

Thus we have shown that a simple linear model can capture much of the useful signal in simultaneously recorded neural responses. In some respects, the linear model can be considered an extension of a rate code: instead of counting spikes within a window, which amounts to giving each spike in the window the same weight, we compute a weighted sum of the spikes within the window to form our estimate. Furthermore, this model was able to extract up to 80% of the variance of a complex hand movement using small ensembles of neurons. Thus parietal cortex contains a robust neural representation of hand velocity.
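The rate-code comparison can be made concrete in a few lines: the two estimates below differ only in whether every bin of the window gets the same weight. The spike counts and filter coefficients are arbitrary illustrative values, not fitted weights from the study.

```python
import numpy as np

# Spike counts in successive bins of the decoding window.
spike_counts = np.array([0., 2., 1., 3., 0., 1., 2.])

# A rate code weights every spike equally ...
rate_estimate = spike_counts.sum()

# ... whereas the linear filter weights each bin by its coefficient.
filter_coefs = np.array([0.1, 0.3, 0.6, 1.0, 0.6, 0.3, 0.1])
filtered_estimate = filter_coefs @ spike_counts
```

Setting every coefficient to 1 recovers the plain spike count, which is the sense in which the linear filter generalizes the rate code.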

## GRANTS

This work was supported by National Institute of Neurological Disorders and Stroke Grant NS-17413, by the United States Department of Veterans Affairs, and by the American Legion Brain Sciences Chair.

## Footnotes

The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked “*advertisement*” in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

- Copyright © 2005 by the American Physiological Society