Journal of Neurophysiology

Motor Cortical Representation of Position and Velocity During Reaching

Wei Wang, Sherwin S. Chan, Dustin A. Heldman, Daniel W. Moran

Abstract

This study examines the motor cortical representation of hand position and its relationship to the representation of hand velocity during reaching movements. In all, 978 motor cortical neurons were recorded from the proximal arm area of rostral motor cortex. The results demonstrate that position and velocity are simultaneously encoded by single motor cortical neurons in an additive fashion and that the relative weights of the position and velocity signals change dynamically during reaching. The two variables, hand position and hand velocity, are highly correlated in the standard center-out reaching task. Two new reaching tasks (random reaching and standard reaching) were introduced to minimize these correlations. In addition, a new decoding method (indirect OLE) was developed to simultaneously decode both three-dimensional (3D) hand position and 3D hand velocity from correlated neural activity. This method shows that, on average, the reconstructed velocity led the actual hand velocity by 122 ms, whereas the reconstructed position signal led the actual hand position by 81 ms.

INTRODUCTION

There has been much research—as well as debate—as to how the CNS controls arm movements in voluntary visuomotor reaching tasks. When reaching for a visually cued object in space, the CNS must identify the location of the object and subsequently transform this kinematic information into a series of muscle excitations capable of moving the hand to the desired location. As noted by Bernstein over 70 yr ago, this sensorimotor transformation is quite complex given the dynamic nature of the musculoskeletal system (Bernstein 1935). For the CNS to perform these transformations, the current arm position and arm velocity must be known.

In his seminal work, Bernstein (1935) argued that there is not a unique and unambiguous relationship between muscle excitation and arm movement kinematics. For instance, the nonlinear relationship between muscle activation and subsequent muscle force is highly dependent on both current muscle length and lengthening velocity (Zajac 1989). Muscles generate moments about the joints that vary with arm position. These muscle moments combine with internal joint reaction loads to generate joint torques. Although a specific joint torque acts locally, it induces joint accelerations throughout the whole system that vary significantly with the current position and velocity of the arm. These accelerations are integrated over time to reconstruct current hand position and velocity. Mathematically, this transformation can be represented as

M(θ)θ̈ = T(A, θ, θ̇) + V(θ, θ̇) + G(θ) + E    (1)

where n is the number of degrees of freedom in the arm; θ, θ̇, and θ̈ are n × 1 vectors of joint angle, angular velocity, and angular acceleration, respectively; M is the mass matrix of the arm that varies with joint angle; T is the joint torque vector that varies with muscle activation (A), joint angle, and joint angular velocity; V is the vector of inertial torques (e.g., Coriolis forces, etc.); G is the gravity-induced torque vector; and E is a torque vector representing the external forces and torques applied to the system (Bernstein 1935; Chan and Moran 2006; Kane and Levinson 1985; Yamaguchi et al. 1995). Equation 1 is a nonlinear, coupled, second-order differential equation that relates muscle activation [A(t)] to hand position. This equation mathematically illustrates Bernstein's point that a unique relationship between muscle excitation and hand position during movement cannot exist. Therefore the CNS must be able to internally process Eq. 1 to accurately control hand position during a visuomotor reaching task.
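
To make the structure of Eq. 1 concrete, the sketch below integrates a hypothetical one-degree-of-freedom version of the equation forward in time. The numerical constants and the specific forms of M, T, V, G, and E are illustrative assumptions, not values from this study; the point is only that acceleration depends on the current state (θ, θ̇) as well as activation.

```python
import numpy as np

def forward_dynamics(theta, theta_dot, activation):
    """One evaluation of Eq. 1 for a hypothetical 1-DOF arm:
    M(theta) * theta_ddot = T(A, theta, theta_dot) + V + G + E."""
    M = 0.05 + 0.01 * np.cos(theta)          # configuration-dependent inertia (kg*m^2)
    T = 2.0 * activation - 0.1 * theta_dot   # muscle torque with velocity-dependent loss
    V = -0.01 * theta_dot ** 2               # inertial (velocity-product) torque
    G = -0.3 * np.sin(theta)                 # gravity-induced torque
    E = 0.0                                  # no external load in this sketch
    return (T + V + G + E) / M               # joint angular acceleration

def simulate(activation, t_end=0.5, dt=1e-3):
    """Euler-integrate the acceleration to recover joint angle and velocity."""
    theta, theta_dot = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        theta_ddot = forward_dynamics(theta, theta_dot, activation)
        theta_dot += theta_ddot * dt
        theta += theta_dot * dt
    return theta, theta_dot

theta, theta_dot = simulate(activation=0.5)  # constant activation moves the joint
```

Because M, V, and G all depend on the current state, the same activation produces different trajectories from different starting states, which is Bernstein's point in computational form.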

Although it is generally agreed that Eq. 1 must be processed in some form in the CNS, there is great debate as to which structures represent and integrate the various aspects of such an internal model. For instance, at one end of the spectrum, proponents of the equilibrium point hypothesis believe Eq. 1 is solved implicitly by kinematic feedback from muscle spindles rather than by an explicit inverse dynamic solution; thus only kinematic information is contained in supraspinal structures (Bizzi et al. 1984; Feldman 1986). At the other end of the spectrum, the original work of Evarts (1968) suggested that the inverse solution to Eq. 1 was performed in cerebral cortex such that primary motor cortex represented muscle force or activation (i.e., kinetics). Regardless of which structures process which terms in Eq. 1, the computation of muscle activation requires information on the current state of the musculoskeletal system: arm position and velocity. Without knowledge of current arm location and velocity, the CNS would be incapable of accurately computing muscle activation.

Although the CNS requires information on current arm position and velocity to accurately control the arm, this does not necessarily imply that cerebral cortex requires or contains this information. Information on current arm position and velocity can be obtained both visually and proprioceptively. In proprioception, muscle spindles measure arm position and velocity and relay this information to the spinal cord. If the spinal cord performed the inverse computation of Eq. 1, then the descending signals from cortex could be as simple as "final desired hand position," requiring no cortical representation of current hand position and/or velocity. However, patients with large-fiber sensory neuropathy (i.e., lacking spindle proprioception) were found to make reasonably accurate reaching movements in the presence of vision (Sainburg et al. 1993). This suggests that a cortical representation of current hand position and velocity can be supplied through visual pathways.

The representation of hand velocity in motor cortex has been extensively studied over the last 20 yr. Georgopoulos and colleagues found a high correlation between motor cortical activity and hand-movement direction in two- and three-dimensional (2D and 3D, respectively) reaching tasks (Georgopoulos et al. 1982, 1986, 1988; Kalaska et al. 1983). Later, Schwartz and colleagues showed that both hand speed and direction could be encoded by a population of motor cortical cells in both reaching and drawing movements (Moran and Schwartz 1999a,b; Schwartz 1992, 1993). The representation of hand position was also examined by several studies using multiple linear regression during a center-out task. Motor cortical firing rates were shown to be modulated by hand position under static conditions (Georgopoulos et al. 1984). Similar to the encoding of hand velocity, hand position was shown to be encoded as a dot product between the hand position vector and a positional gradient, defined as the vector along which a difference in hand position produces the greatest difference in neural activity (Kettner et al. 1988). It was also observed that some neurons encoded hand position while the hand was moving, although velocity representation was more dominant than position representation (Ashe and Georgopoulos 1994). Ebner and colleagues examined the temporal relationship between the cortical representations of position and velocity during reaching movements and argued that the representations of target direction, target position, and target distance appear sequentially (Fu et al. 1995). These previous studies showed that both position and velocity are encoded by motor cortical neurons. However, these studies were limited by the fact that, in the classic center-out task, position and velocity are highly correlated (Chan and Moran 2006; Reina et al. 2001).

A recent study by Donoghue's group argued that both position and velocity can be encoded simultaneously by single motor cortical neurons during a pursuit-tracking task (Paninski et al. 2004). Their analysis found that motor cortical neurons encoded position and velocity approximately equally. However, the pursuit-tracking task allows only limited hand speeds (~4 cm/s, vs. the ~40 cm/s achieved in reaching tasks). Given that directional encoding in motor cortical neurons is "gain field modulated" (i.e., multiplied) by speed (Moran and Schwartz 1999b), the relatively slow movements of the pursuit-tracking task limit the velocity response in the cortical activity.

Given the limitations of the aforementioned studies, a more comprehensive study would help to illuminate exactly how hand position and velocity are represented in motor cortical areas. This study uses two new behavioral reaching tasks, random reaching and standard reaching, to reduce position and velocity correlations during reaching. These tasks dissociate position and velocity while preserving the nature of stereotypical 3D reaching movements. The first goal of this study was to determine how position and velocity information are simultaneously represented in motor cortical activity during reaching movements.

In recent years, several studies have been performed in the area of brain–computer interfaces (BCIs), in which motor cortical activity is decoded and used for closed-loop kinematic control of an external device (e.g., a robotic arm or computer cursor). Most of these studies used a linear decoding method similar to the classic population vector algorithm (Georgopoulos et al. 1986) to extract a velocity signal that was subsequently integrated in time to yield a positional control signal (Carmena et al. 2003; Taylor et al. 2002; Wessberg et al. 2000). For kinematic control of an external device over short periods of time at medium to large movement speeds, integrating a decoded velocity signal is quite sufficient. At slower speeds, however, the gain field modulation of directional tuning by speed makes decoding velocity difficult; decoding positional information directly from motor cortical activity is then more appropriate. Therefore the second major goal of this study was to decode position and velocity simultaneously and independently from a population of motor cortical neurons during 3D reaching tasks.

METHODS

The experimental paradigm design, surgical procedures, neurophysiological recordings, and daily animal care were approved by the Institutional Animal Care and Use Committee and the Animal Studies Committee and followed all guidelines set by the Association for Assessment and Accreditation of Laboratory Animal Care and the Society for Neuroscience.

Two monkeys (Macaca fascicularis) were trained using operant conditioning to perform behavioral tasks within a virtual reality (VR) simulator as illustrated by Fig. 1A. The monkeys sat in a custom-made primate chair with their heads constrained for neurophysiological recording. Real-time 3D images were generated by a graphics workstation (Octane, Silicon Graphics). An optoelectronic tracking system (Optotrak 3020, Northern Digital) was used to track the 3D position of the hand in real time and feed it back to the graphics workstation. The monkeys performed various reaching tasks with their right arms within the computer-controlled environment and received a liquid reward after each successful trial.

FIG. 1.

Virtual reality setup and the standard reaching task. A: 3-dimensional (3D) image is displayed by a liquid crystal display (LCD) panel. Image is projected downward and reflected into the monkey's eyes by a mirror placed in front of the monkey and tilted at 45°. Orange ball represents the virtual cursor, whose position corresponds to the position of the marker on the monkey's hand. Green ball represents a virtual target that the monkey is supposed to reach for. B: standard reaching task is illustrated. Workspace is a virtual cube, with a dimension of 100 × 100 × 100 mm, evenly divided into 8 smaller cubes corresponding to the 8 octants in 3D space. Eight movements are made within each octant, yielding 64 total movements. For every movement, the hold-A target is located at one corner of an octant and the hold-B target is always located at the opposite corner of the same octant, i.e., the movement always starts from a corner of an octant and goes diagonally across the octant. Figure also illustrates a sample movement, with the orange ball and the green ball representing the hold-A and hold-B targets, respectively. Arrow shows the direction and trajectory for this movement.

Initially, the monkeys performed the classic 3D center-out reaching task (Georgopoulos et al. 1986). At the beginning of this task, a target appeared at the center of the workspace. The monkey captured this center target with the cursor sphere and held it for a random interval (hold-A time). Next, the center target disappeared and one of the eight peripheral targets appeared. These eight targets were located at the eight corners of a virtual cube (10-cm sides). The targets were presented in a pseudorandom order and the monkey had to reach the target and remain there for a second random hold period (hold-B time) to make a successful reach. Five repetitions were made for each target so that a total of 40 center-out movements were made for each cell recorded in this study.

The random reaching task was designed to extensively sample the 3D hand position and 3D velocity (both speed and direction). The temporal structure of this task was identical to the center-out task. However, the monkey started from a randomly located target in the 3D workspace and reached for a second randomly located target. For each neuron recorded during this task, the subject made ≥100 movements. Given the random nature of both the initial and final targets, the subject made reaches of varied lengths, durations, and speeds.

In the standard reaching task, the workspace was divided into eight small cubes, corresponding to eight octants (Fig. 1B). A reaching movement started from one of the eight corners of a small cube and ended at the opposite corner of that cube using the same timing parameters discussed earlier for center-out movements. For a complete set of standard reaching data, eight distinct reaching movements were performed within each one of the eight small cubes, yielding a total 64 different reaches. The 64 reaches were presented in a pseudorandom order. One complete set of data was collected for each neuron recorded during this task.

After the monkeys were well trained, a circular stainless steel recording chamber with an inner diameter of 16 mm was implanted over the contralateral primary motor cortex. Standard stereotaxic technique was used to center the recording chamber over the gyrus of the proximal arm area of primary motor cortex (rostral M1). Recording electrodes were mounted in a 16-channel microdrive (Eckhorn system, Thomas Recording) and arranged into a linear array with an interelectrode spacing of approximately 300 μm. Each electrode could be moved independently with a precision of 1 μm (Baker et al. 1999). For one subject, additional recordings from the same area were performed with chronically implanted microelectrode arrays (Center for Neural Communication Technologies, University of Michigan, Ann Arbor, MI). Single-unit activity was sorted with time–amplitude windows (Mountcastle et al. 1969) using customized spike-sorting software (OpenProject 1.5, Tucker-Davis Technologies). All stable and discriminable single units were recorded during the tasks, and every attempt was made to record from all layers of the cortex. After the units were isolated, the subject performed the center-out task followed by either the standard reaching or random reaching task. After each data set, an examination was performed to determine whether each neuron responded to passive manipulation of either the shoulder or elbow of the contralateral arm.

All recorded neurons found to be in proximal arm area of rostral M1 (i.e., M1 gyrus only) were analyzed. Because neurons with low firing rates were not excluded, the firing rates of all neurons were computed using partial binning and square-root transform to ensure accurate firing-rate estimations (Moran and Schwartz 1999b; Reina et al. 2001; Richmond et al. 1987; Singh et al. 2002). Furthermore, when averaging activity across multiple neurons, the firing rates of all neurons were first normalized by their root-mean-square (RMS) values so that each neuron was equally weighted in the average.

Cortical activity model

This study investigated the motor cortical representation of 3D hand position and 3D hand velocity. This study's hypothesis is that the relationship between motor cortical neural activity and hand kinematics is

f(t) = b0 + bs|V⃗(t + τv)| + B⃗v · V⃗(t + τv) + B⃗p · P⃗(t + τp)    (2)

where f is the instantaneous neural activity at any time t; b0 is the baseline activity of a single neuron; bs is the coefficient of the nondirectional speed component; B⃗v, B⃗p, V⃗, and P⃗ are all 3D vectors; B⃗v is the preferred direction (PD) of a neuron; and B⃗p is the positional gradient (PG) of the same neuron. The magnitudes of B⃗p and B⃗v are the depths of modulation for position and velocity, respectively. The relative depth of modulation is the ratio between the depth of modulation and the baseline activity. V⃗ is the hand velocity and P⃗ is the hand position. τp is the time delay between the neural representation of hand position and actual hand position, whereas τv is the delay for hand velocity. The first three terms are based on the cortical activity model that encodes both velocity and speed (Moran and Schwartz 1999b). The last term represents the neuronal encoding of 3D hand position (Kettner et al. 1988). The model proposed in this study combines two previously presented models and suggests that multiple movement variables are simultaneously encoded in motor neuronal activity. It also assumes that the relationship between the neural representation of hand position and hand velocity is additive rather than multiplicative (i.e., gain field), with different temporal delays. The study presented here tested several aspects of the proposed model with single-unit data recorded from the proximal arm area of rostral primary motor cortex.
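
A minimal sketch of the additive encoding model of Eq. 2. The parameter values and the constant-velocity trajectory are hypothetical; the lead times τv and τp are set to the population averages reported in the abstract (122 and 81 ms).

```python
import numpy as np

def firing_rate(t, b0, bs, Bv, Bp, velocity, position, tau_v=0.122, tau_p=0.081):
    """Evaluate Eq. 2 at time t (seconds).

    velocity(t) and position(t) are callables returning 3D vectors; the neural
    activity at time t reflects kinematics tau_v / tau_p seconds in the future,
    since cortical activity leads movement.
    """
    v = np.asarray(velocity(t + tau_v))
    p = np.asarray(position(t + tau_p))
    return (b0                          # baseline activity
            + bs * np.linalg.norm(v)    # nondirectional speed component
            + np.dot(Bv, v)             # directional velocity term (PD)
            + np.dot(Bp, p))            # positional term (PG)

# Hypothetical neuron with PD and PG both along +x (anterior)
Bv = np.array([0.5, 0.0, 0.0])          # Hz per cm/s of velocity along the PD
Bp = np.array([0.1, 0.0, 0.0])          # Hz per mm of position along the PG
rate = firing_rate(0.0, b0=14.0, bs=0.2, Bv=Bv, Bp=Bp,
                   velocity=lambda t: [20.0, 0.0, 0.0],
                   position=lambda t: [10.0, 0.0, 0.0])
```

With these numbers the four terms contribute 14 + 4 + 10 + 1 Hz, illustrating how the positional and velocity components simply add.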

Hand kinematics

The kinematic marker coordinates obtained from the optoelectronic tracking system were transformed into a monkey-centered reference frame. In this reference frame, the x-axis pointed anteriorly, the y-axis pointed superiorly, and the z-axis pointed to the right side of the monkey. The origin of this reference frame was the center of the workspace where the center ball appeared in the center-out task. The position data, sampled at 100 Hz, were low-pass filtered with a cutoff frequency of 10 Hz (Moran and Schwartz 1999b; Woltring 1986). Movement onset was defined as the time when the hand speed first exceeded 15% of the peak speed, and movement offset was defined as the time when the hand speed first dropped to <30% of the peak speed. The period between movement onset and offset was defined as movement time. Reaction time was defined as the time between distal target appearance and movement onset. To ensure that hand speed remained close to zero during the hold-B time, hold-B time was defined as beginning when the hand speed first dropped to <15% of the peak speed and lasting as long as the speed remained below that threshold. Each trial was thus divided into three time periods: hold-A time, reaction and movement time, and hold-B time.
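
The filtering and onset/offset rules can be sketched as follows. Only the 100-Hz sampling, 10-Hz cutoff, and 15%/30% speed thresholds come from the text; the filter order and the synthetic minimum-jerk trajectory are our own choices for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def movement_window(position, fs=100.0, cutoff=10.0):
    """Low-pass filter position data, then find movement onset (speed first
    exceeds 15% of peak) and offset (speed first drops below 30% of peak
    after the peak).  Returns sample indices."""
    b, a = butter(4, cutoff / (fs / 2.0))        # 4th-order Butterworth (assumed order)
    filt = filtfilt(b, a, position, axis=0)      # zero-phase filtering
    speed = np.linalg.norm(np.gradient(filt, 1.0 / fs, axis=0), axis=1)
    i_peak = int(np.argmax(speed))
    peak = speed[i_peak]
    onset = int(np.argmax(speed > 0.15 * peak))  # first crossing of the 15% threshold
    after = speed[i_peak:] < 0.30 * peak         # first drop below 30% after the peak
    offset = i_peak + int(np.argmax(after)) if after.any() else len(speed) - 1
    return onset, offset

# Synthetic 100-mm minimum-jerk reach sampled at 100 Hz for 1 s
t = np.linspace(0.0, 1.0, 101)
x = 100.0 * (10 * t**3 - 15 * t**4 + 6 * t**5)
pos = np.column_stack([x, np.zeros_like(x), np.zeros_like(x)])
onset, offset = movement_window(pos)
```

For a bell-shaped speed profile the onset lands early in the reach and the offset late, bracketing the movement time as defined above.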

Average positional tuning

The average firing rates during each hold period (hold-A and hold-B) were calculated. Because the hand is not moving during the two hold periods, the neural representation of velocity should be zero (Moran and Schwartz 1999b). Under these conditions, Eq. 2 simplifies to the model proposed by Kettner et al. (1988)

f̄ = b0 + bp,x x̄ + bp,y ȳ + bp,z z̄    (3)

where f̄ is the average neural activity; x̄, ȳ, and z̄ are the average hand position during hold-A time or hold-B time; and bp,x, bp,y, and bp,z constitute the B⃗p vector in Eq. 2, which is the positional gradient (PG) of the neuron. A multiple linear regression was performed for each neuron over data obtained during the hold-A time and hold-B time to compute the PG using Eq. 3. Neurons with P values <0.05 were considered significantly modulated by the 3D hand position.
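
In code, the regression of Eq. 3 is a single least-squares fit over the hold-period samples. Synthetic data generated from a known PG are used here purely for illustration.

```python
import numpy as np

def positional_gradient(rates, positions):
    """Fit Eq. 3 (f = b0 + Bp . P) by ordinary least squares over static
    hold-period samples; returns the baseline b0 and the 3D PG vector."""
    X = np.column_stack([np.ones(len(positions)), positions])  # design matrix [1, x, y, z]
    coef, *_ = np.linalg.lstsq(X, rates, rcond=None)
    return coef[0], coef[1:]

# Synthetic hold-period data from a hypothetical neuron with a known PG
rng = np.random.default_rng(0)
P = rng.uniform(-50, 50, size=(200, 3))        # static hand positions (mm)
true_b0, true_Bp = 14.0, np.array([0.10, -0.05, 0.02])
f = true_b0 + P @ true_Bp + rng.normal(0, 0.5, 200)
b0_hat, Bp_hat = positional_gradient(f, P)
```

The recovered Bp_hat is the PG; its norm is the positional depth of modulation and its direction is the position that maximally excites the neuron.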

Average velocity tuning

Once the PG of each neuron was determined, a similar analysis was performed over reaction and movement time to compute the preferred direction (PD) of each neuron. However, during movement time, both position and velocity could affect neural activity. The neural activity arising from position was estimated by multiplying the actual hand position vector during the reaction and movement periods with the PG vector calculated in the previous analysis during the hold times. To remove the effect of position, the estimated neural activity representing position was subtracted from the total neural activity. This allows Eq. 2 to be simplified as follows

f̄ = b0 + bv,x V̄x + bv,y V̄y + bv,z V̄z    (4)

where f̄ is the average position-corrected neural activity; V̄x, V̄y, and V̄z are the average hand velocity components during movement time; and bv,x, bv,y, and bv,z constitute the B⃗v vector in Eq. 2, which is the PD of the neuron. A multiple linear regression was performed using Eq. 4. Neurons with P values <0.05 were considered significantly modulated by the 3D hand velocity.
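
The position-correction step and the regression of Eq. 4 can be sketched together (synthetic data from a hypothetical neuron with known PD and PG):

```python
import numpy as np

def preferred_direction(rates, velocities, positions, Bp):
    """Fit Eq. 4: subtract the position-related activity (Bp . P, with Bp
    estimated from the hold periods) from movement-time firing rates, then
    regress the residual against 3D velocity to obtain the PD vector Bv."""
    residual = rates - positions @ Bp          # remove the positional component
    X = np.column_stack([np.ones(len(velocities)), velocities])
    coef, *_ = np.linalg.lstsq(X, residual, rcond=None)
    return coef[1:]                            # Bv (the intercept absorbs b0)

# Synthetic movement-time data
rng = np.random.default_rng(1)
V = rng.uniform(-40, 40, size=(300, 3))        # hand velocities (cm/s)
P = rng.uniform(-50, 50, size=(300, 3))        # concurrent hand positions (mm)
Bp = np.array([0.10, -0.05, 0.02])             # PG assumed known from Eq. 3
true_Bv = np.array([0.5, 0.2, -0.3])
f = 12.0 + V @ true_Bv + P @ Bp + rng.normal(0, 1.0, 300)
Bv_hat = preferred_direction(f, V, P, Bp)
```

Because the positional component is subtracted first, the velocity regression is not biased by the position–velocity correlations present during movement.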

A neuron that is tuned to both position and velocity will have both PG and PD vectors. Two analyses were performed to examine whether PD and PG of the same neuron point in the same direction in 3D space. In the first analysis, a spherical correlation was calculated to examine the correlation between PD and PG (Fisher and Lee 1986; Georgopoulos et al. 1988). In the second analysis, the PD and PG of each neuron were rotated simultaneously in exactly the same way such that all the PDs were aligned along the anterior direction. The rotated PGs then revealed the relative 3D orientation between PD and PG. For example, if PD and PG of the same neuron always point in the same direction, all of the rotated PGs will cluster around the anterior direction.
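
The second alignment analysis can be implemented with Rodrigues' rotation formula: each neuron's PD is rotated onto the anterior (+x) axis of the monkey-centered frame defined in METHODS, and the same rotation is applied to its PG. This is a sketch of that geometric step, not the paper's exact code.

```python
import numpy as np

def rotate_to_anterior(pd, pg):
    """Rotate the (PD, PG) pair by the rotation taking the PD onto the
    anterior axis (+x); the rotated PG then shows the PG's orientation
    relative to the PD."""
    a = pd / np.linalg.norm(pd)
    x = np.array([1.0, 0.0, 0.0])
    v = np.cross(a, x)                           # rotation axis (unnormalized)
    c = np.dot(a, x)
    if np.linalg.norm(v) < 1e-12:                # PD already (anti)parallel to +x
        return pg if c > 0 else -pg
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])           # skew-symmetric cross-product matrix
    R = np.eye(3) + K + K @ K * (1.0 - c) / np.dot(v, v)  # Rodrigues' formula
    return R @ pg

pd = np.array([0.0, 1.0, 0.0])                   # hypothetical PD pointing superior
pg = np.array([0.0, 1.0, 0.0])                   # PG parallel to the PD
aligned_pg = rotate_to_anterior(pd, pg)          # lands on the anterior axis
```

If PDs and PGs were always parallel, all rotated PGs would cluster around the anterior direction, exactly as described above.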

Interaction between position and velocity tuning

Our hypothesis states that a neuron encodes position and velocity information in an additive manner during the movement (Eq. 2). To test this hypothesis, the neural activity during movement time was further examined using the standard reaching data. The standard reaching task contains sets of movements made within eight different movement spaces (i.e., octants). For each neuron, the eight sets of movements were divided into two classes depending on their locations relative to the neuron's preferred spatial location (PG vector). The first class was the four octants that were generally in the direction of the PG and the second class was the remaining four octants (which were generally in the opposite direction of the PG). Within each class, a velocity tuning curve was constructed, where neural activity was graphed against the velocity–PD angle. These two curves represent movements within the "preferred" spatial location and movements within the "antipreferred" spatial location. The difference between those two curves reflects the effect of hand position on velocity tuning. The two curves were fitted with a standard cosine tuning function

f = b0 + b1 cos θ    (5)

where f is the neural activity and θ is the velocity–PD angle; b0 is the baseline firing rate of the fitted cosine curve; and b1 is the amplitude of the curve, or the "absolute depth of modulation." Changes in b0 would shift the tuning curve vertically, representing an additive effect of position on velocity tuning, whereas changes in b1 would scale the tuning curve and represent a multiplicative or gain field effect of position on velocity tuning.
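
A sketch of the cosine fit of Eq. 5 applied to the two classes. The hypothetical neuron below shows a purely additive position effect: the "preferred-octant" curve has a higher baseline b0 but the same depth of modulation b1.

```python
import numpy as np

def fit_cosine(theta, rates):
    """Least-squares fit of Eq. 5, f = b0 + b1*cos(theta); returns (b0, b1)."""
    X = np.column_stack([np.ones_like(theta), np.cos(theta)])
    (b0, b1), *_ = np.linalg.lstsq(X, rates, rcond=None)
    return b0, b1

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)  # velocity-PD angles
# Hypothetical classes: preferred vs. antipreferred octants differ only in baseline
pref = 20.0 + 8.0 * np.cos(theta)
anti = 14.0 + 8.0 * np.cos(theta)
b0_pref, b1_pref = fit_cosine(theta, pref)
b0_anti, b1_anti = fit_cosine(theta, anti)
```

A gain field effect would instead show up as b1_pref differing from b1_anti while the baselines stay matched.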

A similar analysis was performed to determine the effect of velocity on position tuning. The neural activities were first classified based on whether the movement direction was along a neuron's preferred direction or opposite to its preferred direction. Then, within each class, a hand position tuning curve was constructed. Again, cosine functions were fit to the two curves, except θ was the hand position–PG angle. The baselines and depth of modulations for the two position tuning curves were computed and compared. As in the previous analysis, if b0 changes between the two classes, then there is an additive relationship between velocity and position tuning. If b1 changes, there is a multiplicative or gain field relationship between velocity and position tuning.

Multibin temporal analysis

For neurons tuned to both position and velocity, the positional depth of modulation was examined as a function of time during reaching. The movement time of each reach was divided into 40 equal time bins. Using the same bin width, 30 prebins (before movement) and 30 postbins (after movement) were defined (Reina et al. 2001). After the 100 time bins for each trial were determined, neuronal firing rates were computed and low-pass filtered with a cutoff frequency of 10 Hz. For each neuron, the baseline and nondirectional speed component were computed [i.e., by averaging the neural activity across all standard reaching movements (Moran and Schwartz 1999b)] and subtracted from the total neural activity time series. Because the baseline and the nondirectional component of the firing rate were eliminated in the multibin analysis, the instantaneous cortical activity (i.e., Eq. 2) can be simplified into the following equation

f(t) = B⃗v · V⃗(t + τv) + B⃗p · P⃗(t + τp)    (6)

Averaging the eight movements made within the same octant of the standard reaching task

f̄(t) = (1/8) Σi fi(t) = B⃗v · V̄(t + τv) + B⃗p · P̄(t + τp)    (7)

drives the average velocity term V̄ to zero, provided the subject moved at similar speeds. However, the average position P̄ is not zero; it is the center of the octant where all eight movements were made. Therefore the neural activity averaged from movements made in the same octant is only a function of hand position. For each of the 100 time bins, this averaged neural activity was calculated and fit with a cosine function using Eq. 5, with θ being the position–PG angle (i.e., a positional tuning curve was generated for each time bin). The depths of modulation of those tuning curves were used to find the temporal dynamics of position tuning.
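
The time-normalized binning (40 movement bins plus 30 pre- and 30 post-bins of the same width) can be sketched as a single histogram over shifted bin edges; the spike train below is synthetic.

```python
import numpy as np

def multibin_rates(spike_times, move_on, move_off, n_move=40, n_pad=30):
    """Count spikes in 40 equal bins spanning movement time plus 30 pre- and
    30 post-bins of the same width (100 bins total); return rates in spikes/s."""
    width = (move_off - move_on) / n_move
    edges = move_on + width * np.arange(-n_pad, n_move + n_pad + 1)  # 101 edges
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts / width

# Hypothetical trial: regular 20-Hz spiking, movement from 0.75 s to 1.25 s
spikes = np.arange(0.0, 2.0, 0.05)
rates = multibin_rates(spikes, move_on=0.75, move_off=1.25)
```

Because the bin width scales with each trial's movement time, trials of different durations are mapped onto a common 100-bin time base before averaging across the eight movements of an octant.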

Population decoding analysis

This study introduces the indirect optimal linear estimator (indirect OLE) method to simultaneously decode both 3D hand position and 3D hand velocity. The indirect OLE method has three steps. First, the encoding vectors are computed (PDs and PGs) using multiple-linear regressions (i.e., Eqs. 3 and 4) from standard reaching and random reaching data. Second, an optimal set of decoding vectors is computed directly from the encoding vectors. These optimized decoding vectors will minimize the effects of correlations between PDs and PGs and reduce the effects of nonuniform distributions of PDs and PGs. Third, movement kinematics are estimated using the decoding vectors and instantaneous firing-rate data from the center-out task. For full mathematical details of the indirect OLE algorithm, see the appendix.
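
The core idea can be illustrated with a simplified stand-in (the full indirect OLE, including its handling of correlated PDs and PGs, is in the paper's appendix): stack each neuron's PD and PG into an encoding matrix, and use the matrix pseudoinverse as the set of decoding vectors. The pseudoinverse compensates for nonuniform and correlated encoding vectors in the least-squares sense.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200                                      # neurons in the population
B = rng.normal(size=(n, 6))                  # row i: [Bv_i (3 entries), Bp_i (3 entries)]
W = np.linalg.pinv(B)                        # 6 x n decoding matrix derived from encoding vectors

k_true = np.array([12.0, -5.0, 3.0,          # hand velocity (cm/s)
                   40.0, 0.0, -20.0])        # hand position (mm)
f = B @ k_true + rng.normal(0, 0.1, n)       # noisy, baseline-subtracted firing rates
k_hat = W @ f                                # simultaneous velocity + position estimate
```

The decoded vector recovers velocity and position jointly, rather than integrating a velocity estimate to obtain position.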

The actual and estimated hand velocity curves were averaged over the eight center-out targets and three dimensions. The time lag at which the maximum cross-correlation between the actual and decoded average hand velocity occurred was taken as the delay between neural activity and kinematics (Moran and Schwartz 1999b). The hand position signal, in contrast, was sigmoidal in shape, so instead of cross-correlation, the estimated position signal was slid forward in time to minimize the RMS error between the estimated and actual position signals. The time lag with the minimum RMS error was chosen as the best estimate of the delay between the neural representation of position and actual hand position.
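
The velocity-lag computation amounts to a windowed cross-correlation over candidate lags; the sinusoidal signals below are synthetic stand-ins for the decoded and actual velocity profiles.

```python
import numpy as np

def best_lag_xcorr(decoded, actual, fs=100.0, max_lag=30):
    """Lag (in seconds) maximizing the correlation between decoded and actual
    signals; a positive value means the decoded signal leads the actual one."""
    n = len(decoded)
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = [np.corrcoef(decoded[:n - k] if k >= 0 else decoded[-k:],
                         actual[k:] if k >= 0 else actual[:n + k])[0, 1]
             for k in lags]
    return lags[int(np.argmax(corrs))] / fs

# Synthetic check: a 1-Hz "decoded" velocity leading the actual one by 120 ms
t = np.arange(0.0, 2.0, 0.01)                  # 100-Hz samples
actual = np.sin(2 * np.pi * t)
decoded = np.sin(2 * np.pi * (t + 0.12))       # leads by 12 samples
lag = best_lag_xcorr(decoded, actual)
```

For the sigmoidal position signal the analogous loop would slide the estimate forward and keep the lag minimizing the RMS error instead of maximizing correlation.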

RESULTS

In all, 978 motor cortical neurons were recorded from the proximal arm region of rostral M1 in two hemispheres of two monkeys. Most of the recorded neurons responded to passive shoulder and/or elbow movements. No neurons from the bank of the central sulcus were recorded; thus only rostral M1 neurons were obtained in this study. Over 15,395 trials, the average hold-A time was 417 ± 205 ms (mean ± SD), the average movement time was 545 ± 242 ms, and the average hold-B time was 434 ± 222 ms.

Average positional tuning and average velocity tuning

The neuronal firing rate was examined as a function of hand position when the hand was held statically during the random reaching task. Figure 2A shows how firing rate is related to hand position for a representative neuron when the hand was held 80 mm away from the center of the workspace. The neuron was most active when the hand was held within the posterior part of the workspace. The neuron firing-rate data were averaged along the direction orthogonal to the positional gradient and plotted as a function of hand position along the anterior/posterior direction and superior/inferior direction. The firing rate varied in a linear fashion with hand position (Fig. 2B). These results confirm the existence of 3D positional tuning in motor cortical neurons during static hold time and support the linear encoding model proposed by Kettner et al. (1988). Figure 3 illustrates the distribution of the neurons with different tuning properties. Overall, 542 neurons (55%) were tuned to hand velocity and 312 neurons (32%) were tuned to hand position.

FIG. 2.

Neural activity as a function of static hand position for a representative motor cortical neuron. A: spherical surface plot shows the neural activity as a function of hand azimuth and altitude when the hand was held still and 80 mm away from the center of the workspace. Coordinate system is monkey-centered, with x-axis pointing to the anterior direction, y-axis pointing to the superior direction, and z-axis pointing to the right. For each point on the surface, its azimuth and altitude correspond to the azimuth and altitude of the hand. Distance from each point on the surface to the origin is proportional to the neural activity. Neural activity is color-coded as illustrated by the color bar. This neuron's preferred direction computed from multiple linear regression is shown as the black line pointing approximately to the posterior direction. This is reflected in the neural activity as well because the neuron is more active when the hand was in the posterior part of the workspace. B: activity of the same neuron, averaged over left/right direction (z) as a function of hand anterior (x) and superior (y) position. Neural activity is well fit by a linear function.

FIG. 3.

Distribution of 978 motor cortical neurons with different tuning properties. Multiple linear regression analysis was applied to data during hold time (both hold-A and hold-B) to compute the positional gradient and P value of the fit. Same method was applied to data during movement time to compute the preferred direction and the P value. All reaching data, including center-out, random reaching, and standard reaching movements were pooled for the analysis and P values <0.05 were considered statistically significant. A fairly large population of neurons was significantly tuned to both hand position and hand velocity.

To quantify the various position and velocity tuning parameters for future modeling efforts, the statistical distribution of each model parameter was mathematically modeled. The histogram in Fig. 4A shows the distribution of the baseline average firing rates for 312 position-tuned neurons (position-only and position- and velocity-tuned neurons). This distribution was fit using an exponential distribution, with an average baseline firing rate of 14 Hz (P < 0.05). Similarly, the distribution of baseline firing rates for 542 velocity-tuned neurons (Fig. 4B) (velocity-only and both position- and velocity-tuned neurons) was fit using an exponential distribution with an average baseline firing rate of 12 Hz (P < 0.05). Figure 4C shows the histogram of the relative depths of modulation for position-tuned neurons, which was fit with a Gamma distribution (a = 1.71, b = 21.76, and P < 0.05). Figure 4D shows the histogram of relative depths of modulation for velocity-tuned neurons and the fitted Gamma distribution (a = 2.41, b = 35.98, and P < 0.05). The average relative depth of modulation for position-tuned neurons was 37%. The average relative depth of modulation for velocity-tuned neurons was 87%. A line with a slope of 0.28 (r2 = 0.33, P < 0.05) best represented the relationship between velocity and position tuning (Fig. 5). The depth of modulation for velocity tuning was therefore approximately fourfold the depth of modulation for position tuning.
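
Distribution fits of this kind can be reproduced with standard maximum-likelihood fitting. The data below are synthetic draws from the Gamma parameters reported for velocity-tuned neurons, not the actual neural data.

```python
import numpy as np
from scipy import stats

# Synthetic relative depths of modulation (%) drawn from the reported
# Gamma parameters for velocity-tuned neurons (a = 2.41, scale = 35.98)
depths = stats.gamma.rvs(a=2.41, scale=35.98, size=542, random_state=42)

# Maximum-likelihood Gamma fit with the location parameter fixed at 0
a_hat, loc_hat, scale_hat = stats.gamma.fit(depths, floc=0)
mean_depth = depths.mean()   # expect roughly a * scale, i.e., near 87%
```

Fixing the location at zero matches the fact that a depth of modulation cannot be negative; the fitted shape and scale should land near the generating values.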

FIG. 4.

Histograms of baseline firing rates and relative depths of modulation. Black bars show the actual number of neurons and the dotted lines show the predicted number of neurons based on the fitted distribution given the same total number of neurons. A and B: histograms of baseline firing rates for 312 neurons that are tuned to hand position and 542 neurons that are tuned to hand velocity, respectively. Both are well fit by exponential distributions (P < 0.05). C and D: histograms of depths of modulation for neurons that are tuned to position and velocity, respectively. Both are well fit by Gamma distributions (P < 0.05).

FIG. 5.

Scatterplot illustrating the ratio between position depth of modulation (mpos) and velocity depth of modulation (mvel). For 230 neurons that are tuned to both position and velocity, mpos was plotted against mvel and fit with a simple linear equation: mpos = 11.59 + 0.28mvel (P < 0.05, r2 = 0.33).

The uniformity of the PD and PG distributions was tested with a 3D Rayleigh test (Mardia and Jupp 2000). Both PGs and PDs were nonuniformly distributed in 3D space (P < 0.05) (Fig. 6). Two-dimensional Rayleigh tests were used to examine whether the distributions were uniform within the individual planes (Batschelet 1981). The PDs were uniformly distributed only in the coronal plane (P > 0.05), which is consistent with previous studies (Moran and Schwartz 1999b). However, in both the sagittal and horizontal planes, the PDs were clustered along the anterior/posterior direction. The PGs were nonuniformly distributed within all three planes. In the sagittal and horizontal planes, similar to the PDs, the PGs tended to cluster along the anterior/posterior direction. In the coronal plane, more PGs pointed toward the superior-right direction. In general, over the population, both PGs and PDs showed a similar trend and tended to point along the anterior/posterior direction.
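A minimal version of the Rayleigh test for 3D directions can be written as follows. This is a sketch based on the standard large-sample statistic 3n|r̄|², which is approximately chi-square with 3 degrees of freedom under uniformity (Mardia and Jupp 2000). The synthetic "clustered" sample here is unimodal, whereas the PDs in Fig. 6 cluster axially, so this illustrates the test rather than reproducing the analysis.

```python
import numpy as np
from scipy import stats

def rayleigh_test_3d(vectors):
    """Rayleigh test for uniformity of unit vectors on the sphere.

    Under the null hypothesis of uniformity, 3*n*|mean resultant|^2 is
    asymptotically chi-square distributed with 3 degrees of freedom.
    """
    v = np.asarray(vectors, dtype=float)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)  # force unit length
    rbar = np.linalg.norm(v.mean(axis=0))             # mean resultant length
    n = len(v)
    s = 3.0 * n * rbar**2
    return s, stats.chi2.sf(s, df=3)

rng = np.random.default_rng(1)

# Uniform directions (normalized Gaussian vectors): should not reject.
uniform = rng.standard_normal((542, 3))
_, p_uniform = rayleigh_test_3d(uniform)

# Directions clustered toward one direction: should strongly reject.
clustered = rng.standard_normal((542, 3)) + np.array([3.0, 0.0, 0.0])
_, p_clustered = rayleigh_test_3d(clustered)

print(p_uniform, p_clustered)
```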

FIG. 6.

3D distribution of preferred directions (PDs) and positional gradients (PGs). AC: circular histograms composed of 18° bins showing the distribution of PDs in the sagittal, coronal, and horizontal planes, respectively. A total of 542 neurons tuned to velocity were analyzed. In both sagittal and horizontal planes, more neurons had PDs along the anterior/posterior direction than along the other direction (P < 0.05). In the coronal plane, PDs were uniformly distributed. DF: same circular histograms showing the distribution of PGs. A total of 312 neurons tuned to position were analyzed. Similar to the distribution of PDs, more PGs were along the anterior/posterior direction. In addition, more PGs were clustered around the superior-right direction in the coronal plane (P < 0.05).

For neurons tuned to both position and velocity, the correlation between the PDs and the PGs was computed with spherical correlation analysis to determine whether a relationship existed between these two parameters. A spherical correlation of 1 means the PDs and PGs are highly correlated and point in the same direction, whereas a spherical correlation of 0 means there is no correlation. For the neurons recorded in this study, the average spherical correlation was 0.4. The directional correlation was also analyzed by computing the angles between the PDs and PGs (Fig. 7A). These angles had a unimodal distribution, with an average of 65 ± 41°. Finally, the PD vectors were all rotated in space to point anteriorly, and each PG vector was rotated using the same transform as its corresponding PD vector (Fig. 7, B and C). The rotated PGs were approximately equally likely to point above or below the rotated PDs, as well as to their left or right. This indicates that there is no systematic 3D configuration between the PD and the PG.
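The angle computation and the PD-alignment rotation used for Fig. 7 can be sketched as follows. The vectors are hypothetical, and the rotation is built with Rodrigues' formula (the paper does not specify how its rotations were constructed). The key property is that applying one neuron's PD-to-anterior rotation to both its PD and its PG preserves the angle between them.

```python
import numpy as np

def angle_between(u, v):
    """Angle in degrees between two 3D vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def rotation_to_anterior(pd, anterior=np.array([1.0, 0.0, 0.0])):
    """Matrix taking unit vector pd onto the anterior axis (Rodrigues)."""
    pd = pd / np.linalg.norm(pd)
    axis = np.cross(pd, anterior)
    s = np.linalg.norm(axis)        # sin of the rotation angle
    c = np.dot(pd, anterior)        # cos of the rotation angle
    if s < 1e-12:                   # pd already (anti)parallel to anterior
        return np.eye(3) * np.sign(c)
    k = axis / s
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1 - c) * (K @ K)

pd = np.array([0.0, 1.0, 0.0])      # hypothetical preferred direction
pg = np.array([0.0, 0.5, 0.5])      # hypothetical positional gradient

theta = angle_between(pd, pg)       # 45 degrees for these vectors
R = rotation_to_anterior(pd)
pd_rot, pg_rot = R @ pd, R @ pg     # PD now anterior; PD-PG angle preserved
print(theta, angle_between(pd_rot, pg_rot))
```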

FIG. 7.

Correlation between the directions of PDs and PGs. A: histograms of angles between PGs and PDs for 230 neurons tuned to both position and velocity. B: PDs were all rotated and aligned along the anterior direction. Same rotation was applied to the PGs. Histogram shows the distribution of the rotated PGs along the superior/inferior direction and the number of neurons with the rotated PG above the PD was approximately the same as the number of neurons with rotated PGs below the rotated PD. A large number of neurons had PGs located close to 0°, which corresponded to conditions where PGs and corresponding PDs pointed to similar directions. C: distribution of PGs along the left/right direction. This plot illustrates that there were approximately equal numbers of neurons with PGs pointing to the left and right of their PDs.

Interaction between position and velocity tuning

Figure 8A shows the effect of hand position on velocity tuning during movement time. Neural activity was consistently higher for movements made in the “preferred” position than for movements made in the “antipreferred” position. The difference was independent of movement velocity direction, as indicated by the upward shift of the velocity tuning curve. Figure 8B compares the baselines and depths of modulation of the fitted tuning curves based on their 95% confidence intervals. There was a higher baseline level for movements made within the “preferred” position, suggesting that position shifts velocity tuning curves rather than scaling them. Figure 8C shows that the position tuning curve was higher for movements along the PD; this upward shift was more prominent than that in Fig. 8A, and the difference in neural activity was constant across all hand positions. Figure 8D shows the same pattern as Fig. 8B, indicating that hand velocity has a shifting (not scaling) effect on position tuning. These shifts in tuning curves illustrate that hand position is encoded simultaneously with hand velocity during movement time and that the neural activity can be described by the additive model proposed in Eq. 2. Comparing the depth of modulation for velocity tuning in Fig. 8A with the depth of modulation for position tuning in Fig. 8C shows that the position depth of modulation was much smaller than the velocity depth of modulation, in agreement with Fig. 5.
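A toy simulation makes the shift-versus-scale distinction concrete. Assuming an additive cosine model of the kind proposed in Eq. 2, with purely illustrative parameter values (not fit to the data), the velocity tuning curves measured at the preferred and antipreferred positions differ by a constant offset at every angle:

```python
import numpy as np

# Additive encoding model (illustrative parameters, not from the paper):
# rate = baseline + m_pos*cos(position-PG angle) + m_vel*cos(velocity-PD angle)
b0, m_pos, m_vel = 20.0, 10.0, 40.0

theta_v = np.linspace(0, np.pi, 7)       # velocity-PD angles (0 to 180 deg)

# Velocity tuning curve at the "preferred" position (+PG, cos = +1) and
# at the "antipreferred" position (-PG, cos = -1):
rate_preferred = b0 + m_pos * (+1) + m_vel * np.cos(theta_v)
rate_antipref  = b0 + m_pos * (-1) + m_vel * np.cos(theta_v)

# Under an additive model the two curves differ by a constant 2*m_pos at
# every angle: position SHIFTS the velocity tuning curve. Under a
# multiplicative (gain) model, the depths of modulation would differ
# instead of the baselines.
offset = rate_preferred - rate_antipref
print(offset)
```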

FIG. 8.

Interaction between position tuning and velocity tuning during movement time. Neural activity is averaged across all neurons tuned to both position and velocity. A: neural activity is plotted as a function of the angle between velocity and the PD. Filled circles show neural activity when the hand was moving within the “preferred” spatial location (+PG) and empty circles show neural activity when the hand was moving within the “antipreferred” spatial location (−PG). Error bars are the SE. Neural activity was shifted upward when the movement was made within the “preferred” location, regardless of velocity direction. Two cosine functions were fit to the data represented by the filled and empty circles, respectively. B: baselines and depths of modulation of the two fitted cosine tuning curves are compared. Error bars show the 95% confidence intervals. Baseline activity in the “preferred” location was significantly higher than that in the “antipreferred” location. In contrast, the depth of modulation did not differ significantly. Position tuning mainly had a shifting effect on velocity tuning, suggesting an additive model. C and D: the same analysis with the roles of PD and PG exchanged. Neural activity is plotted as a function of the angle between hand position and the PG. Filled circles represent neural activity when the hand was moving along the preferred direction (+PD), whereas empty circles represent neural activity when the hand was moving against the preferred direction (−PD). Similarly, neural activity was shifted upward regardless of hand position when the hand was moving along the PD, again supporting the additive relationship between position tuning and velocity tuning.

Spatiotemporal tuning surfaces

In Fig. 9A, the neural activity of the 301 velocity-only–tuned neurons was plotted as a function of time and the hand velocity–PD angle. Before movement onset, neural activity was the same across different velocity–PD angles. During movement time, neural activity showed directional tuning, with the strongest activity along the PD. In addition, neural activity scaled with hand speed, echoing the stereotypical bell-shaped speed profile of reaching movements. During the hold-B time, approximately the last 400 ms, hand speed and neural activity returned to zero.

FIG. 9.

Spatiotemporal tuning of cortical activity during center-out movement. A: neural activity, averaged over 301 neurons tuned to velocity only, is shown as a function of time and the velocity–PD angle. Time 0 represents movement onset. Baseline and nondirectional component were removed before the analysis. Dotted black lines show the averaged neural activity at 15, 45, 75, 105, 135, and 165°. Both the vertical axis and the color of the surface show the neural activity. B: neural activity is plotted as a function of time and position–PG angle and averaged over 79 neurons tuned to position only. Analysis was the same as for A, except position–PG angle was used. C: same as A, except that the neural activity was averaged over 222 neurons tuned to both position and velocity. Neural activity exhibited the critical features of both A and B.

In Fig. 9B, the neural activity of 79 neurons tuned to position only was plotted as a function of time and position–PG angle. Neural activity did not follow the bell-shape speed profile. Instead, it changed according to hand displacement from the center of the workspace. The modulated activity also remained at the end of movement.

In Fig. 9C, the neural activity of the 222 neurons tuned to both position and velocity was plotted as a function of time and velocity–PD angle. This plot is a hybrid of Fig. 9, A and B. During movement time, the neural activity modulated with hand movement direction and hand speed, similar to Fig. 9A. As a consequence of the correlation between the velocity–PD and position–PG angles, the neural activity was modulated according to position at the end of movement, as evidenced by the tilted edge of the neural activity surface.

Population decoding analysis

Six degrees of freedom, including 3D hand position and 3D hand velocity, were decoded simultaneously from motor cortical activity. The standard reaching and random reaching data were used to compute the encoding vectors, whereas the center-out movement data were used to test the decoding method. The 224 neurons tuned to hand position were used to decode hand position. Similarly, the 387 velocity-tuned neurons were used for velocity decoding. The r2 values between actual and decoded movements are 0.76 for position and 0.83 for velocity (Fig. 10). For center-out movement, the velocity signal peaks in the middle of the movement and returns to zero, whereas the position signal rises monotonically.
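The r2 values above can be computed as the coefficient of determination between the actual and decoded traces. A minimal sketch follows, with synthetic signals standing in for the data; the exact pooling convention (per dimension versus pooled over dimensions) is not stated in the text, so pooling over all samples is an assumption.

```python
import numpy as np

def r_squared(actual, decoded):
    """Coefficient of determination: 1 - SS_residual / SS_total,
    pooled over all samples (and dimensions, if 2D arrays are given)."""
    actual = np.asarray(actual, dtype=float)
    decoded = np.asarray(decoded, dtype=float)
    ss_res = np.sum((actual - decoded) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy check: a synthetic trajectory and a noisy "decoded" reconstruction
rng = np.random.default_rng(4)
actual = np.sin(np.linspace(0, 2 * np.pi, 200))
decoded = actual + rng.normal(0, 0.2, size=actual.shape)
print(r_squared(actual, decoded))
```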

FIG. 10.

Decoding 3D hand position and 3D hand velocity simultaneously. Thick lines represent the actual hand position or velocity and the thinner lines represent the decoded hand position or velocity. Left and right columns in each panel show the decoding results for position and velocity, respectively. Movements along the anterior/posterior, superior/inferior, and right/left directions are plotted in the top, middle, and bottom rows, respectively. PDs and PGs computed from the multiple linear regression were used to compute the optimal linear estimators to independently decode position and velocity. A: decoding results for center-out movement toward the target located at the anterior, inferior, and right corner of the workspace. B: decoding results for the movement toward the target located at the posterior, superior, and right corner of the workspace.

Figure 10 shows that there are temporal delays between the cortical representation of hand kinematics and the actual hand kinematics and these delays are different for position and velocity. The actual hand velocity lagged the cortical representation of velocity signal by 122 ms (Fig. 11A), similar to the 145-ms lag reported earlier (Moran and Schwartz 1999b). For position decoding, the actual hand position lagged the cortical representation of the position signal by 81 ms (Fig. 11B), which was slightly shorter than the delay for hand velocity signal.

FIG. 11.

Temporal delay between cortical representation of kinematic parameters and the expression of actual hand kinematics. Thick lines represent the average of the actual hand position or velocity and the thinner lines represent the average of the decoded hand position or velocity from the cortical activities. Both actual and decoded movements were averaged over all 8 center-out targets. A: decoded velocity led the actual hand velocity by 122 ms. B: decoded position led the actual hand position by 81 ms.

Temporal dynamics of hand position tuning

Previous analyses assumed that the relative weights of position tuning and velocity tuning were constant throughout the reaching movement. This assumption was tested by examining the position depth of modulation as a function of time. The effect of hand velocity on neural activity was removed by averaging standard reaching movements made within the same octant. The neural activity in each of the 100 time bins was then fit with a cosine-tuning function using the average position of each octant as the independent variable; the fits were statistically significant (P < 0.05) for all 100 bins. Figure 12 shows that the position depth of modulation changes with time: it decreases significantly just before movement onset and gradually increases near the end of the movement. The temporal lag between the occurrence of the minimum position depth of modulation and the maximum hand speed was 200 ms.

FIG. 12.

Temporal dynamics of the position depth of modulation. Thick black line represents the positional depth of modulation and the thinner line is the average hand speed profile for standard reaching movements. Data corresponding to 8 movements made within the same octant were averaged first; the average velocity was zero and the average position was the center of the octant within which the movements were made. This operation removed the effect of hand velocity, and the averaged neural activity was regressed against the averaged hand position by fitting a cosine tuning curve, separately for each of the 100 bins. Neural activity was well fit by the cosine function, and the amplitude of the cosine function (i.e., the absolute depth of modulation) is plotted as a function of time with its maximum normalized to 1. The averaged speed profile for standard reaching movements is superimposed. The position depth of modulation did not stay constant during the reaching movement: it was significantly reduced around movement onset and gradually recovered. Cross-correlating the negative of the position depth of modulation with the speed profile gave a temporal lag of 200 ms.

DISCUSSION

Experimental paradigm design

Various behavioral paradigms were previously developed to study motor cortical representation of movement parameters. One of the most popular designs was the center-out task introduced by Georgopoulos and colleagues (1982). It was widely used in both electrophysiological studies (Fu et al. 1995; Moran and Schwartz 1999b; Schwartz et al. 1988; Scott and Kalaska 1997) and human psychophysics studies (Thoroughman and Shadmehr 2000). However, one problem is that hand velocity and hand position are inherently correlated in the center-out task. This makes it very difficult to independently study the cortical representation of position and velocity. To dissociate hand position and hand velocity, this study introduced two new behavioral paradigms: random reaching and standard reaching.

The center-out, random reaching, and standard reaching tasks all belong to the category of “step-tracking” movements, in which the subject can determine the entire trajectory before movement onset. This distinction matters for two reasons. First, because of this foreknowledge, step-tracking movements are much faster (typically an order of magnitude in peak speed) than random pursuit-tracking movements (Paninski et al. 2004), where the impending movement is unknown. Second, pursuit tracking differs from stereotypical arm reaching and may involve different motor control mechanisms: pursuit tracking relies heavily on on-line visual feedback, whereas stereotypical reaching uses more feedforward control, especially during the ballistic part of the movement (Paillard 1982). Therefore the standard reaching and random reaching tasks form an optimal set of behavioral tasks for investigating position and velocity encoding during reaching movements.

Both position and velocity are represented in motor cortex

This study confirms the existence of a position representation in the motor cortex during reaching. This representation was present not only during the static target-holding times (Georgopoulos et al. 1984; Kettner et al. 1988), but also during the movement time (Ashe and Georgopoulos 1994; Paninski et al. 2004). Both position and velocity are encoded in the form of a vector dot product between the encoding vector and the encoded physical variable, which has classically been referred to as the “cosine-tuning function” (Georgopoulos et al. 1982, 1986, 1988). The encoding vectors (PD for velocity and PG for position) are spatially correlated and point in similar directions. The 3D distributions of the encoding vectors were both nonuniform, with similar patterns: more encoding vectors lie along the anterior–posterior direction than along other directions. Scott and colleagues (2001) observed a similar distribution of PDs and argued that it results from the nonisotropic mechanical properties of the arm.

Previous studies of pursuit tracking suggested that position and velocity are encoded simultaneously (Paninski et al. 2004), whereas similar studies of reaching suggested a sequential encoding in which the cortical representations of target direction, position, and distance appear in serial order (Fu et al. 1995). Although these studies suggest a cortical encoding dichotomy between pursuit-tracking and reaching movements, our results support both of these previous findings. Temporally, position and velocity were encoded simultaneously during reaching; however, the saliency, or “depth of modulation,” of position encoding decreased around movement onset and returned to premovement levels near the end of the movement. During the movement, the positional depth of modulation decreases while the velocity depth of modulation increases proportionally to speed. These changes in cortical activity could easily account for the appearance of serial encoding of position and velocity seen in the multiple regression study of Fu et al. (1995). In addition, Fu et al. (1995) observed that more than one fourth of the neurons had statistically significant partial R2 values for both position and velocity at the same time.

Our results suggest that the relative weights of the position and velocity signals change dynamically during the movement. The position signal was weighted more during static holding time than during movement time. There is a 200-ms delay between the occurrence of the minimum position depth of modulation and the maximum hand speed. When the position depth of modulation profile and the speed profile were temporally shifted so that they overlapped, the relationship between position depth of modulation and hand speed was well fit by a straight line, suggesting that the position depth of modulation was inversely proportional to the temporally shifted hand speed. Therefore, tentatively, the cortical activity model proposed in Eq. 2 can be further improved by adding a scaling factor that scales the positional gradient based on the predicted hand speed

f(t) = b0 + w[s(t + τw)]·(PG · P(t)) + PD · V(t)   (8)

Here, the weight for the position modulation, w, is a function of hand speed s, and τw denotes the temporal lag between the position depth of modulation curve and the hand velocity profile.

Equation 8 assumes that the decrease seen in positional coding during movement onset in reaching tasks is a function of speed. Another explanation for the decrease in saliency of positional coding is that it is a function of movement initiation. For instance, if the subject were to perform a drawing task such as tracing a figure at speeds comparable to reaching (Moran and Schwartz 1999a; Schwartz and Moran 1999; Schwartz et al. 2004) in each of the eight octants in the workspace, it is possible that the positional depth of modulation could increase to premovement levels soon after movement initiation while the hand was still moving at a fairly significant speed. A third explanation for the decrease in position coding during initiation of reaching tasks is that it could be signaling a difference in motor control strategies. For instance, the initial portion of a reach was previously described as a feedforward movement requiring little visual feedback (Paillard 1982). However, before movement onset as well as near the end of the reach, visual feedback is critical; thus the depression in positional coding could be a reflection of enhanced feedforward control versus visual feedback control. More study is necessary to determine the neurophysiological factors behind the decrease in positional saliency during movement onset.

Velocity representation in motor cortex covaries directly with movement speed (Moran and Schwartz 1999b). Equation 8 suggests that position representation in motor cortex covaries inversely with movement speed during reaching. Velocity was previously reported to be dominant in motor cortical activity during reaching (Ashe and Georgopoulos 1994). However, a recent study argued that about equal amounts of position and velocity information are represented in motor neurons during pursuit tracking (Paninski et al. 2004). The hand speed in that study was much slower than that achieved during natural reaching movements, which should reduce the velocity contribution to the variance of the neural firing rate. At the population level, our study found that 55% of approximately 1,000 neurons were tuned to velocity, whereas only 32% were tuned to position. Furthermore, at the single-neuron level, for neurons tuned to both position and velocity, the velocity depth of modulation was fourfold greater than the position depth of modulation. This is important because depth of modulation is proportional to both signal-to-noise level and information rate. Overall, it seems that both position and velocity tuning are modulated by speed (one inversely and one directly), but for natural movements, velocity is represented more strongly than position in motor cortex.

This study analyzed the relationship between hand kinematics (position and velocity) and motor cortical activity. However, previous studies suggested that arm kinematics (i.e., joint angles and joint angular velocities) may be better correlated to motor cortical activity than hand kinematics (Caminiti et al. 1991; Scott and Kalaska 1997). Our previous studies showed that hand kinematics and arm kinematics are highly correlated during center-out–style reaching tasks, making it difficult to discern which coordinate system best correlates with motor cortical activity (Chan and Moran 2006; Reina et al. 2001). Furthermore, a preliminary study from our laboratory found that motor cortical activity in the proximal arm area is also significantly modulated by hand orientation kinematics in addition to the hand translational kinematics analyzed in this study (Wang et al. 2004). The behavioral tasks used in this study were specifically designed such that the animal did not significantly vary hand orientation within the workspace. If it did, additional hand orientation kinematic terms would be required in Eq. 2. Regardless of whether motor cortical activity is represented in extrinsic coordinates (hand translation/orientation) or intrinsic coordinates (joint angles), the relationship between position and velocity encoding in motor cortex found in this study would be the same for either coordinate system.

Population decoding of 3D hand position and 3D hand velocity

The additive formulation shown in Eq. 2 makes it possible to construct a 6D encoding vector that concatenates PG and PD, which can be thought of as a “preferred movement” vector, in 6D space. One way to predict movement kinematics from preferred movement vectors is to use the population vector algorithm (PVA) (Georgopoulos et al. 1986). In addition to providing a fairly accurate estimation of the velocity signal, PVA can also illustrate the neural delay between cortical representation of movement and the actual expression of movement (Moran and Schwartz 1999a,b). Thus it is natural to extend PVA to 6D space to decode both 3D hand position and 3D hand velocity and compare the neural delays for position and velocity signals. One disadvantage of PVA is that the encoding vectors have to be uniformly distributed in the encoded space and, because preferred directions and positional gradients are correlated, the encoding vectors are not uniformly distributed in 6D space.
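A toy 6D PVA illustrates the idea. All numbers below are synthetic; the d/n scaling used here assumes the encoding vectors are uniformly distributed in the encoded space, which is exactly the condition the text notes fails when PDs and PGs are correlated (in that case the PVA estimate becomes biased, motivating the OLE approach discussed next).

```python
import numpy as np

rng = np.random.default_rng(2)

# Population vector algorithm sketch in 6D: each neuron "votes" with its
# 6D encoding vector (PG concatenated with PD), weighted by its firing
# rate. Synthetic, approximately uniform unit encoding vectors:
n = 400
B = rng.standard_normal((n, 6))
B /= np.linalg.norm(B, axis=1, keepdims=True)

# True kinematic state: 3D position followed by 3D velocity
k_true = np.array([0.2, -0.1, 0.3, 1.0, 0.5, -0.8])

# Noise-free firing rates under the additive cosine model
# (baseline and nondirectional terms already removed)
rates = B @ k_true

# PVA estimate: rate-weighted vector sum of encoding vectors. For unit
# vectors uniform in d dimensions, E[b b'] = I/d, so scaling by d/n
# makes the sum an (approximately unbiased) estimate of k_true.
pva = (rates[:, None] * B).sum(axis=0) * (6.0 / n)

print(np.round(pva, 2))
```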

A second candidate is the optimal linear estimator (OLE) method, originally proposed to accurately decode movement direction with a small number of neurons whose preferred directions were nonuniformly distributed (Salinas and Abbott 1994). Thus it is a perfect candidate to be applied in 6D space to simultaneously decode position and velocity. When PVA and OLE were compared with simulated data, OLE performed much better than PVA when PDs and PGs were correlated (Wang et al. 2003). However, the disadvantage of traditional OLE is that it is not able to determine the neural delay between the cortical representation of movement kinematics and the actual hand movements (Carmena et al. 2003; Hatsopoulos et al. 2004; Paninski et al. 2004). This study introduced the indirect OLE method as a solution that combines the advantages of both PVA and OLE. It can dissociate the correlations between PDs and PGs to provide an independent estimation of position and velocity (OLE property) and it can reveal neuronal delays between cortical representation and kinematics (PVA property).

Experimentally, the indirect OLE has two advantages. First, the neurons used in decoding studies do not need to be recorded simultaneously: the encoding vector of a specific neuron is computed from that neuron's firing rate and the corresponding hand kinematics alone. Because each neuron's encoding vector is independent of all the other neurons, which is not the case with traditional OLE, experimentalists can more easily compare neurons across different recording sessions and different tasks. This will also greatly benefit off-line decoding studies that aim to understand the neural representation of physical variables at the population level. Second, the indirect OLE method computes its weights more efficiently and stably, especially at high neuron counts, than the original OLE method: the original OLE procedure involves the inversion of a large n × n matrix, where n is the number of neurons, versus a 6 × 6 matrix for the indirect OLE (see appendix).

In summary, both position and velocity signals are well represented in motor cortex during static target-holding time as well as movement time; however, the velocity signal is more pronounced, especially during movement. Both 3D hand position and 3D hand velocity signals can be reconstructed from a population of motor cortical neurons. It is suggested that the position signal and velocity signal could differ in the delays between the cortical representation of the signal and its actual expression. Future studies that apply circular drawing movement with sinusoidal position and velocity signals will be able to better elucidate the temporal features of the cortical representations of hand position and hand velocity.

APPENDIX

The indirect optimal linear estimator (OLE) method decodes both 3D hand position and 3D hand velocity simultaneously using the following equation

K̂[t × 6] = R[t × n] W[n × 6]   (A1)

where K̂ is the estimated movement; W contains the optimal linear decoding weights, or decoding vectors; R is the neural activity; t is the total number of time points used in the decoding analysis; and n is the total number of neurons. The bracketed subscripts denote the dimensions of each matrix. Each row of K̂ is a 6D vector containing the estimated 3D hand position and 3D hand velocity, [x̂ ŷ ẑ v̂x v̂y v̂z], at a particular time instant. Each row of R contains the firing rates of all the neurons at a particular time instant. Each row of W contains the 6D decoding vector of an individual neuron.

Many studies that implemented the original OLE method computed the decoding vectors directly from hand kinematics and neuronal firing rates based on a training data set (Carmena et al. 2003; Paninski et al. 2004). In contrast, the indirect OLE method explicitly used the encoding model to compute the decoding vectors in W. It first computed the preferred direction and the positional gradient for each individual neuron, as described in Eqs. 3 and 4, based on the random reaching and standard reaching data. The 3D preferred direction and the 3D positional gradient were concatenated to make a 6D movement encoding vector for each neuron. The encoding vectors of all the neurons engaged in the decoding study were then placed into a matrix B of size n × 6; each row of B is the 6D encoding vector of a specific neuron. The indirect OLE method computed the decoding vectors in W from the encoding vectors in B using the following relationship

W[n × 6] B′[6 × n] = I[n × n]   (A2)

where I is the identity matrix. Equation A2 states that the product of the matrix W containing the decoding vectors and the transpose of the matrix B containing the encoding vectors is an identity matrix of size n × n, where n is the number of neurons. This relationship was previously studied in 2D space (Eliasmith and Anderson 2003) and below is the proof of the preceding equation in 6D space. The symbols used in the equations are listed first, followed by the actual derivation:

K̂ [t × 6]: estimated hand movement (3D position and 3D velocity)
K [t × 6]: actual hand movement (3D position and 3D velocity)
R [t × n]: neural activities
B [n × 6]: 6D encoding vectors (PGs and PDs)
W [n × 6]: 6D decoding vectors
I [n × n]: identity matrix
n: total number of neurons used in decoding
t: total number of bins (i.e., time points) used in decoding
M⁺: pseudoinverse of a matrix
M′: transpose of a matrix

The original OLE formulation can be written in matrix form as

K̂ = RW   (A3)

To find the decoding vectors in W, we require the estimated movement to equal the actual movement

K = RW   (A4)

Based on the definition of the Moore–Penrose matrix inverse, we have

W = R⁺K   (A5)

After removing the baseline activity and the nondirectional velocity term, the 6D linear encoding model proposed in Eq. 2 (see methods) can be written in matrix form as

R = KB′   (A6)

Inserting Eq. A6 into Eq. A5 yields

W = (KB′)⁺K   (A7)

Multiplying both sides of Eq. A7 by B′ gives

WB′ = (KB′)⁺KB′   (A8)

which can be rewritten as

WB′ = (KB′)⁺(KB′)   (A9)

Thus we have

WB′ = I   (A10)

To solve for the decoding vectors, we take the pseudoinverse of B′

W = (B′)⁺ = B(B′B)⁻¹   (A11)

so that only the 6 × 6 matrix B′B needs to be inverted.

Once the decoding vectors were computed, the multibin firing-rate data during center-out movement were applied to Eq. A1 to obtain a continuous estimation of 6D hand kinematics. Neurons tuned to hand position and neurons tuned to hand velocity were used to decode the position and velocity signal, respectively.
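The appendix derivation can be checked numerically. In the sketch below, noise-free synthetic data are generated by the encoding model of Eq. A6, and the indirect OLE weights W = B(B′B)⁻¹ recover the kinematics exactly while inverting only a 6 × 6 matrix. With noisy real data the recovery would instead be a least-squares estimate, not an exact one.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic encoding vectors and kinematics (not the recorded data)
n, t = 300, 500
B = rng.standard_normal((n, 6))   # 6D encoding vectors (PGs and PDs)
K = rng.standard_normal((t, 6))   # 6D kinematics (position and velocity)

# Noise-free firing rates under the linear encoding model (Eq. A6)
R = K @ B.T

# Indirect OLE decoding weights (Eq. A11): only B'B, a 6x6 matrix,
# is inverted, regardless of the number of neurons n.
W = B @ np.linalg.inv(B.T @ B)

# Decode (Eq. A1): K_hat = R W = K B' B (B'B)^{-1} = K exactly
K_hat = R @ W
print(np.allclose(K_hat, K))
```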

GRANTS

This work was supported by the Whitaker Foundation and was performed in a facility supported by National Center for Research Resources Grant C06 RR-015502.

Acknowledgments

The authors thank H. Hobbs, M. Tyler, and M. LaConte for assisting in the experiments.

Footnotes

  • The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
