Journal of Neurophysiology

Body-Centered Visuomotor Adaptation

John J. van den Dobbelsteen, Eli Brenner, Jeroen B. J. Smeets

Abstract

Previous research has shown that humans generalize distortions of visuomotor feedback in terms of egocentric rotations. We examined whether these rotations are linked to the orientation of the eyes or of the shoulder of the arm that was used. Subjects moved a hand-held cube between target locations in a sequence of adaptation and test phases. During adaptation phases, subjects received either veridical or distorted visual feedback about the location of the cube. The distortions were changes in azimuth either relative to the eyes or to the shoulder. During test phases subjects received no visual feedback. Test phases were performed either with the arm that was exposed to the distorted feedback or with the unexposed arm. We compared test movement endpoints after distorted feedback with ones after veridical feedback. For the exposed arm, the spatial layout of the changes in endpoints clearly reflected the small differences between a rotation around the shoulder and around the eyes. For the unexposed arm, the changes in endpoints were smaller for both types of distortions and were less consistent with the distortions. Thus although the adaptation closely matches the imposed distortion, it does not appear to be directly linked to the orientation of the eyes or of the exposed arm.

INTRODUCTION

During visually guided reaching movements, visual information about the target's location must be integrated with kinesthetic information about the position and movements of the hand. Several researchers have proposed that to do so the movement endpoint is specified in an egocentric frame of reference (Berkinblit et al. 1995; Carrozzo et al. 1999; Flanders et al. 1992; McIntyre et al. 1997, 1998; Soechting and Flanders 1989; Soechting et al. 1990; van den Dobbelsteen et al. 2001). Evidence for this includes the fact that people adapt much more readily to distortions of visual feedback that correspond with transformations with respect to the body than to distortions that are defined with respect to the world (van den Dobbelsteen et al. 2003). It is often assumed that retinal and extra-retinal information are combined to determine the target's location relative to the head. The corresponding position of the hand is determined by combining visual information with kinesthetic information about the orientation of the wrist, elbow, shoulder, and neck. Our ability to generate appropriate motor behavior under changed visual feedback suggests that these visuomotor transformations are under adaptive control.

Vetter et al. (1999) tried to determine the coordinate system of the adjustable visuomotor transformations by studying the adaptation to mismatches between actual and displayed finger position during pointing movements. Exposure to a lateral shift of visual feedback about finger position within a small area induced changes in movement endpoints over the entire workspace. The pattern of generalization was best described as a rotation of the workspace within a spherical coordinate system. A rotation with respect to the eyes captured the pattern of generalization slightly better than a rotation with respect to the shoulder of the exposed arm. These results led the authors to conclude that the observed changes in endpoints were due to adjustments in sensorimotor processes that operate in eye-centered coordinates (Vetter et al. 1999).

The experimental approach that Vetter et al. (1999) used to characterize the most suitable coordinate system for describing the changes in endpoints has been widely used in psychophysical studies concerned with the control of arm movements. It is assumed that the description that best fits the changes at the behavioral level will identify the reference frame of the mechanisms that adapt to the incorrect feedback. However, when confronted with a mismatch between visual and kinesthetic information, the brain does not necessarily interpret the imposed distortion as a pure rotation around a single axis. For instance, it is possible that both adjustments related to the orientation of the eyes and of the arm occur for a single distortion even if an eye-centered rotation would fit the data perfectly. Indeed, several previous studies on goal-directed arm movements suggest that the transformation of the target's location into positions of the hand occurs within a spherical reference frame related to both the visual target and shoulder of the effector arm (Flanders et al. 1992; Soechting and Flanders 1989; Soechting et al. 1990); this could be consistent with a combination of adjustments related to the orientation of the eyes and of the arm. Vetter et al. (1999) did not consider the possibility that the changes in endpoints correspond to combined rotations around the eye and shoulder.

To examine whether the adaptation occurs in relation to specific anatomical substrates (e.g., shoulder angle), we can complement the analysis of patterns of generalization with the study of intermanual transfer. For obvious anatomical reasons, any adjustments that are associated with rotations of the eye (or head) will influence both arms, whereas adjustments that are associated with the shoulder (or elbow) need only influence the arm in question. Thus if we find adjustments that are best described by rotations relating to the orientation of the eyes (or head), the two arms should be affected in the same manner. In contrast, if we find adjustments that are best described by effector-specific rotations (such as rotations around a shoulder), we only expect changes in movement endpoints for that effector. Thus if the observed changes in endpoints in the study of Vetter et al. (1999) were exclusively due to readjusting the relationship between the visual target position and the sensed eye orientation, one would expect the same changes in endpoints for the arm that was not exposed to the distorted feedback.

Several adaptation studies have shown that generalizations of adaptation to distorted feedback are partly brought about by adaptive processes that are not shared by both arms (Cunningham and Welch 1994; Hamilton 1964; van den Dobbelsteen et al. 2003; Welch et al. 1974). van den Dobbelsteen et al. (2003) investigated adaptation of arm movement endpoints to translated feedback with a method comparable to that of Vetter et al. (1999). Subjects were exposed to distorted feedback while they made movements with one of their arms and were subsequently tested without feedback while they made movements with the unexposed arm. The transfer of adaptation to the unexposed arm was substantial but incomplete (van den Dobbelsteen et al. 2003), perhaps because adaptation involved adjustments at a level that is shared by both arms as well as adjustments at the level of the exposed arm. This finding supports the previously mentioned proposal that the adaptation consists of adjustments related both to the eyes and effector arm. In the present study, we try to test this hypothesis by studying both generalization and intermanual transfer of adaptation.

We investigate adaptation to distortions of visual feedback that mimic a change of azimuth either relative to the eyes or to the shoulder. In the experiment, subjects positioned a hand-held 5-cm cube at the location of a three-dimensional visual simulation of such a cube. We compared endpoints of movements performed without visual feedback (test movements) after distorted visual feedback with ones after veridical visual feedback. Test phases were performed either with the exposed or the unexposed arm (in 2 separate sessions). We determined how subjects adapt to eye- and shoulder-centered distortions and examined the transfer of adaptation to the unexposed arm. If the two types of distortions yield qualitatively different patterns of generalization in accordance with the imposed distortion, we could conclude that adaptation can be related to various aspects of body posture. If adaptation really occurs in relation to specific anatomical substrates, then the anatomy predicts that there will be much more transfer for adjustments related to the eyes than for adjustments related to the shoulder. Here we examine whether this is so.

METHODS

Subjects

Fifteen subjects (22–45 yr of age) participated in two experimental sessions that were performed on separate days. All reported normal visual acuity (after correction) and binocular vision. All subjects gave their informed consent to participate in this study. The work forms part of an ongoing research program for which ethical approval has been granted by the appropriate committees of the Erasmus MC.

Apparatus

The experimental apparatus is the same as that used in van den Dobbelsteen et al. (2003). Images were displayed on a Sony 5000-ps 21-in monitor (30.0 × 40.4 cm; 612 × 816 pixels), located in front of and above the subjects' head. The images were generated with a Silicon Graphics Onyx computer at a rate of 120 Hz. Standard anti-aliasing techniques were used to achieve subpixel resolution, and the images were corrected for the curvature of the monitor screen. The images were viewed by way of a mirror, which enabled us to present three-dimensional scenes within the arm's workspace without obstructing the arm's movements. Liquid crystal shutter spectacles (CrystalEyes 2, weight: 140 g, StereoGraphics, CA) were used to present alternate images to the two eyes at the 120-Hz frame rate (60 Hz/eye) for binocular vision. All images were red because the liquid crystal shutter spectacles have least cross talk at long wavelengths.

Subjects held a rod attached to a 5-cm cube in their unseen hand (hand-held cube) and were instructed to align this cube with a stationary three-dimensional (3D) wire frame of a cube (target cube) that appeared beneath the mirror. During trials in which subjects received feedback about the position and orientation of the hand-held cube (feedback phases), an additional rendition of a cube (feedback cube) was presented at the (transformed) location of the hand-held cube. The luminance of each Lambertian surface of the feedback cube depended on its orientation relative to a virtual light source above and to the left of the subject. There was also a virtual diffuse illumination to ensure that all surfaces facing the subject were visible. The surfaces of the feedback cube were translucent and therefore did not occlude the target cube. A spatial discrepancy was sometimes introduced between the hand-held cube and the feedback cube (see Distortions). The feedback cube moved and turned whenever the hand-held cube was moved or turned. The total delay between a movement and the adjustment of the image was ∼16 ms.
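As an illustration of this kind of shading (a sketch only, not the authors' rendering code; the light direction, the ambient term that stands in for the diffuse illumination, and all names are assumptions), the luminance of one face of the feedback cube could be computed as follows:

```python
import numpy as np

def face_luminance(face_normal, light_dir, ambient=0.2, diffuse=0.8):
    """Lambertian shading of one face of the feedback cube.

    face_normal : outward normal of the face in world coordinates.
    light_dir   : vector pointing from the face toward the virtual light
                  source (above and to the left of the subject).
    The ambient term plays the role of the diffuse illumination that keeps
    all faces turned toward the subject visible.
    """
    n = np.asarray(face_normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    # Faces turned away from the light receive only the ambient component.
    return ambient + diffuse * max(0.0, float(np.dot(n, l)))
```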

A movement-analysis system (Optotrak 3010, Northern Digital, Waterloo, Ontario, Canada) registered the positions of active infrared markers that were attached to the hand-held cube, to the distal part of the right shoulder (near the acromioclavicular articulation at the outer extremity of the clavicle), and to the shutter spectacles. We defined the location of the shoulder as the position 7 cm below the marker that we attached to the shoulder. At the start of each experiment, the positions of the subjects' two eyes relative to the markers on the shutter spectacles were determined using standard calibration techniques. During the experiment, eye position (not eye orientation) was inferred from the positions of these markers and used to render the images with the appropriate perspective for that eye at that moment. The use of a semi-transparent mirror enabled us to check, at the start of the experiment, whether the images of the (veridical) feedback cube correctly displayed the position and orientation of the hand-held cube. During the experiment, the room was dark so that subjects were unable to see anything but the virtual cubes. An opaque surface was placed just under the mirror to make completely sure that subjects could never see their hand or the real cube.

Procedure

An experimental session started with the subject holding the hand-held cube in his right hand beneath the mirror. Subjects were instructed to move the cube that they held as accurately as possible to the position indicated by the target cube. The rationale for having subjects align cubes rather than, for instance, spheres is that it gives us more control over the postures that the subjects adopt. Changes in posture may affect adaptation (Baraduc and Wolpert 2002; but see van den Dobbelsteen et al. 2003).

A movement was considered to have come to an end when the subject moved the center of the hand-held cube <2 mm within 300 ms. The movements were smooth, and all subjects reported that they were able to align the cubes before the next trial started. The starting position of the hand for each subsequent movement was the endpoint of the previous movement. In a previous study in which we used the same task, we thoroughly investigated the influence of the starting position of the hand on the subsequent endpoint of the movement and found that such an influence was negligible for this task (van den Dobbelsteen et al. 2001).
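A minimal sketch of such an endpoint criterion, assuming cube-center positions sampled at a fixed rate (the sampling rate, the use of net displacement rather than path length over the window, and the function name are assumptions rather than details of the original setup):

```python
import numpy as np

def movement_ended(positions, sample_rate_hz=250.0,
                   window_s=0.3, threshold_mm=2.0):
    """Return True when the center of the hand-held cube has moved less
    than `threshold_mm` over the last `window_s` seconds.

    positions : (N, 3) array of cube-center positions in mm, sampled at
                `sample_rate_hz`.
    """
    n = int(round(window_s * sample_rate_hz))
    if len(positions) < n:
        return False                      # not enough samples yet
    window = np.asarray(positions[-n:], dtype=float)
    return np.linalg.norm(window[-1] - window[0]) < threshold_mm
```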

The target cube could appear randomly in one of eight positions beneath the mirror. These eight positions were at the corners of an imaginary 18-cm box. During trials in which subjects received no feedback (test phases), this imaginary box was in an upright position. During feedback phases, the box was rotated 45° around a horizontal axis through its center, so that the target cube was presented at each of eight other positions. The orientation of the target cube was fixed relative to the earth and did not change across the different test phases. Each of the two sessions involved the same four experimental conditions (see Distortions). Each condition was repeated six times within one session. The order in which the six repetitions of the four conditions were presented was chosen at random and was different for each subject.
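The target layout can be illustrated with the following sketch, which generates the eight corners of the imaginary 18-cm box and, for feedback phases, rotates them 45° about a horizontal axis through the box's center (which horizontal axis was used, and the choice of coordinate frame, are assumptions of this sketch):

```python
import numpy as np
from itertools import product

def target_positions(box_center, edge_cm=18.0, feedback_phase=False):
    """The 8 target positions at the corners of an imaginary box of edge
    `edge_cm`, centered on `box_center` (all coordinates in cm).

    During test phases the box is upright; during feedback phases it is
    rotated 45 deg about a horizontal axis through its center, so the 8
    feedback targets differ from the 8 test targets.
    """
    half = edge_cm / 2.0
    corners = np.array(list(product((-half, half), repeat=3)))   # (8, 3)
    if feedback_phase:
        theta = np.radians(45.0)
        # Rotation about the left-right (x) axis, assumed horizontal.
        rot_x = np.array([[1.0, 0.0, 0.0],
                          [0.0, np.cos(theta), -np.sin(theta)],
                          [0.0, np.sin(theta),  np.cos(theta)]])
        corners = corners @ rot_x.T
    return corners + np.asarray(box_center, dtype=float)
```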

Each condition had four consecutive phases: a veridical feedback phase, a postveridical test phase, a distorted feedback phase, and a postdistortion test phase. In the veridical feedback phase, the subjects aligned the hand-held cube with the target cube with continuous veridical visual feedback about the hand-held cube's position and orientation. In the postveridical test phase, the subjects aligned the hand-held cube with the target cube without visual feedback of the hand-held cube. The distorted feedback phase was identical to the veridical feedback phase except for the introduction of a spatial discrepancy between the position and orientation of the feedback cube and those of the hand-held cube (see Distortions). The postdistortion test phase was identical to the postveridical test phase and was used to evaluate changes in movement endpoints (relative to the postveridical test phase) as a result of the altered visual feedback during the distorted feedback phase. In each phase, each of the eight targets was presented once. Each trial took ∼1 s (depending on the reaction time and movement speed of the subject), so the duration of 1 phase was only ∼8 s. The veridical and distorted feedback phases were always performed with the right hand. In the first session, subjects also used their right hand during test phases.

In the second session, they used their left hand during test phases. In this session, the cube was transferred to the left hand according to the following procedure. The images disappeared at the end of each phase and subjects heard a tone. They were instructed that on hearing the tone they should keep the hand that was holding the hand-held cube still and move the other hand to the hand-held cube. When they had transferred the hand-held cube to the other hand, a new target cube appeared, and the subjects performed the next phase with the previously unused hand. After that, a similar procedure was used to return the hand-held cube to the right hand. Thus in the first session, all phases were performed with the same hand, whereas in the second session, all test phases were performed with the hand that was not used during feedback phases.

Distortions

During the distorted feedback phase of each experimental condition, we introduced a spatial discrepancy between the hand-held cube and the visual feedback. This distortion could be an eye-centered rotation (2 conditions) or a shoulder-centered rotation (2 conditions). The two different conditions for each type of distortion were rotations in opposite directions. Measured eye and shoulder positions were used when introducing the distortions of visual feedback.

For the eye-centered distortions, we rotated the simulated position and orientation of the feedback cube around a position between the eyes (cyclopean eye). The axis of rotation was orthogonal to a vector from the cyclopean eye to the center of the current target. It lay in the plane defined by this vector and the direction of gravity. In this manner, we defined the axis of rotation relative to the required viewing direction so that the axis of rotation was the same relative to the position and orientation of the eyes for all target positions. We used the cyclopean eye rather than each eye individually to ensure that the resulting images would correspond with real objects. This means that the deformation does not really correspond with rotations around the eyes, but such rotations provide a good approximation.

For the shoulder-centered distortions, the axis of rotation was similarly defined to be orthogonal to a vector from the shoulder position (7 cm below the shoulder marker) to the center of the target. In this case, the axis of rotation was roughly stable relative to the upper arm for all target positions. The magnitude of the rotation was 4.8° for all distortions. The magnitude of the discrepancy between the hand-held cube and the feedback cube therefore depended on the distance of the hand-held cube relative to the center of rotation and varied between 3 and 5 cm. The distortions affected both the position and the orientation of the feedback cube. The simulated shape and size was always correct for the visually presented position and orientation.
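Both types of distortion can thus be sketched as a single rotation of the feedback cube's pose about an axis through the chosen center of rotation (the cyclopean eye or the shoulder). The sketch below is illustrative only: the coordinate frame (z pointing up), the sign convention of the rotation angle, and the representation of the cube's orientation as a scipy Rotation object are assumptions, not details of the authors' software.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def distorted_feedback_pose(hand_pos, hand_rot, rotation_center, target_center,
                            angle_deg=4.8, up=np.array([0.0, 0.0, 1.0])):
    """Rotate the simulated (feedback) cube about an axis through
    `rotation_center` (cyclopean eye or shoulder).

    The axis is orthogonal to the vector from the rotation center to the
    center of the current target and lies in the plane spanned by that
    vector and gravity. `hand_rot` is the cube's orientation as a scipy
    Rotation; the same rotation is applied to position and orientation.
    """
    center = np.asarray(rotation_center, dtype=float)
    pos = np.asarray(hand_pos, dtype=float)
    v = np.asarray(target_center, dtype=float) - center
    v /= np.linalg.norm(v)
    # Component of the vertical direction orthogonal to the center-to-target vector.
    axis = up - np.dot(up, v) * v
    axis /= np.linalg.norm(axis)
    R = Rotation.from_rotvec(np.radians(angle_deg) * axis)
    new_pos = center + R.apply(pos - center)
    new_rot = R * hand_rot          # the cube's orientation is rotated as well
    return new_pos, new_rot
```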

Analysis

We determined each subject's average movement endpoints after veridical and after distorted feedback for each combination of target location and direction of the distortion for each session (exposed arm, unexposed arm). The differences between these endpoints were expressed as vectors and represent the changes in endpoints caused by the distorted feedback. We determined the common rotation around the average position of the cyclopean eye or shoulder that best fits the changes of the average endpoints. The axes of rotation for this analysis (1 for the eye and 1 for the shoulder) were the same as the ones used to produce the distorted feedback. The rotation found when fitting the applied distortion (based on the subject's average eye or shoulder position) was used to quantify the amount of adaptation for each subject, type of distortion and session. To obtain the common rotation that fitted the data best, we separated each change in endpoint a⃗i into a component r⃗i that would be accounted for by a rotation (note that the same rotation for all targets i results in different vectors r⃗i, depending on the position of the target i relative to the point of rotation) and a component (the error vector e⃗i) that is not accounted for by the rotation (note that e⃗i = a⃗i − r⃗i). We determined the magnitude of the rotation for which the sum of the lengths of the error vectors, Σi |e⃗i| (summed over targets i), was minimal. The ratio between the common rotation found and the magnitude of the distortion (which was 4.8° for all distortions) gave us the percentage of adaptation. This value was used to evaluate whether there were individual differences in the extent of adaptation to eye-centered and shoulder-centered distortions and to determine the magnitude of intermanual transfer for each subject.
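A sketch of this fitting step, using the same per-target axis construction as in the distortion sketch above (the choice of optimizer, the search bounds, and the decision to apply the candidate rotation to the target positions are assumptions, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.spatial.transform import Rotation

def fit_common_rotation(changes, target_positions, rotation_center,
                        distortion_deg=4.8, up=np.array([0.0, 0.0, 1.0])):
    """Find the single rotation angle about the distortion axes that best
    accounts for the observed changes in endpoints.

    changes          : observed changes a_i (3-vectors), one per target i.
    target_positions : positions of the corresponding targets.
    rotation_center  : the subject's average cyclopean-eye or shoulder position.

    For a candidate angle, r_i is the displacement the rotation produces at
    target i, e_i = a_i - r_i, and the fitted angle minimizes sum_i |e_i|.
    Returns the fitted angle (deg) and the percentage of adaptation.
    """
    center = np.asarray(rotation_center, dtype=float)

    def predicted(theta_deg, p):
        v = p - center
        v = v / np.linalg.norm(v)
        axis = up - np.dot(up, v) * v
        axis = axis / np.linalg.norm(axis)
        R = Rotation.from_rotvec(np.radians(theta_deg) * axis)
        return center + R.apply(p - center) - p          # r_i for this target

    def summed_error(theta_deg):
        return sum(np.linalg.norm(np.asarray(a, dtype=float)
                                  - predicted(theta_deg, np.asarray(p, dtype=float)))
                   for a, p in zip(changes, target_positions))

    res = minimize_scalar(summed_error, bounds=(-15.0, 15.0), method='bounded')
    return res.x, 100.0 * res.x / distortion_deg
```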

The subjects were exposed to eye- and shoulder-centered distortions, but the changes in endpoints do not necessarily mimic rotations around the eye or around the right shoulder. We therefore determined the extent to which the changes in endpoints mimicked rotations around the eye, around the right shoulder, or around a position intermediate between the eye and shoulder (half way between the cyclopean eye and the shoulder). To quantify how well a common rotation within these hypothetical coordinate systems describes the changes in the endpoints, we averaged the changes in endpoints over subjects to get rid of as much of the random variability as possible, so that in this analysis, the error vector e⃗i reflects the systematic deviations from the hypothetical coordinate system that is evaluated.

For each type of distortion and session (exposed arm, unexposed arm), we computed the error e⃗i for all combinations of target location i and the two directions of the distortion. The magnitudes of these 16 error vectors, |e⃗i|, were used to determine how well the changes in endpoints were captured by the rotation that was fitted to the data. The length of e⃗i is the part of the response (the change in endpoints) that cannot be explained as adaptation within the hypothesized coordinate system. As a measure of the extent to which the adaptive response deviated systematically from compensation within the hypothesized coordinate system, we defined the relative unexplained response as the value of |e⃗i|/(|e⃗i| + |a⃗i|). A large value means that the systematic change in endpoints in response to a distortion has little resemblance to compensation within the coordinate system that is evaluated. Low values show that the pattern of generalization corresponds to the evaluated coordinate system. For each type of distortion and session (exposed arm, unexposed arm), the relative unexplained response was determined for a rotation centered at the eyes, for one centered at the shoulder, and for one that was intermediate between the eye and shoulder. Each relative unexplained response was based on a combination of the data of eight target locations (i) and two directions of the distortion.
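A sketch of this measure, under the assumption that the 16 values are averaged into a single number per distortion, session, and hypothesized coordinate system (variable names are illustrative):

```python
import numpy as np

def relative_unexplained_response(changes, predicted):
    """Relative unexplained response for one hypothesized coordinate system.

    changes   : the 16 observed changes in endpoints a_i (8 target locations
                x 2 distortion directions), averaged over subjects.
    predicted : the corresponding changes r_i produced by the best-fitting
                common rotation within the hypothesized coordinate system.

    For each i, e_i = a_i - r_i and the value is |e_i| / (|e_i| + |a_i|).
    """
    values = []
    for a, r in zip(changes, predicted):
        a = np.asarray(a, dtype=float)
        e = a - np.asarray(r, dtype=float)
        values.append(np.linalg.norm(e) / (np.linalg.norm(e) + np.linalg.norm(a)))
    return float(np.mean(values))
```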

During the unexposed arm condition, subjects performed the task with their left arm. We therefore also evaluated hypothetical coordinate systems linked to the left arm. We estimated the position of the left shoulder by taking the mirror-symmetric position of the right shoulder in the midsagittal plane. We performed paired t-tests (paired over all combinations of target location i and direction of the distortion) to determine whether the average relative unexplained responses that we obtained differed from each other. We considered a P value of <0.05 to indicate a significant difference.
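These two steps could be implemented as in the following sketch, which assumes a coordinate frame in which the x-axis points to the subject's right with the midsagittal plane at x = 0 and uses a standard paired t-test routine (the frame and all names are assumptions):

```python
import numpy as np
from scipy.stats import ttest_rel

def mirror_left_shoulder(right_shoulder_xyz):
    """Estimate the left shoulder position as the mirror image of the right
    shoulder in the midsagittal plane (here the plane x = 0)."""
    x, y, z = right_shoulder_xyz
    return np.array([-x, y, z])

def compare_models(unexplained_a, unexplained_b, alpha=0.05):
    """Paired t-test over the 16 relative unexplained responses
    (8 target locations x 2 distortion directions) of two candidate models."""
    t, p = ttest_rel(unexplained_a, unexplained_b)
    return t, p, p < alpha
```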

RESULTS

Figures 1 and 2 show the averages of the subjects' movement endpoints for each target position in two different formats. The data are shown for the exposed arm, separately for the two types of distortions. The changes are approximately in the direction of the applied distortion, showing that the distorted feedback results in a uniform change that corresponds with the distortion. The subjects were exposed to rotations of 4.8° for both types of distortions. For the exposed arm, the average common rotation component of the change was 2.1 ± 0.4° (mean ± SD), corresponding to 43 ± 10.5% adaptation. The solid lines in Fig. 2 illustrate this common component. Figure 3 shows individual differences in the magnitude of adaptation for both types of distortions. The subjects' adaptations ranged from 20 to 70%. Subjects' adaptations for the two types of distortions are highly correlated (r = 0.77, P = 0.0004), showing that a subject's tendency to adapt is not preferentially linked to one type of distortion.

FIG. 1.

Projections of the average movement endpoints of the exposed arm during test trials. Averages for each target position are shown for both directions of the 2 types of distortion. Left: the positions of the 8 target cubes relative to the subject. Top and bottom (viewed from the front and above, respectively): the target positions overlap. Middle and right columns: the average endpoints for the eye-centered and shoulder-centered distortion, respectively. The axes in these graphs indicate the distance relative to the average position of the cyclopean eye or the distance relative to the shoulder. The lines show the shifts in the average endpoints after exposure to the distortion (from the average endpoints during post veridical test phases). Note that these latter averages deviate from the centers of the target cubes due to systematic biases in the perceived position of the target and of the unseen hand (Van Beers et al. 1998). Ellipses show the average (center) and the between-subject variability (SDs in the direction of highest variability and in the orthogonal direction) of the endpoints during post distortion phases.

FIG. 2.

Average movement endpoints for the exposed arm. The figure displays the same data as in Fig. 1 in the plane of the distortion (after transforming all positions to the coordinate system that was used to induce the distortion). The vertical axis indicates the distance relative to the average position of the cyclopean eye or the average position of the shoulder. The horizontal axis indicates the average position in the orthogonal direction with respect to the endpoints after veridical feedback. The horizontal solid lines show the shifts in the average endpoints after exposure to the distortion. Ellipses show the average (center) and the between-subject variability of the shifts in endpoints during post distortion phases (the lengths of the ellipses' axes correspond to the SDs in the direction of highest variability and in the orthogonal direction). The left and right dotted lines indicate the size of the distortions in both directions (corresponding to rotations of –4.8 and 4.8°, respectively). The oblique solid lines display the size of the common rotation that was found after fitting an eye-centered or shoulder-centered rotation to the endpoints.

FIG. 3.

Each subject's percentage adaptation for both types of distortions. - - -, equal adaptation for the 2 distortions.

Figures 4 and 5 show the average movement endpoints for the unexposed arm. The changes in endpoints were much smaller for the unexposed arm (an average of 13% adaptation, SD 6.3%) and did not differ between the two types of distortions. The adaptation was smaller for the unexposed arm for all subjects, irrespective of the distortion (see Fig. 6). Figure 6 shows that roughly one-third of the adaptation found for the exposed arm transferred to the unexposed arm.

FIG. 4.

Projections of the average movement endpoints of the unexposed arm. For details, see the legend of Fig. 1.

FIG. 5.

Average movement endpoints for the unexposed arm. For details, see the legend of Fig. 2.

FIG. 6.

Each subject's percentage adaptation for the exposed and unexposed arm for the 2 types of distortions. - - -, equal adaptation for the 2 arms.

Figure 7 shows that there were clear differences in the extent to which the hypothesized coordinate systems could explain the changes in endpoints induced by the distortions. For the exposed arm, the model that corresponds to the applied distortion explains the data best. Paired t-tests (see Table 1) indicate that fitting an eye-centered model to the data obtained for the eye-centered distortions results in a significantly lower relative unexplained response than fitting any of the other hypothesized coordinate systems. The data obtained for the shoulder-centered distortion are explained significantly better by fitting a coordinate system centered on the right shoulder than by fitting any of the other hypothesized coordinate systems. The latter is also true for the changes in endpoints that were found for the unexposed arm. For the unexposed arm and the eye-centered distortion, the unexplained response is significantly lower for the eye-centered model than for the right-shoulder-centered model or the intermediate coordinate system. However, the difference between the eye-centered model and coordinate systems with an origin at the left shoulder or intermediate between this shoulder and the eyes did not reach significance (the values for the latter 2 are even slightly lower than for the eye-centered model). Thus fitting the model that corresponds to the applied distortion always resulted in a lower unexplained response for the exposed arm. This was also largely true for the unexposed arm, but the differences between the models are less clear for the eye-centered distortion. This is possibly partly because the relative unexplained response is generally much higher for the unexposed arm than for the exposed arm.

FIG. 7.

Relative unexplained response: the changes in endpoints that cannot be accounted for by a common rotation within the hypothetical framework. ○, the relative unexplained response obtained for the eye-centered distortions (averaged across the 2 directions and 8 target positions). ▪, the results for the shoulder-centered distortions. The error bars represent the SE across the average of the relative unexplained response for the 2 directions and 8 target positions. Left and right: the results for the exposed and unexposed arm, respectively.

TABLE 1.

Statistical analysis on the relative unexplained response

DISCUSSION

In this study, we investigated subjects' ability to adapt goal-directed movements to eye- and shoulder-centered distortions of visual feedback. Our subjects used their unseen hand to align a hand-held cube with a visual simulation of such a cube. Between test phases they were exposed to either veridical or distorted visual information about the position and orientation of the hand-held cube. Subjects received feedback during eight movements and were subsequently tested on eight target positions other than the ones for which feedback had been presented. In separate sessions, we tested the hand that was used during exposure to the feedback and the one that was not. Comparing test phase movement endpoints after distorted visual feedback with ones after veridical feedback revealed changes both for the exposed and the unexposed arm. The results show that subjects were able to quickly register the imposed mismatches between vision and kinesthesia and to alter their visuomotor control to compensate for part of the distortion. Intermanual transfer of adaptation was present for both types of distortions but was not complete for either.

The pattern of pointing errors in previous studies indicated that intended arm movement endpoints (visually perceived target locations) are initially defined relative to the eye (McIntyre et al. 1997; van den Dobbelsteen et al. 2001). Moreover, the variable errors found in these pointing studies did not depend on the hand that was used or its starting position, which is consistent with the errors arising in an eye-centered representation of the intended arm movement endpoint that is independent of the arm movement. Vetter et al. (1999) proposed that the changes in subjects' pointing behavior after laterally shifted feedback reflected adjustments within such an eye-centered reference frame because the pattern of generalization was best captured by a rotation centered near the eyes. Consistent with the results of Vetter et al. (1999), we find that when subjects adapt to eye-centered distortions, the changes in endpoints are best modeled by a rotation around the eyes.

However, if the visuomotor system had achieved this adaptation by a modification at the level of the eyes (i.e., before the divergence point for right and left arm control), then the changes in endpoints should have been equal for both arms. This was not the case. The transfer of adaptation to the unexposed arm was incomplete. These findings are in line with those of prism adaptation studies. The eye-centered distortions that we used in the present study correspond to prism-induced displacements, and a lack of intermanual transfer is a well-documented finding in that paradigm (Choe and Welch 1974; Hamilton 1964; Harris 1963; Taub and Goldberg 1973; Wallace and Redding 1979; Welch et al. 1974). If the adaptations were completely unrelated to the arm, we would not expect the arm that was tested to matter. Thus adaptation involves adjustments of parameters that are linked to the arm even if an eye-centered reference frame best captures the pattern of generalization.

There are pointing studies that suggest that the transformation of information about target location into a motor command involves the specification of the endpoint of the movement in a reference frame centered at the shoulder (Berkinblit et al. 1995; Flanders et al. 1992; McIntyre et al. 1997, 1998; Soechting and Flanders 1989; Soechting et al. 1990). Such a reference frame would incorporate arm specific kinesthetic information, and adjustments would occur at a level of visuomotor processing that is specific to either the left or the right arm. The lack of intermanual transfer is consistent with adjustments that are linked to the arm in such a manner. Moreover, our subjects were able to adapt appropriately to shoulder-centered distortions. However, if this adaptation had really been related to the shoulder of the exposed arm, then it is not clear why part of the adaptation transferred to the unexposed arm. Even if adaptation had been distributed between eye- and shoulder-centered rotations, we would have expected only the eye-centered component to transfer, so that the data for the unexposed arm would best fit a rotation related to the eye even if the distortion was centered at the shoulder, which was not the case.

A direct comparison of the patterns of generalization that are induced by the eye- and shoulder-centered distortions also argues against the hypothesis that adaptation to distortions is linked to specific anatomical substrates. The values of the relative unexplained response are of equal magnitude for both distortions when fitted with the corresponding models, showing that subjects were equally able to adapt to these two distortions. If adaptation were restricted to adjustments within a specific reference frame (which could also involve a reference frame that was not considered in this study), one would expect the pattern of generalization to have been biased toward this reference frame. However, we do not observe systematic deviations of the changes in endpoints from the reference frames that were used to induce the distortions. We therefore conclude that this is not the case.

The exact nature of the parameters that are changed during visuomotor adaptation is not yet clear. The spatial information required for visuo-kinesthetic re-alignment is provided by different sensors and encoded in different spatial parameters (e.g., joint angles, muscle stretch, limb orientation). To be able to adapt movement endpoints to altered visual feedback of the hand, the imposed distortion must be interpreted as changes in these internally specified parameters (Clower and Boussaoud 2000; Hay et al. 1971; van den Dobbelsteen et al. 2003). The adaptation that we found for the exposed arm shows that rotations around the eye and around the shoulder can be interpreted in this manner. However, for both types of distortions, the spatial characteristics of intermanual transfer indicate that the adjusted parameters are not directly linked to the anatomical substrates (eye or shoulder orientation) that correspond to the distortion. Part of the adjustments was in the visuomotor processes that are shared by both arms as shown by the transfer of adaptation, but a large part was linked to the exposed arm. Perhaps the adjustments are distributed between multiple sensorimotor transformations that link visual to kinesthetic information (Kitazawa et al. 1997; Redding and Wallace 1996; Rossetti et al. 1995).

Electrophysiological recordings from single neurons support the view that the brain makes use of multiple spatial codes and indicate that the parietal cortex is central to the construction of these representations. Neurons in the parietal cortex are modulated by retinal, eye orientation, and arm-related signals (Andersen et al. 1985; Batista et al. 1999; Buneo et al. 2002; Lacquaniti et al. 1995). A view that emerges is that individual neurons are not dedicated to the coding of spatial information in a single reference frame, but that each neuron contributes to several spatial representations that are distributed over populations of neurons. Subsets of neurons may contribute to multiple representations of space by weighting the convergence of activity differently (Burnod et al. 1999). This raises the interesting possibility that the weighting of different sensory signals changes during adaptation and that this influences movement endpoint specification within multiple frames of reference. In such a coding scheme, the apparent independence of different frames of reference that is reported in psychophysical studies is an emergent property at the behavioral level, whereas the neural mechanisms underlying the different reference frames do not operate independently from each other. This is compatible with our finding that adaptation to distortions within one frame of reference is not confined to adjustments at the corresponding level. We conclude that subjects are able to adapt natural reaching movements to both eye- and shoulder-centered distortions of visual feedback and that during adaptation multiple parameters that link visual to kinesthetic information are altered.

GRANTS

This research was supported by the Netherlands Organization for Scientific Research (NWO/MAGW 402-01-017-D).

Footnotes

  • The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

REFERENCES
