Journal of Neurophysiology

Visual-Shift Adaptation Is Composed of Separable Sensory and Task-Dependent Effects

M. C. Simani, L. M. M. McGuire, P. N. Sabes

This article has a correction.


Visuomotor coordination requires both the accurate alignment of spatial information from different sensory streams and the ability to convert these sensory signals into accurate motor commands. Both of these processes are highly plastic, as illustrated by the rapid adaptation of goal-directed movements following exposure to shifted visual feedback. Although visual-shift adaptation is a widely used model of sensorimotor learning, the multifaceted adaptive response is typically poorly quantified. We present an approach to quantitatively characterizing both sensory and task-dependent components of adaptation. Sensory aftereffects are quantified with “alignment tests” that provide a localized, two-dimensional measure of sensory recalibration. These sensory effects obey a precise form of “additivity,” in which the shift in sensory alignment between vision and the right hand is equal to the vector sum of the shifts between vision and the left hand and between the right and left hands. This additivity holds at the exposure location and at a second generalization location. These results support a component transformation model of sensory coordination, in which eye–hand and hand–hand alignment relies on a sequence of shared sensory transformations. We also ask how these sensory effects compare with the aftereffects measured in target reaching and tracking tasks. We find that the aftereffect depends on both the task performed during feedback-shift exposure and on the testing task. The results suggest the presence of both a general sensory recalibration and a task-dependent sensorimotor effect. The task-dependent effect is observed in highly stereotyped reaching movements, but not in the more variable tracking task.


Artificial shifts in the visual feedback of the arm, due either to optical prisms or to virtual feedback systems, result in a rapid adaptation of visually guided reaching (Ghahramani et al. 1996; Held and Gottlieb 1958; von Helmholtz 1925). For convenience, we will refer to both of these experimental paradigms as visual-shift adaptation. Although visual-shift adaptation is one of the best-studied examples of sensorimotor learning, there remains substantial disagreement over some of its basic behavioral properties, and almost nothing is known about the underlying physiological mechanisms. A major reason for these difficulties is that the adaptive response to shifted visual feedback is multifaceted, and there have not been adequate tools for characterizing and quantifying the component adaptive responses. Here, we argue that this form of adaptation is made up of at least three quantifiable component responses: two forms of sensory recalibration and a task-dependent sensorimotor effect.

Since the pioneering work of Held and colleagues (Held and Freedman 1963; Held and Gottlieb 1958), the principal measure of adaptation has been the reach aftereffect, i.e., the difference in endpoint errors between pre- and postexposure reaches. The aftereffect is the only measure of adaptation used in most recent studies of adaptation to shifted virtual visual feedback (Ghahramani et al. 1996; Held and Durlach 1993; Kitazawa et al. 1997; Krakauer et al. 2000; Vetter et al. 1999), and it has been the exclusive measure used in physiological studies of this effect (Baizer and Glickstein 1974; Baizer et al. 1999; Kurata and Hoshi 1999; Martin et al. 1996; Weiner et al. 1983; although for a counter example see Lee and van Donkelaar 2006). As we will argue here, however, the reach aftereffect is unlikely to be an accurate measure of any individual component of adaptation and is therefore of limited use for comparing to specific physiological changes that accompany visual-shift adaptation. Better measures of adaptation are required to attain the goals of identifying the neural circuits that are responsible for the individual components of visual-shift adaptation and determining how sensorimotor feedback drives learning within and across these components.

Here we present a psychophysical approach to measuring and analyzing the components of visual-shift adaptation. The main methodological tool is a set of alignment tasks that are designed to measure the sensory calibration between the hand and eye or between the right and left hands (van Beers et al. 1996, 2002). Subjects are asked to align the fingertip of their unseen hand either with a visual target or with the other unseen hand. Because no time constraints are imposed and subjects are free to adjust their hand position until satisfied, errors in this task reflect miscalibration of the relevant sensory alignment, rather than any movement-specific errors. Subjects performed these alignment tasks both before and after an exposure period, in which they perform visually guided arm movements with their right hand while viewing shifted visual feedback of that hand. The change in performance between the pre- and postexposure tests is our measure of sensory recalibration, i.e., the component of the overall visual-shift adaptation attributable to changes in sensory localization or intersensory calibration.

We perform two experiments using this tool. In the first experiment, we study the nature of sensory recalibration. We demonstrate that sensory recalibration can be decomposed into shifts in two component transformations, one between eye and body (visual localization) and one between body and arm (proprioceptive localization). The vector sum of these two effects is shown to be equal to the overall shift in alignment measured between the eye and arm. This additivity holds at both the location in the workspace where subjects were exposed to the shifted feedback and at a generalization location. In the second experiment, we perform a similar analysis of the traditional reach aftereffect. Reaching requires sensory localization of the target as well as other downstream computations, many of which are task specific, such as trajectory planning. By comparing the reach aftereffect to our alignment measure of sensory recalibration, we attempt to identify which of these computations are affected by the visual shift. We demonstrate that the reach and alignment aftereffects agree when exposure to the visual shift occurs during a nonreaching task. In contrast, when exposure occurs during reaching, the reach aftereffect is larger and the two effects are not significantly correlated. These results suggest that the reach aftereffect can be decomposed into a general sensory recalibration effect and other effects that are exposure-task dependent.



This study was approved by the UCSF Committee on Human Research. Twenty-eight right-handed participants gave written informed consent and were paid for their participation. Subjects were naive to the purpose of the experiment. All subjects were healthy, ranged from 20 to 34 yr of age, and had normal or corrected-to-normal vision.

Twelve subjects (10 female, 2 male) participated in two experimental sessions for experiment 1. Eleven subjects (8 female, 3 male) participated in experiment 2: 10 in two experimental sessions and one only in the tracking exposure session (subsequently described).

When subjects participated in two experimental sessions, the sessions were always separated by ≥24 h.

Experimental setup

The experiments made use of a virtual reality setup, illustrated in Fig. 1. Subjects were seated with their right arm resting on an 8-mm-thick horizontal table. The right wrist and index finger were immobilized in the neutral position by a brace and splint.

FIG. 1.

Experimental setup. A: side view of the virtual feedback setup. B: top view of the table with experimental landmarks and a sample reach path. Exposure box was not displayed to the subject.

For experiment 1, the left arm was located below the table and, when in use, was placed with palm facing up and the index finger touching the bottom surface of the table. The left wrist and index finger were not restrained. In experiment 2, subjects held a mouse with their left hand, which they used to signal the start or end of certain trials (see following text).

In all experiments the torso was lightly restrained by a harness attached to the chair back.

Subjects viewed visual objects that were displayed on a rear-projection screen by a 1,024 × 768 pixel liquid crystal display projector. The position of a horizontal mirror was calibrated so that images on the projection screen appeared to lie just above the plane of the table. In addition, the room lights were dimmed and subjects' view of their arms and shoulders was blocked by the mirror and a drape. The positions of both index fingers were tracked at 120 Hz with an infrared tracking system (Optotrak, Northern Digital, Waterloo, Ontario, Canada). This setup allowed us to compute the hand position and velocity on-line and to provide real-time visual feedback of the hand. In both experiments, visual feedback of the position of the right hand was given in the form of a 1-cm white disk centered at the tip of the index finger or shifted from this location by a fixed vector (the visual feedback shift).

Experimental design

Experiments were organized into two or three blocks of trials. Each block consisted of an exposure phase and a test phase. During the exposure phase, subjects performed repetitions of a single exposure task that included (possibly shifted) visual feedback of the right index finger. During the test phase, subjects performed a variety of test tasks in which no visual feedback was available. Periodic exposure trials were also added to the test phase to maintain adaptation. In the first block, feedback during the exposure trials was unshifted and the test trials measured the preshift baseline performance. The second block was the shift block, with an 80-mm leftward or rightward visual feedback shift during exposure trials. Experiment 2 also included a third block with unshifted feedback that provided a second measure of baseline performance. This third block was included to remove any shift-independent drift in performance (e.g., due to fatigue) that might differ between the exposure tasks (see following text). The aftereffects that constitute the main experimental measures in this paper were computed as the difference in test performance between the shift block and the baseline block(s).

In the following sections we describe the different test and exposure tasks that were used in these experiments, followed by the specific trial sequences for each experiment.

Reach exposure task.

Subjects were required to reach accurately with their right arm to a visual target starting from a fixed unseen start location. For each subject, the start location was chosen by projecting the midpoint between the eyes into the plane of the table and then moving approximately 150 mm in the sagittal (+y) direction. At the beginning of a reach trial, subjects were guided to the start position (“start” in Fig. 1) without explicit visual feedback of the location of the fingertip position or the start position (the “arrow-field” method; Sober and Sabes 2005). Specifically, an array of 16 arrows appeared at a randomized location in the workspace. The direction and magnitude of the arrows were adjusted on-line to indicate the direction and relative distance from the right fingertip to the start location. When the fingertip had moved to within 5 mm of the start location, the arrows disappeared.
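The arrow-field guidance described above can be sketched as follows. This is an illustrative reconstruction, not the published implementation: the grid layout, the 0.5 gain on arrow length, and the function signature are all assumptions.

```python
import numpy as np

def arrow_field(finger, start, grid, gain=0.5):
    """Illustrative sketch of arrow-field guidance (after Sober and Sabes 2005).

    Every arrow in a grid of base positions points from the fingertip toward
    the start location, with length proportional to the remaining distance
    (scaled by an assumed `gain`). Arrows disappear once the fingertip is
    within 5 mm of the start location.

    Returns a list of (base, tip) arrow segments and a done flag.
    """
    delta = np.asarray(start, dtype=float) - np.asarray(finger, dtype=float)
    done = np.linalg.norm(delta) < 5.0  # arrows vanish within 5 mm of start
    arrows = [(tuple(p), tuple(p + gain * delta)) for p in np.asarray(grid, dtype=float)]
    return arrows, done
```

On each frame, the fingertip position from the tracker would be passed in and the arrows redrawn, so that the field continuously indicates direction and relative distance to the unseen start location.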

As soon as the arrow field disappeared, a reach target appeared in the form of an open circle, 20-mm radius, at a random location within a 100 × 60-mm box centered 300 mm sagittally from the start location (“exposure box” in Fig. 1). For two subjects the exposure box could not be reached comfortably, and so a reach distance of 250 or 270 mm was used. After a variable delay of 500–1,500 ms, the target flashed and a tone sounded, indicating that the movement should begin. Subjects were instructed to then move rapidly and accurately to the target. During the movement, the feedback disk was illuminated when the instantaneous tangential velocity of the fingertip dropped to either 95% (early visual feedback) or 15% (late visual feedback) of the peak velocity value for that movement. To complete the movement, the center of the feedback disk had to be within 5 mm of the center of the reach target and the fingertip had to stop moving. Typically, this required corrective movements after the end of the primary reach. Finally, subjects were required to hold the final position, with visual feedback, for 500 ms after completion of the movement. The target and the feedback were extinguished at the end of the trial.

Tracking exposure task.

Subjects were required to track a moving target with the tip of their right index finger with continuous visual feedback. Subjects first moved their right hand to the “neutral position” to the right of the experimental workspace. This triggered the appearance of a filled, green 20-mm-diameter circle at the start location for the trial, which was chosen to be the point that the target reached approximately 667 ms into its trajectory (see following text). Subjects then moved their right index finger to this start location (without visual feedback) and clicked the mouse with their left hand when they were ready to begin tracking. At this point, visual feedback of the fingertip was illuminated and the target appeared in the form of an open green 20-mm-diameter circle. The target then began moving along the target trajectory. By design, the target circle intersected the start circle 667 ms into its trajectory, and the two circles merged into a single filled target circle that continued along the target trajectory. This arrangement allowed subjects to estimate the initial position and velocity of the target before it intersected the start position, facilitating accurate tracking at the beginning of the trial. Subjects were instructed to try to keep their index finger on the filled target circle as it moved along its trajectory. Targets followed a randomly generated Lissajous trajectory: sinusoidal x- and y-components with random phases and frequencies chosen uniformly between 0.2 and 0.3 Hz. Trajectories were centered in the exposure box and scaled to fit within the box. The total duration of the trajectory, from the mouse click, was 6 s.
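A trajectory of this type can be generated as in the following sketch. The 120-Hz sampling rate matches the tracker; the box dimensions and centering follow the exposure-box description, but the function signature and default center are illustrative assumptions.

```python
import numpy as np

def lissajous_trajectory(duration=6.0, rate=120.0, box=(100.0, 60.0),
                         center=(0.0, 300.0), rng=None):
    """Generate a random Lissajous target path (illustrative sketch).

    x- and y-components are sinusoids with random phases and frequencies
    drawn uniformly from 0.2-0.3 Hz, scaled to fit within the exposure box
    (box half-widths give the sinusoid amplitudes) and centered on `center`.
    Returns an (n_samples, 2) array of positions in mm.
    """
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(0.0, duration, 1.0 / rate)
    fx, fy = rng.uniform(0.2, 0.3, size=2)          # frequencies (Hz)
    px, py = rng.uniform(0.0, 2 * np.pi, size=2)    # random phases
    x = center[0] + (box[0] / 2) * np.sin(2 * np.pi * fx * t + px)
    y = center[1] + (box[1] / 2) * np.sin(2 * np.pi * fy * t + py)
    return np.column_stack([x, y])
```

The start location for a trial would then be read off the generated path at approximately 667 ms (sample 80 at 120 Hz).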


No visual feedback was given during any of the test tasks subsequently described. All test tasks were performed at two locations: the exposure target, at the center of the exposure box, and a second generalization target, also located 300 mm from the reach start location but 45° to the right of midline (see Fig. 1). There are two motivations for including a generalization target. First, because the exposure tasks are also used as test tasks, the aftereffects measured at the exposure location could include effects other than those typically considered to be forms of sensorimotor adaptation, such as the use of remembered visual or proprioceptive positions from exposure trials. The generalization target trials control for such effects. Second, because we expect that the aftereffect vectors will differ between the two target locations (Ghahramani et al. 1996; Krakauer et al. 2000; Vetter et al. 1999), the inclusion of the second target provides a more stringent test of our hypotheses.

Reach test.

Reach test trials were identical to reach exposure trials except for two differences. First, no visual feedback was given at any point in the trial. Second, test reaches were considered completed when the fingertip stopped moving, as long as the finger had moved ≥150 mm from the start location.

Tracking test.

Tracking test trials were identical to tracking exposure trials except that visual feedback was not given at any point during the trial.

Right-to-visual alignment test.

For the right-to-visual alignment test, subjects were asked to accurately align their unseen right fingertip with a visual target. The goal is to measure changes in the calibration between visual localization and the felt position of the right hand, i.e., eye–arm recalibration. This and the following two alignment tasks were based on the methods of van Beers et al. (1998). We wanted these tests to measure changes in sensory calibration, independent of stereotyped behaviors such as reaching. This was done in part by allowing subjects as much time as needed to complete each trial.

At the beginning of each trial, the right fingertip was guided by the arrow-field method to a start position located near the target (red arrows indicated right-hand movement). When the fingertip reached the start location, a 20-mm-diameter red disk appeared at the target location. After a variable delay (250–750 ms), a tone instructed subjects to place their right index fingertip on the visual target as accurately as possible. Trials were completed when subjects either verbally indicated to the experimenter that they were satisfied with the alignment (experiment 1) or clicked a mouse button with their left hand (experiment 2).

The selection of the start position is important in this test because it could bias the alignment endpoint. To avoid such a bias, we chose each start point uniformly from an annulus centered on the endpoint of the previous right-to-visual alignment trial (and similarly for the two alignment trials subsequently described). For the very first such trial, the distribution was centered on the actual target location. In experiment 1, the annulus had a 30-mm inner radius and 50-mm outer radius. In experiment 2 radii of 40 and 60 mm were used. These values represent a compromise between starting too close to the target, in which case subjects might be reluctant to move at all, and starting too far, in which case the motor response might be too similar to the stereotyped reaching task.

Left-to-visual alignment test.

The second type of alignment trial measures adaptive changes in the calibration between visual localization and the felt position of the (unexposed) left hand. The protocol for left-to-visual trials was identical to right-to-visual trials except that green arrows and a green target indicated that the left hand was to be used instead of the right hand.

Right-to-left alignment test.

The third type of alignment trial measures adaptive changes in the calibration between proprioception of right and left hands. At the beginning of each trial, the left hand was guided to the target location on the underside of the table with a green arrow field. The right hand was then guided with red arrows to the start location on the top surface of the table. After a variable delay, an audible “go” tone was played. Subjects were instructed to then adjust the position of their right arm, without moving the left arm, until the right and left fingertips were felt to be aligned. Trials were completed when subjects verbally indicated to the experimenter that they were satisfied with the alignment. No visual target was provided during these trials. Furthermore, subjects were instructed to close their eyes throughout the alignment period, and compliance was monitored on every trial by the experimenter.


The goal of this experiment was to analyze the sensory recalibration that follows exposure to shifted visual feedback. In particular, we wanted to test various model predictions (see following text) for the relationship between the adaptive recalibrations seen in the right-to-visual, left-to-visual, and right-to-left alignment tests. Experiment 1 consisted of two blocks: the preexposure baseline block and the shift block. A brief practice period was also conducted at the beginning of each session.

Exposure phase.

The exposure phase of experiment 1 consisted entirely of reach exposure trials at the exposure target location. In Block 1, the exposure phase had 15 trials with unshifted feedback. In Block 2 the exposure phase had 50 shifted-feedback trials, with the shift ramped up to its full 80-mm value over the first 15 trials. The timing of the visual feedback (early or late) and the direction of the feedback shift (80 mm, leftward or rightward) were varied across sessions. This 2 × 2 design yielded a total of four possible exposure conditions. The 12 subjects participating in experiment 1 were equally distributed across these four exposure conditions during the first experimental session. For the second experimental session, each subject was exposed to conditions of feedback timing and shift direction opposite to those in their first session.

Test phase.

The test phase in experiment 1 consisted of four trial types, the reach test and the three alignment tests. A single test sequence consisted of eight trials in randomized order, with one repetition of each of the four tests at both the exposure target and the generalization target. To maintain adaptation, each test sequence was followed by two exposure trials of the same type used in the exposure phase. The test phase of each block consisted of 12 test sequences, for a total of 120 trials (96 test trials and 24 refresh exposure trials).


Experiment 1 focuses on the sensory effects of visual-shift adaptation. In experiment 2 we ask whether these sensory effects can be dissociated from other components of the adaptive response, such as changes in motor planning or execution. Because these latter components are likely to be task dependent, we hypothesized that the relative magnitudes of these two types of responses would depend on the task performed during the exposure period.

Exposure phase.

In experiment 2 there were two types of sessions, differing only in the exposure task: in reach exposure sessions all exposure trials were reach trials with late visual feedback and in tracking exposure sessions all exposure trials were tracking trials.

In both cases, the exposure phase in Block 1 had 15 trials with unshifted feedback, and the exposure phase in Blocks 2 and 3 had 50 trials. In the latter cases, the feedback shift was ramped up (Block 2) or down (Block 3) over the first 15 trials of the exposure phase.

Test phase.

Three test tasks were used in this experiment: the reach, tracking, and right-to-visual alignment tests. During each test sequence, subjects performed six trials in randomized order, with one repetition of each of the three tests at both exposure and generalization targets. As in experiment 1, two additional refresh exposure trials were added after each test sequence. The test phase of each block consisted of 10 test sequences, for a total of 80 trials (60 test trials and 20 refresh exposure trials). A brief practice period was also conducted at the beginning of each session.

Data analysis


For alignment trials, the endpoint was defined as the last time the tangential velocity fell to <2 mm/s. For the reach task, we wanted to focus on the primary reaching movement and to discard the secondary corrective movements that were sometimes seen. Therefore the movement end was defined as the first time after the velocity peak that the velocity fell to <2 mm/s. We also performed an off-line visual inspection of every reach trajectory. In cases where a clear corrective submovement began before the velocity fell to this criterion, the endpoint was set to the time of the minimum in the velocity profile that separated the two submovements.
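The two endpoint rules above can be sketched as follows. Function names are illustrative; the manual repair of reaches with early corrective submovements (setting the endpoint to the velocity minimum between submovements) is omitted here, since it relied on visual inspection.

```python
import numpy as np

def alignment_endpoint(vel, thresh=2.0):
    """Index of the last sample where tangential velocity (mm/s) is below 2 mm/s."""
    below = np.flatnonzero(vel < thresh)
    return int(below[-1]) if below.size else len(vel) - 1

def reach_endpoint(vel, thresh=2.0):
    """Index of the first sample after the velocity peak where velocity drops
    below 2 mm/s, discarding later corrective submovements."""
    peak = int(np.argmax(vel))
    below = np.flatnonzero(vel[peak:] < thresh)
    return peak + int(below[0]) if below.size else len(vel) - 1
```

Applying both rules to the same velocity profile makes the distinction concrete: a small corrective bump after the primary reach moves the alignment endpoint later but leaves the reach endpoint at the end of the primary movement.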


For each test and target, we computed an adaptation aftereffect by subtracting the average positional error for the baseline block(s) from the average error for the shift block. For the alignment and reach tests, the positional error was defined as the vector difference between the movement endpoint (as defined earlier) and the target location. For the tracking task, the positional error was measured as the average vector difference between the fingertip and target positions. This average was computed over the final 4 s of each trial, after correcting for any lag in the fingertip position. (A nonnegative lag value was estimated by minimizing the sum-squared-error between the fingertip position and the lagged target position during the last 4 s of the trial.)
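The lag-corrected tracking error can be computed as in the following sketch. The grid search over integer sample lags and the 0.5-s search bound are assumptions; the paper specifies only that a nonnegative lag was chosen to minimize the sum-squared error over the last 4 s.

```python
import numpy as np

def tracking_error(finger, target, rate=120.0, max_lag_s=0.5):
    """Mean fingertip-minus-target error vector over the final 4 s of a
    tracking trial, after removing the nonnegative lag that minimizes the
    sum-squared error between the fingertip and the lagged target.

    `finger` and `target` are (n_samples, 2) arrays sampled at `rate` Hz.
    The 0.5-s bound on the lag search is an assumed parameter.
    """
    n_tail = int(4.0 * rate)            # samples in the final 4 s
    f = finger[-n_tail:]
    n = len(target)
    lags = range(int(max_lag_s * rate) + 1)
    # sum-squared error between the fingertip and each candidate lagged target
    sse = [np.sum((f - target[n - n_tail - lag : n - lag]) ** 2) for lag in lags]
    best = int(np.argmin(sse))
    return (f - target[n - n_tail - best : n - best]).mean(axis=0)
```

With a fingertip trace that follows the target at a fixed delay plus a constant offset, the procedure recovers the offset as the positional error.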

For each test and target, we computed the SE of the reach endpoint vectors in each block. The SEs for the adaptation aftereffects are then given by the sum of the endpoint SEs for the two blocks. In addition, a permutation test (Good 2000) was used to test for significant adaptation (aftereffect >0, P < 0.05) of each test at each target.
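A resampling test of this kind can be sketched as follows. The exact form used in the paper is not specified beyond the citation, so the two-sample label-shuffling scheme and the length of the mean-difference vector as test statistic are assumptions.

```python
import numpy as np

def permutation_test_aftereffect(base, shift, n_perm=10000, rng=None):
    """Permutation test (illustrative sketch) for a nonzero 2-D aftereffect.

    `base` and `shift` are (n_trials, 2) arrays of positional errors from the
    baseline and shift blocks. Block labels are shuffled and the length of the
    mean-difference vector is recomputed; the p-value is the fraction of
    shuffles whose statistic meets or exceeds the observed one.
    """
    rng = np.random.default_rng() if rng is None else rng
    obs = np.linalg.norm(shift.mean(axis=0) - base.mean(axis=0))
    pooled = np.vstack([base, shift])
    n = len(base)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        stat = np.linalg.norm(pooled[idx[n:]].mean(axis=0) - pooled[idx[:n]].mean(axis=0))
        count += stat >= obs
    return (count + 1) / (n_perm + 1)  # add-one correction for finite shuffles
```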

To facilitate direct comparison of aftereffects following leftward and rightward visual shifts, the sign of aftereffects was inverted (reflected about the origin) for cases where the visual shift is in the rightward direction. Because rightward shifts should yield leftward (−x) aftereffects and vice versa, this convention means that an adaptive aftereffect will always have a positive component along the x-axis. This convention is used in all figures and analyses. Finally, we will use the symbols REACH, TRACK, RV, LV, and RL to refer to the aftereffects measured with the reach, tracking, right-to-visual, left-to-visual, and right-to-left test tasks, respectively.

Models of intersensory calibration

Consider the computations required to perform the right-to-visual alignment task. Subjects must compare the visually perceived location of the target to the felt position of the arm, with the latter based on proprioception, efference copy, and other nonvisual sensory cues. Some form of sensory transformation must be performed to make this comparison.

One model for this process is a series of component sensory transformations between eye, body, and arm-based representations, the “component transformation model” (Fig. 2A). In this model, the visual representation of the target Xvis might first be converted into a body-centered representation Xbody by integrating information such as the position and orientation of the eye and head. Next, Xbody would be transformed into an arm-based representation of the target location Xright, which could then be compared directly with the felt position of the right arm. Alternatively, the comparison could be made in the body-centered or visual representations. In any of these cases, however, comparing the visual target with the felt position of the arm requires both an eye–body and a body–arm transformation.

FIG. 2.

Two models of sensory coordination. Top panels: sensory transformations required for performing the alignment test tasks. Spatial variable X (i.e., location of the fingertip or target) can have multiple neural representations including Xvis, a visual- or eye-based representation; Xbody, a body-centered representation; and the arm-based representations Xleft or Xright. A and B: schematic diagrams of the sensory transformations used in the alignment test tasks under each model. Transformations outlined in red are those involved in visual coordination of the right arm, and are thus the most likely sites of adaptive changes during the experimental exposure blocks. C and D: predicted effects of adaptation to shifted feedback on the 3 alignment tasks. Boxes represent the additive effects of adaptive recalibration in the corresponding sensory transformations. EB, eye–body recalibration; BA, body–arm (right) recalibration; EA, direct eye–arm recalibration. Colored arrows represent the comparisons required for each alignment task, along with the value of the predicted aftereffects: LV for the left-to-visual task, RV for right-to-visual, and RL for right-to-left.

We next consider how this comparison changes after exposure to shifted visual feedback. As part of our model, we assume that adaptive recalibration will occur only in transformations that are used during the exposure period. In our experiments, exposure trials consist entirely of visually guided movements with the right hand, and so adaptation would occur only in the eye–body and body–arm (right) transformations (highlighted in red in Fig. 2A). We further assume that recalibration at each sensory transformation has a cumulative, additive effect on the transformed signals. For example, eye–body recalibration (EB) would result in the transformation Xbody = Xvis + EB, and body–arm (right) recalibration (BA) would result in Xright = Xbody + BA (Fig. 2C). Because we will allow these additive effects to vary with the target location, this assumption can be viewed as a first-order approximation to the true nonlinear response and is thus a fairly unrestrictive simplification. Under this model, the aftereffect of the right-to-visual task will be the sum of the two component recalibrations: RV = EB + BA. Similarly, if we assume that there is no adaptation in the body–arm (left) transformation, then the aftereffects for the left-to-visual and right-to-left tasks should be LV = EB and RL = BA (Fig. 2C). These equalities can be combined into a testable prediction involving only experimentally measured quantities

RV = LV + RL     (1)

In words, the aftereffect measured by the right-to-visual task should be the vector sum of the aftereffects measured in the other two tasks. We will refer to this as the “additivity prediction” of the component transformation model, and it will be tested in experiment 1.

A key element of the component transformation model is that representations and transformations are shared across tasks. For example, the same body-centered representation and body–arm (right) transformation are used for the right-to-visual and right-to-left alignment tasks (Fig. 2, A and C). As an alternative, we consider the “direct transformation model” (Fig. 2, B and D). In this model, there are separate transformations (or sequences of transformations) that allow for direct comparison of any commonly coordinated sensory streams. For example, visually guided control of the right arm would make use of a dedicated (or “direct”) eye–arm (right) transformation. Direct transformations might arise from the self-organization of sensory representations (i.e., unsupervised learning) given that eye and arm movements are correlated during the execution of natural movements (Ariff et al. 2002; Land and Hayhoe 2001; Neggers and Bekkering 2001; Sailer et al. 2000). In this model, the eye–arm (right) transformation would be the most likely site of recalibration after our exposure trials. Thus the right-to-visual alignment task would be a direct measure of the recalibration: RV = EA, and the other alignment tests would show no aftereffect (Fig. 2D). Even if there were transfer of learning to the direct eye–arm (left) and arm–arm transformations, however, this model predicts no particular relationship between the aftereffects measured in the three alignment tasks.


The additivity prediction states that the aftereffects of the three alignment tests should obey a linear equality, Eq. 1, which can be rewritten as

(ERV,2 − ERV,1) − (ELV,2 − ELV,1) − (ERL,2 − ERL,1) = 0     (2)

where ERV,2 is the mean positional error in the right-to-visual test in Block 2 (shift block), ERV,1 is the mean error in Block 1 (baseline block), and similarly for the other terms. Note that Eq. 2 has a special form: it is a linear combination of sample means, with the coefficients summing to zero. This means that we can use a contrast analysis for two-dimensional (2D) data (Stevens 1996) to test the hypothesis that the additivity prediction does not hold (i.e., that the vector sum in Eq. 2 is significantly different from zero). We performed this analysis separately for each subject, experimental session, and test target. We used the same approach to perform two control comparisons in which the right-to-visual aftereffect is compared directly to the reach and right-to-left aftereffects.
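The additivity contrast can be computed and tested as in the following sketch. This bootstrap is an illustrative resampling substitute for the parametric 2-D contrast analysis of Stevens (1996) used in the paper, not a reimplementation of it; the data layout and function name are assumptions.

```python
import numpy as np

def additivity_test(blocks, n_boot=5000, rng=None):
    """Bootstrap check of the additivity prediction (illustrative sketch).

    `blocks` maps test name ('RV', 'LV', 'RL') to a pair of (n_trials, 2)
    positional-error arrays: (baseline block, shift block). Returns the
    contrast vector RV - LV - RL of Eq. 2 and a bootstrap p-value for its
    length under the null hypothesis that the contrast is zero.
    """
    rng = np.random.default_rng() if rng is None else rng
    sign = {'RV': 1.0, 'LV': -1.0, 'RL': -1.0}

    def contrast(sample):
        # linear combination of block-mean differences, coefficients sum to 0
        return sum(s * (sample[k][1].mean(axis=0) - sample[k][0].mean(axis=0))
                   for k, s in sign.items())

    obs = contrast(blocks)
    # recenter each block on its own mean so that bootstrap draws come from
    # the null (contrast = 0) distribution
    centered = {k: tuple(b - b.mean(axis=0) for b in blocks[k]) for k in blocks}
    count = 0
    for _ in range(n_boot):
        sample = {k: tuple(b[rng.integers(0, len(b), len(b))] for b in centered[k])
                  for k in centered}
        if np.linalg.norm(contrast(sample)) >= np.linalg.norm(obs):
            count += 1
    return obs, (count + 1) / (n_boot + 1)
```

A small contrast vector with a large p-value is consistent with additivity; a contrast significantly different from zero rejects the component transformation model's prediction.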


Data sets were excluded from analysis based on either of two criteria. First, if a subject did not exhibit significant adaptation at the exposure location in the test version of the exposure task, data from that session were rejected. Second, after completing the experimental session, all subjects were asked whether they ever felt that the visual feedback was not aligned with their finger. If subjects reported noticing the feedback shift, data from that session were rejected.

In experiment 1, one session was excluded by the second criterion. In addition, we found post hoc that in three sessions, subjects made exceptionally large corrective submovements (∼10 cm) after the end of each reach trajectory, making it difficult to quantify the aftereffect. For these three sessions, only the reach test trials were removed from subsequent analysis. In total, 12 late visual feedback and 11 early visual feedback sessions were included in analyses of the alignment tests in experiment 1; 10 late visual feedback and 10 early visual feedback sessions were included in the reach test analyses.

In experiment 2, from a total of 21 sessions, three sessions were excluded by the first criterion: one reach exposure session and two tracking exposure sessions. As a result, nine sessions each of reach exposure and tracking exposure were included in the analyses of experiment 2.


Experiment 1

The goal of experiment 1 was to determine whether there is a quantitative relationship between the visual-shift–induced aftereffects measured with the three sensory alignment tasks.


The performance of a sample subject on all test conditions in the early visual feedback condition is shown in Fig. 3. The top eight panels show the raw positional endpoint errors, separated by test and target location. The difference in the mean errors between Block 2 and Block 1 (black arrows) is the measured adaptation aftereffect. All tests showed highly significant aftereffects at both targets (permutation test, P < 0.001).

FIG. 3.

Positional errors and aftereffects for all test trials from a sample experimental session with early visual feedback. Top 4 rows: open circles are raw positional errors for each trial in Block 1 (baseline); filled circles are data for Block 2 (post-exposure). Each panel has data from a single test and target. Black arrows represent adaptation aftereffects. Bottom row: aftereffect vectors for the 4 tests at each target are grouped together into a single plot. LV and RL are plotted “head-to-tail” to provide a visual test for the additivity hypothesis, RV = LV + RL. Ellipses are the 95% confidence limits for the aftereffect or aftereffect sum.

This sample subject is typical of the entire data set, as shown in Fig. 4. With only a few exceptions (Fig. 4, open symbols), significant adaptation was observed at both the exposure and generalization targets for the right-to-visual and right-to-left alignment tests and the reach test. Furthermore, the group mean of these three aftereffects is significantly different from zero for both targets. In contrast, the adaptation aftereffects observed in the left-to-visual test (LV, Fig. 4, second row) are smaller and are significant in only approximately half of the experimental sessions. When LV is significant, it tends to point more along the y-axis, either toward or away from the subject, despite the fact that the visual shift was always along the x-axis. The group mean for this aftereffect is also not significant at either target. We note that the lack of significance in the LV effect is not simply due to more variable performance in this task. In fact, the endpoint variability does not differ greatly across the four tests used in this experiment (Supplementary Fig. S1).1

FIG. 4.

Aftereffects for all subjects. Each data point represents the aftereffect vector for a single subject, with separate panels for each test and target. Significant aftereffects (permutation test, P < 0.05) are marked with filled symbols. Ellipses represent the covariance across subjects of the aftereffect for a given test and target (drawn at the 95th percentile). Solid ellipses signify group means that are significantly different from zero (MANOVA, P < 0.05).


As described in methods and illustrated in Fig. 2, if sensory coordination between the eyes and the arms relies on a series of shared component transformations, then the alignment test aftereffects should obey the simple linear relationship, RV = RL + LV, which we have called the “additivity prediction.” The bottom two panels of Fig. 3 show the aftereffect vectors for each test type for the sample data set discussed earlier, plotted together by target. The RL and LV vectors have been placed head-to-tail to represent the vector sum of these effects. At both the exposure and generalization targets, we see that RV is approximately equal to that sum, i.e., the additivity prediction of the component transformation model appears to hold. At both targets, a contrast analysis was unable to reject the additivity prediction (P = 0.93, exposure target, P = 0.70, generalization target; see methods for details). As a comparison, the right-to-visual and reach aftereffects were significantly different at both targets (contrast analysis, P < 0.005).

These sample data are representative of the entire data set. To quantify these effects, we plotted RV against the sum RL + LV, separately for each target location and each spatial dimension (Fig. 5, top row). The additivity prediction is supported by the high correlation coefficients and near-unity regression slopes. The correlations between RV and RL + LV are highly significant for both spatial dimensions at both targets (P < 10⁻⁷). This prediction holds even at the subject-by-subject level: in all but two cases (filled symbols in Fig. 5, top right) RV was not statistically distinguishable from the sum of the other two tests (contrast analysis, P < 0.05).

FIG. 5.

Tests of the additivity prediction and control comparisons for all subjects. Top row: each data point represents aftereffects measured in a single session at the exposure (right column) or generalization (left column) target (n = 23). Blue circles, x-component of the aftereffects; red squares, y-component. Filled symbols represent cases where the RV and RL + LV vectors differed significantly (contrast analysis, P < 0.05). Colored lines are the best linear fits to the data; solid lines are for significant regressions (P < 0.05). Middle row: same as top row, but RV is compared to RL (n = 23). Bottom row: same as top row, but RV is compared to REACH. Dashed blue line represents a nonsignificant regression (n = 20).

One potential concern regarding these results is that the LV aftereffect is generally smaller in magnitude than those measured from the other tests. The apparent additivity might thus reflect the simple equality RV = RL, which could be due to some form of acquired response bias for the right arm. To address this possibility, we consider how well either the RL or REACH aftereffects alone predict RV (Fig. 5, middle and bottom rows). Across subjects the correlations between these tests are much weaker than for the additivity prediction. Within sessions, contrast analyses revealed significant differences (P < 0.05) between the compared aftereffects in roughly half of all cases (21/46 target-by-session comparisons for RL; 24/40 for REACH).

To further test the additivity prediction, we fit the linear regression model RV = aRL + bLV for each of the two targets and spatial dimensions. Here we describe the key results of this analysis (complete results are given in Supplementary Table S1). In all four cases the values of a and b are both significantly different from zero (P < 5 × 10⁻⁴, with one exception: P < 0.05 for b, y-dimension, generalization target). Furthermore, the values of these weights are not significantly different from unity (which is the additivity prediction), except for the case of the y-dimension at the exposure target, where the regression weight for RL was less than unity (a = 0.65, Pa≠1 = 0.03). This shows that both the left-to-visual and right-to-left tasks play an important role in the additivity, providing additional support for the prediction. Furthermore, these findings rule out a simple response bias as an explanation for the additivity.
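A minimal sketch of this regression follows. The per-session aftereffect values are synthetic, and the significance tests on a and b (against zero and against unity) are omitted for brevity.

```python
import numpy as np

def fit_additivity_weights(rv, rl, lv):
    """Least-squares fit of RV = a*RL + b*LV (no intercept) for one
    spatial dimension; inputs are per-session aftereffect components."""
    X = np.column_stack([rl, lv])
    (a, b), *_ = np.linalg.lstsq(X, rv, rcond=None)
    return a, b

# Synthetic per-session aftereffects (mm) in which additivity holds
rng = np.random.default_rng(0)
rl = rng.normal(8.0, 3.0, 23)
lv = rng.normal(2.0, 2.0, 23)
rv = rl + lv + rng.normal(0.0, 1.0, 23)  # RV = RL + LV plus measurement noise
a, b = fit_additivity_weights(rv, rl, lv)  # additivity predicts a = b = 1
```

With data generated under additivity, the recovered weights fall near unity, which is the pattern reported for three of the four target-by-dimension cases.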


The aftereffects measured with the left-to-visual task are generally not parallel to the direction of the visual shift (see Fig. 4). In fact, most of the significant LV effects (8/10) contain a greater component in the y-dimension than in the x. Given this marked noncollinearity with the visual shift, one might ask whether this change really reflects an adaptive response to the visual shift.

We first note that the effects appear to reflect a true change in sensory alignment and do not arise from sampling or measurement noise. This claim is based on two observations. First, in 10 of 24 sessions we observed a statistically significant effect at the 95% confidence level, many more than would be expected by chance due to sample noise. Second, the directions of the LV aftereffects measured at the two targets are highly correlated across sessions, and these directions do not correlate with the axis of greatest measurement variability (for more details see Supplementary Fig. S3).

We next ask why we might see adaptive effects that are not collinear with the visual shift. One idea is that adaptation of a sensory modality should proceed more easily or more rapidly along the directions in which that modality is less precise (Ghahramani et al. 1997; van Beers et al. 2002). We can test this hypothesis by estimating the 2D covariance ellipses of the visual and proprioceptive sensory modalities from our task performance variabilities at the exposure target (van Beers et al. 2002). As predicted by the preceding argument, there is a correlation between the direction of greatest visual uncertainty and the direction of the LV effect (circular association ρT = 0.269, P = 0.032; see Fisher 1993; for more details see Supplementary Fig. S3). Furthermore, a comparable degree of correlation was observed between the axis of greatest right-hand proprioceptive uncertainty and the angle of the LV effect (ρT = 0.168, P = 0.021). These correlations suggest that the noncollinearity between the visual shift and the sensory aftereffects may reflect an optimal learning strategy, and could indeed be adaptive.
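The direction of greatest uncertainty used in this comparison can be estimated as the major axis of the 2D covariance ellipse of endpoint scatter. The sketch below illustrates only the eigendecomposition step, with hypothetical endpoint data; it is not the full van Beers-style variance decomposition or the circular-association statistic.

```python
import numpy as np

def principal_axis_angle(endpoints):
    """Orientation (radians, in [0, pi)) of the major axis of the 2D
    covariance ellipse of an endpoint scatter."""
    cov = np.cov(np.asarray(endpoints, float).T)
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, np.argmax(evals)]        # eigenvector of largest variance
    return float(np.arctan2(major[1], major[0]) % np.pi)

# Hypothetical endpoint scatter that is most variable in depth (y)
rng = np.random.default_rng(0)
pts = np.column_stack([rng.normal(0.0, 1.0, 500), rng.normal(0.0, 5.0, 500)])
angle = principal_axis_angle(pts)  # near pi/2: greatest uncertainty along y
```

Angles computed this way for visual and proprioceptive scatter can then be compared with the angle of the LV aftereffect, as in the circular-association analysis described above.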


Because feedback timing has previously been reported to affect the relative magnitudes of visual and proprioceptive adaptation (Redding and Wallace 1990, 1992; Uhlarik and Canon 1971), we included both an early feedback condition (feedback turned on just after peak velocity) and a late feedback condition (feedback turned on late in the deceleration phase). A MANOVA analysis was performed to determine the effect of the visual feedback timing (early vs. late) on each of the four aftereffect measures.

We found that the timing of the feedback had a significant and consistent effect on all three alignment aftereffects (Fig. 6A). However, the effect appears to be primarily a rotation of the aftereffects: a separate ANOVA of the magnitude of the aftereffects showed no significant effect of feedback timing.

FIG. 6.

Effect of feedback timing and target location on adaptation aftereffects. A: each arrow represents the mean aftereffect across sessions for a given test and feedback timing: solid lines, early feedback; dashed lines, late feedback. All feedback-timing effects were significant except the REACH effect (MANOVA: RV, P < 0.005; LV, P < 0.05; RL, P < 0.001; REACH, P = 0.057). B: effect of target location: solid lines, early feedback; dashed lines, late feedback. Only the REACH effect was significant (MANOVA, P < 0.001).

We also tested whether the effects were different at the exposure and generalization targets (Fig. 6B). There was a highly significant effect of target on the reach aftereffect (2D MANOVA and ANOVA on effect magnitude, P < 0.001). The three alignment tests, on the other hand, showed no significant differences. On average, the alignment tests showed a slight rotation across targets, comparable to that observed for the reach aftereffect, but an ANOVA on alignment effect angle was not significant.

Experiment 2

Our predictions for experiment 1 were based on a model in which visual-shift adaptation drives sensory recalibration. Furthermore, we have interpreted our alignment tests to be measures of those sensory changes. However, all of our tests are necessarily sensorimotor in nature. In experiment 2, we ask how the alignment aftereffect compares with the aftereffects measured in two other sensorimotor tasks (target reaching and target tracking) after exposure to shifted visual feedback during these same two sensorimotor tasks.

We begin by showing that the movement kinematics are quite different across the three tasks. We then show that the measured aftereffect depends on both the exposure task and the test task. Finally, we argue that the results point to two separable components of the adaptive response: a general sensory recalibration and a task-dependent sensorimotor effect.


Three test tasks were used in experiment 2: the right-to-visual alignment test, the reach test, and the tracking test. These tasks were designed to require very different movement kinematics, and these differences are evident in the sample trajectories shown in Fig. 7. Reaching movements were highly stereotyped, with nearly straight paths and stereotypical bell-shaped velocity profiles (Hollerbach and Atkeson 1987). In contrast, the much shorter movements in the right-to-visual alignment test were much more variable. The paths were typically not straight and the velocity profiles were multipeaked, suggestive of the presence of multiple submovements. Movements in the tracking test were highly variable by design because the target trajectory was different on each trial (three example paths are shown in Fig. 7E). A more detailed kinematic analysis supports the conclusion that the sensorimotor output is quite different across the three test tasks (Supplementary Fig. S4).
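One simple way to flag the multipeaked profiles described above is to count strict local maxima in the tangential speed trace. The sketch below uses synthetic profiles; the 20 mm/s threshold echoes the alignment criterion noted in Fig. 7 but is otherwise an arbitrary choice of ours.

```python
import numpy as np

def count_velocity_peaks(speed, min_height=20.0):
    """Count strict local maxima above min_height (mm/s) in a tangential
    speed profile; more than one peak suggests corrective submovements."""
    s = np.asarray(speed, float)
    interior = s[1:-1]
    is_peak = (interior > s[:-2]) & (interior > s[2:]) & (interior > min_height)
    return int(is_peak.sum())

t = np.linspace(0.0, 1.0, 201)
bell = 300.0 * np.exp(-((t - 0.5) / 0.15) ** 2)          # reach-like profile
multi = bell + 80.0 * np.exp(-((t - 0.85) / 0.05) ** 2)  # added submovement
```

On these synthetic traces, the reach-like profile yields one peak and the profile with an appended submovement yields two, matching the qualitative distinction drawn between the reach and alignment tests.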

FIG. 7.

Sample trajectories from one subject for each of the test measures in experiment 2. Three sample movement paths for each of the reaching (A), alignment (C), and tracking (E) tests. In all plots the black circle is the 20-mm-diameter visually displayed target, drawn to scale. In E, the target trajectory is shown in dotted lines and the target circle is drawn at the end of its trajectory. Velocity profiles are shown for all reaching (B) and alignment (D) tests at the exposure target in Block 1 for the sample subject. Profiles are aligned at the first time step in each trial where velocity exceeded 20 mm/s. Colored velocity profiles correspond to the sample paths shown on the left. Note the different scales across panels.


For each subject, we computed the adaptation aftereffect for each combination of test, target, and exposure condition. The averages across subjects are shown in Fig. 8. Following reach exposure, the aftereffect measured with the reach test appears to be larger, on average, than that measured with the other two tests, especially at the exposure target (Fig. 8, top panels). In contrast, there is little difference between the tests after tracking exposure (Fig. 8, bottom panels).

FIG. 8.

Mean adaptation aftereffects in experiment 2. Each data point is the mean aftereffect vector across subjects for a particular test; ellipses represent the SE, plotted at the 95% confidence limit (n = 9). Each panel contains data for a single target and exposure condition. Note that in this and the following plots, aftereffects for rightward visual shifts are inverted (reflected about the origin).

These impressions were quantified with pairwise comparisons of the three tests, using the x-component of the adaptation aftereffects as the dependent measure (Fig. 9).

FIG. 9.

Test differences in experiment 2 quantified with pairwise comparisons across subjects. Each bar represents the mean difference (±SE) in the x-component of the adaptation aftereffect for the pair of tests noted at the left. Each panel presents data from a single target and exposure condition. Paired t-tests determined significance of test differences (*P < 0.05; **P < 0.01; n = 9).

After reach exposure, the reach aftereffect was significantly greater than the tracking or alignment effects (paired t-test: P < 0.01 at the exposure target, P < 0.05 at the generalization target), yet these latter two effects were indistinguishable (Fig. 9, top row). The average reach aftereffect was 50% larger than the alignment aftereffect at the exposure target and 43% larger at the generalization target. On the other hand, following tracking exposure the aftereffects measured with the three tests were generally indistinguishable. The one exception is that the tracking test had a significantly larger aftereffect than that of the reach test (P = 0.03) at the exposure target. Although these results were computed using two baseline blocks (Blocks 1 and 3; see methods), no qualitative difference was seen when Block 1 alone was used in the analyses.

The increased magnitude of the reach aftereffect following reach exposure suggests that this test measures an additional exposure-dependent component of adaptation beyond what is measured with the other two tests. This interpretation is further supported by an analysis of the correlations across subjects between the x-components of the various aftereffects. Following tracking exposure, the reach aftereffect is highly correlated with both other measures and the regression line between the tests is not significantly different from identity (Fig. 10, bottom panels). In contrast, following reach exposure the reach aftereffect is larger (as described earlier) and not significantly correlated with the other tests (Fig. 10, top panels). These results suggest that sensory recalibration is the primary source of the reach aftereffect following tracking exposure and that a separate task-dependent effect contributes to the reach aftereffect following reach exposure.

FIG. 10.

Scatterplots comparing the reach aftereffect to the alignment and tracking aftereffects. Each data point represents the x-component of the appropriate pair of aftereffects for a single subject. Colored lines represent the orthogonal regression between the 2 aftereffects performed separately for each target location: solid lines are for significant correlations (P < 0.05); dashed lines are for correlations that are not significant.
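The orthogonal regressions in Fig. 10 can be computed from the first principal direction of the centered scatter. Below is a minimal sketch of one common implementation; the per-target grouping and the significance tests on the correlations are omitted.

```python
import numpy as np

def orthogonal_regression(x, y):
    """Orthogonal (total least squares) regression: fits the line that
    minimizes perpendicular distance to the data. Returns (slope, intercept)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    # First principal direction of the centered (x, y) scatter
    _, _, vt = np.linalg.svd(np.column_stack([x - xm, y - ym]))
    dx, dy = vt[0]
    slope = dy / dx
    return slope, ym - slope * xm

# Points on an exact line recover its parameters
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
slope, intercept = orthogonal_regression(x, 2.0 * x + 1.0)
```

Unlike ordinary least squares, this fit treats the two aftereffect measures symmetrically, which is appropriate here because both axes carry comparable measurement noise.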


Additive components of sensory recalibration

In experiment 1, we focused on the sensory effects of visual-shift adaptation, measured with the alignment tests. The main result is that the measured shift in sensory calibration between vision and the felt position of the right hand is precisely predicted by the vector sum of the shifts between vision and the left hand and between the left and right hands.

Furthermore, this additivity holds at the “generalization” target where no visual feedback was received. These results imply that the sensory alignment between vision and the right arm relies on two separable components. As subsequently described in more detail, we interpret these components as transformations between visual, proprioceptive, and body-centered spatial representations.

Previous attempts to identify and measure separate components of the adaptive response focused on the distinction between “visual” and “proprioceptive” adaptation. Almost all of these studies relied on tests that compare visual and proprioceptive localization to an internal sense of “straight ahead” (Hamilton and Bossom 1964; Harris 1965; Hay and Pick 1966; for a more recent review see Redding and Wallace 1997). It has often been noted that these measures of adaptation are additive, i.e., the sum of the visual and proprioceptive aftereffects matches the magnitude of the reach aftereffect (Harris 1963; Hay and Pick 1966; Redding and Wallace 1978; Welch et al. 1974; Wilkinson 1971). This traditional approach to decomposing visual-shift adaptation has three major limitations, which we discuss in the following paragraphs.

First, although these measures are often thought of as direct assays of visual or proprioceptive localization, they rely on a comparison between sensory-derived signals and an internal reference, the sense of “straight ahead.” Unfortunately, this subjective reference is ambiguous, malleable, and context dependent (Harris 1974; Welch 1986). Indeed, the sum of these two sensory measures sometimes exceeds the magnitude of the reach aftereffect (“overadditivity”), and this difference is likely due to shifts in subjects' sense of straight-ahead (Redding and Wallace 1978; Templeton et al. 1974). In contrast, we have measured the sensory effects of visual-shift adaptation using the left hand as the nonadapting reference, an approach first suggested by Harris (1965) and used by Templeton et al. (1974) and van Beers et al. (2002). Of course this approach is also not a pure measure of visual or proprioceptive localization, but rather of the alignment between these sensory modalities. Nonetheless, an explicit sensory reference is likely to be less susceptible to context-dependent effects or subject misinterpretation than subjective or remembered locations.

Second, previous reports of additivity have compared the sum of the “visual” and “proprioceptive” effects to the reach aftereffect. We would expect significant subject-by-subject departures from this form of additivity, given the weak level of correlation that we and others (Redding and Wallace 2006) have observed between the reach and alignment aftereffects following reach exposure. We note, in fact, that almost all previous reports of additivity have based their argument on mean aftereffects across subjects. Here, we designed our alignment tests to minimize the effect of any nonsensory components of adaptation. As a result, we have observed very close agreement on a per-subject basis between the shift in right-to-visual alignment and the sum of the right-to-left and left-to-visual shifts.

Last, comparisons to “straight ahead” can measure recalibration only in the azimuth, and thus provide only a single, scalar measure of sensory recalibration. Using the location of the left hand as a reference, we are able to measure two-dimensional shifts anywhere in the planar workspace. This allowed us, for example, to measure the sensory effects of visual-shift adaptation at a generalization target where visual feedback was never available. The results at this generalization target highlight the potential power of our approach. We found that the alignment aftereffect had approximately the same magnitude at the exposure and generalization targets, whereas the reach effect was significantly larger at the exposure target after reach exposure. Previous studies on the generalization of reach adaptation have given quite mixed results, with some studies showing only local adaptation and others showing robust generalization (Bedford 1989; Ghahramani et al. 1996; Krakauer et al. 2000; Vetter et al. 1999). We think these differences are likely explained by the fact that reach aftereffect confounds several different underlying effects, and the relative contributions of these effects may have varied greatly in the different experimental paradigms. Resolving this issue requires the ability to measure component effects across the workspace.

Models of sensory coordination

The additivity that we observed in experiment 1 supports the component transformation model of sensory coordination (Fig. 2, A and C). In particular, it suggests that intersensory coordination relies on sequences of shared transformations. These results are not consistent with a model in which sensory coordination is due to direct, dedicated transformations between coupled sensory streams, e.g., between vision and the right hand (Fig. 2, B and D).

It is important to note, however, that the distinct predictions we have drawn from the component and direct transformation models are based on our interpretation of the alignment aftereffects as a recalibration of sensory transformations. If the shifts in alignment that we observed were actually due to adaptive changes in the sensory inputs to these transformations, then both models would be consistent with the data. It is not implausible that sensory recalibration could be due to peripheral or early sensory effects. For example, we have described the left-to-visual aftereffect as a recalibration in the transformation from an eye-centered to a body-centered coordinate frame (eye–body shift). This change is likely due to shifts in felt eye and head position (Crawshaw and Craske 1974; Lackner 1973). In traditional prism-based studies of adaptation, these shifts resulted in part from prolonged deviation of the eyes or head from midline during the exposure period (Ebenholtz 1974; Guerraz et al. 2006; Mars et al. 1998; Paap and Ebenholtz 1976). Although the virtual feedback setup eliminates this particular source of adaptation, it is likely that other peripheral or early sensory factors play some role in the sensory recalibration we have observed. Nonetheless, the primary involvement of high-level central processes in visual-shift adaptation is well documented by lesion studies (Baizer and Glickstein 1974; Baizer et al. 1999; Kurata and Hoshi 1999; Martin et al. 1996; Newport et al. 2006; Weiner et al. 1983), functional imaging (Clower et al. 1996), and the ability of prism adaptation to ameliorate deficits with a clear central origin (Maravita et al. 2003; Rossetti et al. 1998). As long as a significant central component exists, the distinction between the models holds.

The model predictions described earlier rely on another critical assumption: that feedback acquired during the use of one set of transformations does not drive adaptation in other, unused transformations. In the case of the component transformation model, this means that the body-to-arm (left) transformation does not change during exposure to shifted visual feedback of the right arm (Fig. 2C). In fact, the presence of significant intermanual transfer in the reach aftereffect is well documented (e.g., Choe and Welch 1974; Taub and Goldberg 1973; Wallace and Redding 1979). However, these effects are most likely primarily due to shifts in felt eye and head position, i.e., to eye-to-body recalibration (Choe and Welch 1974; Cohen 1967; Craske 1966; Craske and Gregg 1966; Harris 1965; Welch 1986). In fact, our results suggest that if there is any shift in the body-to-arm (left) transformation, it is minimal. To see why, consider what the component transformation model in Fig. 2, A and C would predict in the case of a significant generalization of the body–arm recalibration to the left arm. In that case, LV and RV would be of similar magnitude and RL would be near zero because the effects at the two arms would cancel. However, we observed the opposite pattern: RL and RV are of comparable magnitude and LV is typically small. This observation is consistent with little or no recalibration of the body to left-arm transformation.
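This reasoning can be made explicit with simple vector bookkeeping over the component transformations of Fig. 2. The sketch below is our illustration, with hypothetical recalibration vectors; it shows why full generalization to the left arm would give LV ≈ RV with RL ≈ 0, whereas the observed pattern corresponds to little or no left-arm recalibration.

```python
import numpy as np

def alignment_aftereffects(eye_body, body_armR, body_armL):
    """Bookkeeping for the component transformation model (Fig. 2): each
    alignment aftereffect is the sum of the recalibrations of the shared
    transformations along its comparison path."""
    RV = eye_body + body_armR   # right-to-visual
    LV = eye_body + body_armL   # left-to-visual
    RL = body_armR - body_armL  # right-to-left
    return RV, LV, RL

eye_body = np.array([2.0, 1.0])    # hypothetical eye-body recalibration (mm)
body_armR = np.array([8.0, -1.0])  # hypothetical body-arm (right) recalibration

# Full generalization to the left arm: the arm effects cancel in RL
RV_g, LV_g, RL_g = alignment_aftereffects(eye_body, body_armR, body_armR)

# No generalization (the pattern the data suggest): RL large, LV small
RV_n, LV_n, RL_n = alignment_aftereffects(eye_body, body_armR, np.zeros(2))
```

Note that additivity (RV = LV + RL) holds in both scenarios; it is the relative magnitudes of RL and LV that distinguish them.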

Adaptation and sensory uncertainty

We observed a novel property of sensory recalibration during visual-shift adaptation: the individual component effects need not be collinear with the visual shift. Most notably, the aftereffect vectors measured in the left-to-visual alignment task were typically oriented >45° from the direction of the visual shift (Fig. 4). A detailed analysis of the effect is the topic of a future study. However, we showed here that such noncollinearity may reflect a sensible learning strategy to deal with nonisotropic variability in sensory signals: more adaptation occurs along the axis of greatest sensory uncertainty (for a similar argument see van Beers et al. 2002).

Another powerful insight into the learning rule underlying visual-shift adaptation can be obtained from the relative magnitudes of the eye–body and body–arm shifts. Redding and Wallace (1990) found an inverse relationship between the amount of time that the hand was visible during exposure reaches and the relative magnitude of the visual aftereffect. We saw no such effect in our comparisons of early and late visual feedback (Fig. 6). This difference is likely explained by the fact that in the earlier study, the timing of visual feedback was confounded with the amounts of the hand and arm that were visible. It is plausible that because more of the body was visible during early feedback trials in Redding and Wallace (1990), the uncertainty of visual localization was reduced in those trials. By the argument applied earlier, we would thus expect less visual adaptation in the early feedback trials performed in that study. In contrast, in our study the nature of the feedback was held constant, and only the feedback timing was changed. This timing difference did not affect the relative magnitudes of the adaptive responses.

A task-dependent aftereffect

In experiment 2 we investigated how the adaptation measured with three different test tasks varies with exposure task. We found that the aftereffect is largely independent of the test task, with one notable exception: the reach aftereffect is significantly larger after reach exposure. We also found that although the reach and alignment aftereffects were highly correlated following tracking exposure, they were only weakly correlated after reach exposure. These results suggest the following conclusions: the three tests measure the same underlying effect after tracking exposure (because they correlate well across subjects), this effect is likely to be sensory recalibration, and the larger reach aftereffect following reach exposure is due to a separate task-dependent effect.

Many previous studies have shown that the reach aftereffect can exceed the sum of the visual and proprioceptive aftereffects, measured with respect to straight-ahead (Choe and Welch 1974; Harris 1965; Redding and Wallace 1988, 1996; Templeton et al. 1974; Uhlarik and Canon 1971; Welch et al. 1974). The authors of these studies have typically interpreted this difference as evidence for a separate component of adaptation, as we have here. The nature of this component, however, has not been clear. In previous studies, the difference could have arisen, in part, from shifts in the perception of straight-ahead (Harris 1974; Welch 1986) or from cognitive corrective strategies driven by knowledge of the prism shift (Bedford 1993). However, these factors likely account for at most a portion of the difference observed in previous studies, and the design of the present study eliminates them altogether.

Welch et al. (1974) called this component of the reach aftereffect an “assimilated error-corrective response,” highlighting the role that explicit error feedback plays in the magnitude of the difference (Welch 1969; Welch and Rhoades 1969). Indeed, Magescas and Prablanc (2006) have shown that a reach aftereffect can be obtained by reach error signals alone, without any direct intersensory conflict, providing further evidence for the error-corrective model. The role of explicit reach errors in driving a task-dependent effect is addressed in a supplemental version of experiment 2, in which the exposure task consisted of a series of “slicing” or “reversal” movements (Sainburg et al. 1993). Specifically, subjects were asked to reach sagittally out and back without any explicit target, eliminating any error signal derived from comparison of target and reach endpoint. The results were qualitatively similar to those obtained in the reach exposure condition of experiment 2 (see Supplementary Fig. S5). These results suggest that an explicit, visual error signal is not required to obtain a task-dependent effect. Of course an implicit error signal could still be available, for example, by comparing the actual location of the movement apex with the intended location.

The presence of error-corrective learning rules (e.g., Cheng and Sabes 2007; Donchin et al. 2003; Scheidt et al. 2001) could help explain the differences we observed in the reach aftereffect following reach versus tracking exposure. Although the reach exposure task is highly stereotyped from trial to trial, the tracking task is not. During reach exposure, the incremental effects of the error-corrective rule would accumulate across trials, and a large effect would be obtained. In contrast, because the movements in the tracking task are quite variable, the same error signal (e.g., positional error) could be observed after very different sequences of motor commands. In this case, the effects of error-corrective learning may also be variable and could partially cancel out across trials.
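A toy version of this argument can be sketched as follows. This is purely our illustration, not a model from the cited studies: if each movement "context" maintains its own correction, stereotyped exposure concentrates the incremental updates in one context, while variable exposure spreads the same number of trials over many contexts and dilutes the per-context accumulation.

```python
import numpy as np

def adapt(contexts, shift, lr=0.1, n_contexts=8):
    """Toy context-specific error-corrective rule: each movement context
    keeps its own correction, nudged toward the visual shift only on
    trials performed in that context."""
    state = np.zeros(n_contexts)
    for c in contexts:
        state[c] += lr * (shift - state[c])  # incremental error correction
    return state

shift = 10.0  # mm
# Reach exposure: stereotyped movements revisit one context on every trial
stereotyped = adapt([0] * 80, shift)
# Tracking exposure: variable movements are spread over many contexts
rng = np.random.default_rng(0)
variable = adapt(rng.integers(0, 8, 80), shift)
```

Under this sketch the stereotyped condition drives the trained context nearly to the full shift, whereas no single context in the variable condition accumulates as much learning, consistent with the smaller task-dependent effect after tracking exposure.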

The relationship between the reach aftereffect and sensory recalibration

The main conclusion we draw from experiment 2 is that the reach aftereffect reflects two types of underlying adaptive changes: a general recalibration of the sensory alignment between eye and arm and the task-dependent effects just discussed. Using the alignment test as a measure of recalibration and the difference between the alignment and reach aftereffects as a measure of the task-dependent effects, we inferred that, following reach exposure, approximately two thirds of the reach aftereffect was due to sensory recalibration and one third to task-dependent effects. This conclusion is based on a comparison across exposure tasks of both the magnitude of various aftereffects and the correlations between them. It is also supported by the substantial evidence that reach adaptation can be dissociated from sensory recalibration. As described earlier, Magescas and Prablanc (2006) reported reach adaptation without intersensory conflict. Furthermore, Taub and Goldberg (1974) found that deafferented monkeys adapt even better than normal controls in a prism adaptation paradigm similar to that used here. In both cases, robust reach adaptation was observed in experimental contexts that would be expected to yield poor sensory recalibration, supporting the existence of two different types of adaptation.
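The decomposition itself is simple vector arithmetic: the alignment aftereffect estimates the sensory component, and the remainder of the reach aftereffect is attributed to task-dependent effects. A sketch with purely illustrative numbers (not the measured data):

```python
import numpy as np

# Hypothetical 2D aftereffect vectors (arbitrary units), both measured
# after reach exposure; the values are illustrative, not this study's data.
reach_aftereffect = np.array([3.0, 0.6])      # reach test
alignment_aftereffect = np.array([2.0, 0.4])  # alignment test (sensory component)

# Task-dependent component = what the reach test shows beyond the
# sensory recalibration measured by the alignment test.
task_dependent = reach_aftereffect - alignment_aftereffect

# Sensory fraction: projection of the alignment aftereffect onto the
# reach-aftereffect direction, as a fraction of the reach aftereffect.
u = reach_aftereffect / np.linalg.norm(reach_aftereffect)
frac_sensory = (alignment_aftereffect @ u) / np.linalg.norm(reach_aftereffect)
print(frac_sensory)  # 2/3 with these illustrative numbers
```

With these made-up vectors the split is exactly two thirds sensory, one third task-dependent, matching the proportions inferred in the text.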

If reach adaptation can occur in the absence of sensory recalibration, why do we conclude that the reach aftereffect that we have measured is a composite of sensory and task-dependent effects? Could the alignment and reach aftereffects be measuring two entirely different effects? We think this possibility is unlikely. First, the strong correlation between the tests after tracking exposure suggests that they are measuring the same underlying effect in that context. Second, there is substantial empirical and theoretical evidence that reach planning relies on multisensory estimates of hand and target locations (Rossetti et al. 1995; Saunders and Knill 2003; Sober and Sabes 2003). There is also evidence that the neural circuits underlying reaching are used for other tasks. For example, Snyder and colleagues have shown substantial overlap in the macaque parietal circuits for reaches and saccades (Lawrence and Snyder 2006; Snyder et al. 2000). It thus seems likely that the neural circuits required for solving a spatial localization task such as the alignment task are also involved in reach planning, and that changes in these circuits would affect performance in our reach test.

Finally, we return to the deafferentation study of Taub and Goldberg (1974). If roughly two thirds of the reach aftereffect in our study is due to sensory recalibration, why is there no decrement in the reach aftereffect when an animal is deafferented? To answer this question, we again appeal to the error-corrective learning model. Consider the fact that in these experiments there are at least two potential error signals that can drive learning: the reach error and the intersensory conflict (Cheng and Sabes 2007). In an intact subject, both error signals are likely to drive adaptation, and the two learning processes essentially compete. When there is no intersensory conflict, however, the reach error can still drive task-dependent effects (Magescas and Prablanc 2006). Indeed, the reach error signal is likely to be even larger in the deafferented monkey than in the normal control, because unperturbed proprioception no longer contributes to the combined sensory estimate of hand position. This difference can account for the increased adaptation observed in the deafferented monkeys, as noted by Taub and Goldberg (1974). It is also worth noting that some sensory recalibration could have taken place even in the deafferented monkey, because visual feedback can be compared with an internal prediction of arm position given efference copy. Finally, a number of experimenters have shown that deafferented humans can adapt normally in other visual perturbation paradigms (Bard et al. 1995; Bernier et al. 2006; Guedon et al. 1998; Ingram et al. 2000; Pipereit et al. 2006). The preceding argument applies to these studies as well. In addition, we note that most of these experiments used a center-out reaching task and a visual perturbation that consisted of a rotation about the center. In this situation, there is no consistent sensory remapping, so little sensory recalibration would be expected even in normal subjects.
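The competing-error-signals account can be made concrete with a toy model (our own illustration with made-up parameters; not the quantitative model of Cheng and Sabes 2007). A sensory-recalibration process is driven by the vision-proprioception conflict, and a task-dependent process by the perceived reach error; the visual weight w on the hand estimate sets the size of that reach error, and deafferentation both removes the conflict signal and raises w to 1.

```python
# Toy model of two competing adaptive processes (illustrative only):
#   s: sensory recalibration, driven by the intersensory conflict
#   t: task-dependent effect, driven by the perceived reach error
def simulate(w_vision, n_trials=150, eta_s=0.02, eta_t=0.02, shift=1.0):
    s = t = 0.0
    for _ in range(n_trials):
        # No proprioception (w_vision == 1) means no conflict signal.
        conflict = (shift - s) if w_vision < 1.0 else 0.0
        # Perceived reach error: the shift, weighted by how much vision
        # dominates the hand estimate, minus the compensation in place.
        reach_error = w_vision * shift - (s + t)
        s += eta_s * conflict
        t += eta_t * reach_error
    return s + t  # total reach adaptation

intact = simulate(w_vision=0.5)        # vision and proprioception combined
deafferented = simulate(w_vision=1.0)  # hand estimate is purely visual

print(intact, deafferented)
```

Because removing proprioception eliminates the recalibration pathway but enlarges the reach error, the toy model yields more total adaptation in the deafferented case, qualitatively consistent with Taub and Goldberg (1974).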

Applications to physiological studies of adaptation

We have presented an approach to quantitatively analyzing the adaptive response following exposure to shifted visual feedback. These tools could be applied more generally to characterize the adaptive response to various forms of sensorimotor manipulations. We believe that such fine-grained behavioral analyses will be required to relate adaptive changes in behavior to changes in the underlying neural circuits.

For example, previous attempts to localize the site of prism adaptation in the brain have been, perhaps, too successful, yielding evidence that most of the neural circuits involved in reaching play an important role (Baizer and Glickstein 1974; Baizer et al. 1999; Kurata and Hoshi 1999; Martin et al. 1996; Newport et al. 2006; Weiner et al. 1983). More recently, human neurophysiological studies have found evidence that various error signals involved in sensorimotor adaptation are processed by different networks of brain areas (Diedrichsen et al. 2005; Lee and van Donkelaar 2006). The presence of multiple components of adaptation may explain the participation of many brain areas in visual-shift adaptation and related behavioral phenomena. In this case, progress on understanding the neural basis of adaptation requires the ability to dissociate and quantify these components. Further, by fully characterizing these component effects, such as how they generalize across space, we will have the potential to place strong constraints on both the structure of the neural circuits that underlie accurate sensorimotor coordination and the learning rules that maintain them.


  • 1 The online version of this article contains supplemental data.

