For movements directed toward objects located in extrapersonal space, visual inputs must first be remapped from a retinal coordinate system to a body-centered one. The posterior parietal cortex (PPC) most likely integrates retinal and extraretinal information to determine the egocentric distance of an object located in three-dimensional (3-D) space. This determination requires both a retinal disparity signal and a parallel estimate of the fixation distance. We recorded from the lateral intraparietal area (LIP) to test whether single neurons respond to both vergence angle and retinal disparity and whether these two signals are integrated to encode egocentric distance. Monkeys were trained to make saccades to real targets in 3-D space. When both the fixation distance and the disparity of visual stimuli were varied, the disparity tuning of individual neurons displayed a fixation-distance modulation. Because the disparity tuning shifted systematically with changes in fixation distance, we propose that the observed modulation contributes to a spatial coding domain intermediate between retinal and egocentric.
Much of primate motor behavior consists of manipulating objects and of other movements directed at targets in the immediate extrapersonal space. Generating useful movements requires a body-centered estimate of the objects' locations. To a very large degree, localization is provided through visual input. Locating objects in relation to the body with visual input requires a retinal disparity signal to provide a measure of the object's distance from the plane of fixation (Cumming and De Angelis 2001; Poggio 1995). If this object-distance information is combined with an estimate of fixation distance, the brain has sufficient information to calculate an egocentric distance (Pouget and Sejnowski 1994). Three important cues can provide the needed estimate of fixation distance: the extraretinal signals vergence angle and accommodation, and vertical disparity (Foley 1980). Vertical disparity is used by the visual system only for objects subtending >20° of visual angle (Cumming et al. 1991; Rogers and Bradshaw 1993), so it can be neglected when smaller targets are viewed. Data from psychophysical experiments suggest that the vergence angle is the most important cue used for the computation of fixation distance (Foley 1980; von Hofsten 1976).
Area LIP, in the lateral bank of the intraparietal sulcus (IPS; Fig. 1), is important for attention (Colby and Goldberg 1999; Gottlieb et al. 1998), target selection, coordinate transformations for the representation of extrapersonal space (Zhang and Barash 2000), decision processes (Platt and Glimcher 1999), and intention to move (Andersen et al. 1997; Snyder et al. 1997). It is an obvious candidate for a site where retinal and extraretinal signals might be combined (Andersen et al. 1990b; Bremmer et al. 1997a). Neurons of area LIP are modulated by retinal disparity (Ferraina et al. 2002; Gnadt and Beyer 1998; Gnadt and Mays 1995). These disparity signals are relayed from LIP to both the frontal eye field (FEF) (Ferraina et al. 2000, 2002) and the intermediate layers of the superior colliculus (SC) (Ferraina et al. 2002; Gnadt and Beyer 1998): two important nodes in the network that controls saccadic eye movements (Wurtz and Goldberg 1989; Wurtz et al. 2001). The presence of disparity-sensitive neurons with a wide range of disparity preferences (Ferraina et al. 2002; Gnadt and Beyer 1998; Gnadt and Mays 1995) indicates that these pathways may be used to generate eye movements to targets located away from the fixation plane (see also Poggio 1995). In addition to the retinal disparity signal, which provides a measure of the distance to the fixation plane, there is also activity in the PPC that reflects the fixation distance of a target (Sakata et al. 1980). Missing from the literature, however, is a demonstration that these two signals are combined in a way that could provide an estimate of egocentric distance. The present study was designed to investigate the presence, and describe the form, of such an interaction in area LIP. We find that the interaction takes the form of a partial shift, consistent with a role for the area in the visuomotor transformation from a retinal to a body-centered frame of reference.
The results of such integration would be available to motor areas of the frontal lobe that receive parietal anatomical connections, either directly from LIP (Schall 1997) or through parietal areas, such as MDP, 7m, 7a, and AIP, that receive input from LIP (Andersen et al. 1990a; Blatt et al. 1990; Cavada and Goldman-Rakic 1989; Colby et al. 1988; Nakamura et al. 2001) and, in turn, project to both dorsal and ventral premotor areas (Caminiti et al. 1996; Rizzolatti and Luppino 2001).
A brief report of this work has appeared previously (Ferraina and Genovesio 2001).
Two rhesus monkeys (Macaca mulatta) were studied using general procedures identical to those recently described (Ferraina et al. 2002) and outlined here. Animal care, housing, and surgical procedures were in accordance with European guidelines on the use of animals in research (European Community Council Directive 86/609/ECC).
In each monkey, under general anesthesia (isoflurane), a recording cylinder was implanted, centered at stereotaxic coordinates P5.0–L12.0 mm, to allow recordings from area LIP. Binocular scleral search coils were also implanted. During the experiments, the monkey sat in a primate chair with its head restrained and faced two red light-emitting diodes (LEDs) positioned by robotic arms (CRS Robotics, Burlington, Canada; see Fig. 2A). To prevent the LEDs from illuminating the robot arms, the central target was 0.5 mm in diameter and was formed by the tip of an optic fiber that transmitted the light produced by the LED and reduced light diffusion. The peripheral target was a spot <0.5° in diameter. All tasks were performed in total darkness, and the room was illuminated during the intertrial epoch to avoid dark adaptation.
The animals' workspace was formed by eight isovergence surfaces (Fig. 2B), each passing through the rotational centers of the two eyes and through all the potential targets subtending the same vergence angle, calculated by taking into account the interocular distance of the two monkeys (36 and 34 mm, respectively) and separated in depth by steps of 1° of vergence angle. The closest isovergence surface corresponded to 13° of vergence angle and the farthest to 6°. Accordingly, the closest fixation point (FP) was located at 158 mm and the farthest FP at 349 mm from the monkey (149 and 324 mm were the corresponding fixation distances for the 2nd monkey). The fixation-distance range studied was chosen in proximal space, roughly corresponding to the monkey's arm length, because that is the range over which the vergence angle is most informative as a distance cue. Rightward and upward rotations of the eyes were treated as positive. Vergence angle was computed as left eye position minus right eye position (convergence thus being positive). Conjugate eye position was calculated as the average of left and right eye positions. Calibration was obtained by requiring the monkeys to fixate sequentially five LEDs, one central and the others positioned at −10 and +10° along both the vertical and the horizontal axes of the isovergence surface corresponding to 10° of vergence angle. During calibration, the center LED was aligned with the eye being calibrated while the other eye was kept covered. The same procedure was then repeated for the other eye.
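The relation between vergence angle, interocular distance, and fixation distance described above follows from simple triangulation. A minimal sketch (assuming a midline fixation point and distance measured from the interocular baseline; the function name is illustrative):

```python
import math

def fixation_distance_mm(vergence_deg, interocular_mm):
    """Distance to a binocularly fixated point on the midline.

    The two visual axes and the interocular baseline form an isosceles
    triangle, so d = (IOD / 2) / tan(vergence / 2)."""
    half_angle_rad = math.radians(vergence_deg) / 2.0
    return (interocular_mm / 2.0) / math.tan(half_angle_rad)

# Second monkey (interocular distance 34 mm): the 13 and 6 deg
# isovergence surfaces fall at roughly 149 and 324 mm, as in the text.
near = fixation_distance_mm(13, 34)
far = fixation_distance_mm(6, 34)
```

With the first monkey's 36-mm interocular distance, the same formula places the closest FP near the reported 158 mm.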
Monkeys were trained in a delayed saccade task with targets in three-dimensional (3-D) space. Each trial started with the presentation of a central FP and, after 500–800 ms of fixation, a peripheral visual target appeared and remained on until the end of the trial. The depth of the FP and peripheral targets varied between isovergence surfaces and from trial to trial. After an 800- to 1,200-ms delay period, the monkey was required to make a saccade to the target within 500 ms. A memory version of the task was interleaved in a block design; data obtained using this task will be part of a future report. Recording electrodes were advanced down the lateral bank of the IPS while the monkeys performed the visual delayed saccade task. During this preliminary exploration, eye movements were made to a two-dimensional array of targets at different elevations and eccentricities along the isovergence surface corresponding to 10° of vergence angle. Each isolated neuron was first studied in this task to determine the locus of maximal neural activity in the response field. Target eccentricity was tested from 10 to 20°; target elevation was tested from −20 to 20° in steps of 10°.
With the target centered in the response field, fixation-distance modulation was explored while the monkeys made conjugate eye movements within isovergence surfaces (Fig. 2B, left). Disparity modulation was explored while the monkeys made disjunctive eye movements (Fig. 2B, right). The FP was maintained at one locus while the targets were arranged at different relative positions in the 3-D workspace, producing either positive or negative differences between the initial and final vergence angles. The version component of the movement, corresponding to the mean eccentricity, was never modified (Fig. 2C). Conjugate and disjunctive eye-movement trials were intermingled within blocks. For most of the neurons tested, data were obtained during disjunctive eye movements that started from three fixation distances, corresponding to 6, 10, and 13° of vergence angle. In a subgroup of neurons, we obtained data from disjunctive eye movements starting from six fixation distances. For each condition tested, both conjugate and disjunctive eye movements were repeated for at least 5 trials (up to a maximum of 10). Monkeys were required to fixate within a 2°-diameter spherical window, controlled separately for each eye. An eye movement was considered successful if each eye landed within a 3°-diameter spherical window centered on the target. The accuracy and trial-to-trial consistency of the vergence behavior were evaluated by off-line inspection. More than 85% of the trials resulted in a vergence error of <0.3°. All trials with vergence errors >0.5° were excluded from further analysis.
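The eye-position conventions and the vergence-error screening described above can be sketched as follows (a simplified illustration; the function names and sample values are hypothetical):

```python
def decompose(left_deg, right_deg):
    """Split binocular horizontal eye-position samples into a vergence
    component (left minus right, convergence positive) and a conjugate
    component (the average of the two eyes)."""
    vergence = [l - r for l, r in zip(left_deg, right_deg)]
    conjugate = [(l + r) / 2.0 for l, r in zip(left_deg, right_deg)]
    return vergence, conjugate

def keep_trial(final_vergence_deg, required_vergence_deg, limit_deg=0.5):
    """Trials whose final vergence error exceeded 0.5 deg were excluded."""
    return abs(final_vergence_deg - required_vergence_deg) <= limit_deg

# Hypothetical samples: left eye ~5 deg right, right eye ~5 deg left
# gives ~10 deg of convergence and a near-zero conjugate position.
verg, conj = decompose([5.2, 5.1], [-4.9, -5.0])
```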
Rasters of neuronal discharges were aligned on specific behavioral events. Raw spike counts were used to measure neuronal activity during successive epochs of the tasks. Several scalar activity values were defined: "fixation activity," the mean discharge rate from 500 to 200 ms before target presentation; "visual activity," the mean discharge rate from 70 to 170 ms after appearance of the target; "delay activity," the mean discharge rate during the last 300 ms of the delay epoch, ending when the fixation point disappeared; and "presaccadic activity," the mean discharge rate during the 100 ms before saccade initiation. The analysis windows were chosen both from off-line inspection of the data (for example, the beginning of the visual activity period was derived from the average response latency observed) and to obtain activities as uncontaminated as possible by concurrent signals (for example, the end of the fixation period was excluded from the fixation activity because it coincided with the robot approaching the target position). Our initial database of LIP neurons (152/171 of the recorded neurons) was based on the presence of significant visual or visuomotor modulation in the task used for the initial definition of the response field. Visual and delay activities were regarded as significantly modulated if they differed statistically from the fixation activity. Presaccadic activity was considered significant if it was statistically different from the delay activity. In our sample, all the modulations observed corresponded to an increase of neural discharge.
We performed a Shapiro-Wilk test of normality on each data set. Within-sample comparisons used either paired Student's t-test or nonparametric Wilcoxon signed-rank tests. For group comparisons, we used either an ANOVA or the nonparametric Kruskal-Wallis ANOVA, depending on the sample normality. For all statistical tests, P < 0.05 was set as significance level.
Area LIP was identified physiologically in the IPS by the abundance of neurons with significant visual responses and saccade-related activity. The presence of delay activity maintained in the memory version of the delayed saccade task (Barash et al. 1991; Gnadt and Andersen 1988) was used as an additional criterion for locating area LIP. In the first animal, some of the penetrations (Fig. 1) were made using electrodes labeled with fluorescent dyes (DiCarlo et al. 1996). Subsequent standard histological procedures confirmed that the recorded neurons were located in the middle third of the lateral bank of the IPS. The second monkey is currently involved in a different experimental protocol, and thus the histological material is not yet available.
As described previously (Collewijn et al. 1988; Maxwell and King 1992), saccadic eye movements between isovergence targets produce a transient deviation from conjugacy with an initial divergence followed by a convergence that brings the eyes back toward the target. Our behavioral data confirm this observation (Fig. 3A). Despite these variations, we will refer to these eye movements as “conjugate” to differentiate them from the disjunctive eye movements that are produced when targets are located at a depth different from the FP (Fig. 3B). In these disjunctive eye movements, a vergence component is introduced to the versional component of eye movement to allow the correct refixation in depth (Carpenter 1988; Erkelens et al. 1989). The vergence component could be positive or negative and depends on the disparity of the stimulus.
During binocular fixation, the FP is ideally projected onto the center of the fovea in each eye. However, it is well known that even during the best attempt to fixate, the eyes are never stationary (Motter and Poggio 1984) and vergence errors can occur: the visual axes may intersect in front of or behind the fixation point, resulting in an eso or exo fixation disparity, respectively (Collewijn et al. 1988; Erkelens et al. 1989; Ogle 1967). Furthermore, a relation between fixation disparity and fixation distance has been described, with a clear pattern of increasing underconvergence (exo fixation disparity) for increasingly near fixation (Jaschinski 2001; Jaschinski-Kruza 1993). This pattern is evident in the data obtained in both of our monkeys, as shown in the examples of Fig. 4. Fixation disparity gradually becomes more positive for near fixation distances. In our data set, we always observed this linear trend, which we quantified by a linear regression through the single observations. The mean value of all the slopes obtained from significant (P < 0.05) linear regressions was 2.97 ± 1.1 (SD) min arc and indicates the average effect of fixation distance on fixation disparity. Because this effect slightly interferes with the fixation-distance modulation of the neural activity, both the vergence angle and the target disparity were adjusted accordingly in all of the analyses.
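The fixation-disparity correction described above amounts to a per-trial subtraction, with the trend quantified by a linear regression. A minimal sketch (ordinary least squares; the sample data are synthetic, not the monkeys' measurements):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    return mean_y - b * mean_x, b

def corrected_vergence(nominal_vergence_deg, fixation_disparity_deg):
    """The measured fixation disparity is subtracted from the nominal
    vergence angle before any neural analysis."""
    return nominal_vergence_deg - fixation_disparity_deg

# Synthetic fixation-disparity data that happen to lie on a line:
a, b = linear_fit([6, 8, 10, 13], [5.0, 6.0, 7.0, 8.5])  # a = 2.0, b = 0.5
```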
Neurons in the lateral intraparietal area are modulated both by fixation-distance changes and retinal disparity
One hundred and fifty-two neurons (116 from monkey B, 36 from monkey L) were analyzed while the monkeys were required to make conjugate eye movements along eight different isovergence surfaces. For the most part, we will report results obtained during the delay period because activity during this period should be related to visuomotor transformations (Andersen et al. 1990b, 1997). The results obtained from the analysis of the other two experimental epochs, visual and presaccadic, are reported in Table 1.
The neural activity of 45 (29.6%) of the LIP neurons studied was significantly modulated (ANOVA; P < 0.05) by changes in fixation distance, i.e., in vergence angle. Figure 5A shows a typical example of a neuron with fixation-distance modulation. The modulation is still evident after correction of the vergence values for fixation disparity (Fig. 5A, right). For this neuron, neural activity increased during the preparation of eye movements to a target located along an isovergence surface when the movement was made at closer fixation distances. For most of the fixation-distance-modulated neurons (40/45; 88.9%), the neural discharge showed a monotonic increase toward near or far fixation distances. More than two-thirds of these neurons (29/40; 72.5%) showed a near-space preference. A small number (5/45; 11.1%) of fixation-distance-modulated neurons discharged maximally at an intermediate fixation distance, and these were excluded from further analyses.
To determine whether a disparity signal could be found in neurons with fixation-distance-related modulation, all 152 neurons tested with conjugate eye movements were also tested during the execution of disjunctive eye movements. Fixation distance was held constant, and monkeys were required to execute both the corresponding conjugate eye movement and divergence and convergence eye movements. The accompanying vergence component corresponded to the retinal disparity of the targets and varied in steps of 1°. Figure 5B shows the resulting neural modulation for the same neuron as in Fig. 5A. This neuron preferred positive (crossed) disparities and was classified as a near neuron according to Poggio's classification (Poggio and Fischer 1977). Figure 5B, right (black squares), shows the modulation after correction for fixation disparity. Furthermore, this plot shows that the modulation observed for the fixation distance can be only partially explained by the retinal disparity modulation. In fact, the white diamonds in Fig. 5B show the discharge rate of the neuron when tested during conjugate eye movements, plotted against the corresponding average retinal disparity. It is evident from the plot that changes in retinal disparity can account for at most a small component of the changes in response rate.
At the population level, a significant modulation (ANOVA; P < 0.05) obtained by disparity changes was observed for 61.2% (93/152) of the neurons tested. Sixty-one neurons were classified as near and 32 as far.
Thirty-eight of the neurons modulated by fixation distance during conjugate eye movements were also modulated by disparity changes. Only seven neurons were sensitive exclusively to changes in fixation distance. All told, 65.8% (100/152) of the neurons studied were modulated by at least one of the two signals (Fig. 6).
Alignment of disparity and vergence related signals at single neuron level
The effect of fixation distance on neural activity was quantified in those neurons showing a linear trend by fitting the data to a linear model. A significant linear fit (P < 0.05) was found in 85% of these neurons (34/40), and most (32/34) passed the goodness-of-fit test for a linear model (P > 0.05). The model was fitted to the data after correction for fixation disparity: for each replication, the fixation disparity was calculated and then subtracted from the corresponding vergence angle value. Figure 7A shows the resulting regression line for the same neuron depicted in Fig. 5A. The slope value of 3.66 indicates that fixation distance modulates neural activity by 3.66 spikes/° of change in vergence angle. Similarly, we quantified the effect of disparity changes on neural activity by fitting the data to a cubic model. This model was used because we often observed that neurons discharged maximally at intermediate values of disparity within the range tested. The model was fitted to the true disparity after correction for the fixation disparity calculated for each single replication. To avoid the floor effect of zero neural activity at nonpreferred disparities, we fitted the model only to the data obtained in the preferred disparity range. Further, to maximize the range of disparities included in the index, we used the data obtained for movements starting from the limits of the workspace. For neurons preferring the positive disparity range, we used the data obtained from convergence eye movements starting from the isovergence surface corresponding to 6° of vergence angle (Fig. 7B), whereas for neurons preferring the negative disparity range, we used data obtained from divergence eye movements starting from the isovergence surface corresponding to 13° of vergence angle. A significant (P < 0.05) fit to the cubic model was found in 72% of the neurons (67/93) displaying a modulation for disparity changes.
From the tuning function of these neurons, we derived the value of neural activity corresponding to the maximum modulation and the corresponding disparity value. With these values, we calculated a ZERO INDEX = (DMAX − DZERO)/n, where DMAX is the neural activity at the maximum, DZERO is the neural activity at 0° of disparity, and n is the disparity value corresponding to the maximum of the tuning curve. The sign of n is positive for crossed disparities and negative for uncrossed disparities; as a consequence, a positive index refers to neurons preferring the positive disparity range and vice versa. This index quantifies the degree to which disparity changes can modulate neural activity in the preferred range. Figure 7C shows an example of the application of this index to the neuron of Fig. 7B. For this neuron, retinal disparity modulates the firing frequency by 6.57 spikes/° of change in retinal disparity. Figure 7, D–F, shows the corresponding indices obtained for a neuron modulated by negative disparities and preferring far fixation distances during conjugate eye movements. Figure 8 shows the distribution of the indices (slope and ZERO INDEX) obtained at the population level for the neurons significantly modulated (ANOVA; P < 0.05) by fixation distance (A) and by disparity (B), respectively. The median values of slope and ZERO INDEX, expressed in absolute values, were 2.7 and 4.4, respectively.
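The ZERO INDEX defined above is straightforward to compute. The sketch below uses hypothetical tuning values, chosen only to reproduce the 6.57 spikes/° reported for the neuron of Fig. 7C:

```python
def zero_index(d_max, d_zero, n_deg):
    """ZERO INDEX = (DMAX - DZERO) / n.

    d_max: activity at the tuning-curve maximum (spikes/s).
    d_zero: activity at 0 deg of disparity (spikes/s).
    n_deg: disparity at the maximum; positive for crossed,
           negative for uncrossed disparities."""
    return (d_max - d_zero) / n_deg

# Hypothetical near neuron: peak of 40 spikes/s at +2 deg (crossed)
# and 26.86 spikes/s at zero disparity -> index of 6.57 spikes/deg.
near_example = zero_index(40.0, 26.86, 2.0)

# A neuron peaking at an uncrossed disparity yields a negative index.
far_example = zero_index(30.0, 20.0, -2.0)
```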
For the group of neurons that were modulated by both signals (ANOVA) and displayed both a significant linear fit for fixation distance and a significant cubic fit for disparity (n = 32), we quantitatively explored the interaction effect by using the two modulation indices described above. What emerges is that the modulation observed during disjunctive movements directed to targets located at different disparities provides information consistent with the fixation-distance modulation observed during conjugate eye movements (Fig. 9A). In fact, the neurons that increased their discharge rates for near fixation distances during conjugate eye movements (positive slope in the linear regression) showed the strongest activity for crossed disparities (during convergence disjunctive eye movements), whereas the neurons that discharged preferentially for far fixation distances during conjugate eye movements (negative slope) showed the strongest activity for uncrossed disparities (during divergence disjunctive eye movements).
The same group of neurons, modulated by both signals, was used to further confirm that only a small component of our results can be explained by retinal disparity changes. For each neuron, we calculated, from the slope of the linear regression, the modulation obtained during conjugate eye movements over the range of fixation distances tested. For example, the neuron in Fig. 7A displays a modulation of 25.6 spikes over 7° of change in vergence angle (from 6 to 13°). This value was contrasted with the estimated maximal modulation attributable to fixation disparity changes. The estimate was obtained by fitting the data collected during disjunctive eye movements; from this fit, the modulation was calculated over a range of 0.5° of fixation disparity, the maximum value present in our data after the correction procedure applied to the vergence angle errors (see Behavioral tasks). As a result, using the fit of Fig. 7B, the same neuron displays a modulation of 2.1 spikes for a fixation disparity of 0.5°, a value 12 times smaller than the fixation-distance modulation. Figure 9B shows these two values contrasted for all the neurons. The fits for the disparity data presented in this figure were obtained using a fourth-order model to improve the estimate. It is evident that the modulation observed during changes in fixation distance is always stronger than the modulation that could arise from retinal disparity changes, even assuming the maximum range of fixation disparity.
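The comparison above is a simple back-of-the-envelope calculation; restated with the numbers quoted for the neuron of Figs. 7A and 7B:

```python
# Fixation-distance modulation (Fig. 7A): regression slope of
# 3.66 spikes/deg over the 7-deg vergence range tested.
fixation_distance_modulation = 3.66 * 7  # ~25.6 spikes

# Maximal modulation attributable to residual fixation disparity
# (Fig. 7B fit): 2.1 spikes over the 0.5-deg maximum error.
fixation_disparity_modulation = 2.1

# The fixation-distance effect is about 12 times larger.
ratio = fixation_distance_modulation / fixation_disparity_modulation
```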
Overall, our results provide evidence that neuronal activity in parietal area LIP is sensitive to both the vergence angle and the retinal disparity of the target and that these two signals, when present in the same neuron, tend to be aligned in an apparently functional way.
Integration of retinal disparity and vergence signals in LIP neurons. Contribution to egocentric representation of targets in 3-D space
If a neuron codes egocentric distance rather than disparity per se, one should expect a modulation of its disparity tuning with vergence. Figure 10A shows the effect of fixation-distance changes on the disparity value of targets located at the same retinal eccentricity but at different distances from the eyes. The disparity value of a stationary target in space becomes more positive as the vergence angle decreases. Figure 10B shows two hypothetical neurons with broad disparity tuning whose tuning shifts, when tested at different fixation distances, toward an egocentric coding of the targets. One neuron increases its activity with positive disparities (black lines), and the other increases its firing with negative disparities (gray lines). In each case, the shift is equal in magnitude, but opposite in sign, to the change in vergence angle. This relationship ensures that the discharge rate remains matched to the unchanged egocentric distance of the target. The overall effect, a positive shift, is similar for both (convergence and divergence) idealized neurons. Coding of egocentric distance is confirmed by the perfect overlap of the curves obtained at the different fixation distances when the same data are replotted as a function of the distance between the targets and the subject (Fig. 10C).
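The Fig. 10 idealization can be made concrete: if the tuning shifts one-for-one (with opposite sign) with vergence, the response depends only on disparity plus vergence, i.e., on the vergence angle the target itself subtends, which fixes its egocentric distance. A sketch (the Gaussian tuning shape and the 9° preference are illustrative assumptions, not fits to the data):

```python
import math

def ideal_response(disparity_deg, vergence_deg, preferred_target_verg=9.0):
    """Idealized neuron whose disparity tuning shifts exactly opposite
    to vergence: activity depends only on disparity + vergence, the
    vergence angle subtended by the target itself."""
    target_vergence_deg = disparity_deg + vergence_deg
    return 50.0 * math.exp(-((target_vergence_deg - preferred_target_verg) ** 2) / 8.0)

# The same physical target (9 deg of target vergence) seen from a near
# fixation (13 deg, uncrossed disparity) and a far fixation (6 deg,
# crossed disparity) evokes the same response:
r_near_fixation = ideal_response(disparity_deg=-4.0, vergence_deg=13.0)
r_far_fixation = ideal_response(disparity_deg=3.0, vergence_deg=6.0)
```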
Using the preceding idealized model, we tested the interaction between disparity- and vergence-related signals for those LIP neurons (n = 51/152; 33.6%) with sufficient data: at least five replications during the execution of disjunctive eye movements made from six different fixation distances. As shown by the three example neurons of Fig. 11, the disparity-related signals were modulated by changes in fixation distance. Both the vergence angle and the disparity values were obtained by taking the fixation disparity of each single data point into account. The strength of the resulting interaction differed across neurons; in a sense, the neurons differed from one another in their capacity to contribute to the computation of the egocentric distance of the targets. The first convergence neuron (Fig. 11A) showed a clear shift in disparity tuning, producing well-separated curves for the different fixation distances. The effect is less evident in both the other convergence neuron (Fig. 11B) and the divergence neuron (Fig. 11C).
As evident from the data shown in Fig. 11, when evaluating the effect of the vergence angle on the disparity tuning, it was often difficult to discriminate the contribution of a horizontal shift effect from that of a gain effect. Our first approach to this issue was to compare the results obtained by fitting the data from those neurons showing a significant modulation (ANOVA) for disparity at one or more fixation distances (n = 38/51; 74.5%) to two different models

Z = f(x + s·y)  (1)

Z = [1 + g·y]·f(x)  (2)

In both models, Z is the firing rate, x is the disparity, y is the vergence angle, and f is the disparity-tuning function; in Eq. 1, s is the parameter that accounts for the horizontal shift; in Eq. 2, g is the parameter that accounts for the gain effect. To compare the results, we used the adjusted R2 (R2a), which balances the cost of using and comparing models with different numbers of parameters

R2a = 1 − (1 − R2)(n − 1)/(n − k)  (3)

where k is the number of regression parameters in the model and n is the number of data points. Therefore R2a is always smaller than R2. Figure 12A shows the results for the 19 disparity-modulated neurons with a value of R2a > 0.5 in at least one of the two models. The prevalence of higher R2a values obtained using the shift model suggests a stronger shift effect, which needed further confirmation. To this end, a different approach was to fit the data to a "composite" model

Z = [1 + g·y]·f(x + s·y)  (4)
This model combines the two previous models expressed in Eqs. 1 and 2 and estimates the relative contributions of the parameters s and g in determining the regression accuracy. Figure 12B shows the distribution of the P values obtained for s and g for the same group of neurons as in Fig. 12A. The smaller the P value for a parameter, the stronger the evidence that the parameter differs from zero; conversely, a parameter with a large P value (e.g., P = 0.95) shows no evidence of differing from zero and can be removed from the model without affecting the regression accuracy. Figure 12B shows that the P value for the parameter s is always <0.05, whereas the same is true for the parameter g in only two neurons. We therefore concluded that even if a gain effect can partially explain the effect of changes in vergence angle on the disparity tuning, the principal effect is a shift.
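The adjusted R2 used to compare the shift and gain fits penalizes extra parameters; a minimal sketch (the R2 values and parameter counts below are illustrative, not from the study):

```python
def adjusted_r2(r2, n_points, k_params):
    """Adjusted R2 = 1 - (1 - R2)(n - 1)/(n - k), where n is the number
    of data points and k the number of regression parameters; for k > 1
    it is always smaller than the raw R2."""
    return 1.0 - (1.0 - r2) * (n_points - 1) / (n_points - k_params)

# With the same raw R2, the model with fewer parameters scores higher:
shift_score = adjusted_r2(0.80, n_points=30, k_params=5)  # ~0.768
gain_score = adjusted_r2(0.80, n_points=30, k_params=6)   # ~0.758
```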
To explore the predominant shift effect quantitatively at the population level, and taking into account the observation that disparity tuning in our data was better described by a cubic model, the same data were fitted to the model

Z = a + b(x + s·y) + c(x + s·y)^2 + d(x + s·y)^3  (5)
Equation 5 amounts to a linear interaction between a cubic model for the disparity modulation and a linear model for the fixation-distance modulation. Z is the firing rate, x is the disparity, y is the vergence angle, and the parameter s is the interaction term between the cubic and linear models. Parameter s is a "shift parameter" related to the linear shift of the disparity tuning as a function of the vergence angle. When s = 0, the firing rate does not depend on the vergence angle, and thus there is no shift as the vergence angle is varied. When s = 1, the disparity tuning shifts 1° toward positive disparities for each 1° decrease in vergence angle.
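A cubic-with-shift model of this kind can be fitted, for example, by searching over the shift parameter s and solving the cubic coefficients by least squares at each candidate value. This sketch is not the fitting procedure used in the study; it only illustrates that s is recoverable, here from synthetic data generated with s = 0.4 (the population median reported below):

```python
import numpy as np

def fit_shift_model(x, y, z, s_grid=None):
    """Fit Z = a + b*u + c*u^2 + d*u^3 with u = x + s*y by grid search
    over s; for each candidate s the polynomial coefficients are
    obtained by ordinary least squares."""
    if s_grid is None:
        s_grid = np.linspace(-1.0, 2.0, 301)  # step of 0.01
    best = None
    for s in s_grid:
        u = x + s * y
        coeffs = np.polyfit(u, z, 3)
        sse = float(np.sum((z - np.polyval(coeffs, u)) ** 2))
        if best is None or sse < best[0]:
            best = (sse, s, coeffs)
    return best[1], best[2]

# Synthetic neuron: disparity x and vergence y in experiment-like
# ranges, firing rate depending only on u = x + 0.4*y.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)   # disparity, deg
y = rng.uniform(6, 13, 200)   # vergence angle, deg
u = x + 0.4 * y
z = 20.0 + 5.0 * u - 0.4 * u ** 2
s_hat, _ = fit_shift_model(x, y, z)  # s_hat ~ 0.4
```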
Figure 13 (A–C) shows 3-D plots of the model for the activity obtained during the delay period for the same neurons and data depicted in Fig. 11. The amount of shift is graphically represented by the displacement of the isodischarge color regions of the plot toward more positive disparity values as the vergence angle decreases. The figure shows that the shift was positive (i.e., toward positive disparities) for both the convergence and the divergence neurons.
Figure 14A shows the distribution of the significant shift parameters s (P < 0.05) obtained from the regression model applied to the population of disparity-modulated neurons (28/38; 73.7%). Most of these neurons display a positive shift value, with a median of 0.4. In other words, as a result of this partial shift, these neurons "compensate" for ∼3°, on average, when the vergence angle changes by 7°. Finally, we compared the regression model for the data obtained from neurons that are disparity modulated in both the visual and the delay epochs (35/51). For those neurons with a significant parameter s (P < 0.05) during the visual period (23/35; 65.7%), the median value was 0.42, not significantly different from that obtained for the delay epoch (t-test; P = 0.19). The only difference emerged when comparing the R2 of the model (Fig. 14B), indicating that the interaction between disparity- and fixation-distance-related signals is better expressed in the delay epoch (t-test; P < 0.05). This suggests a major role for this epoch in the neural processing of visuomotor transformations.
The colinearity between disparity sensitivity and vergence angle-related modulation observed in the first experiment is still evident in the color plots. Indeed, the isodischarge regions of the convergence neurons (Fig. 13, A and B) reveal how the shift reduces the firing rate for isovergence movements, corresponding to 0° on the disparity axis, as the vergence angle decreases. The opposite effect is equally clear for the divergence neuron (Fig. 13C).
What initially appears to be a gain modulation effect based on the vergence-related modulation during conjugate eye movements is, in fact, a shift effect when several disparities are considered together. In the range of disparity tested in the present experiment, this shift never changes the disparity-selective category (near vs. far). The two populations remain segregated with a group of “near neurons” computing the visuomotor transformations for the closer part of the extrapersonal space and a group of “far neurons” working similarly for the targets farther out.
The present study is the first to demonstrate that individual LIP neurons combine visual disparity signals with fixation-distance information (the vergence angle) in a manner that can be used to determine the 3-D egocentric distance of an object in space. This work supports the hypothesis that the CNS combines retinal information about target position with extraretinal information about the position of the eyes in the head, and it supports the theory that these computations yield the position of visual targets in head-centered coordinates (Zipser and Andersen 1988). Our results show that fixation distance modulates only a subpopulation (38/93; 40.9%) of the disparity-related neurons. The remaining disparity-related neurons are presumably involved in the eye-centered representations used in the PPC (for a review, see Andersen et al. 1997). Previous studies in area LIP (Brotchie et al. 1995), as well as evidence from humans (Brotchie et al. 2003), show that LIP integrates visual signals with head-position signals. Thus LIP is likely to contain multiple coordinate systems, and our results support rather than conflict with this view.
Although it is clear that LIP has a visual representation, it has been difficult to distinguish experimentally between movement planning and attention. One hypothesis is that LIP codes the intention to make a saccade to a stimulus (Andersen et al. 1997; Snyder et al. 1997); the other is that it provides a map of salient stimuli in the visual world without specifying any motor plan (Colby and Goldberg 1999; Gottlieb et al. 1998; Powell and Goldberg 2000). The strongest evidence for a role of LIP as part of the system for planning eye movements is the greater delay-period activity of LIP neurons for upcoming eye movements than for simultaneously generated arm movements to a different location (Snyder et al. 1997). In addition, the presence of neurons in LIP that show nonspatial saccade-specific activation (Dickinson et al. 2003) challenges the view that LIP is involved only in representing salient spatial locations. In that study, in addition to cells with effector-specific activation, there was another group of cells showing spatially selective responses that could be thought of as attentional responses or as default plans for eye movements. Therefore it seems unlikely that fully formed movement plans are specified in LIP; instead, the intended-movement activity more likely represents early plans at intermediate stages of the sensorimotor transformations.
Our tasks were not designed to distinguish whether the activity during the delay period is related to attention to the target or to a motor plan, because visual attention is pinned to the spatial location of the eye movement target. Nevertheless, it is generally accepted that activity in the posterior parietal cortex is related to sensory-motor transformations (Andersen et al. 1990b, 1997; Zhang and Barash 2000). What is not clear is whether the representation of extrapersonal space could subserve a range of behaviors not limited to saccadic eye movements, including selective visual processing through its connections with visual areas (Baizer et al. 1991). In either case, LIP must integrate stimuli across modalities and provide to the areas to which it projects its analysis of 3-D extrapersonal space and/or an early eye movement plan.
Relationship to previous studies
Previous studies have demonstrated vergence angle modulation of PPC neurons in both humans (Hasebe et al. 1999; Kapoula et al. 2001) and monkeys (Gnadt 1992). Our data show that the majority (72.5%) of the vergence angle-modulated neurons prefer near target locations and that their activity increases gradually as the fixation distance decreases. This agrees with earlier recordings in PPC (area 7a) by Sakata and colleagues during visual fixation and visual-tracking tasks (Sakata et al. 1980, 1985). Distance modulation of visual-related activity has also been found in V1 (Gonzalez and Perez 1998; Trotter et al. 1996) and in areas V2 and V4 (Dobbins et al. 1998; Rosenbluth and Allman 2002). The influence of extraretinal signals on neuronal activity has often been described as a "gain field effect" (Andersen et al. 1990b; Bremmer et al. 1997a; Salinas and Thier 2000). An interaction between the disparity signal and the fixation distance, in the form of a gain effect, has previously been observed in area V1 (Gonzalez and Perez 1998; Trotter et al. 1996) and in a small number of the neurons tested in area MT (Roy et al. 1992). In none of these prior studies, however, was the monkeys' vergence angle measured. A control experiment using prisms (Trotter et al. 1996) supported the involvement of the vergence angle in the fixation-distance modulation of area V1. Vergence angle was measured by Cumming and Parker (1999), who found very little effect of the vergence angle on the disparity tuning in area V1. It should be noted that Cumming and Parker explored a range of fixation distances (50–175 cm), corresponding to changes in vergence of 1–3.5°, different from the range tested by Trotter and colleagues (2–10°), who found the maximal effect at 20 cm.
Accordingly, we decided to explore the influence of vergence on neural activity in proximal space (15–35 cm from the monkeys' eyes, corresponding to 6–13° of vergence angle), where the vergence system exerts most of its influence.
In the first experiment, we found a strong spatial colinearity between the disparity category (near vs. far) and the modulation exerted by extraretinal signals. Neurons discharging preferentially during divergence eye movements to targets located at negative disparities displayed a parallel positive linear trend for increasing fixation distances when the targets were kept at constant disparity; neurons preferring convergence movements displayed exactly the opposite trend. In area LIP, a similar spatial colinearity of different signals has been found during smooth pursuit, between the preferred direction of movement and the eye-position gain field (Bremmer et al. 1997a). The same authors observed no such alignment in areas MT and MST (Bremmer et al. 1997b). In contrast, an alignment between retinal signals and eye-position signals has been found in area V3a (Galletti and Battaglini 1989), which, like MST and MT, projects to LIP (Blatt et al. 1990). The importance of spatial colinearity in the process of coordinate transformations has been validated by computational studies (Xing et al. 2000). Comparing different models, the authors found a prevalence of colinearity between receptive and gain fields only when the neural networks were trained to perform transformations from eye-centered to body-centered coordinates. In the hidden units of the models presented by Xing and colleagues (2000), the colinearity was associated with partial shifts of the receptive fields after changes in eye position. A similar colinearity of signals and a similar partial shift were evident in the disparity tuning of the LIP neurons of our study during vergence angle changes.
In summary, the unit properties we have demonstrated here fit well with both prior recording studies and the theoretical framework surrounding the role of LIP in spatial mapping.
Partial shift modulation of retinal disparity
When the disparity signal is kept constant, i.e., during the execution of conjugate eye movements to the preferred eccentricity, the modulation of neural activity that emerges with changes of vergence angle is similar to the previously described gain modulation. When the two signals were both varied, a proportion of LIP neurons displayed a disparity modulation that was not fixed but showed a partial shift as a function of the fixation-distance changes. This is the behavior predicted for neurons using a coding intermediate between an eye-centered and an egocentric coordinate system. Different computational models (Mitchel and Zipser 2001; Pouget and Sejnowski 1994) indicate how this partial shift could be used for the egocentric distance calculation. A preliminary report on LIP neurons (Gnadt and Mays 1991) described the gain of the response, but not the disparity tuning, as modulated as a function of the vergence angle. Further, Lehky et al. (1990) showed that a neural network trained to determine distance develops gain fields for vergence and disparity (see also Pouget and Sejnowski 1994). Our data show that both effects are present in area LIP, with a prevalence of the shift effect. It is difficult to establish the relative contribution of the two effects because these neurons display broad disparity tuning; the widths of the tuning curves increase as the best disparities of the response peaks increase. Pouget et al. (2002) showed that partially shifting receptive fields emerge in a recurrent network designed to compute bidirectional coordinate transformations between eye- and head-centered coordinates. They argue that a prerequisite for the appearance of a partial shift is the convergence, in a multimodal area, of at least two inputs in distinct frames of reference.
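The distinction between a pure gain effect and a partial shift of the tuning can be made concrete with two toy tuning curves: under a gain effect the vergence angle scales the response amplitude while the preferred disparity stays fixed, whereas under a shift the preferred disparity itself moves. The Gaussian shape, parameter values, and reference vergence below are illustrative assumptions only, not the recorded tuning profiles.

```python
import numpy as np

def tuning(disparity, vergence, mode, s=0.4, g=0.1, v0=9.5):
    """Toy disparity tuning under two forms of vergence interaction.

    mode='gain' : vergence scales the amplitude; the peak stays at 0 deg.
    mode='shift': vergence translates the curve by s deg of disparity per
                  deg of vergence, toward positive disparity as vergence
                  decreases (the sign convention described for parameter s).
    """
    if mode == "gain":
        peak = 0.0
        amp = 1.0 + g * (vergence - v0)    # linear gain field
    else:
        peak = -s * (vergence - v0)        # partial shift of the peak
        amp = 1.0
    return amp * np.exp(-((disparity - peak) ** 2) / 2.0)

x = np.linspace(-3.0, 3.0, 601)            # disparity axis (deg)
for v in (6.0, 13.0):
    gain_peak = x[np.argmax(tuning(x, v, "gain"))]
    shift_peak = x[np.argmax(tuning(x, v, "shift"))]
    print(f"vergence {v:>4}: gain-model peak {gain_peak:+.2f} deg, "
          f"shift-model peak {shift_peak:+.2f} deg")
```

With s = 0.4, a 7° decrease in vergence moves the shift-model peak by 2.8° toward positive disparity, while the gain-model peak never moves; this is why the two effects can only be separated when several disparities are tested at each vergence angle.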
Partially shifting receptive fields would be a consequence of a neural network that can perform multidirectional computations such as comparisons among sensory modalities, sensorimotor transformations, and inverse transformations from motor output to sensory coordinates (Pouget et al. 2002). LIP, as a site of convergence of visual and auditory signals, with the latter encoded in both eye- and head-centered coordinates (Stricanne et al. 1996), would offer the fundamental conditions for the appearance of a partial shift. It remains to be verified whether the neurons showing a partial shift are well suited to the alignment of auditory and visual maps. Such integration might be possible because auditory responses in area LIP have been shown to be associated with visual responses in the same neurons (Grunewald et al. 1999; Linden et al. 1999). Other examples of intermediate coding between coordinate systems that appear as partial shifts have been reported in multimodal areas outside of LIP: the ventral intraparietal area (Duhamel et al. 1997), the ventral premotor cortex (Graziano et al. 1997), the parietal reach region (Cohen and Andersen 2000), and the SC (Hartline et al. 1995; Jay and Sparks 1987).
Our results strongly suggest that LIP participates in the encoding of egocentric visual spatial information through a distributed network of broadly tuned disparity neurons that display a partial shift in their tuning in the direction opposite to the changes in vergence angle. These neurons do not change their disparity-category preference because they contribute to the computation of different portions of the extrapersonal space (near vs. far). In this model, intermediate coding contributes to the representation of space in different coordinate frames, eye- or ego-centered (Snyder et al. 1998). The resulting computation is available to all motor systems receiving posterior parietal input, both directly from LIP (Schall 1997) and indirectly through LIP parieto-parietal connections (Andersen et al. 1990a; Blatt et al. 1990; Cavada and Goldman-Rakic 1989; Colby et al. 1988; Nakamura et al. 2001). For example, connections between the parietal grasp-related area AIP and area LIP (Nakamura et al. 2001) may contribute to the computation of target location relative to the hand. Just as importantly, LIP connections with the SC (Ferraina et al. 2002; Gnadt and Beyer 1998; Paré and Wurtz 1997) could supply a body-centered representation of visual targets used by the gaze-independent reaching-related neurons in the SC (Stuphorn et al. 2000).
We are grateful to R. Caminiti for support throughout this research; to R. H. Wurtz, M. Paré, and Y. Trotter for critical comments on a preliminary draft of this manuscript; and to A. R. Mitz for aid with editing.
This study was partially supported by funds from the Ministry of Scientific and Technological Research of Italy.
The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
- Copyright © 2004 by the American Physiological Society