Journal of Neurophysiology


Two hands, one perception: how bimanual haptic information is combined by the brain

Valentina Squeri, Alessandra Sciutti, Monica Gori, Lorenzo Masia, Giulio Sandini, Juergen Konczak


Humans routinely use both of their hands to gather information about the shape and texture of objects. Yet, how the brain combines haptic information from the two hands to achieve a unified percept remains unclear. This study systematically measured the haptic precision of humans exploring a virtual curved object contour with one or both hands to determine whether the brain integrates haptic information from the two hemispheres. Bayesian perception theory predicts that redundant information from both hands should improve haptic estimates. Thus exploring an object with two hands should yield haptic precision that is superior to unimanual exploration. A bimanual robotic manipulandum passively moved the hands of 20 blindfolded, right-handed adult participants along virtual curved contours. Subjects indicated which of two stimuli of different curvature was more “curved” (forced choice). Contours were explored uni- or bimanually at two orientations (toward or away from the body midline). Respective psychophysical discrimination thresholds were computed. First, subjects showed a tendency for one hand to be more sensitive than the other, with most subjects exhibiting a left-hand bias. Second, bimanual thresholds were mostly within the range of the corresponding unimanual thresholds and were not predicted by a maximum-likelihood estimation (MLE) model. Third, bimanual curvature perception tended to be biased toward the motorically dominant hand, not toward the haptically more sensitive left hand. Two-handed exploration did not necessarily improve haptic sensitivity. We found no evidence that haptic information from both hands is integrated via an MLE mechanism. Rather, the results are indicative of a process of “sensory selection”, in which information from the dominant right hand is used, although the left, nondominant hand may yield more precise haptic estimates.

  • handedness
  • human
  • sensorimotor
  • sensory integration

humans routinely use their hands to haptically explore objects in the environment. In many cases, both hands are used to gain information about the properties of the object. We know that haptic sensing requires the integration of spatially disparate sensory signals from cutaneous afferents in the digits with proprioceptive signals of the arm (a process of intersensory integration). However, we know very little about how the brain combines the haptic information of our two hands to achieve a single percept of an object and about the underlying mechanism of how the nervous system integrates or fuses information from two haptic systems.

To investigate issues of haptic sensing in a controlled experimental setting, virtual force environments have been used to present haptic stimuli (Chib et al. 2006; Fasse et al. 2000; Henriques and Soechting 2005; Hogan et al. 1990). In these studies, subjects usually grasped the handle or stylus of a robotically controlled manipulandum, which generated appropriate boundary forces resembling the surfaces of virtual objects. The advantage of this technique is that object shape and boundary stiffness can be modified from trial to trial (Chib et al. 2006; Henriques and Soechting 2003), shapes that are not realizable physically can be simulated (Fasse et al. 2000), and subjects' sensitivities can be computed with great precision (Henriques and Soechting 2005). Whereas a number of these psychophysical studies have investigated unimanual curvature discrimination (Louw et al. 2000; Pont et al. 1997, 1998, 1999; van der Horst and Kappers 2008; Wheat and Goodwin 2001), scant attention has been paid to bimanual curvature discrimination, because the technology for bimanual haptic environments has just recently become available.

Evidence on unimanual visuo-haptic integration showed that vision and the haptic information from one hand are integrated optimally. That is, the nervous system combines visual and haptic information in a fashion that is similar to a maximum-likelihood integrator and that redundant information from vision and hand improves the perceptual estimate (Ernst and Banks 2002). Studies on shape perception (Helbig and Ernst 2007), hand position sense (van Beers et al. 1996), and path trajectory integration (Reuschel et al. 2010) showed results consistent with a maximum-likelihood estimation (MLE) model. However, recent work on reaching and multimodal asynchronous curvature detection was not in agreement with a MLE model of visual-proprioceptive integration (Jones and Henriques 2010; Reuschel et al. 2011; Winges et al. 2010).

The extension of the MLE claim to bimanual haptic exploration would imply that two hands are better than one hand when it comes to haptic precision, because information from two hands provides added sensory redundancy. Yet, with respect to bimanual haptic perception, the evidence for or against a sensory redundancy claim is inconclusive. One report showed that unimanual haptic discrimination of curved surfaces was superior to bimanual discrimination (Kappers and Koenderink 1996), although this apparent unimanual advantage seemed to be influenced by the stimulus order and prior experience. A subsequent investigation could not confirm a unimanual advantage, instead showing that during active exploration, bimanual thresholds were in the same range as unimanual thresholds (Sanders and Kappers 2006). Other recent evidence (van der Horst et al. 2008) suggests that information is shared between the hands (i.e., between two haptic systems). This study documented that a haptic after-effect for curvature observed in one hand may transfer to the other, indicating either a shared perceptual representation or an interaction between separate but similar haptic systems.

The aim of this study was to systematically examine how bimanual haptic information about an object contour affects haptic precision and whether there is any evidence of interhemispheric haptic integration resembling a process of MLE. Specifically, we studied the sensitivity of human participants to discriminate between virtual curved object contours when one or two hands were being moved along this contour by a robot. We chose passive motion for two reasons. First, passive movements constrained by the robot assured tighter experimental control, with both hands performing simultaneous and synchronous displacements with the same speed profile. Thus any possible differences in perceptual sensitivity between unimanual and bimanual conditions cannot be explained by differences in the motor performance of the two hands. Second, previous work (Sciutti et al. 2009) had shown that active and passive exploration of object curvature do not yield different haptic thresholds in adults. That is, active exploration is not necessarily superior to passive exploration in this task. If we found that combining the perception of the two hands improves haptic sensitivity, this would be evidence not only that haptic information can be transferred across body hemispheres but also that the brain uses a process of intermanual integration for haptic perception.



Twenty right-handed subjects (12 males and eight females, age = 25.3 ± 3.7 yr), with no known neuromuscular disorders and naïve to the task, participated in the experiment. All of them completed the Edinburgh Handedness Questionnaire (Oldfield 1971) and were ranked accordingly. All subjects revealed a laterality index of >30 on a (−100, 100) scale (−100: completely left-handed; 100: completely right-handed), indicating that they were right-hand dominant. All participants gave their informed consent prior to testing. The study was approved by the local ethics committee, Comitato Etico of the ASL3 of Genova (Italy).


The experimental setup was a bimanual 2 degrees-of-freedom planar manipulandum (see Fig. 1A) with a large elliptical workspace (80 × 40 cm). The mechanical structure consisted of a very rigid parallelogram mechanism powered by two direct-drive brushless motors (Casadio et al. 2006). The manipulandum had low intrinsic mechanical impedance at the end-effector (inertia <1 kg; negligible viscosity and friction). It provided a high level of back-drivability and good isotropy (manipulability index = 0.23 ± 0.02; force/torque ratio = 2.21 ± 0.19 N/Nm), with a large available force level at the handle (continuous force: 50 N; peak force: 200 N), allowing subjects to experience a wide range of haptic stimuli. The controller consisted of three nested loops: a 16-kHz current loop, a 1-kHz impedance control loop, and a 100-Hz virtual reality loop. The software environment was based on RT-LAB and Simulink.

Fig. 1.

Experimental setup and psychometric analysis. A: subject during bimanual exploration. B: dimensions of the curved virtual contour/path. The arc is characterized by a starting point (P1), an end point (P2), and a middle point (P3). The standard curvature is expressed in terms of maximum lateral deviation (LD) from a straight line and was set to LD = 40 mm. The length (L) was kept constant at 200 mm. The maximum value of the curvature deviation was 100 mm. C: example of a psychometric function for haptic curvature perception. The ordinate values indicate the probability of perceiving the comparison stimulus as more curved than the standard, and the abscissa represents the differences in LD between probe and standard. The gray points represent the proportion of responses in which the comparison stimulus has been perceived as more curved than the standard one for a specific range of stimuli; to obtain a psychometric curve, a cumulative Gaussian function was used to fit the data.


Subjects sat comfortably on a chair in front of the robot manipulandum (see Fig. 1A). The sagittal midline of the body was aligned to the central position of the robot workspace. To restrict upper-body motion, the trunk was strapped to the seat by belts. The center of the robot workspace was adjusted so that subjects assumed initial sagittal joint angles of ∼90° for the elbow and 45° for the shoulder. Seat position, with respect to the manipulandum, was adjusted so that the maximal arm displacement in the sagittal plane during testing did not exceed 80% of the individual's arm length.

The task required subjects to grasp the handles and to passively sense virtual curved contours (see Fig. 1B and Fig. 2), as if exploring the smooth surface of a round object. During unimanual exploration, only one of the two hands (right or left) explored the virtual contours that were either curved toward (medial) or away (lateral) from the body midline, resulting in four unimanual conditions (see Fig. 2A): lateral right (RLAT), medial right (RMED), lateral left (LLAT), and medial left (LMED). During bimanual exploration, both hands either explored two contours that had the same or the opposite orientation (e.g., medial-medial or medial-lateral), or a single contour was explored by grasping the right handle of the manipulandum with both hands. This resulted in four bimanual-uncoupled conditions (LLATRLAT, LMEDRMED, LLATRMED, LMEDRLAT; Fig. 2B) and two bimanual-coupled conditions (CLLATRMED and CLMEDRLAT; Fig. 2C). These coupled conditions allowed us to discern if coupling the two hands to one unit yields a haptic sensitivity that is different from that obtained with both hands exploring the same contour synchronously but at separate workspace locations. Figure 2 provides an overview of all 10 conditions. All subjects completed the four unimanual and the four bimanual-uncoupled conditions. Ten participants also completed the two additional bimanual-coupled conditions.

Fig. 2.

Overview of experimental conditions. A: unimanual; B: bimanual-uncoupled; and C: bimanual-coupled configurations. RLAT, lateral right; RMED, medial right; LLAT, lateral left; and LMED, medial left; CLLATRMED and CLMEDRLAT, bimanual-coupled.

During bimanual exploration, the distance between the hands at the starting position was either shoulder width (N = 7) or close to the body midline (N = 13). In the latter case, the starting positions varied between 3.5 and 13.5 cm lateral to the body midline, chosen to prevent the two robot handles from colliding during movement. All subjects tested in the bimanual-coupled conditions had a starting distance of 7 cm. Preliminary analysis of the threshold data showed no effect of starting position on haptic acuity. We therefore pooled the data for further analysis.

Vision was occluded throughout the experiment, so the contour could only be sensed haptically. During each trial, participants were randomly presented with a sequence of two haptic stimuli in a two-interval, forced-choice procedure, separated by a 500-ms interstimulus interval. Subjects were required to discriminate between the presentation of one fixed (standard) and one variable curvature (comparison). After each trial, the participant indicated verbally which stimulus (contour) was more curved. Based on this judgment, the curvature of the virtual wall was adjusted in the subsequent trial using an adaptive procedure (QUEST algorithm; see Watson and Pelli 1983). The adaptive procedure assured that the sequence of curvature values converged to the threshold almost monotonically for all conditions. Each trial was initiated by the experimenter by pressing a button. Time between trials was kept variable (∼4–10 s), so that subjects could not predict the onset of the subsequent trial.
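The convergence behavior of such an adaptive procedure can be sketched with a minimal Bayesian staircase in the spirit of QUEST. This is an illustration only, not the authors' implementation: the threshold grid, the fixed slope, and the simple cumulative-Gaussian response model are our assumptions.

```python
import numpy as np
from scipy.stats import norm

class BayesianStaircase:
    """Minimal Bayesian adaptive threshold estimator in the spirit of QUEST
    (Watson and Pelli 1983). Illustrative only: grid range, slope, and the
    yes/no psychometric model are assumptions, not the authors' settings."""

    def __init__(self, grid=None, slope=4.0):
        self.grid = np.linspace(0.5, 30.0, 300) if grid is None else grid
        self.log_post = np.zeros_like(self.grid)  # flat prior over thresholds
        self.slope = slope                        # assumed psychometric slope (mm)

    def next_stimulus(self):
        """Place the next comparison at the current posterior mean."""
        post = np.exp(self.log_post - self.log_post.max())
        post /= post.sum()
        return float(np.dot(post, self.grid))

    def update(self, stimulus, judged_more_curved):
        """Bayesian update of the threshold posterior after one trial."""
        # P("more curved") at this stimulus for every candidate threshold
        p = norm.cdf((stimulus - self.grid) / self.slope)
        p = np.clip(p, 1e-6, 1 - 1e-6)
        self.log_post += np.log(p if judged_more_curved else 1 - p)
```

Because each stimulus is placed at the current posterior mean, the sequence of presented curvatures homes in on the threshold almost monotonically, which is the convergence behavior described above.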

The standard stimulus was characterized by a lateral deviation (LD) of 40 mm, corresponding to a curvature of 6.9 m−1 (see Fig. 1B; for further details about stimulus presentation, see Design and measurements). The linear path distance for all stimuli was 200 mm. The starting position of the manipulandum was identical for all trials. Before data collection, participants underwent a familiarization phase, in which they experienced the haptic forces by performing 15 curvature explorations in one of the experimental conditions. In general, the end-effector moved along an arc, starting from point P1, passing through P3, and ending at P2 and then returning to P1 (Fig. 1B). The motion law was

$$\begin{cases} X_{\mathrm{EE}}(t) = X_C + R\cos\bigl(\pi + \vartheta\cos(2\pi t/T)\bigr) \\ Y_{\mathrm{EE}}(t) = Y_C + R\sin\bigl(\pi + \vartheta\cos(2\pi t/T)\bigr) \end{cases} \tag{1}$$

where $X_{\mathrm{EE}}$ and $Y_{\mathrm{EE}}$ are the moving end-effector coordinates, $X_C$ and $Y_C$ are the coordinates of the circle center, $R$ is the radius of the circle, $\vartheta$ is the central angle of the arc, $T$ is the duration of the entire movement ($T$ = 3 s), and $t$ is the elapsed movement time. The force provided by the robot had two components: an attractive force field, which is the haptic representation of the virtual target, and a viscous force field for the stabilization of the arm, according to the following equation

$$\mathbf{F} = K\,\lvert X_T - X_H\rvert\,\frac{X_T - X_H}{\lvert X_T - X_H\rvert} + \begin{bmatrix} B & 0 \\ 0 & B \end{bmatrix}\dot{X}_H \tag{2}$$

where $X_H$ and $X_T$ are the vectors of hand coordinates and target position, $B$ is a viscous coefficient (10 N·s/m), and $K$ is the stiffness coefficient (60 N/m). This control allowed for the generation of a stereotyped biological speed profile, characterized by a symmetric shape with a single velocity peak and an acceleration and deceleration phase, which mimicked the profiles seen during active motion in humans. The complete exploration of each curvature lasted 3 s and consisted of forward motion along the curved surface, followed by a motion backward along the same path.
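The stimulus geometry and the control law above can be illustrated numerically. This is a sketch, not the authors' code: the reconstruction of R and ϑ from the lateral deviation (LD = 40 mm) and path length (L = 200 mm) via the sagitta formula is our assumption, and the function names are ours.

```python
import numpy as np

def arc_geometry(LD, L):
    """Radius and angle parameter of a circular arc from its maximum lateral
    deviation (sagitta LD) and chord length L; our reconstruction of Fig. 1B."""
    R = (L**2 / 4.0 + LD**2) / (2.0 * LD)   # sagitta formula
    theta = np.arcsin((L / 2.0) / R)        # angle swept on either side of P3
    return R, theta

def end_effector_path(LD=40.0, L=200.0, T=3.0, n=301, center=(0.0, 0.0)):
    """Eq. 1: motion along the arc (P1 -> P3 -> P2) and back over one period T."""
    R, theta = arc_geometry(LD, L)
    t = np.linspace(0.0, T, n)
    phase = np.pi + theta * np.cos(2.0 * np.pi * t / T)
    x = center[0] + R * np.cos(phase)
    y = center[1] + R * np.sin(phase)
    return t, x, y

def robot_force(x_h, x_t, v_h, K=60.0, B=10.0):
    """Eq. 2: attractive field toward the moving target plus the viscous term
    (sign convention as printed in Eq. 2)."""
    x_h, x_t, v_h = (np.asarray(a, dtype=float) for a in (x_h, x_t, v_h))
    attract = K * (x_t - x_h)   # K|e| * e/|e| reduces to K * e
    viscous = B * v_h
    return attract + viscous
```

With LD = 40 mm and L = 200 mm, the sagitta formula gives R = 145 mm, and the sampled path reproduces both the 200-mm chord and the 40-mm maximum lateral deviation exactly.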

Design and measurements.

Each session lasted ∼3 h, and each condition was presented in a block. To prevent fatigue, subjects rested a few minutes between consecutive blocks of trials. Each condition consisted of 60 trials. In each trial, participants were exposed to the standard stimulus (LD = 40 mm) and a comparison stimulus. The order of presentation of the standard and comparison stimuli in each trial was random and not predictable for the subject. The percentage of trials in which the comparison was judged as more curved than the standard was computed across the range of presented stimuli for each condition. Subsequently, we fitted a cumulative Gaussian function to each data set, yielding a psychometric sensitivity function for each subject and condition (for an example, see Fig. 1C). From these sensitivity functions, we derived a haptic discrimination threshold (DT) for each condition in each subject, defined as the difference between the LDs in curvature at the 75% and 50% “more curved” response levels.
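The fitting and threshold-extraction step can be sketched as follows. This is a minimal illustration (function names are ours); a real analysis would fit the response proportions obtained at each presented stimulus level.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(delta_ld, mu, sigma):
    """Cumulative Gaussian: P(comparison judged "more curved") as a function
    of the LD difference between comparison and standard."""
    return norm.cdf(delta_ld, loc=mu, scale=sigma)

def discrimination_threshold(delta_ld, p_more_curved):
    """Fit the cumulative Gaussian and return the DT, defined in the text as
    the LD difference between the 75% and 50% response levels."""
    (mu, sigma), _ = curve_fit(psychometric, delta_ld, p_more_curved,
                               p0=[0.0, 5.0])
    return norm.ppf(0.75, loc=mu, scale=sigma) - norm.ppf(0.50, loc=mu, scale=sigma)
```

For a cumulative Gaussian, this 75%−50% difference equals sigma · z(0.75) ≈ 0.674 sigma, so the DT is a fixed fraction of the fitted spread of the psychometric function.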

To quantify differences between the left- and right-hand thresholds, we subtracted the mean right- from mean left-hand threshold of each subject (we refer to this variable as the unimanual difference score). To understand the relationship between unimanual and bimanual conditions, we also computed the difference between each bimanual threshold and the corresponding left or right thresholds of each subject (for example, a bimanual LLATRLAT configuration would be compared with its respective unimanual conditions, LLAT, RLAT; we refer to this variable as the bimanual difference score). A positive bimanual difference score indicated that the bimanual condition had a larger threshold than the respective unimanual condition. This comparison between the bimanual thresholds and the respective unimanual thresholds provided a measure to indicate a handedness bias, i.e., whether bimanual sensing was superior to unimanual sensing and whether a particular hand preferentially contributed to it.

We used the unimanual DTs to test the MLE model across all conditions. The MLE model predicts that the bimanual percept will result in improved sensitivity compared with unimanual perception, which is quantitatively expressed by a lower bimanual threshold (Ernst and Banks 2002). Considering each hand as a single haptic system, we can compute the bimanual threshold predicted by the model (PRED) for each subject and bimanual condition:

$$\mathrm{PRED} = \sqrt{\frac{DT_R^2\, DT_L^2}{DT_R^2 + DT_L^2}} \tag{3}$$
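Eq. 3 is the standard maximum-likelihood combination rule expressed in threshold units; a minimal sketch (the function name is ours):

```python
import math

def mle_predicted_threshold(dt_left, dt_right):
    """Eq. 3: bimanual threshold predicted by maximum-likelihood integration
    of two independent unimanual estimates. The prediction is always below
    the smaller unimanual threshold, i.e., redundancy should help."""
    return math.sqrt(dt_left**2 * dt_right**2 / (dt_left**2 + dt_right**2))
```

With equal unimanual thresholds the prediction is DT/√2, so any true MLE integrator would show a clear bimanual advantage over the better single hand.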

To test if the PRED corresponded closely to the observed data, we performed separate paired t-tests between the observed bimanual DTs (OBS) and the bimanual thresholds predicted by the model (PRED).
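This comparison can be sketched with a paired t-test in SciPy; the per-subject values below are hypothetical illustrations, not the study's data:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-subject thresholds (mm) for one bimanual condition:
obs = np.array([9.1, 10.4, 8.7, 11.2, 9.8, 10.1, 8.9, 9.5])    # observed
pred = np.array([6.5, 7.8, 6.1, 8.0, 7.2, 7.5, 6.3, 6.9])      # Eq. 3 predictions

t_stat, p_value = ttest_rel(obs, pred)   # two-tailed paired t-test
```

A positive t statistic with a small P value, as in this constructed example, is the pattern of a model that systematically underestimates the observed thresholds.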

To ensure that possible differences between conditions were not driven by outliers, we performed an outlier analysis. Thresholds deviating from the respective condition mean by more than 2 SD were considered outliers and were replaced with the mean threshold value of the corresponding condition. Of 100 thresholds, seven values from four different subjects were classified as outliers, three of which came from a single subject.
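A minimal sketch of this outlier rule. One detail is our assumption: the text does not specify whether the reference mean includes the outlying values; here it does.

```python
import numpy as np

def replace_outliers(condition_thresholds):
    """Replace thresholds beyond the condition mean +/- 2 SD with the
    condition mean, as described in the text. Assumption: the reference
    mean and SD are computed with the outliers included."""
    x = np.asarray(condition_thresholds, dtype=float)
    mean, sd = x.mean(), x.std(ddof=1)
    cleaned = x.copy()
    cleaned[np.abs(x - mean) > 2.0 * sd] = mean
    return cleaned
```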


The effect of handedness on unimanual haptic sensitivity.

To fully understand bimanual haptic exploration, we first discerned possible differences between the unimanual haptic sensitivity of the two hands and examined how the orientation of a contour affected haptic thresholds. There was a tendency for the left hand to be more sensitive (Table 1), although large interindividual differences were observed. The mean unimanual thresholds for the two hands were 9.58 mm (±0.57 mm SE) and 9.70 mm (±1.02 mm SE). A 2 (Orientation: LAT/MED) × 2 (Hand: L/R) ANOVA revealed a significant main effect of Hand (P = 0.04), with the left hand tending to be more sensitive than the right hand in our sample (Fig. 3A). There was no significant effect of Orientation (P > 0.05). To illustrate the individual differences in haptic sensitivity between the two hands, Fig. 3B depicts the unimanual difference score (left threshold − right threshold) for all subjects. Twelve of the 20 subjects showed a difference score <0 (smaller left-hand threshold), whereas seven of 20 had smaller right-hand thresholds. We then grouped all subjects with a difference score >0 (right hand more sensitive) against all subjects with a difference score <0 (left hand more sensitive) and performed one-tailed paired t-tests on the difference scores of each subgroup to determine whether the mean biases differed from zero. Both tests reached significance (left: P = 0.0002; right: P = 0.0024).

Table 1.

Observed haptic discrimination thresholds

Fig. 3.

A: mean values of left and right discrimination thresholds (DT), regardless of the laterality. Error bars represent ± 1 SE. B: differences between right- vs. left-hand unimanual haptic sensing. Shown are the unimanual difference scores (left threshold − right threshold) for each subject sorted in ascending order. Negative values (difference score <0) indicate that a subject had a higher right-hand threshold compared with the left-hand threshold. A positive value indicates a higher left-hand threshold with respect to the right hand.

The effect of bimanual exploration on haptic sensitivity.

To investigate how the haptic sensitivity during bimanual exploration differed from the unimanual exploration, we compared the bimanual with the corresponding unimanual thresholds (see Fig. 4 and Table 1). We found no clear evidence that bimanual thresholds were systematically lower than the unimanual thresholds. In fact, the mean bimanual threshold of the LMEDRMED condition was significantly higher than the corresponding LMED threshold (P = 0.002), whereas all other mean comparisons failed to reach significance after Bonferroni adjustments for multiple testing (P > 0.05).

Fig. 4.

Mean observed unimanual and bimanual thresholds (OBS) vs. thresholds predicted by the MLE model (PRED). The first 2 bars indicate the unimanual left- and right-hand mean thresholds, respectively. The 3rd and 4th bars show the corresponding bimanual OBS (black bars) and the predicted threshold PRED (white bars). Significant differences between conditions are indicated by *P < 0.05. Error bars represent ± 1 SE. Note that the bimanual-coupled conditions are based on N = 10 subjects, whereas all other means are based on N = 20.

To test the effect of coupling hands on bimanual haptic sensitivity, we compared the observed mean thresholds of bimanual-uncoupled conditions (LLATRMED and LMEDRLAT) with the corresponding means of coupled conditions (CLLATRMED and CLMEDRLAT). The statistical analysis revealed no significant differences between the bimanual-uncoupled conditions and their corresponding coupled conditions (P > 0.05).

Expressing the bimanual thresholds in relation to either the left- or right-hand unimanual thresholds provided more detailed information about how a particular bimanual configuration related to its corresponding unimanual conditions (see Fig. 5). If a bimanual threshold were identical to a unimanual threshold, the bimanual difference score between the two would be 0; if the difference score were >0, the unimanual condition was more sensitive; if it were <0, the respective bimanual condition was more sensitive. The bimanual-left difference scores yielded a mean of 0.77 mm (±0.83 mm SE), and the bimanual-right difference scores a mean of −0.64 mm (±0.96 mm SE). A one-way ANOVA showed that the two means were different from each other (P = 0.014). Subsequent one-tailed paired t-tests revealed that the bimanual-left difference score was significantly different from zero (P = 0.039), whereas the same test failed to reach significance for the bimanual-right difference score (P > 0.05). Given that the bimanual-right difference score was smaller, this implies that bimanual haptic exploration was mostly biased toward the right, dominant hand.

Fig. 5.

Unimanual vs. bimanual differences in acuity with respect to each hand. Each bar represents the mean difference score between the bimanual and the left or the right hand for each of the 4 unimanual conditions. Negative values (bimanual difference score <0) indicate a higher unimanual mean threshold compared with the bimanual mean (↓ bimanual more sensitive). A positive value indicates a higher bimanual threshold with respect to the unimanual threshold (↑ unimanual more sensitive). Error bars represent ± 1 SE.

The predictions of the MLE model on bimanual haptic integration.

With the use of the MLE model equation, the respective thresholds (PRED) were computed for each bimanual condition and compared with the respective observed bimanual threshold (OBS). A one-way ANOVA on predicted and observed DTs was significant (P < 0.001). Subsequent statistical comparisons (paired t-test, two-tailed) between PRED and OBS DTs revealed statistically significant differences for all conditions (LLATRLAT: P < 0.001; LMEDRMED: P < 0.001; LLATRMED: P = 0.02; LMEDRLAT: P = 0.001; CLLATRMED: P = 0.00; CLMEDRLAT: P = 0.02). The MLE model thus failed to adequately predict the observed thresholds; instead, it consistently underestimated them.


The two hands of humans and other primates represent independent motor systems, each separately controlled by the nervous system. Yet, the actions of both hands can be coupled and coordinated to achieve a common motor goal. The two hands may also be considered parts of two distinct perceptual systems, each providing haptic information about the environment and the properties of objects by processing tactile and proprioceptive cues during manual exploration. Yet, we know surprisingly little about how the brain combines haptic information derived from its two hands to achieve a single percept. Many studies of haptics have focused on one hand, examining its sensitivity and how information from the proprioceptive and tactile senses is integrated.

When investigating how haptic information from the two hemispheres is combined, one needs to be concerned with the issue of handedness. The notion that the motor functions of the hands are specialized is evident and has been researched extensively (Sainburg 2005). However, it is incompletely understood whether, in addition to motoric handedness, i.e., the preferential use of one hand to perform motor acts, humans also show perceptual handedness, i.e., a bias in perceptual sensitivity toward one hand. That is, it is meaningful to distinguish between the ability of a hand to perform motor acts and its ability to sense the environment. Within the context of haptic perception, it is thus useful to determine whether humans indeed have a bias in haptic sensitivity that favors one hand and to understand how such perceptual handedness is related to motoric handedness.

Evidence for perceptual handedness.

With respect to motor handedness, there is a long-standing claim that the dominant hemisphere/limb system is specialized for the feedforward control of limb trajectories and the nondominant system for the feedback-mediated control of endpoint position (Ghez and Gordon 1987). Several studies of healthy human participants have supported this claim (Bagesteiro and Sainburg 2002; Sainburg 2002; Sainburg and Wang 2002; Wang and Sainburg 2006, 2007). Research on stroke patients (Haaland et al. 2004) is also consistent with this notion: when right-handed stroke patients performed rapid reaching movements, those with left-hemispheric brain damage showed deficits in the control of movement speed, whereas right-hemispheric damage was associated with deficits in final position accuracy. Collectively, these findings support a specialized role of the dominant hemisphere for trajectory control and of the nondominant hemisphere in the control of final limb position. In addition, recent work (White and Diedrichsen 2010) suggests that such bimanual asymmetry may be due to differences in the control gains that the motor system sets for each hand. Given that hand position information is based on proprioceptive feedback from the arm, which is used for motor control as well as for haptic perception, the claim of hemispheric specialization in perceptual space is not implausible. There is direct evidence that tactile as well as proprioceptive information shows a handedness effect, with the left hand usually being more sensitive in dominant right-handed people (Goble and Brown 2008; Goble et al. 2009; Varney and Benton 1975). Thus it is plausible to expect a handedness effect for haptic perception, given that it is based on the integration of tactile and proprioceptive inputs.

In fact, nearly all of our right-handed participants showed a bias for one hand being haptically more sensitive, with most exhibiting a left-hand bias (see Fig. 3). In summary, the data on a left-hand bias in proprioceptive and tactile sensitivity (Goble and Brown 2008; Goble et al. 2009; Varney and Benton 1975), as well as the findings that the left hand may have a special role in limb position control during reaching (Sainburg 2005), are all consistent with the notion of a left-hand specialization for processing position feedback in dominant right-handers. Whereas our data provide some evidence for the notion of handedness in perceptual haptic space, a subsequent study comparing dominant right-handers with dominant left-handers in a larger sample is needed to substantiate this claim.

Is interhemispheric haptic information combined?

There is an ongoing debate about whether haptic information from two hands is associated with increased haptic sensitivity, which would indicate that the brain integrates bimanual haptic information in the same way it integrates visuo-haptic information. The MLE model is a good predictor of visuo-haptic cue integration (Alais and Burr 2004; Ernst and Banks 2002; Ernst and Bülthoff 2004), indicating that the brain takes into account all cues available for a given property, derives estimates for the property from each cue, and then combines all of these estimates into a coherent percept by optimal weighted averaging of the cues. Yet, with respect to bimanual haptic integration, previous work on discriminating curved surfaces (Sanders and Kappers 2006; Vogels et al. 1999) showed that bimanual active exploration of two curved surfaces yields thresholds in the range of the respective unimanual thresholds, which is inconsistent with an MLE-based integration process. Our data also confirm that there is no apparent benefit to haptic precision from having redundant information from two haptic systems available. The bimanual thresholds of our participants were not systematically lower than their unimanual thresholds. In addition, the MLE model of bimanual haptic integration yielded predictions that consistently underestimated the observed thresholds, which refutes the notion that the brain uses a maximum-likelihood integrator for combining haptic information from the two hands. Thus the results from this and previous experiments imply that, when combining inputs from the two hands, the nervous system does not follow the strategy often found for cue integration between and within modalities.

Our data are more consistent with the notion of sensory selection or sensory gating, i.e., a process in which the brain is biased toward one sensory input or cue derived from one haptic system and disregards or minimizes other sources of relevant information. Moreover, our results show a dissociation between perceptual and motor handedness. Participants seemed not to rely on their more sensitive left hand but, as a group, showed a preference for their motorically dominant right hand when making judgments about object curvature (Fig. 5). That is, sensory gating was biased not toward the input providing the more precise sensory data but toward the hand that is dominant for action. At this point, it is unclear to what extent this phenomenon is attentional in nature; i.e., perhaps we cannot attend to both arms simultaneously and have an inherent tendency to attend to our dominant arm.

In summary, two hands are not necessarily “better” than one hand when it comes to haptic perception. In addition, we found no clear evidence that the brain optimally integrates bimanual information about object curvature. Our results are rather consistent with a process of sensory selection, where information from the dominant hand is used, even if the nondominant hand may yield more precise estimates for haptic discrimination.


No conflicts of interest, financial or otherwise, are declared by the author(s).


Author contributions: V.S., A.S., M.G., L.M., G.S., and J.K. conception and design of research; V.S. performed experiments; V.S. analyzed data; V.S., A.S., M.G., L.M., G.S., and J.K. interpreted results of experiments; V.S. prepared figures; V.S. drafted manuscript; A.S., M.G., L.M., G.S., and J.K. edited and revised manuscript; J.K. approved final version of manuscript.

