When making perceptual decisions, humans have been shown to integrate independent noisy multisensory information optimally, matching maximum-likelihood (ML) limits. Such ML estimators provide a theoretical limit on perceptual precision (i.e., minimal thresholds). However, how the brain combines two interacting (i.e., not independent) sensory cues remains an open question. To study the precision achieved when combining interacting sensory signals, we measured perceptual roll-tilt and roll-rotation thresholds between 0 and 5 Hz in six normal human subjects. Our primary results show that roll-tilt thresholds between 0.2 and 0.5 Hz were significantly lower than predicted by an ML estimator that includes only non-interacting vestibular contributions. In this paper, we show how other cues (e.g., somatosensation) and an internal representation of sensory and body dynamics might independently contribute to the observed performance enhancement. In short, a Kalman filter was combined with an ML estimator to match human performance, while the potential contribution of non-vestibular cues was assessed using published data from patients with bilateral vestibular loss. Our results show that a Kalman filter model including previously demonstrated canal-otolith interactions alone (without non-vestibular cues) can explain the observed performance enhancements, as can a model that includes non-vestibular contributions.
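For context, the ML benchmark against which the measured thresholds are compared is the standard inverse-variance cue-combination rule for two independent Gaussian-noise estimates; since perceptual thresholds are proportional to the noise standard deviation, the predicted combined threshold follows directly. This is a textbook sketch (not reproduced from the paper), with \(\hat{s}_i\), \(\sigma_i\), and \(T_i\) denoting each cue's estimate, noise standard deviation, and threshold:

```latex
\hat{s} = w_1 \hat{s}_1 + w_2 \hat{s}_2,
\qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2},
\qquad
\sigma^2 = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2}
\;\Longrightarrow\;
\frac{1}{T^2} = \frac{1}{T_1^2} + \frac{1}{T_2^2}.
```

Because \(\sigma^2 \le \min(\sigma_1^2, \sigma_2^2)\), this sets a floor on thresholds achievable from independent cues alone; thresholds below that floor, as reported here between 0.2 and 0.5 Hz, imply additional cues or cue interactions.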
- multisensory integration
- semicircular canals
- otolith organs
- Copyright © 2016, Journal of Neurophysiology