It has recently been demonstrated that the maturation of normal multisensory circuits in the cortex of the cat takes place over an extended period of postnatal life. Such a finding suggests that the sensory experiences received during this time may play an important role in this developmental process. To test the necessity of sensory experience for normal cortical multisensory development, cats were raised in the absence of visual experience from birth until adulthood, effectively precluding all visual and visual–nonvisual multisensory experiences. As adults, semichronic single-unit recording experiments targeting the anterior ectosylvian sulcus (AES), a well-defined multisensory cortical area in the cat, were initiated and continued at weekly intervals in anesthetized animals. Despite having very little impact on the overall sensory representations in AES, dark-rearing had a substantial impact on the integrative capabilities of multisensory AES neurons. A significant increase was seen in the proportion of multisensory neurons that were modulated by, rather than driven by, a second sensory modality. More important, perhaps, there was a dramatic shift in the percentage of these modulated neurons in which the pairing of weakly effective and spatially and temporally coincident stimuli resulted in response depressions. In normally reared animals such combinations typically give rise to robust response enhancements. These results illustrate the important role sensory experience plays in shaping the development of mature multisensory cortical circuits and suggest that dark-rearing shifts the relative balance of excitation and inhibition in these circuits.
The cortex of the anterior ectosylvian sulcus (AES) of the cat has been divided into distinct visual [anterior ectosylvian visual area (AEV)] (Benedek et al. 1988; Mucke et al. 1982; Norita et al. 1986; Olson and Graybiel 1987), auditory [Field AES (FAES)] (Clarey and Irvine 1986, 1990a,b), and somatosensory [fourth somatosensory cortex (SIV)] (Clemo and Stein 1982, 1983, 1984) representations, each positioned relatively late in their respective sensory processing hierarchies. In addition to the unisensory neurons that make up these largely sensory-specific divisions, a large population of multisensory neurons is found in AES. These multisensory neurons appear to be enriched at the borders between the core unisensory domains (Wallace et al. 1992, 2004b; but see Jiang et al. 1994a,b; Minciacchi et al. 1987), with their modality distribution reflecting the neighboring cortical domains (i.e., visual-auditory neurons are clustered between AEV and FAES). Much like neurons in other multisensory brain structures, AES neurons have the remarkable ability to integrate information from different senses, generating responses far different from the component unisensory responses and often very different from those predicted by their summation (Jiang et al. 1994a; Wallace et al. 1992). Unlike in subcortical multisensory structures such as the superior colliculus (SC), where multisensory neurons and their integrated responses likely play an integral role in the transformation of sensory information into appropriate motor commands (Bell et al. 2001; Jiang et al. 2002; Meredith and Stein 1985; Stein 1993, 1998; Wallace et al. 1998), a well-defined behavioral and/or perceptual role for AES has remained elusive. Nonetheless, its position in the sensory processing hierarchy coupled with its high incidence of multisensory neurons suggests a role in higher-order multisensory processing.
One strategy to better elucidate the functional role of AES has been to examine the developmental chronology of its constituent neurons, in the hopes that such work will provide insight into the maturation of its neural circuits and their parallels with the acquisition of behavioral and perceptual competencies. Prior studies have revealed that multisensory circuits in AES appear and mature slowly during postnatal life (Wallace et al. 2006). Multisensory neurons are absent in the newborn AES, and their appearance is in keeping with the progressive and sequential appearance of the three unisensory representations described in the SC (Stein et al. 1973; Wallace and Stein 1997, 2001), lending support to the generality of this sensory chronology.
Somatosensory neurons are the first sensory-responsive neurons to appear and they are found in the rostral aspects of the AES in the presumptive fourth somatosensory cortex (SIV). Auditory neurons soon follow and these are found preferentially distributed in the dorsal and caudal regions of the AES (Field AES). As soon as these two modalities are represented, the first multisensory (somatosensory-auditory) neurons can be found interposed at the border between SIV and FAES. As postnatal development progresses, visual responses appear and the first visually responsive multisensory neurons are seen at the AEV–FAES and AEV–SIV borders. However, much like their counterparts in the SC (Wallace and Stein 1997), these early multisensory AES neurons are still immature and cannot yet integrate the cross-modal inputs they receive (Wallace et al. 2006). As development progresses, a growing population of these neurons acquires this integrative capacity.
The slow and progressive maturation of multisensory cortical circuits strongly suggests that sensory experience plays an integral role in this developmental process. Prior work in the SC, where multisensory development leads cortical multisensory development by several weeks, has shown a profound deterministic role for sensory experience in the formation of a normal subcortical multisensory representation (Wallace et al. 2004a, 2007). In the current study, we set out to test the need for normal sensory experience in the development of the AES multisensory representation. To do this, we raised animals in complete darkness from birth until adulthood and then assessed the impact of this rearing condition on the neuronal populations in AES.
Cats (n = 5) were raised from birth until adulthood (>6 mo) in the dark. Caretakers wore binocular infrared goggles (illuminator wavelength of 920 nm) to provide daily care and to carry out routine veterinary procedures. Additional infrared viewing systems allowed the animals to be monitored from an adjacent room; the output of these video cameras was stored on a digital video recorder. All dark-reared data were compared with data from age-matched control animals (n = 5) reared in a standard illuminated housing environment (Wallace and Stein 1997). In the dark-reared group, experiments (i.e., recording sessions) ranged in number from 9 to 38 and were carried out over a period of no more than 1 yr. In the control group, experiments ranged in number from 6 to 44 and were carried out over a period of no more than 15 mo.
Recording sessions did not begin until an animal was ≥6 mo of age. Semichronic recordings were conducted at weekly intervals after the implantation of a recording chamber over the AES (see following text). All procedures were performed in compliance with the Guide for the Care and Use of Laboratory Animals (National Institutes of Health publication number 91-3207) at Wake Forest University School of Medicine and Vanderbilt University Medical Center, both of which are accredited by the American Association for Accreditation of Laboratory Animal Care. Details of the surgery and recording procedures are virtually identical to those used previously (Wallace et al. 1992, 2006) and will only be briefly described here.
Implantation and recording procedures
Anesthesia was induced in the dark room with an injection of a cocktail of ketamine hydrochloride [20 mg/kg, administered intramuscularly (im)] and acepromazine maleate (0.04 mg/kg, im). Because it was necessary to remove these animals from the dark room, substantial efforts were taken to minimize any visual experience during transit for surgical implantation or experimental recordings. Therefore the animals were anesthetized in their housing cages and were fitted with light-occluding masks during transport and recovery, and with opaque contact lenses during recordings.
For the implantation of the recording chamber over AES, animals were transported to a central surgical suite, where they were intubated and then artificially respired. A stable plane of surgical anesthesia was achieved using inhalation isofluorane (1.0–3.0%). Body temperature, expiratory CO2, blood pressure, and heart rate were continuously monitored (VSM7, Vetspecs) and maintained within normal physiological bounds. A craniotomy was made to allow access to the AES. A recording chamber was then secured (see following text) over the craniotomy to allow direct access to AES. A head holder was stereotaxically positioned over the midline. Both the recording chamber and head holder were attached to the skull using stainless steel screws and orthopedic cement to allow the animal to be maintained in a recumbent position during recordings without obstructing the face and ears. Preoperative analgesics and postoperative care (i.e., analgesics and antibiotics) were administered in close consultation with veterinary staff. Animals were allowed a minimum of 1 wk recovery before the first recording session.
For recording, anesthesia was induced with an initial bolus of ketamine (20 mg/kg, im) and acepromazine maleate (0.04 mg/kg, im), and the animal was comfortably supported in the recumbent position without any wounds or pressure points by the head-holding system implanted during surgery. Anesthesia was maintained with a constant-rate infusion of ketamine [5 mg·kg−1·h−1, administered intravenously (iv)] delivered through a cannula placed in the saphenous vein. Animals were artificially respired, paralyzed with pancuronium bromide (0.1 mg·kg−1·h−1, iv) to prevent ocular drift, and administered fluids [lactated Ringer solution (LRS), 4 ml/h, iv] for the duration of the recording. On completion of the experiment animals were given an additional 60–100 ml LRS subcutaneously to facilitate recovery. Parylene-insulated tungsten electrodes (Z = 1–3 MΩ) were advanced into the AES using an electronically controlled mechanical microdrive. Single-unit neural activity (with a minimum of 3:1 signal-to-noise ratio) was recorded and amplified and was routed to an oscilloscope, audio monitor, and computer for both on- and off-line analyses.
Search strategy and receptive field mapping
In an effort to identify all sensory-responsive neurons, and to determine the neuronal selectivities that would be used to tailor later quantitative testing (see following text), a comprehensive battery of search stimuli was used (for more detail see Perrault Jr et al. 2003; Wallace et al. 1997, 1998, 2006). Visual search stimuli consisted of moving and stationary bars of light projected onto a translucent tangent screen. For both, the size and intensity of the stimuli could be independently controlled, and for the moving stimuli the orientation, speed, and direction of movement were varied to obtain a gross tuning profile for the neuron under study. Auditory search stimuli consisted of clicks, hisses, whistles, claps, and broadband noise bursts that could be delivered at all locations around the animal. Somatosensory search stimuli consisted of mechanical taps, manual compression of the skin, and joint movement.
After isolating a sensory responsive neuron, its receptive field(s) and modality convergence pattern were determined. Receptive fields (RFs) were mapped as in the past (Wallace et al. 1992, 2004a) and were plotted on a standardized representation of visual, auditory, and somatosensory space (Stein and Meredith 1993). If a neuron was initially judged to be unisensory (e.g., auditory only), stimuli from the nondriving modalities (e.g., visual, somatosensory) were then presented to test for any modulatory effects. A modulatory effect is operationally defined as a statistically significant change (see following text) in the multisensory response as compared with the single sensory-driven response. If these tests failed to reveal a modulatory influence, the neuron was characterized as unisensory. Conversely, if such a modulatory influence was found, or if the neuron was overtly activated by two or more modalities, it was categorized as multisensory. Based on this distinction, multisensory neurons were divided into two classes: stimulus-driven (SD) neurons (neurons that show an evoked response to more than one sensory modality) and stimulus-modulated (SM) neurons (neurons that show an evoked response to a single sensory modality and a modulated multisensory response).
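The operational classification described above reduces to a small decision rule. The following Python sketch is illustrative only; the function name and modality labels are assumptions, and the sets of "driven" and "modulated" modalities are presumed to come from the statistical comparisons described in the analysis section.

```python
def classify_neuron(driven, modulated):
    """Operational neuron classification, as described in the text.

    driven: set of modalities that evoke a significant response on their own.
    modulated: set of non-driving modalities whose pairing with the driving
      stimulus significantly changed the evoked response.
    """
    if not driven:
        return "no response"
    if len(driven) >= 2:
        # Evoked responses to more than one modality.
        return "multisensory, stimulus-driven (SD)"
    if modulated:
        # Driven by one modality, but modulated by another.
        return "multisensory, stimulus-modulated (SM)"
    return "unisensory"

# A neuron driven only by audition but modulated by vision would be
# reclassified from unisensory auditory to multisensory visual-auditory:
print(classify_neuron({"auditory"}, {"visual"}))
```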
Stimulus presentation, data acquisition, and analysis
A custom-built PC-based real-time data-acquisition system controlled trial structure and stimulus timing and recorded spike data (sampled at 100 kHz). Analyses of the data were performed off-line using customized scripts in the MATLAB (The MathWorks, Natick, MA) programming environment.
For quantitative data collection, visual stimuli consisted of 50- to 100-ms-duration presentations of either flashed stationary light-emitting diodes (LEDs) or moving bars of light (0.11–13.0 cd/m2 on a background luminance of 0.10 cd/m2) projected onto the tangent screen (positioned 1 m from the animal's eyes). Visual stimulus effectiveness was manipulated by changing either the direction or speed (70–120°/s) of movement, or the physical dimensions of the stimulus (2 × 2–6 × 6°). Auditory stimuli were delivered through a movable speaker that was either clipped to the corresponding LED or pinned to the tangent screen adjacent to the corresponding visual stimulus location. Stimuli consisted of 50-ms-duration broadband (20 Hz to 20 kHz) noise bursts ranging in intensity from 50.6 to 70.0 dB sound pressure level (SPL) on a background of 45 dB SPL (A-weighted). Somatosensory stimuli were presented from a probe tip mounted to a modified moving-coil vibrator (Ling 102A), whose movements (amplitude, duration, velocity) could be independently controlled. The probe tip was positioned against either the skin or hair. Somatosensory stimulus effectiveness was manipulated by changing either stimulus amplitude or velocity. Stimulus conditions [e.g., visual (V), auditory (A), multisensory (VA)] were interleaved randomly until a total of ≥15 trials were collected for each stimulus condition.
For tests of the multisensory integrative capacity of a given neuron, weakly effective spatially and temporally coincident pairings were used because such combinations have previously been shown to optimize the potential for an interaction (Meredith and Stein 1986b; Perrault Jr et al. 2003). Weakly effective stimuli are operationally defined as those that elicit a slightly suprathreshold (i.e., statistically different from spontaneous firing) response.
Peristimulus time histograms and collapsed spike density functions characterized the neuron's responses. Spike density functions were created by convolving the spike train from each trial for a given condition and location with a function resembling a postsynaptic potential, specified by τg, the time constant for the growth phase, and τd, the time constant for the decay, according to the following formula: R(t) = (1 − e^(−t/τg)) · e^(−t/τd). Based on physiological data from excitatory synapses, τg was set to 1 ms and τd to 20 ms (Kim and Connors 1993; Mason et al. 1991; Sato and Schall 2001; Sayer et al. 1990). Baselines for each spike density function were calculated as the mean firing rate during the 500 ms immediately preceding stimulus onset. Collapsed spike density functions were then thresholded at 2 SDs above their respective baselines to delimit the stimulus-evoked responses. Single units that failed to demonstrate a significant stimulus-evoked response lasting ≥30 ms in at least one modality were categorized as “no response” and were removed from further consideration.
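The analyses were scripted in MATLAB; purely as an illustrative sketch of the computation just described (PSP-shaped kernel with τg = 1 ms and τd = 20 ms, 500-ms prestimulus baseline, threshold at baseline + 2 SD), an equivalent in Python/NumPy might look like the following. The 1-ms time base, the kernel length, and the function names are assumptions, not details from the study.

```python
import numpy as np

def psp_kernel(tau_g=1.0, tau_d=20.0, dt=1.0, length=200.0):
    """Kernel resembling a postsynaptic potential:
    R(t) = (1 - exp(-t/tau_g)) * exp(-t/tau_d), normalized to unit sum."""
    t = np.arange(0.0, length, dt)
    k = (1.0 - np.exp(-t / tau_g)) * np.exp(-t / tau_d)
    return k / k.sum()

def spike_density(spike_times_ms, t_start, t_stop, dt=1.0):
    """Bin a single-trial spike train (times in ms) and convolve it with
    the PSP-shaped kernel. Returns bin times and an SDF in spikes/s."""
    edges = np.arange(t_start, t_stop + dt, dt)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    sdf = np.convolve(counts, psp_kernel(dt=dt))[:len(counts)] * (1000.0 / dt)
    return edges[:-1], sdf

# Baseline: mean firing during the 500 ms preceding stimulus onset (t = 0);
# the stimulus-evoked response is the SDF segment exceeding baseline + 2 SD.
t, sdf = spike_density([12, 18, 25, 31, 60], t_start=-500, t_stop=500)
baseline = sdf[t < 0]
threshold = baseline.mean() + 2 * baseline.std()
```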
The stimulus-evoked responses during the multisensory condition were compared with the responses to the most effective unisensory conditions to create a metric that indexes integrative capacity: I = [(M − U)/U] × 100, where I is the multisensory interactive product, M is the response to the multisensory stimulus, and U is the response to the most effective unisensory stimulus (Meredith and Stein 1983, 1986b). Statistical comparisons of the mean evoked responses of the multisensory condition and the best unisensory evoked response were done using a two-tailed paired Student's t-test. Statistical comparisons of population distributions (i.e., dark-reared vs. control) were done using the χ2 test for independence.
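As a worked sketch of this index and the accompanying paired comparison, the Python fragment below computes I and a paired t statistic from trial-by-trial spike counts. The counts, the helper names, and the df = 14 critical value (2.145 for p < 0.05, two-tailed) are illustrative assumptions, not data from the study.

```python
import numpy as np

def interactive_index(multi, uni):
    """I = [(M - U)/U] * 100, where M and U are the mean evoked responses to
    the multisensory and most-effective unisensory conditions."""
    M, U = np.mean(multi), np.mean(uni)
    return (M - U) / U * 100.0

def paired_t(multi, uni):
    """Paired t statistic for the multisensory vs. best unisensory
    responses, computed from trial-by-trial spike counts."""
    d = np.asarray(multi, float) - np.asarray(uni, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# Illustrative trial-by-trial spike counts (>= 15 interleaved trials each).
uni   = [4, 5, 3, 6, 4, 5, 4, 3, 5, 6, 4, 5, 3, 4, 5]
multi = [7, 9, 6, 8, 7, 8, 6, 7, 8, 9, 7, 8, 5, 7, 8]

idx = interactive_index(multi, uni)   # I > 0: enhancement; I < 0: depression
t_stat = paired_t(multi, uni)
# With df = 14, |t| > 2.145 corresponds to p < 0.05 (two-tailed).
enhanced = idx > 0 and abs(t_stat) > 2.145
```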
Dark-rearing has surprisingly little impact on the multisensory distribution in AES
In total, 559 AES neurons were analyzed, 391 from normally reared adult animals and 168 from dark-reared adult animals. Despite equivalent sampling across the AES, the distribution of sensory responsive neurons differed between the two groups (Fig. 1, χ2 test, df = 7, χ2 = 20.56, P < 0.01), suggesting that dark-rearing had a significant effect on the development of the modality representations. Despite this overall effect, the multisensory neuronal populations did not differ significantly between these two groups (χ2 test, df = 3, χ2 = 5.79, P > 0.05). Most surprising in this respect was the incidence of visually responsive neurons (this includes both unisensory visual neurons and visually responsive multisensory neurons) in the dark-reared group, which, at 34%, was identical to the normally reared population. Similarly, the overall proportion of multisensory neurons in both populations was very similar (27% in dark-reared vs. 21% in normals). In both groups, visually responsive neurons constituted the vast majority of the multisensory populations (82.6% of dark reared population, 87.5% of normal population). Together, these results suggest that although dark-rearing does appear to have a global impact on the modality distributions of AES, this effect is not driven by a decline in the visually responsive population or by a shift in the relative distribution of multisensory neurons.
Dark-rearing has a substantial impact on the types of multisensory interactions seen in AES neurons
Despite these similarities, substantial differences were apparent in the manner in which neurons in the two populations integrated multisensory cues. To assess the integrative capacity of each neuron, stimuli were chosen to optimize the likelihood of response enhancement by pairing weakly effective spatially and temporally coincident stimuli (Meredith et al. 1986a, 1987, 1996; Stein and Meredith 1993). Under such conditions, 63% (110/175) of the neurons tested in control animals showed response enhancements. In contrast, using the identical stimulus set, only 36% (15/42) of the neurons tested in dark-reared animals showed response enhancements (Fig. 2). The most frequent outcome of this pairing in dark-reared animals was response depression, seen in 55% (23/42) of the neurons tested, and representing a significant difference from the control population (χ2 test, df = 2, χ2 = 13.46, P < 0.01).
Dark-rearing shifts the distribution of AES multisensory neurons toward those that are modulated, rather than driven, by the second sensory modality
In addition to this shift toward response depressions, another major change in dark-reared animals was in the nature of the multisensory neurons found in AES. In normally reared animals, the vast majority of multisensory neurons can be effectively driven by two (or even three) different sensory modalities (Fig. 3, top center). A small number of neurons show a different response profile in which the influence of the second modality is revealed only when paired with a stimulus in the “driving” modality. For example, a neuron initially assumed to be a unisensory visual neuron would be reclassified as multisensory visual-auditory when it was shown that the neuron's visual response was altered by the concurrent presentation of an auditory stimulus. To distinguish between these two multisensory populations, we use the terms stimulus-driven (SD) and stimulus-modulated (SM).
Using this classification scheme, it can be seen that there is a substantial and significant (χ2 test, df = 1, χ2 = 8.3, P < 0.01) shift in the dark-reared population toward SM neurons (Fig. 3, bottom center). It is important to note here that the initial modality-classification scheme (i.e., Fig. 1) included both SD and SM neurons in the multisensory categories. Together, these results suggest that dark-rearing results in a redistribution of neurons with an SD profile into neurons with an SM profile (see discussion).
The majority of interactions seen in stimulus-modulated multisensory neurons in dark-reared animals are response depressions
As shown previously (Fig. 2), dark-rearing resulted in a significant increase in the incidence of response depressions (note again that these were pairings that would normally give rise to response enhancements). This increase appeared to be largely a function of changes in the integrative profile of SM neurons. Whereas the large majority (i.e., 71%) of SM neurons in normally reared animals showed response enhancements, 73% of the SM population in the dark-reared animals exhibited response depressions to these same pairings (Fig. 3, χ2 test, df = 1, χ2 = 4.85, P < 0.05). In contrast, SD neurons showed similar patterns of interactions in the two populations (Fig. 3).
The responses depicted for the two neurons shown in Fig. 4 illustrate the typical pattern of results for SM neurons in these two populations. On the left is an example of a neuron from a normally reared animal that was initially categorized as a unisensory auditory neuron. When a visual stimulus (that was ineffective when presented alone) was paired with an effective auditory stimulus, a significant response enhancement was observed. In contrast, in the AES neuron from a dark-reared animal shown on the right, which was also initially classified as unisensory (auditory), the same multisensory stimulus configuration resulted in a significant response depression.
Response depression characterized SM neurons in dark-reared animals regardless of the nature of the stimulus complex
To determine whether the anomalous multisensory integration seen in the dark-reared population was truly an inherent characteristic of these neurons, and not a function of changes in the spatial, temporal, or effectiveness profiles of these neurons that may have been induced by the dark-rearing, a number of neurons were subjected to manipulations in which these parameters were systematically altered (spatial: n = 4; temporal: n = 9; stimulus effectiveness: n = 12). In the representative neuron shown in Fig. 5, an auditory stimulus was positioned at a number of locations within the auditory receptive field and the responses were recorded. These same auditory stimuli were then paired with a visual stimulus at the same location that by itself was ineffective. In five of these six pairings, the net result was a significant depression of the auditory response (in the sixth condition, the depression was not significant). Similarly, altering the temporal relationship of the stimulus pairings invariably resulted in response depressions within a defined temporal “window,” and no significant interactions when stimuli fell outside of this window (Fig. 6). Finally, in these neurons, regardless of the relative effectiveness of the paired stimuli, response depressions were the standard outcome (Fig. 7). Each of these results differs dramatically from those seen in AES neurons of normally reared animals, where similar spatial, temporal, and effectiveness combinations generally result in a pattern of response enhancements.
These results illustrate the importance of sensory experience (specifically visual experience) for the development of mature multisensory cortical circuits. Several notable changes were seen in AES when visual experience was eliminated from birth until adulthood. The first was a shift in the dark-reared population toward more multisensory neurons that were modulated by, rather than driven by, a second sensory modality. Whereas in normally reared animals the vast majority of AES multisensory neurons were effectively driven by two or more sensory modalities, in dark-reared animals approximately one third of the AES neurons were not overtly multisensory. The multisensory character of these neurons was revealed only when a stimulus in a second modality was presented in conjunction with a stimulus in the effective modality. To differentiate between these populations, we refer to neurons that can be individually activated by two or more modalities as stimulus-driven (SD) neurons. These are contrasted with stimulus-modulated (SM) neurons. Intriguingly, these categorical distinctions appear not to be dependent on stimulus features. For example, increasing the intensity (i.e., effectiveness) of a stimulus delivered to an SM neuron never resulted in a reclassification of that neuron as SD. The second major change noted in dark-reared animals, and the one likely to be of greater functional significance, was in the sign of this modulatory effect. In normally reared animals the pairing of spatially and temporally coincident multisensory stimuli of minimal effectiveness typically results in a significant enhancement in the neuron's response (this is seen in both SM and SD neuron classes). In contrast, here we find that nearly three quarters of the SM multisensory neurons in dark-reared animals exhibit a response depression when presented with these same pairings.
One might expect these depressive interactions to be limited to the visual domain, given the nature of the sensory manipulation; however, all multisensory neuron types investigated (i.e., VA, VS, AS) in dark-reared animals demonstrated robust response depressions. Together, these results suggest that one of the major impacts of dark-rearing on the multisensory representations in AES is a shift in the relative balance of excitation and inhibition (see following text for more discussion on this issue).
One concern in experiments of this type is the possibility that the limited visual (and visual–nonvisual) experience received during the course of the recording sessions is sufficient to induce changes in the AES population. A longitudinal analysis of the data failed to reveal any obvious differences between early and later recording sessions, making this an unlikely possibility.
Although dark-rearing had a minor impact on the appearance of visually responsive neurons in AES, the visual response properties of these neurons were not normal: receptive fields were very large, and response vigor, habituation, and tuning profiles were more like neurons in neonatal animals (Wallace et al. 2006). It is also important to note that the visual “response” of many of the multisensory neurons in the dark-reared animals was a visually elicited modulation of a driven response in another modality. In fact, many of these neurons were first thought to be unisensory auditory or somatosensory based on initial sensory testing.
At first these results appear difficult to reconcile with the work of Rauschecker and colleagues (Rauschecker 1996; Rauschecker and Korte 1993). In these studies it was found that visual deprivation (lid suturing) resulted in a drastic reduction in the number of visual neurons in AES. In fact, much of AEV (the visual subdivision of AES) was now found to contain neurons responsive solely to auditory cues. The authors posit that this form of sensory substitution may in fact be the neural substrate subserving the improved auditory spatial acuity of these animals (Korte and Rauschecker 1993; Rauschecker and Kniepert 1994). These authors, although noting the occurrence of multisensory neurons, did not examine AES neurons in a multisensory context (i.e., by presenting them with paired stimuli from multiple modalities). In fact, in the current study electrode penetrations that targeted AEV (i.e., that were directed toward the caudal pole of the ventral bank of AES) revealed many neurons whose responses to auditory stimuli were modulated by the concurrent presentation of visual cues (and were consequently described as visual-auditory neurons). Furthermore, it may be difficult to directly compare the effects of these two different forms of visual deprivation. Whereas dark-rearing eliminates all visual input, lid suturing has its most profound effects on pattern vision (because substantial light still impinges on the retina) (Crair et al. 1998; Spear et al. 1983; Zufferey et al. 1999). Although one might expect that the effects of dark-rearing on visual responses would be more severe than those of lid suturing, a paradoxical effect might be a result of the retention of an open “window” for cortical plasticity in the absence of all visual inputs (Mower and Christen 1985).
Numerous studies have investigated the role of visual experience on the development of visual cortex, with a major emphasis of this work being to characterize changes in early visual cortical regions attributable to the experiential manipulations (for reviews see Barlow 1975; Feller and Scanziani 2005; Movshon and Van Sluyters 1981; Pizzorusso et al. 2000; Rauschecker 1991; Sherman and Spear 1982). Although this work has shown there to be significant genetic and epigenetic (i.e., experience-dependent) influences on cortical development, the impact of these factors on the organization of visual–nonvisual multisensory circuits had remained uncharacterized. Perhaps most relevant to the current study is work from cat, which examined the effects of binocular deprivation on extrastriate visual areas in the suprasylvian sulcus (Spear et al. 1983). These areas, which provide the majority of the visual input to AES (Norita et al. 1986; Olson and Graybiel 1987; Scannell et al. 1995), show a substantial reduction in the numbers of visually responsive neurons and significant changes in the response properties of their constituent neurons, alterations consistent with the results of the current study. In addition, recent work has shown auditory responses in the visual cortex (both V1 and extrastriate) of visually deprived animals (Sanchez-Vives et al. 2006; Yaka et al. 1999), suggesting that multisensory information may be relayed to AES from earlier stages in the processing hierarchy under these altered conditions.
Differences in experiential multisensory plasticity in cortical and subcortical circuits
The current results highlight an important difference in the role of visual experience in multisensory development in the cat AES when compared with the classic model for studies of multisensory integration, the superior colliculus (SC). Although prior work has pointed to striking parallels between the normal development of the multisensory representations in the AES and SC (Wallace and Stein 1997; Wallace et al. 2006), it has been shown in the SC that visual experience is a necessary prerequisite for the appearance of multisensory integration (Wallace et al. 2004a). Thus under identical rearing circumstances (i.e., dark-rearing from birth until adulthood), despite the retention of a relatively normal complement of multisensory neurons in both structures, in the SC virtually all neurons lack the capacity to generate response enhancements to the pairing of weakly effective and spatially and temporally coincident stimuli. In contrast, in AES a small population of neurons does develop this capacity, and a substantial proportion of neurons now shows an uncharacteristic form of integration (i.e., response depression) to these stimulus configurations.
Why such differences are seen between these two important multisensory populations remains unknown, but the answer is likely to be contained within the different ways in which these independent cortical and subcortical multisensory circuits are assembled. In the SC, multisensory neurons are created by the convergence of unisensory inputs from both subcortical and cortical sources (Huerta and Harting 1984; Jay and Sparks 1987; Meredith and Stein 1983, 1986b; Meredith et al. 1992; Wallace et al. 1993). In addition, the integrative abilities of SC multisensory neurons can be specifically attributed to corticotectal projections arising from unisensory neurons in AES (Jiang et al. 2001; Wallace and Stein 1994, 2000; Wallace et al. 1993). Much less is known about the functional architecture of the multisensory zones of AES, but it seems likely that these are created by a convergence of local inputs from neighboring unisensory cortices. Clues to this organization, and to the anatomical substrates that may support the dark-rearing–induced shift toward inhibitory interactions (i.e., response depressions), come from recent studies that have looked at the anatomy and physiology of connections between the major unisensory subdivisions of AES (Dehner et al. 2004; Meredith et al. 2006). In this work, it has been shown that there is substantial interconnectivity between the core unisensory domains of FAES and SIV, that these connections are largely GABAergic, and that their functional role appears to be in their ability to modulate sensory responses within these largely auditory and somatosensory representations. Although this work focused on these unisensory representations in AES, it seems likely that these GABAergic projections may also target the borders between these domains where multisensory neurons are enriched. A strengthening or unmasking of these inhibitory influences by visual deprivation may represent the most parsimonious explanation of the results of the current study.
Although contrasting these studies emphasizes important differences between cortex and subcortex in the effects of visual deprivation on multisensory development, each of these studies illustrates the powerful role that early sensory experience plays in the creation of multisensory circuits. Extending this finding, recent work in the SC has shown that manipulating (but not eliminating) early (multi)sensory experience has effects that reflect the altered experiences. Thus raising animals in an environment in which visual stimuli are invariably associated with spatially disparate auditory stimuli results in the creation of neurons with an anomalous form of multisensory integration that is “appropriate” for the experiences received during early postnatal life (Wallace and Stein 2007). These results highlight the malleability of cortex and subcortex during early life and point to an instructive role for sensory experience in shaping the final architecture of mature multisensory circuits.
This work was supported by National Institutes of Health Grants NS-36916 and MH-63861 and Vanderbilt Kennedy Center for Research on Human Development.
We thank J. Schuster and G. Derderian for expert technical assistance and Z. Barnett for assistance with programming.
The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
- Copyright © 2007 by the American Physiological Society