Understanding actions of conspecifics is a fundamental social ability depending largely on the activation of a parieto-frontal network. Using functional MRI (fMRI), we studied how goal-directed movements (i.e., motor acts) performed by others are coded within this network. In the first experiment, we presented volunteers with video clips showing four different motor acts (dragging, dropping, grasping, and pushing) performed with different effectors (foot, hand, and mouth). We found that the coding of observed motor acts differed between premotor and parietal cortex. In the premotor cortex, they clustered according to the effector used, and in the inferior parietal lobule (IPL), they clustered according to the type of the observed motor act, regardless of the effector. Two subsequent experiments in which we directly contrasted these four motor acts indicated that, in IPL, the observed motor acts are coded according to the relationship between agent and object: Movements bringing the object toward the agent (grasping and dragging) activate a site corresponding approximately to the ventral part of the putative human AIP (phAIP), whereas movements moving the object away from the agent (pushing and dropping) are clustered dorsally within this area. These data provide indications that the phAIP region plays a role in categorizing motor acts according to their behavioral significance. In addition, our results suggest that in the case of motor acts typically done with the hand, the representations of such acts in phAIP are used as templates for coding motor acts executed with other effectors.
In the monkey, there is a specific set of neurons (mirror neurons) that discharges both when the monkey performs a certain motor act and when it observes another individual performing it (Rizzolatti and Craighero 2004). Single neuron studies showed that mirror neurons are mostly located in the ventral premotor cortex (PMv) and inferior parietal lobule (IPL), a finding recently confirmed by functional MRI (fMRI) experiments (Nelissen et al. 2005; Peeters et al. 2009). Mirror neurons, as well as most PMv (Gallese et al. 1996; Kakei et al. 2001; Rizzolatti et al. 1988, 1996; Umilta et al. 2008) and IPL (Fogassi et al. 2005; Rozzi et al. 2008) motor neurons devoid of visual properties, code motor acts, i.e., movements with a specific goal (e.g., grasping). This differentiates them from neurons of the primary motor cortex that most frequently code elementary movements; that is, displacement of joints or body parts (e.g., flexion, extension, adduction, abduction) regardless of the motor act of which they are part (Kakei et al. 1999; Lemon et al. 1976; Porter and Lemon 1993; Umilta et al. 2008).
The observation of motor acts performed by others activates in humans, as well as in monkeys, a series of areas that are also active during motor act execution. This action observation–action execution network includes PMv plus parts of the inferior frontal gyrus (IFG) and the cortex inside the intraparietal sulcus extending into the adjacent IPL convexity (Fabbri-Destro and Rizzolatti 2008; Rizzolatti and Craighero 2004). Additional cortical areas (e.g., the superior parietal lobule and the dorsal premotor cortex) have also been reported to be active during the execution and observation of some types of actions (Filimon et al. 2007; Gazzola and Keysers 2009; Grezes et al. 2003). There is evidence that, as in the monkey, the action observation–action execution network encodes motor acts (Gazzola et al. 2007a,b; Hamilton and Grafton 2008; Peeters et al. 2009).
It has been reported that both the parietal and frontal sections of the action observation–action execution network have a somatotopic organization. In PMv, the foot, hand, and mouth representations are organized in a medial to lateral direction, whereas in the IPL, the effector representation seems to show a rough rostro-caudal organization, the hand being located rostrally and the foot caudally (Buccino et al. 2001; Sakreida et al. 2005; Wheaton et al. 2004). However, all these studies have been based on a limited number of motor acts and have not addressed the issue of whether there are additional factors that may influence the coding of the motor acts in the action observation–action execution network.
In this study, we addressed this issue by presenting volunteers with four motor acts performed by three effectors (hand, mouth, and foot). Two motor acts described a positive agent–object relationship (grasping an object or dragging it toward the agent), and two motor acts described a negative relationship (pushing an object or dropping it). In principle, there are two possibilities. The first is that the agent–object relationship has no role whatsoever in coding the observed motor acts, with the acts being coded according to the effector (mouth, hand, or foot) performing them. The second is that the meaning of the observed motor acts influences the encoding, and thus motor acts having a similar agent–object relationship are clustered together regardless of the effector used to perform them. Thus dragging motor acts, for example, will be coded in the same cortical field regardless of whether they are performed with the hand, mouth, or foot.
The results showed that, although in PMv, the observed motor acts are grouped according to the effector performing them, in IPL, there is an organization based on the agent–object relationship that is independent of the effector. Furthermore, motor acts having a positive sign (i.e., grasping and dragging) were located ventro-rostrally, whereas those with a negative sign (i.e., dropping and pushing) were located dorso-caudally.
Eighteen volunteers participated in experiment 1 (11 women; mean age, 24 yr; range, 21–31 yr) and 15 in experiment 2 (9 women; mean age, 25 yr; range, 20–34 yr), of whom 7 also participated in experiment 3 (5 women). All volunteers were right-handed, had normal or corrected-to-normal visual acuity, and had no history of mental illness or neurological diseases. The study was approved by the Ethical Committee of the K.U. Leuven Medical School, and all volunteers gave their written informed consent in accordance with the Helsinki Declaration before the experiment.
To reduce head motion during the scanning sessions, the participants were asked to bite an individually molded bite bar fixed on the scanner table. Throughout the scanning session, eye movements of the participants were recorded with an ASL eye tracking system 5000 (60 Hz, Applied Science Laboratories).
The stimuli were projected with a liquid crystal display projector (Barco Reality 6400i, 1,024 × 768, 60-Hz refresh frequency) onto a translucent screen positioned in the bore of the magnet at a distance of 36 cm from the point of observation. Participants viewed the stimuli through a mirror tilted at 45° and attached to the head coil.
The experimental stimuli consisted of video clips displaying a side view of a human actor sitting at a table and performing one of four different motor acts. These motor acts were 1) dragging an object toward the actor, 2) pushing an object away from the actor, 3) grasping an object, and 4) dropping an object. Each motor act was carried out using three different effectors (foot, hand, and mouth; Fig. 1A), resulting in 12 different effector/motor act combinations. The overall size of the video display measured 17.7 × 13.2° visual angle and each video clip lasted 3 s. During video recording, special care was taken to ensure that each motor act was performed with only the most distal part of the effector (e.g., only the hand or the foot moved, whereas the arm or leg itself remained stationary). We thereby ensured that the four motor acts differed only in the type of movement and their goal and not in the amount of whole body motion. Because all movements were carried out by two different actors (1 man and 1 woman) and involved two different objects (a blue ball and a yellow cube), the complete stimulus set included 48 different video clips. Approximately 1° of visual angle above or below the moving effector, an image of a toy car (either blue or yellow), was shown in every frame of the video clips. This toy car measured 2.4 × 1.3° of visual angle, matching the size of the hand and foot in the videos.
In addition to the action stimuli, we used three different kinds of control stimuli: static stimuli, scrambled videos, and nonbiological motion videos. Static stimuli showed the first frame of every movement sequence, continuously, for 3 s, thereby controlling for the shape information present in the individual effector/motor-act videos. The two types of motion control stimuli were chosen to control for overall global motion (scrambled videos) and for specific local motion (nonbiological motion videos) differences between the individual effector/motor-act pairs. The scrambled videos contained the same sequence of frames as the original video clip, but each frame was replaced by its phase-scrambled version. The nonbiological motion control stimuli consisted of the movement of the toy car mimicking the movement of the effector with the first frame of the respective action stimulus presented in the background. The car was animated with the exact motion of the hand or the foot extracted from the motor act videos. For the mouth, we used a mixture of object and mouth motions to animate the car. For pushing and dragging, corresponding to blowing and aspirating with the mouth, no mouth motion was added, whereas for grasping and dropping, the opening and the closing of the mouth were added to the object motion. Thus the duration, direction, speed, and amplitude of the car motion were matched to the movement of the effector and its effect on the object. The nonbiological motion condition was the most stringent control condition because it controlled for both static shape and local motion. To allow assessment of the visual nature of the MR responses, we included a fixation baseline condition showing a plain gray rectangle with the same dimensions and average luminance as the movie clips. We thereby minimized changes in luminance across conditions and thus kept the pupil size constant over the course of the experiment.
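The phase-scrambling of the video frames described above can be illustrated with a short sketch. This is an illustrative numpy implementation, not the authors' stimulus-generation code; it operates on a single grayscale frame.

```python
import numpy as np

def phase_scramble(frame, rng):
    """Phase-scramble one grayscale frame: keep the amplitude
    spectrum (global energy per spatial frequency) but randomize
    the phase, destroying all recognizable structure."""
    f = np.fft.fft2(frame)
    amplitude = np.abs(f)
    # Take the phase of the FFT of a real-valued noise field so that
    # the randomized spectrum stays Hermitian-symmetric (real image).
    random_phase = np.angle(np.fft.fft2(rng.random(frame.shape)))
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * random_phase)))

rng = np.random.default_rng(0)
frame = rng.random((64, 64))          # stand-in for one video frame
scrambled = phase_scramble(frame, rng)
```

Because the amplitude spectrum is untouched, mean luminance and contrast energy of each frame are preserved, which is the point of this control condition.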
A small red square (0.2°) was superimposed on all the stimuli. This fixation dot was presented 0.7° above or below the position where the movement occurred in the video. These two fixation positions were chosen to control for possible retinotopic effects caused by presentation of the movement in the upper or lower visual field. Because the movements took place at slightly different positions across the four movies (different actors and objects) belonging to any given effector/motor-act combination, the fixation point was positioned in each video individually. This introduced small shifts in the vertical and horizontal positions of the video frames with respect to the fixation point. To reduce the possible influences of these shifts, the edges of the videos were blurred with an elliptical mask (14.3 × 9.6°), leaving the actor in the video unchanged, but gradually blending the periphery with the black background.
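The elliptical edge blurring can be sketched as a radial alpha mask blended toward black. The `a_frac`, `b_frac`, and `fade` parameters below are hypothetical placeholders, not values from the actual stimuli.

```python
import numpy as np

def elliptical_fade(frame, a_frac=0.9, b_frac=0.9, fade=0.15):
    """Leave the center of the frame unchanged and gradually blend
    pixels outside an ellipse into the black background."""
    h, w = frame.shape
    y, x = np.mgrid[0:h, 0:w]
    # Normalized elliptical radius: 1.0 on the ellipse boundary.
    r = np.sqrt(((x - w / 2) / (a_frac * w / 2)) ** 2 +
                ((y - h / 2) / (b_frac * h / 2)) ** 2)
    # Alpha ramps linearly from 1 (inside) to 0 over the fade band.
    alpha = np.clip((1 + fade - r) / fade, 0.0, 1.0)
    return frame * alpha

frame = np.ones((64, 64))             # stand-in for one video frame
faded = elliptical_fade(frame)        # center untouched, corners black
```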
Experiment 1 was designed to study the coding of different motor acts, carried out by different effectors, in premotor and parietal cortex. Every run started with the acquisition of four dummy volumes to ensure that the MR signal had reached its steady state. One run included five conditions, presented in different blocks of 24 s each. The five conditions consisted of 1 of the 12 effector/motor-act pairs (e.g., hand-dragging), its three corresponding control conditions, and a fixation baseline. Within a given block, the four versions of an effector/motor-act pair (male and female actor, yellow and blue object; see Stimuli), the four nonbiological control movies, the four scrambled movies, and four images of the static control condition were presented twice in random order for 3 s each, for a total of 24 s. The baseline fixation image was presented throughout the 24 s of the fixation block. Within a run, the fixation position remained constant either above or below the effector. Every condition was shown three times, yielding 15 blocks per run, lasting 360 s overall. The order of the conditions remained constant over the repetitions within a run, but could change between runs.
The order remained identical across the three effectors, but differed across motor acts and fixation positions, thus resulting in eight different sequences. Because each of these sequences could be associated with an effector, there were 24 different runs, corresponding to the 12 effector/motor-act pairs, each presented with two different fixation positions. One scanning session consisted of eight runs, which were grouped with regard to the effector used (e.g., 4 runs of different hand actions followed by 4 runs of foot actions). Every participant took part in three scanning sessions, resulting in the collection of the 24 runs for each subject. The order of the different motor acts and that of the effectors was counterbalanced across participants.
Before scanning, each participant was familiarized with the different stimuli (movements and controls) outside the scanner and instructed to maintain his gaze on the red fixation dot throughout the experiment. During the presentation in the scanner, the position of the fixation dot was kept constant on the screen across all stimuli presented during any given run.
Experiment 2 was designed to further study the grouping of the four motor acts into the two categories we had found in experiment 1. To be able to compare activations for the different motor acts directly, a single run in experiment 2 comprised all four hand motor acts (dragging, dropping, grasping, and pushing) together with their respective nonbiological motion controls (i.e., the most stringent control; Supplemental Fig. S1) and a fixation baseline condition. This design resulted in nine (2 × 4 + 1) conditions that were presented three times per run, yielding 27 (3 × 9) blocks. To reduce the length of each run, the durations of the blocks were shortened to 21 s for the experimental conditions and 15 s for the baseline fixation condition, resulting in 549 s per run. The first and last 16 frames were removed from each video, resulting in videos lasting 2,625 ms instead of 3,000 ms. Because each motor act lasted <2,625 ms, this manipulation reduced only the number of still frames presented within a block, without changing the duration of the movements, whether it was a motor act or the corresponding nonbiological motion control. Participants took part in a single scanning session of six runs.
Experiment 2 also differed in another aspect from experiment 1. To control for the possibility that the more unusual motor acts might capture slightly more attention than those performed more commonly, thus eliciting stronger fMRI responses (Ress et al. 2000), the participants were instructed to perform an attentionally demanding high-acuity task (Vanduffel et al. 2002). The fixation point was replaced by a small horizontal bar, which flipped to vertical for 1 s at random intervals between 3 and 15 s. Participants had to indicate this flip by pressing a button with their right index finger. In a prescanning session, the size of the bar was adapted for each participant to maintain performance between 80 and 85% correct responses. The bar size ranged from 0.12 × 0.08 to 0.15 × 0.05° across participants. By using this task, we attempted to hold the level of attention devoted to the experimental stimuli constant (Sawamura et al. 2005).
The design of experiment 3 was identical to experiment 2 with the exception that foot and mouth motor acts were presented in the different runs instead of hand motor acts. The entire experiment consisted of two scanning sessions of six runs each: one for the four foot motor acts and their controls and one for the four mouth motor acts together with their controls.
Scanning was performed with a 3-T MR scanner (Intera, Philips Medical Systems, Best, The Netherlands) located at the University Hospital of K.U. Leuven. Functional images were acquired using gradient-echoplanar imaging with the following parameters: 50 horizontal slices (2.5 mm slice thickness; 0.25 mm gap), repetition time (TR) = 3 s, time of echo (TE) = 30 ms, flip angle = 90°, 80 × 80 matrix with 2.5 × 2.5 mm in plane resolution, and sensitivity enhancing (SENSE) reduction factor of 2. The 50 slices of a volume covered the entire brain from the cerebellum to the vertex. A three-dimensional (3D) high-resolution T1-weighted image covering the entire brain was acquired in one of the scanning sessions and used for anatomical reference (TE/TR 4.6/9.7 ms; inversion time, 900 ms; slice thickness, 1.2 mm; 256 × 256 matrix; 182 coronal slices; SENSE reduction factor 2.5). A single scanning session lasted ∼90 min.
Data analysis was performed using the SPM2 software package (Wellcome Department of Cognitive Neurology, London, UK) running under MATLAB (The Mathworks, Natick, MA). The preprocessing steps involved 1) realignment of the images, 2) coregistration of the anatomical image and the mean functional image, and 3) spatial normalization of all images to a standard stereotaxic space (MNI) with a voxel size of 2 × 2 × 2 mm. Before further analysis, the functional data were smoothed with an isotropic Gaussian kernel of 8 mm.
For every participant, the onset and duration of each condition was modeled by a general linear model (GLM). The design matrix was composed of five regressors modeling the five conditions plus six regressors obtained from the motion correction in the realignment process to account for voxel intensity variations caused by head movement and one constant regressor per run. All regressors were convolved with the canonical hemodynamic response function (HRF). Depending on the question under consideration, different models were set up. One model combined all 24 runs of an individual subject to define the main action observation network. Because all effector/motor act combinations were presented in separate runs, 12 additional models were defined for each of the 12 effector/motor-act pairs, combining the two runs with different fixation positions, to investigate their respective signal changes in a follow-up region of interest (ROI) analysis focusing on the regions activated in the premotor and the parietal cortex. Activation for a given effector/motor-act pair was always assessed with respect to its respective control conditions presented within the same run to remove potential differences in overall activation between runs or sessions. The potential differences were further reduced by counterbalancing the order of the different runs within and between sessions across subjects.
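The construction of block regressors convolved with a canonical HRF can be sketched as follows. The double-gamma parameters below are a common textbook approximation and may not match SPM2's canonical HRF exactly; the onsets are illustrative.

```python
import numpy as np
from scipy.stats import gamma

TR = 3.0        # repetition time (s), as in the scanning protocol
N_SCANS = 120   # one 360-s run of experiment 1

def canonical_hrf(tr, duration=32.0):
    """Double-gamma HRF sampled at the TR (approximate parameters)."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def block_regressor(onsets, block_dur, tr, n_scans):
    """Boxcar for one condition's blocks, convolved with the HRF."""
    box = np.zeros(n_scans)
    for onset in onsets:
        box[int(onset // tr):int((onset + block_dur) // tr)] = 1.0
    return np.convolve(box, canonical_hrf(tr))[:n_scans]

# e.g., three 24-s blocks of one condition, one per repetition cycle
reg = block_regressor([0.0, 120.0, 240.0], 24.0, TR, N_SCANS)
```

One such regressor per condition, plus the six realignment parameters and a constant, would form the columns of the design matrix described above.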
To define the main action observation network (Fig. 1, B and C; Supplemental Fig. S1), we computed three contrast images for each of the 18 participants at the first level, combining all effector/motor-act videos compared with their static, scrambled, and nonbiological motion controls, respectively. In a second level random effects analysis (Holmes and Friston 1998), we computed three different activation maps, one for each control, combining the contrast images from all subjects (1-sample t-test). Inclusive masking of all three activation maps, thresholded at P < 0.001 uncorrected, yielded the voxels common to the three contrasts and constituted the main action observation network used for further analysis.
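This second-level step amounts to a voxelwise one-sample t-test followed by an inclusive mask across the three contrasts. A minimal sketch, assuming (as an illustration only) that the first-level contrast images are stored as subjects × voxels arrays:

```python
import numpy as np
from scipy import stats

def random_effects_map(contrast_imgs):
    """Voxelwise one-sample t-test across subjects.
    contrast_imgs: array of shape (n_subjects, n_voxels)."""
    t, p_two = stats.ttest_1samp(contrast_imgs, popmean=0.0, axis=0)
    # Convert to one-tailed p values for activation > 0.
    p_one = np.where(t > 0, p_two / 2.0, 1.0 - p_two / 2.0)
    return t, p_one

def inclusive_mask(p_maps, alpha=0.001):
    """Voxels significant in ALL contrasts (vs. static, scrambled,
    and nonbiological motion controls)."""
    return np.logical_and.reduce([p < alpha for p in p_maps])

# Toy example: voxel 0 responds in all 18 subjects, voxel 1 does not.
imgs = np.column_stack([np.linspace(4.5, 5.5, 18),
                        np.linspace(-0.5, 0.5, 18)])
_, p = random_effects_map(imgs)
network = inclusive_mask([p, p, p])   # one p map per control contrast
```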
The second step in the analysis was an ROI-based ANOVA of the relative activation levels of the 12 effector/motor-act pairs. First, the T-score map of the main action observation network was projected (enclosing voxel projection) onto flattened left and right hemispheres of the human PALS B12 atlas (Van Essen 2005; http://sumsdb.wustl.edu/sums/humanpalsmore.do) using the Caret software package (Van Essen et al. 2001; http://brainvis.wustl.edu/wiki/index.php/Caret:About). Next, we defined on the flatmaps a series of rectangular regions of interest (ROIs) covering, as well as possible, the premotor and parietal activation sites of the main action observation network. Subsequently, we identified the voxel coordinates contained in each ROI and exported them into MNI space. With this method, we defined 33 ROIs over the premotor cortex (RH: 17, LH: 16) and 135 ROIs along three lines over the left and right parietal cortex (RH: 66, LH: 69). The particular size of these ROIs (19 ± 6 voxels) was chosen to allow averaging the signal across several voxels to reduce spurious effects, while at the same time allowing sampling of the %MR signal change at multiple locations along the activations, thereby quantifying gradual changes in the signal. Moreover, this method restricted our analysis mainly to gray matter voxels. In a final step, we extracted for each ROI and each participant the %MR signal change for each of the 12 effector/motor-act pairs, relative to the nonbiological motion control condition. The data were entered into the ANOVAs or averaged for the observation of actions performed with different effectors (by collapsing over motor acts) or for the different motor acts (by collapsing over effectors). The line plots (Figs. 2 and 3) show the average %MR signal change across participants along the sequence of ROIs.
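The extraction step amounts to averaging, over the voxels of each ROI, the condition beta minus its control beta, scaled by the constant (baseline) beta. A minimal sketch with hypothetical variable names:

```python
import numpy as np

def percent_signal_change(beta_cond, beta_control, beta_constant, roi_voxels):
    """% MR signal change of one condition relative to its control,
    averaged over one ROI (betas from the first-level GLM)."""
    diff = beta_cond[roi_voxels] - beta_control[roi_voxels]
    return 100.0 * np.mean(diff / beta_constant[roi_voxels])

# Toy example: two-voxel ROI, baseline signal of 100 arbitrary units.
psc = percent_signal_change(np.array([102.0, 104.0]),
                            np.array([100.0, 100.0]),
                            np.array([100.0, 100.0]),
                            [0, 1])   # -> 3.0% signal change
```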
In addition, we used the small volume correction in SPM to test whether or not the activation for the observed effector or motor act differed significantly (P < 0.05, family wise error corrected) from its three control conditions.
Ideally, our ROI analysis should be restricted to a single repeated-measures ANOVA, comparing effects across premotor and parietal cortex. However, to perform this analysis, one would need to assume a specific relation between the ROIs defined in the two cortical areas. Because there is no a priori assumption that ROI position 1 in premotor cortex has any relationship to ROI position 1 in parietal cortex, we chose to analyze the two cortical areas separately. A four-way repeated-measures ANOVA was performed to compare the MR activation across the two lines covering the activation in left and right premotor cortex. The four factors were hemisphere, effectors, motor acts, and ROI position. Because the lines between left and right hemisphere differed in length by one ROI, we compared the first 16 positions from the two lines and disregarded ROI position 17 of the right hemisphere for this analysis. Because the activation levels differed significantly between the right and left parietal cortex, our analysis focused only on the left hemisphere; however, results of the right hemisphere are included as Supplemental Material. We performed three separate three-way ANOVAs, one for each line. The significance level was set at P < 0.0125 to correct for multiple comparisons, because we performed four independent ANOVAs. After obtaining a significant main effect of ROI or interaction in the ANOVA, subsequent contrast analyses were carried out to identify significant differences in specific ROIs between effectors or motor acts respectively.
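The published analyses used up to four within-subject factors; as a minimal illustration of the approach, a one-way repeated-measures ANOVA on a subjects × conditions matrix of % MR signal changes can be computed directly from the sums of squares:

```python
import numpy as np
from scipy import stats

def rm_anova_1way(data):
    """One-way repeated-measures ANOVA.
    data: (n_subjects, k_conditions) array, e.g., %MR signal change."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * np.sum((data.mean(axis=0) - grand) ** 2)
    ss_subj = k * np.sum((data.mean(axis=1) - grand) ** 2)
    ss_err = np.sum((data - grand) ** 2) - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, stats.f.sf(f, df_cond, df_err)

# Toy data: 10 subjects, 3 conditions with clearly different means.
rng = np.random.default_rng(1)
data = rng.normal(scale=0.01, size=(10, 3)) + np.array([0.0, 1.0, 2.0])
f, p = rm_anova_1way(data)
```

Removing the between-subject sum of squares from the error term is what distinguishes the repeated-measures version from an ordinary one-way ANOVA.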
Modeling of the conditions was identical to experiment 1; however, because all motor acts were presented within the same run, a single model was used to investigate the effects. Following the results obtained in the first experiment, we computed a positive and a negative interaction for each of the participants. In the positive interaction, activation for dragging and grasping relative to their respective controls (positive contrast) was contrasted with activation for dropping and pushing, also relative to their respective controls (negative contrast). The negative interaction was simply the opposite of the positive one. Afterward, a second-level, random effects analysis was performed for the two interactions obtained from this first-level analysis. Next, the two resulting SPMs were masked with the main effect of actions versus controls, thresholded at P < 0.05 uncorrected. The significance level for the interactions was set at P < 0.001 uncorrected, a typical criterion for interactions (Jastorff and Orban 2009). From these interactions, we defined regions centered on the local maxima of the T-score maps. These regions were compared using a two-way ANOVA to test for a region-based, higher-order interaction between region and contrast sign (see results).
In addition, we computed the same interactions using a fixed effects model, concentrating only on the seven participants who took part in both experiments 2 and 3. The resulting local maxima of this fixed effects analysis in the left parietal cortex were used for analysis in experiment 3.
fMRI activations for foot and mouth motor acts were computed for the two interaction sites defined by the fixed effects analysis of experiment 2 and tested with a two-way ANOVA for region-based interactions.
Activity profiles plot the differential MR activation in the various local maxima of experiments 2 and 3, as % MR signal change relative to the nonbiological motion control condition, for the positive and negative contrasts (see analysis of experiment 2). Profiles were computed by averaging a region of 27 voxels surrounding each local maximum.
Definition of phAIP
The outline of putative human AIP (phAIP) in the left parietal cortex was based on a meta-analysis of local maxima from previous studies. The outline is defined as the confidence ellipse of all local maxima centered on their mean. The following publications were used to define phAIP: Begliomini et al. (2007); Binkofski et al. (1999a,b); Cavina-Pratesi et al. (2007); Culham et al. (2003); Frey et al. (2005); Jancke et al. (2001); Kroliczak et al. (2007); Shikata et al. (2003).
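Such a confidence ellipse can be derived from the covariance of the pooled local maxima. A sketch assuming 2D coordinates and a 95% level (the confidence level actually used is not stated here):

```python
import numpy as np
from scipy.stats import chi2

def confidence_ellipse(points, conf=0.95):
    """Center, semi-axis lengths, and orientation of the confidence
    ellipse of a set of 2D points (e.g., local maxima from previous
    studies, projected onto the cortical flatmap)."""
    center = points.mean(axis=0)
    cov = np.cov(points.T)
    evals, evecs = np.linalg.eigh(cov)           # ascending eigenvalues
    half_axes = np.sqrt(evals * chi2.ppf(conf, df=2))
    angle = np.degrees(np.arctan2(evecs[1, -1], evecs[0, -1]))
    return center, half_axes, angle

# Toy example: points spread twice as far along y as along x.
points = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 2.0], [0.0, -2.0]])
center, half_axes, angle = confidence_ellipse(points)
```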
Experiment 1: Effectors versus type of motor act
All participants maintained good fixation during scanning. On average, they made 12.2 ± 1.0 saccades/min, and the number of saccades did not differ significantly among conditions (1-way ANOVA). Analyzing the eye movements separately for each effector/motor act pair did not yield significant differences, nor did a comparison of the different effector/motor act conditions with each other. This finding is important for the correct interpretation of the data because, as shown by Kimmig et al. (2001) for example, several cortical regions including parietal cortex display saccade-related BOLD responses that are highly correlated with saccade frequency.
General action observation network
Figure 1, B and C, highlights the general action-observation network (see methods) superimposed onto a rendered MNI template brain (Fig. 1B) and the flattened left and right hemispheres of the PALS atlas (Fig. 1C). The colored areas indicate stronger activation (P < 0.001 uncorrected) for all 12 effector/motor act pairs, pooled together, relative to all control conditions. The lenient threshold was used to include voxels activated by only some of the effector/motor act pairs. In fact, most activation sites reached the more stringent level of P < 0.05 corrected as shown by the local maxima in premotor, parietal, and occipito-temporal cortex, as reported in Supplemental Fig. S1. The latter figure also shows that the nonbiological movement condition (toy car motion) was the most stringent of the three control conditions, yielding a more restricted activation pattern, when subtracted from action observation conditions compared with the other contrasts.
Although the general action observation network seems to be similar in the left and the right hemispheres, some differences in the strengths of the activations in the two hemispheres can be noticed. In particular, the activation of the left parietal cortex is stronger than that of the right (see also Peeters et al. 2009). Direct comparison of the activation patterns in the two hemispheres indicated that this difference was significant (P < 0.001 uncorrected) along the anterior half of the intraparietal sulcus. Because the observed motor acts were all carried out with the right hand and foot, this difference might reflect a hemispheric preference for contralateral effector movements (Pelphrey et al. 2004; Shmuelof and Zohary 2006).
To study the role of effectors versus type of motor act in the organization of the action observation network, we analyzed the MR signal changes using a four-way repeated-measures ANOVA with the following factors: hemisphere, effector, type of motor act, and ROI position (Table 1). This analysis showed a significant two-way interaction between effectors and ROI position (F(30,510) = 3.8, P < 0.001) but no significant interaction between type of motor act and ROI position (F(45,765) = 0.9, P = 0.61). Figure 2, A and B, plots activations during the observation of motor acts when performed by each of the three effectors, independently of the type of motor act. The small rectangles, superimposed on the activated areas, indicate the position of the ROIs used to assess changes in activation along the cortical surface for the various conditions (see methods for details). The line plots indicate the average activities for the observation of motor acts performed with the three effectors in terms of % MR signal change relative to the most stringent control condition, i.e., the nonbiological motion condition, along the sequence of ROIs. The stringency of the control condition explains the small size of the MR signals, typical of these types of experiments (Aziz-Zadeh et al. 2006; Chong et al. 2008; Pierno et al. 2009). A rendered view (MNI brain template) of the activations for the different effectors over the whole brain is shown in Supplemental Fig. S3A.
In the right premotor cortex (Fig. 2A), foot-related activation was highest in the dorsal part of the activated region and decreased progressively toward more ventral positions, whereas mouth-related activation was virtually absent dorsally and increased ventrally. Finally, hand-related activation maintained a relatively constant value throughout the entire activation area. In the most ventrally located ROIs, however, we observed an increase in activation for all effectors, indicating that this region, unlike the more dorsal ones, represents all effectors. MNI coordinates (x = 54, y = 14, z = 30), cytoarchitectonic maps (Amunts et al. 1999), and the proximity to areas sensitive to curved stereo surfaces (Georgieva et al. 2009) provide some indication that this region might correspond to area 44, an area strongly interconnected with the anterior part of the intraparietal sulcus (Croxson et al. 2005).
The results for the left premotor cortex (Fig. 2B) were very similar; however, the MR signal for the mouth was generally weaker compared with the right hemisphere. This difference is manifested as a significant three-way interaction between hemisphere, effectors, and ROI position (F(30,510) = 1.8, P < 0.01). Subsequent contrast analysis showed no significant difference between foot and hand activations across the two hemispheres (foot: F(1,17) = 1.2, P = 0.29, hand: F(1,17) = 1.9, P = 0.18) but a significantly weaker activation for mouth motor acts in the left hemisphere (F(1,17) = 19.7, P < 0.001). Neither the three-way interaction between hemisphere, motor acts, and ROI position (F(45,765) = 1.1, P = 0.23) nor the four-way interaction (F(90,1530) = 1.2, P = 0.13) was significant.
Analyzing the %MR signal changes for the left parietal cortex using a three-way repeated-measures ANOVA with the factors effector, type of motor act, and ROI position yielded a completely opposite result. As shown in Fig. 2C, we obtained generally stronger activation for the observation of foot motor acts compared with hand and mouth motor acts throughout most of the parietal cortex, evident in a significant main effect of effector (line 1: F(2,34) = 16.0, P < 0.001; line 2: F(2,34) = 13.3, P < 0.001; line 3: F(2,34) = 5.7, P < 0.01). However, in contrast to premotor cortex, the two-way interaction between effectors and ROI position was significant neither for line 1 shown in Fig. 2C (F(44,748) = 1.1, P = 0.21) nor for the other two lines (line 2: F(44,748) = 1.2, P = 0.15; line 3: F(44,748) = 1.0, P = 0.35). In contrast, we obtained a significant two-way interaction between type of motor act and ROI position (Fig. 3; line 1: F(66,1122) = 4.1, P < 0.001; line 2: F(66,1122) = 6.8, P < 0.001; line 3: F(66,1122) = 5.5, P < 0.001). This interaction is shown in Fig. 3B, plotting the fMRI activation for each motor act along the ROIs of line 1 in the left parietal cortex. In more caudal regions, we obtained stronger responses for dropping and pushing actions compared with dragging and grasping actions. The opposite was true for the rostral part of the intraparietal sulcus (positions 13–18). The three-way interaction was not significant (line 1: F(132,2244) = 1.2, P = 0.06; line 2: F(132,2244) = 1.2, P = 0.09; line 3: F(132,2244) = 1.1, P = 0.23). Qualitatively similar results were obtained in the right parietal cortex (see Supplemental Fig. S2); however, because the activation in the left parietal cortex significantly exceeded that in the right, further analysis concentrated on the left hemisphere.
The line plot of Fig. 3B not only shows the interaction between types of motor acts and the cortical position but also suggests a specific grouping of motor acts. When the activation level for the observation of “dropping” videos (blue) increases or decreases along the line, the activation for pushing (green) follows. Similarly, activations for grasping (red) and dragging (yellow) co-vary. Comparing the spatial distributions of the activations for the different types of motor acts along the three lines, we found similar activation patterns for dragging and grasping motor acts on the one hand and for dropping and pushing motor acts on the other. This finding was supported by correlation analysis. A comparison of activations for dragging and grasping motor acts over all 69 ROIs in the three lines of the left parietal cortex showed a strong correlation between these two motor acts (R = 0.8, P < 0.001). A similarly high correlation was obtained by comparing the activations for dropping and pushing motor acts (R = 0.76, P < 0.001). The correlations between all other combinations of motor acts did not reach significance (Fig. 4A). Moreover, the two correlation coefficients, between drag and grasp and between drop and push, were significantly higher than the next strongest correlation, that between push and drag. The test yielded z = 3.07 (P < 0.01) when comparing the drag/grasp correlation with the drag/push correlation and z = 2.52 (P < 0.01) when comparing the drop/push correlation with the drag/push correlation. On the other hand, the correlations for drag/grasp and drop/push did not differ significantly from each other (z = 0.55, P = 0.29). These correlation results clearly suggest that the four motor acts can be grouped into two motor act categories based on the relationship between the agent and the object.
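Tests of this kind between correlation coefficients are typically carried out with a Fisher r-to-z transformation. The sketch below is a minimal illustration of that test, assuming two correlations each computed over the same 69 ROIs and treated as independent; the numerical inputs are hypothetical and do not reproduce the values in Fig. 4A.

```python
import math

def compare_correlations(r1, r2, n1, n2):
    """One-tailed Fisher r-to-z test that correlation r1 (over n1 pairs)
    exceeds correlation r2 (over n2 pairs), assuming independence.
    Returns the z statistic and the upper-tail normal P value."""
    z1, z2 = math.atanh(r1), math.atanh(r2)              # Fisher transform
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))      # SE of the difference
    z = (z1 - z2) / se
    p = 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # 1 - Phi(z)
    return z, p

# Hypothetical example: a drag/grasp correlation of 0.80 against a weaker
# drag/push correlation of 0.40, each computed over 69 parietal ROIs
z, p = compare_correlations(0.80, 0.40, 69, 69)
```

A stricter treatment would account for the dependence between correlations that share a variable (e.g., Steiger's test for dependent correlations), which the simple independent-samples version above ignores.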
We tentatively designated these categories positive (dragging and grasping) for motor acts bringing the object toward the agent and negative (dropping and pushing) for motor acts moving the object away from the agent (Fig. 4B).
To validate the grouping of motor acts according to sign, we conducted an additional split-data analysis, computing the activations for positive and negative motor acts separately for the even and the odd runs for all activated voxels of the left parietal cortex (Fig. 1C). Subsequently, we normalized the response of each voxel separately for the even and the odd runs by subtracting the mean response across all conditions (Haxby et al. 2001). By computing the correlation between the even and the odd runs for the two motor act categories, we found a positive correlation (R = 0.67, P < 0.001) when comparing positive motor acts across runs (the same was true for negative motor acts) but a negative correlation when comparing positive with negative motor acts across runs (R = −0.67, P < 0.001; Fig. 4C). This analysis confirmed that the activations for the two motor act categories were not only distinct in the parietal cortex but also highly reproducible across runs.
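The logic of this split-data analysis can be sketched as follows. This is a minimal illustration over synthetic voxel responses, not the measured data; the normalization step is the per-voxel mean subtraction of Haxby et al. (2001).

```python
import numpy as np

def split_half_pattern_correlation(even, odd):
    """even, odd: dicts mapping condition name -> 1-D voxel response vector.
    Returns cross-run correlations for every condition pair, after
    subtracting each voxel's mean across conditions within a run."""
    def normalize(run):
        stacked = np.vstack(list(run.values()))    # conditions x voxels
        centered = stacked - stacked.mean(axis=0)  # remove per-voxel mean
        return dict(zip(run.keys(), centered))
    even_n, odd_n = normalize(even), normalize(odd)
    return {(a, b): float(np.corrcoef(even_n[a], odd_n[b])[0, 1])
            for a in even_n for b in odd_n}

# Synthetic demonstration: two categories with opposed spatial patterns
# over 100 voxels, plus independent measurement noise per half of the data
rng = np.random.default_rng(0)
template = rng.normal(size=100)
even = {"positive": template + rng.normal(scale=0.5, size=100),
        "negative": -template + rng.normal(scale=0.5, size=100)}
odd = {"positive": template + rng.normal(scale=0.5, size=100),
       "negative": -template + rng.normal(scale=0.5, size=100)}
corrs = split_half_pattern_correlation(even, odd)
# within-category cross-run correlations come out strongly positive,
# between-category correlations strongly negative
```

Note that with only two categories, subtracting the voxel mean makes the two normalized patterns within a run exact mirror images, which would explain why the within- and between-category correlations in Fig. 4C are equal in magnitude and opposite in sign.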
In summary, experiment 1 showed a clear difference in the encoding of motor acts in the premotor and the parietal cortices. Whereas in premotor cortex, the motor acts executed by the same effector were clustered together independently of the type of the observed motor act, in parietal cortex, it was the motor acts having a similar goal that tended to be coded together, regardless of the effector used. Specifically, in the parietal lobe the spatial distributions of the activations for motor acts bringing the object toward the agent (positive motor acts; dragging and grasping) were very similar to each other, as were the spatial distributions for motor acts moving the object away from the agent (negative motor acts; dropping and pushing).
Experiment 2: Spatial segregation of positive and negative motor acts in parietal cortex
Experiment 2 was designed to further study the spatial separation between positive and negative motor acts found in the left parietal cortex. To this end, we presented all four motor acts (dragging, dropping, grasping, and pushing) within the same run, together with their respective nonbiological motion controls. Only hand motor acts were shown. In addition, the participants performed an attentionally demanding high-acuity task during the scanning (Denys et al. 2004; Georgieva et al. 2008; Vanduffel et al. 2002) to minimize the possibility that motor acts more rarely observed in everyday life capture more attention than those seen frequently. The mean activation levels in the main action observation network for the four hand motor acts used in this experiment were indeed very similar (Supplemental Fig. S3). Despite the participants' engagement in the attention task, the action observation network obtained in experiment 2 did not differ from that obtained for the observation of hand motor acts in experiment 1 (Supplemental Fig. S4).
On average, participants detected the bar flip in the high-acuity task in 85 ± 2% of the cases, with a mean reaction time of 589 ± 7 ms across subjects. Statistical analysis showed that the percentage of correct responses did not differ significantly across conditions (F(8,112) = 1.1, P = 0.37); nor did the reaction times (F(8,112) = 1.4, P = 0.21). The number of saccades averaged 6.3 ± 1.1 saccades/min and was not significantly different between conditions (1-way ANOVA). Compared with experiment 1, the high-acuity task clearly improved the fixation quality.
Figure 5A shows the results of a random effects analysis of experiment 2. Voxels showing a significant interaction (P < 0.001 uncorrected; masked with the main effect of actions, see methods) for positive motor acts are shown in red, whereas voxels showing a significant interaction for negative motor acts are highlighted in blue. These two interaction sites were the only regions within the entire brain showing a significant interaction for the two groups of actions. The site exhibiting stronger activation for positive motor acts was located in the rostro-ventral part of phAIP (white outline) (Binkofski et al. 1999a; Culham et al. 2003; Frey et al. 2005; Shikata et al. 2003), whereas negative motor acts led to stronger activation dorso-caudally in the same area. Although caution is required when identifying an area on the basis of Talairach coordinates taken from other studies, the ellipse defining phAIP was based on a set of nine well-controlled studies (see methods). In addition, a recent study by Peeters et al. (2009) confirmed the involvement of monkey AIP and human phAIP in action observation.
The activity profiles for the local maxima of the positive and the negative interaction sites (colored spheres in Fig. 5A) are presented in Fig. 5B. In the positive local maximum, the red bar is higher than the blue bar, indicating that the activity for positive motor acts, relative to their respective nonbiological motion controls (positive contrast), exceeds that for the negative motor acts, again relative to their respective nonbiological motion controls (negative contrast), i.e., there is an interaction favoring positive motor acts. In the negative local maximum, the opposite holds true, indicating the opposite interaction. A subsequent two-way repeated-measures ANOVA showed a significant interaction (F(1,14) = 32.7, P < 0.001) between the factors cortical region (red and blue spheres) and contrast sign (positive or negative). This region-based interaction captures the contrasting activity patterns of the two ROIs. To avoid circular reasoning, we also determined the two local maxima for the odd runs of experiment 2 only and computed the activity profiles in those locations using the even runs (Fig. 5C). This additional analysis confirmed our results, again showing a significant region-based interaction (F(1,14) = 28.2, P < 0.001).
Figure 5D plots the fMRI activation for each individual participant for the local maxima of the random effects model (2 spheres in Fig. 5A), ordered according to the amplitude of their activations for positive motor acts relative to their controls. All subjects except one (subject 15) clearly show stronger responses, relative to control, for positive motor acts compared with negative ones in the local maximum for the positive interaction. The converse was observed for the negative interaction site, with the exception of subject 12. However, when we considered the local maximum for each individual subject, rather than for the group, all subjects showed the expected pattern (Supplemental Fig. S5).
In conclusion, experiment 2 directly tested for the segregation of motor acts according to sign in the left parietal cortex that had been suggested by the results of experiment 1. A site favoring positive motor acts was located rostro-ventrally in phAIP, whereas a site representing negative motor acts was located dorso-caudally within the same area.
Experiment 3: Effector-independent grouping of positive and negative motor acts
Experiment 2 studied only hand motor acts. Experiment 3 was therefore designed to confirm the sign-related segregation found in experiment 2, but for the other two effectors. Seven participants, who had taken part in experiment 2 (indicated in red in the abscissas of Fig. 5D), were studied, this time testing foot and mouth motor acts. The design was otherwise identical to experiment 2.
As in the previous experiment, participants performed well in the high-acuity task. On average, subjects detected the bar flip in 84 ± 2% of the cases for runs presenting foot motor acts and in 86 ± 3% for runs presenting mouth motor acts. The mean reaction time was 562 ± 5 ms and 567 ± 8 ms across subjects in foot and mouth runs, respectively. Statistical analysis showed that the percentage of correct detections did not differ significantly across conditions (F(8,48) = 0.9, P = 0.54 and F(8,48) = 1.0, P = 0.44 in foot and mouth runs, respectively); nor did reaction times (F(8,48) = 0.4, P = 0.92 and F(8,48) = 1.0, P = 0.27). Because the high-acuity task can be performed only in central vision, similarities in task performance indicate that fixation quality was nearly identical in the different conditions.
To generate an a priori prediction for this experiment, we computed a fixed effects model of the data obtained in experiment 2 for the seven participants common to both experiments. Using this analysis, we defined the group local maxima over these seven subjects for positive and negative hand actions, indicated as diamond shapes in Fig. 6A, and extracted the activity profiles for all three effectors using the 27 voxels surrounding the local maxima (Fig. 6B). We predicted an interaction between the cortical region and the contrast sign not only for hand motor acts but also for the two new effectors. Both predictions were borne out: region-based interactions in the two-way ANOVA were significant for both foot actions (F(1,41) = 5.1, P < 0.05) and mouth actions (F(1,41) = 8.4, P < 0.01). These values were similar to those obtained for the hand (F(1,41) = 8.9, P < 0.01). Figure 6B shows that stronger activation was indeed observed for positive compared with negative motor acts, each relative to their controls, in the positive local maximum for the hand, independently of the effector used to perform the action. The converse was found in the negative local maximum. Taken together, the results of experiment 3 confirm that the clustering of motor acts according to sign was indeed effector independent, as suggested by experiment 1.
The basic elements of functional organization in the ventral premotor cortex (PMv) and the inferior parietal lobule (IPL) are motor acts, i.e., movements with a specific goal (see Introduction). However, the same goal (e.g., grasping an object) can be achieved using different effectors and, conversely, the same effector can be used to obtain different goals (e.g., grasping, dragging, pushing). Here we have shown that the clustering of motor acts in the premotor cortex is effector-based: the observation of a motor act performed with the same effector activates the same anatomical sector regardless of the type of motor act performed. In contrast, the clustering of observed motor acts in IPL is based on their functional meaning: the observation of motor acts with similar goals activates the same anatomical sectors regardless of the effector performing them.
The finding in the ventral premotor cortex that activations for the observation of motor acts tend to be located together if performed with the same effector is in agreement with the data of previous imaging studies (Buccino et al. 2001; Sakreida et al. 2005; Wheaton et al. 2004). Similarly, the presence, within a given somatotopic field (foot, hand, or mouth), of additional, weaker responses to the observation of motor acts carried out with other effectors is also in accord with these previous findings.
Furthermore, the absence of clustering according to functional meaning in PMv does not at all imply that single neurons in the premotor cortex are not selective for the type of action. Such selectivity has been reported for monkey F5 neurons (Rizzolatti et al. 1988). It simply indicates that motor acts executed with the same effectors are represented close to one another. In the same vein, our results may seem, at first glance, to contradict the proposed motor cortex organization of Graziano and Aflalo (2007), who reported segregation of various complex movements performed with the same effector. However, our study focused on the representation of fine manipulative actions in PMv using action observation as a probe, whereas that of Graziano and Aflalo described complex, multisegmental movements evoked by electrical stimulation of a large region of cortex that included primary motor and dorsal premotor cortex. Note also that Graziano and Aflalo's proposal that the cortical motor system evolved primarily for action organization, rather than for movement control, is fully consistent with our findings.
When comparing right and left premotor activations, we found that the mouth representation in the left hemisphere was significantly weaker than that in the right hemisphere. A similar asymmetry has been previously reported (Buccino et al. 2001; Wheaton et al. 2004). It is likely that this asymmetry depends on the type of actions presented. Whereas the left ventral premotor cortex is mainly involved in linguistic functions (Aziz-Zadeh et al. 2005, 2006), the stimuli we presented depicted nonlinguistic motor acts. It is therefore possible that, with the evolution of language, a reorganization occurred in the left premotor cortex of humans with a concomitant decrease of the neural space devoted to nonlinguistic mouth actions.
The most novel finding of our study concerns the way in which motor acts performed by others are coded in the parietal lobe. Although it has been shown previously that phAIP is involved in action observation (Peeters et al. 2009), we found 1) that the observation of different types of hand motor acts produced a spatially segregated activation pattern in this area, with positive motor acts clustered ventrally and negative motor acts clustered dorsally in phAIP, and 2) that the same pattern was observed during the observation of motor acts performed with the mouth and the foot.
There are several lines of evidence to suggest that this organization is indicative of higher-level, categorical representations of motor acts in the inferior parietal lobule. In fact, the movement kinematics differs considerably both between dragging and grasping (positive motor acts) and between dropping and pushing (negative motor acts) executed with the same effector (e.g., the hand) and even more when these motor acts are executed with different effectors (hand, mouth, and foot). Thus the high correlations between the activations elicited by the two positive motor acts and between those of the two negative motor acts cannot simply be explained by their physical similarities. Note also that the separation between positive and negative motor acts was observed independently in all experiments.
The categorization of motor acts according to their sign is strongly reminiscent of findings by Freedman and Assad (2006). These authors recorded neurons in the lateral intraparietal area (LIP) of the monkey after training the animal to perform a motion-categorization task. They found that neurons in anterior LIP reflect the category membership of visual motion direction and that the responses of LIP neurons changed according to the required categorization. Our results extend the role of LIP in stimulus categorization to another, neighboring parietal area. They also suggest that stimulus categorization occurs in IPL naturally, according to the behavioral significance of the observed motor act. This latter property differentiates it from rule-based categorization, which involves prefrontal areas (Smith and Grossman 2008). Note, however, that we do not claim that our definitions of sectors coding positive and negative motor acts describe the only categorical boundary represented in the anterior part of the IPS. Future experiments testing a wider range of motor acts, presented from different viewpoints, are needed to answer this question.
What could be the purpose for grouping motor acts according to their behavioral sign (taking possession vs. discarding) and for their particular spatial arrangement? In the macaque monkey, the anterior tip of AIP is strongly interconnected with the hand and digit representations in secondary somatosensory cortex, whereas these connections are weaker in the posterior part of AIP (Borra et al. 2008). We propose that this may explain why positive actions are located rostro-ventrally in phAIP and negative ones dorso-caudally, because the consequence of taking possession of an object is prolonged haptic interaction, which is not the case when discarding an object.
It is important to stress that the categorization of motor acts into positive and negative was observed in area phAIP for all three effectors. This finding is rather surprising because neurophysiological findings show that, in both monkeys (Jeannerod et al. 1995; Sakata et al. 1995; Taira et al. 1990) and humans (Binkofski et al. 1999a; Culham et al. 2003; Frey et al. 2005; Rice et al. 2006; Tunik et al. 2008), area AIP is part of a circuit related to the visuo-motor control of hand movements. Anatomical findings (Borra et al. 2008) and lesion studies (Gallese et al. 1994) also support this view. Thus how can this multiple-effector representation in AIP be explained?
One possibility is that, contrary to what is generally believed, area AIP is not exclusively a hand area but is also involved in the processing of agent–object interactions performed with different effectors. There is no evidence, however, in favor of such multiple-effector localization in AIP. On the contrary, single neuron data in the monkey clearly suggest a segregation of the different body parts and related movements in different cytoarchitectonic areas of the inferior parietal lobule (Hyvarinen 1981; Rozzi et al. 2008).
An alternative possibility is that, because the motor acts we presented (grasping, dragging, pushing, dropping) are typically hand motor acts, their observation might have activated principally neurons coding hand actions; in other words, hand action templates were used to comprehend the goal of motor acts carried out with other effectors. Consistent with this interpretation are the findings that the human grasping circuit is strongly activated during the observation of grasping performed with artificial devices, even when the artificial device differs from a grasping hand in shape and kinematics (Gazzola et al. 2007a; Peeters et al. 2009). It might therefore be that, whereas phAIP is crucial for decoding the motor acts typically done with the hand, other parietal areas related to the control of foot movements become active during the observation of motor acts typically performed with the foot. The activations extending dorsally beyond area phAIP for foot kicking and pushing actions reported by Buccino et al. (2001) could in fact indicate the existence of such areas. In line with this interpretation is our finding of a dorsal region where the observation of foot actions produced significantly stronger activations than that of other effectors.
In summary, our data suggest that the encoding of motor acts done by others occurs in three main steps. The first consists of a visual description of the observed movements. This process occurs in the occipito-temporal cortex. The relevance of movement goals is rather limited at this point (Jastorff and Orban 2009; Nelissen et al. 2006; Perrett et al. 1990) and, most importantly, even in the STS region, there is no goal generalization across effectors (Cattaneo et al. 2010). This visual description of the observed motor acts is sent to the parietal cortex, where it is transformed into potential, goal-directed motor acts. By generalizing the observed motor act across effectors and by categorizing them, these representations play a fundamental role in the comprehension of the goal of a given motor act, independently of how that goal is achieved. In a third step, the goal-related information reaches the premotor cortex, where motor acts, and their visual counterparts, are now clustered largely according to a somatotopic coordinate system, which highlights the somatic specificity of the motor act and may facilitate the imitation of that act.
This study was supported by Grants Fonds Wetenschappelijk Onderzoek G.0730.09, Inter University Attraction Pole 6/29, and Excellentie Financiering 05/14 to G. A. Orban and IUAP6/29 and Agenzia Spaziale Italiana to G. Rizzolatti.
No conflicts of interest, financial or otherwise, are declared by the author(s).
The help of R. Peeters, P. Kayenbergh, W. Depuydt, M. Depaep, and G. Meulemans is kindly acknowledged. We thank S. Raiguel, R. Vandenberghe, G. Luppino, S. Rozzi, C. Keysers, J. Culham, and J. Assad for comments on an earlier version of the manuscript.
1 The online version of this article contains supplemental data.
- Copyright © 2010 The American Physiological Society
- Amunts et al., 1999.
- Aziz-Zadeh et al., 2005.
- Aziz-Zadeh et al., 2006.
- Begliomini et al., 2007.
- Binkofski et al., 1999a.
- Binkofski et al., 1999b.
- Borra et al., 2008.
- Buccino et al., 2001.
- Cattaneo et al., 2010.
- Cavina-Pratesi et al., 2007.
- Chong et al., 2008.
- Croxson et al., 2005.
- Culham et al., 2003.
- Denys et al., 2004.
- Fabbri-Destro and Rizzolatti, 2008.
- Filimon et al., 2007.
- Fogassi et al., 2005.
- Freedman and Assad, 2006.
- Frey et al., 2005.
- Gallese et al., 1996.
- Gallese et al., 1994.
- Gazzola and Keysers, 2009.
- Gazzola et al., 2007a.
- Gazzola et al., 2007b.
- Georgieva et al., 2009.
- Georgieva et al., 2008.
- Graziano and Aflalo, 2007.
- Grezes et al., 2003.
- Hamilton and Grafton, 2008.
- Haxby et al., 2001.
- Holmes and Friston, 1998.
- Hyvarinen, 1981.
- Jancke et al., 2001.
- Jastorff and Orban, 2009.
- Jeannerod et al., 1995.
- Kakei et al., 1999.
- Kakei et al., 2001.
- Kimmig et al., 2001.
- Kroliczak et al., 2007.
- Lemon et al., 1976.
- Nelissen et al., 2005.
- Nelissen et al., 2006.
- Peeters et al., 2009.
- Pelphrey et al., 2004.
- Perrett et al., 1990.
- Pierno et al., 2009.
- Porter and Lemon, 1993.
- Ress et al., 2000.
- Rice et al., 2006.
- Rizzolatti et al., 1988.
- Rizzolatti and Craighero, 2004.
- Rizzolatti et al., 1996.
- Rozzi et al., 2008.
- Sakata et al., 1995.
- Sakreida et al., 2005.
- Sawamura et al., 2005.
- Shikata et al., 2003.
- Shmuelof and Zohary, 2006.
- Smith and Grossman, 2008.
- Taira et al., 1990.
- Tunik et al., 2008.
- Umilta et al., 2008.
- Van Essen, 2005.
- Van Essen et al., 2001.
- Vanduffel et al., 2002.
- Wheaton et al., 2004.