TY - JOUR AU - Lingnau, Angelika AB - Abstract Humans are able to interact with objects with extreme flexibility. To achieve this ability, the brain not only controls specific muscular patterns, but also needs to represent the abstract goal of an action, irrespective of its implementation. It is debated, however, how abstract action goals are implemented in the brain. To address this question, we used multivariate pattern analysis of functional magnetic resonance imaging data. Human participants performed grasping actions (precision grip, whole hand grip) with two different wrist orientations (canonical, rotated), using either the left or right hand. This design permitted us to investigate a hierarchical organization consisting of three levels of abstraction: 1) “concrete action” encoding; 2) “effector-dependent goal” encoding (invariant to wrist orientation); and 3) “effector-independent goal” encoding (invariant to effector and wrist orientation). We found that motor cortices hosted joint encoding of concrete actions and of effector-dependent goals, while the parietal lobe housed a convergence of all three representations, comprising action goals within and across effectors. The left lateral occipito-temporal cortex showed effector-independent goal encoding, but no convergence across the three levels of representation. Our results support a hierarchical organization of action encoding, shedding light on the neural substrates supporting the extraordinary flexibility of human hand behavior. action, fMRI, grasping, motor system, MVPA Introduction Human behavior is characterized by an astonishing dexterity combined with extreme flexibility. How our motor system succeeds in implementing these two complementary features is still largely unknown. Consider our daily goal-directed interactions within the environment, for example, grasping an object. On the one hand, the brain coordinates the smooth execution of an articulated hand-object interaction and its accurate online control. This is made possible through the representation of “concrete” action exemplars characterized by specific “motor features,” such as the adopted type of grip, effector, and orientation of the hand with respect to the object. On the other hand, to keep behavior flexible, our brain also needs to represent the “abstract” goal of the action we aim to perform, regardless of the concrete implementation of this action. This example suggests that the neural architecture permitting our daily interactions with objects represents action-related information at different levels of abstraction. The underlying idea is that information characterizing an action, for example, grasping different objects with specific grip types, might be represented not only in the context of a specific movement (“concrete action level”), but also in a form that generalizes across other motor features of the movement (“abstract goal levels”), such as the effector and wrist orientation. We reasoned that brain regions jointly hosting concrete and abstract action representations, defined as above, could provide the neural basis for the remarkable precision and extreme flexibility of human hand behavior. Neurophysiological and neuroimaging investigations demonstrated that a parieto-frontal motor network is recruited during the planning, execution and online control of goal-directed hand actions (Culham et al.
2006; Culham and Valyear 2006; Filimon 2010; Grafton 2010; Vesia and Crawford 2012; Turella and Lingnau 2014; Gallivan and Culham 2015; Janssen and Scherberger 2015; Fattori et al. 2017). Recent findings hinted to the possibility that several regions within this network hosted different levels of action encoding, ranging from the representation of a specific action to the representation of the same action independently from other motor features. A series of functional magnetic resonance imaging (fMRI) studies adopting multivariate pattern analysis (MVPA) investigated “concrete action representations.” For example, it has been demonstrated that it is possible to distinguish between different actions performed with the dominant hand, such as leftward versus rightward reaching or specific types of grasping performed toward different objects or with different grip types (Gallivan et al. 2011b, 2013b; Fabbri et al. 2014) within the primary motor (M1), premotor, and posterior parietal cortices (PPC). MVPA of fMRI (Gallivan et al. 2011a, 2013b; Barany et al. 2014; Kadmon Harpaz et al. 2014; Krasovsky et al. 2014; Haar et al. 2015) and magnetoencephalography data (Turella et al. 2016) has also been exploited to investigate a more abstract level, namely, the generalization of an action across specific motor features, such as the adopted effector (left [LH] vs. right hand [RH]), the direction and/or the amplitude of the movement. These abstract representations (also referred to as “action goals”) have been described within a circumscribed set of cortical regions both at an “effector-dependent level”—representing actions performed with the same effector (e.g., Kadmon Harpaz et al. 2014)—and an “effector-independent level”—representing actions (or action goals) irrespective of the adopted effector (Gallivan et al. 2013b). Kadmon Harpaz et al. (2014) showed that writing movements for specific letters performed with the dominant hand were represented within a number of parietal and frontal regions (“concrete action representation”). Of these regions, the M1 and anterior intraparietal sulcus (aIPS) in the left hemisphere also represented the goal of the action, that is, writing specific letters irrespective of the amplitude of the required movement (“effector-dependent goal representation”). This finding suggests that two different levels of action representations coexisted within M1 and aIPS. Gallivan et al. (2013b) showed significant decoding for hand actions (grasping vs. reaching) during the planning phase of a movement within a wide set of motor, premotor and parietal areas. In addition, several regions represented also the goal of the action (grasping vs. reaching) irrespective of the adopted hand (“effector-independent goal representation”). Decoding for this type of representation was obtained within bilateral regions of the posterior intraparietal sulcus (pIPS), and the dorsal premotor cortex (PMd), middle (mIPS), and aIPS of the left hemisphere. Overall, the studies by Gallivan et al. (2013b) and Kadmon Harpaz et al. (2014) showed the joint representation of concrete action instances and of abstract goal representation within several brain regions including M1, premotor, and parietal cortices. Together, the two studies suggest that there might be a convergence of concrete, effector-dependent, and effector-independent goal representations within the parietal cortex in the left aIPS. However, since Gallivan et al. (2013b) and Kadmon Harpaz et al. 
(2014) used qualitatively different types of hand actions, and since both studies examined two different levels of representation, it is difficult to directly compare these results. It thus remains unclear if more than two different levels of action representation are present within the motor system and which, if any, brain regions jointly host these different types of action-related information. To address this question, we applied MVPA of fMRI data to examine which regions of the human brain accommodate different levels of action encoding, ranging from the representation of concrete action exemplars to the representation of their abstract goals. Participants were requested to execute non-visually guided grasping actions (precision grip, whole hand grip) toward objects of different sizes, performed with two different wrist orientations (canonical, rotated) and effectors (LH, RH). Exploiting this experimental design in combination with MVPA, we aimed to reveal a possible hierarchical structure of action representations including three different levels of abstraction (see also Fig. 2): concrete action encoding (level 1), representing different grasping actions performed with the same effector and wrist orientation; effector-dependent goal encoding (level 2), representing different grasping actions performed with the same effector but generalizing across wrist orientation; effector-independent goal encoding (level 3), representing different grasping actions generalizing across both effector and wrist orientation. The rationale of our experimental paradigm was threefold: 1) to test whether visuo-motor networks represented these different levels of action encoding following a hierarchical organization along a concrete-to-abstract continuum, 2) to characterize cortical regions where these different levels were jointly represented, and 3) to identify regions hosting all three levels of action encoding. Local processing within cortical sites hosting multiple levels of action representations might permit moving between levels of the hierarchy, a fundamental neural computation supporting the flexibility of our behavior. Based on previous neuroimaging and neurophysiological investigations (Culham et al. 2006; Culham and Valyear 2006; Filimon 2010; Grafton 2010; Vesia and Crawford 2012; Turella and Lingnau 2014; Gallivan and Culham 2015; Janssen and Scherberger 2015; Fattori et al. 2017), we predicted that actions are encoded at the concrete level of the hierarchy in a widespread set of regions of the parieto-frontal hand motor network, consisting of motor, premotor, and PPC. It is more difficult to predict which regions encode actions at the effector-dependent and the effector-independent goal levels of the hierarchy, since only a few studies have investigated these types of representation (Gallivan et al. 2013b; Kadmon Harpaz et al. 2014). We expected these levels to be represented within a subset of the regions hosting concrete action encoding, with a possible central role of the intraparietal cortices. Regarding the convergence of all three levels of action encoding, a likely candidate could be the aIPS, as two independent fMRI studies suggested that this region might represent several different levels within the hierarchy (Gallivan et al. 2013b; Kadmon Harpaz et al. 2014). Materials and Methods Participants Twenty-four participants took part in the experiment.
Three participants were excluded from subsequent analysis due to rapid head movements (exceeding 1 mm in translation or 1 degree of rotation within one volume). The reported analyses refer to the remaining 21 participants (11 female, average age: 29.2 years, right-handed according to self-report). All participants gave written informed consent for their participation in the study and were paid for their participation. The protocol of the study was approved by the Ethics Committee for Human Research of the University of Trento in accordance with the Declaration of Helsinki. Experimental Setup, Task and Paradigm The experimental setup was similar to the one described in a recent study on action planning (Ariani et al. 2015). Participants were scanned while performing a motor task, which consisted of executing non-visually guided grasping actions on an object (Fig. 1A,B). The to-be-grasped object was positioned on a Plexiglas magnetic resonance (MR)-compatible support at the same reaching distance from both hands (Fig. 1A,B; see also Ariani et al. 2015). The object had a symmetrical shape and comprised a flat surface with a small cuboid attached to it in a central position (Fig. 1B). An MR-compatible response box was adopted as the starting position (Fig. 1A), permitting the recording of reaction times (RTs). Participants lay in the scanner horizontally, without tilting the head toward the body (Fig. 1A). Participants performed the movements without visual feedback of the object or of their moving hand, following visual instructions projected on a screen behind the head of the participant through a coil-mounted mirror. Visual stimulation was back-projected on the screen (1024 × 768 resolution, 60 Hz refresh rate) using Presentation (version 16, Neurobehavioral Systems, https://www.neurobs.com/). This setup prevented decoding results from being affected by observation of the different actions during execution (see also Ariani et al. 2015, 2018 for similar setups), at the cost of being less comparable to previous studies that used visually guided tasks. Figure 1. (A) Experimental setup. Participants were requested to perform non-visually guided grasping actions toward a wooden object. Visual instructions were projected on a screen behind the participant and could be seen via a mirror mounted on the head coil. The head of the participant was positioned in a conventional orientation, not tilted toward the object. The position of the head prevented the participant from seeing their own movement and the object during the entire experimental session. At the start of each trial, the hand rested on an MR-compatible response box. (B) Experimental apparatus. The to-be-grasped object was attached on a Plexiglas support positioned above the pelvis of the participant. The wooden object consisted of two elements, a small cuboid (2 × 2 × 1 cm) attached over a larger one (7 × 7 × 2 cm). (C) Experimental design. We used a 2 × 2 × 2 factorial design with factors: “wrist orientation” (canonical, 0°; rotated, 90°), “effector” (RH; LH), and “action” (precision grip, PG, whole hand, WH). (D) ROI selection. The positions of the ROIs selected for MVPA are reported on the brain surface (see Supplementary Materials, Fig. S2, for axial and coronal views and a three-dimensional render of the ROI positions). We superimposed the statistical t-maps assessing the univariate contrast [grasping > baseline] on the reconstructed brain surface. The minimum threshold for the t-map was set at P < 0.05 TFCE corrected (two-tailed, z = 1.96). We adopted a 2 × 2 × 2 factorial design (Fig. 1C) with the factors: “wrist orientation” (no rotation, 0°, vs. rotated wrist, 90°), “effector” (LH vs. RH), and “action” (precision grip, PG, vs. whole hand grip, WH). Participants were instructed to grasp two objects of different sizes. Specifically, they were asked to perform a precision grip toward the small central block, using the thumb and index finger (see Fig. 1C, upper panel), and to perform a whole-hand grip using their entire hand on the large lateral side of the object (see Fig. 1C, lower panel). Participants had to perform the grasping action by simply touching the object, without manipulating or moving it. We adopted an object with a symmetrical shape, so that participants could grasp the upper sides of the object without any rotation of the wrist (0°), whereas its lateral sides could be grasped only by rotating the wrist (90°). Each participant completed one experimental session consisting of 10 runs (Fig. S1). Each run consisted of 64 experimental trials arranged in eight blocks (Fig. S1). Within each run, the effector used to perform the action (LH or RH) was constant. The order of the hand to be used was alternated across runs and counterbalanced across participants. Each run started with an initial baseline period (20 s), followed by four blocks of trials (each lasting 28 s) interleaved with a baseline period (8 s), a long baseline period (32 s), four blocks of trials (each lasting 28 s) interleaved with a baseline period (8 s), and a final baseline period (24 s; see Fig. S1). Visual stimulation during the baseline period consisted of a gray fixation cross on a black background. At the beginning of each block, we presented an instruction (3 s) signaling the orientation of the hand for the following block.
The instruction consisted of either a vertical line, signaling the participant to use an untilted wrist (canonical, 0°) and grasp the upper side of the object, or a horizontal line, indicating to use a tilted wrist (rotated, 90°) and grasp the lateral side of the object. The instruction was followed by the presentation of a black fixation cross (1 s), followed by a block of eight experimental trials. Each experimental trial lasted 2.5 s and was followed by a black fixation cross (0.5 s). Each experimental trial consisted of a change in the color of the fixation cross, which instructed the participant which action to perform. We adopted four different colors for the cue (blue, green, red, yellow). For each participant, each color was associated with the execution of an action (either PG or WH) with a specific effector (either LH or RH). The colors assigned to each combination of action and effector were counterbalanced across participants. At the appearance of the cue, the participant had to perform the action and then return to the starting position, waiting for the next cue. To prevent participants from establishing the same arbitrary mapping between an action and a color across the two hands, we adopted different colors to signal the same actions for the two hands. Therefore, cross-decoding at the most abstract level cannot be based on arbitrary stimulus-response mappings unless one assumes that the same patterns of brain activation are elicited for all four combinations of color and action. We recorded the participants’ behavioral performance using a video camera mounted on a tripod outside the 0.5 mT line during the entire duration of the study for offline control of behavior. To familiarize themselves with the task, participants practiced two entire runs (one for each hand) before entering the MR room and one additional practice run inside the MR scanner. Data Acquisition MR data were acquired with a 4 T scanner (Bruker MedSpec) using an 8-channel head coil. Functional images were acquired with a T2* echo-planar imaging sequence (repetition time (TR): 2 s; echo time (TE): 33 ms; field of view (FOV): 192 × 192 mm; in-plane resolution 3 × 3 mm; 28 slices with a slice thickness of 3 mm and a gap size of 0.45 mm, acquired in ascending interleaved order and aligned parallel with the AC-PC line). Before each run, an additional scan was collected to measure the point-spread function of the acquired sequence to correct possible distortions (Zaitsev et al. 2004). Each participant completed 10 runs of 174 volumes each. At the beginning of the experimental session, a T1-weighted MP-RAGE anatomical scan (TR: 2700 ms; TE: 4.18 ms; FOV: 256 × 224 mm; 1 mm isotropic voxel resolution; 176 slices) was acquired for each participant. Experimental Design and Statistical Analysis Behavioral Analysis For behavioral analysis and MVPA, we excluded trials based on performance errors, identified off-line by visual inspection of the videos recorded during the experiment (e.g., when participants performed the incorrect grasping action, the correct grip with the wrong wrist orientation, or omitted the action). Moreover, we excluded trials if the RT was shorter than 100 ms or longer than 1500 ms (for a similar procedure, see Ariani et al. 2015). On average, we excluded 13 out of a total of 640 trials (~2%), indicating that participants were able to perform the task correctly. A 2 × 2 × 2 repeated measures ANOVA was performed on the RTs.
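As an illustration of this behavioral analysis, the following is a minimal Python sketch of the trial exclusion and the 2 × 2 × 2 repeated-measures ANOVA on RTs. It is illustrative only: the original analysis was not performed with this code, and the column names of the hypothetical trial-level table (subject, effector, action, orientation, rt, error) are assumptions.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical trial-level table with one row per trial (assumed file and columns).
trials = pd.read_csv("trials.csv")

# Exclude error trials and trials with RTs shorter than 100 ms or longer than 1500 ms.
valid = trials[(trials["error"] == 0) &
               (trials["rt"] >= 0.100) &
               (trials["rt"] <= 1.500)]

# 2 x 2 x 2 repeated-measures ANOVA on RTs, averaging trials within each design cell.
anova = AnovaRM(valid, depvar="rt", subject="subject",
                within=["effector", "action", "orientation"],
                aggregate_func="mean").fit()
print(anova)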
fMRI Data Pre-Processing and Analysis Data analysis was performed using BrainVoyager QX 2.8.4 (Brain Innovation), custom software written in MATLAB (MathWorks), the Neuroelf Toolbox (http://neuroelf.net/) and the CoSMoMVPA Toolbox (Oosterhof et al. 2016, http://www.cosmomvpa.org/). We excluded the first four functional volumes from analysis to avoid possible T1 saturation effects. Pre-processing started with the realignment of all functional images with the first image of the first run as reference using trilinear interpolation. Subsequently, we applied slice time correction and high-pass filtering (cut-off frequency of 3 cycles per run). Next, we co-registered the first volume of the first run to the T1 anatomical image. For group analysis, anatomical and functional data were transformed into Talairach space using trilinear interpolation. For univariate analysis, functional images were smoothed with a Gaussian kernel of 8-mm full width half maximum. No spatial smoothing was applied for multivariate analysis. Univariate Analysis For ROI selection, we defined a general linear model (GLM) for each participant, with a total of three predictors of interest, corresponding to the two different types of movements (precision grip, whole hand grasp), performed either with the LH or RH, and the instructions (see Fig. S1). The predictors were created with a boxcar convolved with hemodynamic response function (Boynton et al. 1996). For the instructions, the start and end of the boxcar function was defined by their onset and offset (duration: 3 s). For the LH and RH conditions, we selected a box-car starting at the onset of the first experimental trial of each block and with a duration of 24 s, so that it comprised the eight experimental trials. This model was selected to maximize the power of the design to identify the cortical regions recruited by the task. The estimated beta weight for each condition was transformed to percent signal change and random effect analysis (RFX) was conducted at the group level. Statistical analysis (one-sample t-test) was performed using the CoSMoMVPA Toolbox (Oosterhof et al. 2016, http://www.cosmomvpa.org/). Correction for multiple comparisons was performed using Threshold-Free Cluster Enhancement (TFCE, Smith and Nichols 2009) in combination with cluster level correction (P < 0.05, two-tailed, z > 1.96, 10 000 permutations) as implemented in CoSMoMVPA. Statistical maps (t-values masked with TFCE threshold) were projected onto an average surface map of the reconstructions of the cortical surface of the left and right hemisphere of all participants that were included in this study, using cortex-based alignment as implemented in Brain Voyager (version 2.8). ROI Selection for MVPA We aimed to identify cortical regions representing different levels of action encoding, and cortical regions in which the different levels are jointly represented. We reasoned that the identification of regions showing overlapping representations might support the possible exchange of information across different levels of the hierarchy within these cortical sites. To this aim, we selected ROIs within both hemispheres based on previous studies that examined the encoding of movement goals at different levels of abstraction (Gallivan et al. 2013b; Kadmon Harpaz et al. 2014). We decided to start from the results of a recent study (Gallivan et al. 2013b) that adopted hand movements (grasping and reaching actions) similar to the ones considered in our study. 
Moreover, this study investigated two levels of action encoding, concrete action and effector-independent goal representations, allowing us to replicate and extend their findings by investigating an additional intermediate level of representation. For each ROI, we started from the activation at the group level obtained in Gallivan et al. (2013b). Then, we located the nearest local maxima within the results of the univariate GLM RFX analysis (t-contrast: LH and RH vs. baseline). We focused on the following ROIs: M1, ventral premotor cortex (PMv), PMd, and regions along the intraparietal sulcus: pIPS, mIPS, and aIPS (anterior aIPS in Gallivan et al. 2013a, 2013b, 2013c). The positions of the peaks for each group ROI are reported in Table 1, illustrated in Figure 1D, and reported in more detail in Figure S2.
Table 1 Position of the ROIs used for MVPA (Talairach coordinates)
ROI: x, y, z
Left pIPS: −21, −70, 48
Right pIPS: 20, −62, 45
Left mIPS: −31, −53, 48
Right mIPS: 30, −50, 51
Left aIPS: −40, −32, 46
Right aIPS: 37, −29, 48
Left M1: −31, −24, 54
Right M1: 31, −25, 51
Left PMd: −22, −11, 48
Right PMd: 26, −13, 52
Left PMv: −53, −4, 40
Right PMv: 54, 1, 35
ROI-Based and Searchlight-Based MVPA For MVPA, we estimated a GLM for each participant. We modeled each trial (see Fig. S1) as a boxcar function convolved with the hemodynamic response function (Boynton et al. 1996). The onset of the boxcar was aligned to the presentation of the visual cue, with a duration of 2 s. We considered two regressors for each block (Fig. S1), modeling all trials for each experimental condition within a block as a single predictor of interest (for a similar procedure see Oosterhof et al. 2012a, 2012b). For each participant, a total of 160 regressors of interest were considered, originating from the 8 (2 movement types × 2 wrist orientations × 2 effectors) conditions × 5 runs (for each hand) × 4 blocks. In addition, we considered instructions, head motion parameters (3 for translation, 3 for rotation), and error trials, if present, as predictors of no interest. For the instructions, their onset and offset (duration: 3 s) were used as the start and end of the boxcar function. For classification, we used linear discriminant analysis as implemented in the CoSMoMVPA Toolbox (Oosterhof et al. 2016). MVPA was performed adopting a ROI-based and a volume-based searchlight approach (Kriegeskorte and Bandettini 2007), that is, performing a decoding analysis across all voxels within the brain. The searchlight analysis was conducted to provide a replication and a possible extension of the results of the ROI analysis.
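As an illustration of the decoding scheme detailed in the following paragraphs (leave-one-run-out cross-validation, with cross-decoding across wrist orientation and effector for the abstract levels), here is a minimal Python sketch using scikit-learn's linear discriminant analysis as a stand-in for the CoSMoMVPA classifier used in the actual analyses. The dictionary of beta patterns and the condition labels are hypothetical placeholders.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cross_decode(betas, train_pair, test_pair, n_runs=5):
    # `betas` maps a condition label to an array of shape (n_runs, n_blocks, n_voxels)
    # holding the beta patterns of one ROI (or searchlight).
    accuracies = []
    for test_run in range(n_runs):
        train_runs = [r for r in range(n_runs) if r != test_run]
        # Training patterns: four runs x four blocks = 16 patterns per condition.
        X_train = np.vstack([betas[c][train_runs].reshape(-1, betas[c].shape[-1])
                             for c in train_pair])
        y_train = np.repeat([0, 1], X_train.shape[0] // 2)
        # Test patterns from the left-out run: 4 patterns per condition.
        X_test = np.vstack([betas[c][test_run] for c in test_pair])
        y_test = np.repeat([0, 1], X_test.shape[0] // 2)
        clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
        accuracies.append(clf.score(X_test, y_test))
    return float(np.mean(accuracies))

# Level 1 (concrete): train and test on the same pairwise comparison.
# acc1 = cross_decode(betas, ("WH_0_RH", "PG_0_RH"), ("WH_0_RH", "PG_0_RH"))
# Level 2 (effector-dependent goal): train at 0 degrees, test at 90 degrees, same hand.
# acc2 = cross_decode(betas, ("WH_0_RH", "PG_0_RH"), ("WH_90_RH", "PG_90_RH"))
# Level 3 (effector-independent goal): train on one hand, test on the other hand and orientation.
# acc3 = cross_decode(betas, ("WH_0_RH", "PG_0_RH"), ("WH_90_LH", "PG_90_LH"))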
Note that the dimensions of the searchlight limit the spatial specificity of the results. This is a limitation inherent to the searchlight approach, as the result of the decoding analysis for each searchlight is assigned to the central voxel of the searchlight (Oosterhof et al. 2011). Each ROI and each searchlight were defined using the beta values of the selected voxel and of the surrounding neighboring ones within a sphere with a radius of four voxels (average of 218.6 voxels per searchlight and of 250.2 voxels for ROIs). The estimated beta weights for each predictor of interest were adopted as features for the classifier. We extracted 20 patterns of beta weights for each condition. Figure 2. Hierarchical action encoding and decoding approach. Schematic overview of the hierarchical organization of action encoding tested within this study, consisting of three levels of representation along a concrete-to-abstract continuum. An example of the adopted decoding procedure is provided within the inset for each level of action representation. Our analysis investigated the possible hierarchical organization of action encoding along a concrete-to-abstract continuum (see Fig. 2), focusing on three levels of representation: Level 1: concrete action encoding; Level 2: effector-dependent goal encoding, generalizing across wrist orientation; Level 3: effector-independent goal encoding, generalizing across effector and wrist orientation. In analogy with previous studies (Gallivan et al. 2011a, 2013b, 2013c; Tucciarelli et al. 2015; Turella et al. 2016), we adopted cross-decoding for testing abstract goal encoding, that is, generalization across specific motor features, by training the classifier on a pairwise comparison containing a specific motor feature and testing it on a different pairwise comparison lacking this feature. To estimate decoding accuracy, we adopted a leave-one-run-out cross-validation approach. We adopted a nested approach for decoding, following Barany et al. (2014), whereby the same data were used for training the classifier within each searchlight, making the results comparable across the different levels of action encoding. Within each cross-validation fold, for all the considered pairwise comparisons, we tried to distinguish between the two action types (PG vs. WH) by training the classifier on the beta patterns extracted from four runs (16 per condition) and testing it on the patterns estimated from another run with independent data (4 per condition). To test for the encoding of concrete actions (level 1; Fig. 2, upper panel), we considered grasping movements performed with a specific effector and wrist orientation, training and testing the classifier on independent data of the same pairwise comparison (e.g., training and testing: WH 0° RH vs. PG 0° RH). All possible combinations of training and testing sets were considered, separately for each hand. For effector-dependent goal encoding (level 2; see Fig. 2, middle panel), we trained the classifier on the pairwise comparison for one hand (e.g., training: WH 0° RH vs.
PG 0° RH) and then tested the classifier on the same pairwise comparison, but with a different wrist orientation (e.g., testing: WH 90° RH vs. PG 90° RH). This type of encoding was tested for each effector separately with all possible combinations of training and testing sets. For effector-independent goal encoding (level 3; see Fig. 2, bottom panel), we trained a classifier on a specific pairwise comparison between the two actions (e.g., training: WH 0° RH vs. PG 0° RH) and then tested it on the same pairwise comparison but performed with the other effector and wrist orientation (e.g., testing: WH 90° LH vs. PG 90° LH). For this level, decoding results for all possible combinations of training and testing sets with the two different effectors were considered. For ROI-based MVPA, we tested significant decoding with a one-sample t-test against chance level (50%). We report uncorrected and corrected results adopting false discovery rate (FDR) correction (Benjamini and Hochberg 1995) accounting for multiple comparisons considering all comparisons (N = 5) within all ROIs (N = 12). For the searchlight MVPA, to determine in which brain areas decoding was above chance level (50%, Oosterhof et al. 2016), we computed t-tests across the individual decoding accuracy maps. P values were set at P < 0.05 (one-tailed, z > 1.6449) for cluster level correction adopting TFCE (Smith and Nichols 2009) estimated adopting 10 000 permutations as implemented in CoSMoMVPA. To further characterize the z maps obtained from the searchlight MVPA, we entered them into a conjunction analysis. The conjunction was set at the voxel level by considering the minimum z value across the considered maps (Nichols et al. 2005). Statistical maps originating from searchlight analyses were projected onto an average surface map computed using cortex-based alignment (as implemented in BrainVoyager 2.8) of the anatomies of all participants that were included in this study. Results Behavioral Results The 2 × 2 × 2 repeated measures ANOVA performed on RTs showed no significant main effect nor an interaction (effector: [F(1,20) = 0.57, P = 0.46]; action [F(1,20) = 3.20, P = 0.09]; orientation [F(1,20) = 3.19, P = 0.09]; effector × action [F(1,20) = 0.65, P = 0.8], effector × orientation [F(1,20) = 0.52, P = 0.48]; orientation × action [F(1,20) = 1.92, P = 0.18]; effector × action × orientation [F(1,20) = 0.42, P = 0.53]). Mean RTs are plotted and reported in the supplementary materials (Fig. S2 and Table S1). Univariate Results Univariate analysis was adopted to identify brain regions involved in non-visually guided grasping execution adopting the LH and the RH. The contrast of interest identified cortical regions recruited during the task, that is, grasping conditions (LH and RH) versus baseline. This contrast revealed the recruitment of a widespread bilateral set of regions within the premotor cortex, M1, somatosensory cortex and PPC (Fig. 1D). In addition, we observed a bilateral recruitment of the posterior temporal cortex, the parietal operculum, and the insular cortex. ROI-Based MVPA Our results showed significant decoding of grasping actions for all three investigated levels of representation, suggesting a hierarchical organization of action encoding across frontal and intraparietal cortices (see Fig. 3A,B). We started focusing on the three levels to provide a general overview of the structure of action organization within the parieto-frontal motor network. 
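As a brief illustration of the ROI-level statistics described above (one-sample t-tests of decoding accuracies against the 50% chance level, with Benjamini-Hochberg FDR correction over the 5 comparisons within the 12 ROIs), a minimal Python sketch with a placeholder accuracy array could look as follows; the actual analysis was carried out with CoSMoMVPA, and details such as tails and permutation-based corrections are not reproduced here.

import numpy as np
from scipy.stats import ttest_1samp
from statsmodels.stats.multitest import multipletests

# Placeholder accuracies with shape (n_subjects, n_rois, n_comparisons) = (21, 12, 5).
accuracies = np.full((21, 12, 5), 0.5) + np.random.randn(21, 12, 5) * 0.05

t_vals, p_vals = ttest_1samp(accuracies, popmean=0.5, axis=0)
reject, q_vals, _, _ = multipletests(p_vals.ravel(), alpha=0.05, method="fdr_bh")
reject = reject.reshape(p_vals.shape)   # True where decoding survives FDR (q < 0.05)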
Then, to better characterize which brain regions host overlapping action representations, we provide a description of the joint encoding of these different levels, separately for frontal and parietal regions. Figure 3. (A) Decoding results for ROI-based MVPA: motor and premotor cortices. (B) Decoding results for ROI-based MVPA: intraparietal regions. The bar graphs show the average decoding accuracy for the different levels of action encoding. Significant decoding is indicated with asterisks (*P < 0.05; **P < 0.005; q < 0.05 FDR corrected, black star). Representation of Different Levels of Action Encoding The results of the ROI-based MVPA revealed that actions are encoded at the concrete level (level 1 in Fig. 2), characterized by a specific combination of hand orientation and adopted effector, in a widespread set of regions. All ROIs encoded specific grasping actions performed with the contralateral hand (Fig. 3A,B). These findings are in line with the role of parieto-frontal motor networks in effector-specific action planning for saccades and hand movements (Rizzolatti et al. 1998, 2014; Cui and Andersen 2007; Andersen et al. 2014). With respect to hand actions, our description mirrors the general selectivity for this type of action present in PPC, premotor, and motor cortices (Turella and Lingnau 2014; Gallivan and Culham 2015; Janssen and Scherberger 2015; Fattori et al. 2017). Moreover, our results are well in agreement with previous MVPA studies in humans (Gallivan et al. 2011b, 2013b, 2013c; Fabbri et al. 2014) and monkey studies (Nelissen et al. 2017) examining similar grasping conditions. Within most of our ROIs, we also obtained a representation of concrete ipsilateral hand actions (level 1, Fig. 3). This complements the predominant representation of contralateral hand actions described in neurophysiological (Chang et al. 2008; Chang and Snyder 2012), neuropsychological (Karnath and Perenin 2005; Andersen et al. 2014), and neuroimaging studies (Valyear and Frey 2015). Even if the representation of ipsilateral hand movements has been investigated less intensely, our findings are in line with human fMRI investigations (Beurze et al. 2007; Gallivan et al. 2013b; Valyear and Frey 2015; Fitzpatrick et al. 2019) and monkey neurophysiological studies (Rizzolatti et al. 1988; Chang et al. 2008; Michaels and Scherberger 2018). It is more difficult to relate our results for the abstract representations (level 2, level 3) to previous investigations, as only a few studies (Gallivan et al. 2013b; Kadmon Harpaz et al. 2014) have investigated similar action representations, and with different experimental paradigms. Significant decoding for the second level (effector-dependent goal encoding) was evident in a less extended set of regions (Fig. 3A,B). In general, there was a posterior-to-anterior gradient, with a higher number of regions showing significant decoding for this level of representation within PPC compared with frontal cortex.
Several PPC regions showed significant decoding for the second level jointly for the ipsi- and contralateral hand, whereas no frontal region showed this pattern of results. Finally, none of the frontal regions accommodated all three investigated levels of action encoding (Fig. 3A), whereas there was significant decoding for the third level (effector-independent goal encoding) within several regions of PPC (bilateral aIPS, left pIPS). Convergence of Action Representations Within Frontal Cortices The ROI-based analysis showed a similar pattern of decoding for left and right M1 (Fig. 3A). There was a joint representation of concrete action and effector-dependent goal representation for the contralateral hand (levels 1 and 2). In addition, there was a representation of concrete actions performed with the ipsilateral hand (level 1), but no effector-dependent goal representation (level 2). Premotor cortices showed more variable results (Fig. 3A). Left PMd showed significant decoding only for concrete actions performed with the contralateral hand (RH). Right PMd showed a pattern of decoding similar to right M1 (significant decoding of levels 1 and 2 for LH, significant decoding of level 1 for RH). Left and right PMv showed a pattern of decoding similar to left M1 (significant decoding of levels 1 and 2 for RH, significant decoding of level 1 for LH). The difference in decoding within premotor regions might in part be related to differences in action representations for the dominant and non-dominant hand. Taken together, our results suggest that frontal regions might be involved in representing concrete, specific ipsilateral and contralateral action exemplars, together with the goal of contralateral hand actions at an intermediate, effector-dependent level of abstraction. This organization of action representations might allow the conversion of a goal into a specific action, but only at the effector-dependent level, for example, changing hand position on the grasped object while using the same hand. This suggests a possible role of motor and premotor cortices as a neural substrate subtending flexible hand behavior, albeit limited to the contralateral effector. Convergence of Action Representations Within Parietal Regions For intraparietal regions, MVPA revealed a qualitatively different picture (Fig. 3B). First, there was a more widespread representation of concrete action (level 1) and effector-dependent goal encoding (level 2) for the ipsilateral and contralateral hand. Importantly, several regions showed significant effector-independent encoding (level 3). In contrast to frontal cortices, there were several regions hosting all three levels of action encoding (Fig. 3B), namely, bilateral aIPS and left pIPS. The description of effector-independent goal representations within the PPC is in line with the two MVPA studies which inspired our investigation (Gallivan et al. 2013b; Kadmon Harpaz et al. 2014). In particular, our results are comparable to those of Gallivan et al. (2013b), which showed goal representations within similar regions of the PPC (aIPS) using similar MVPA methods. Note that our results partially differ from those of a recent repetition suppression fMRI study (Valyear and Frey 2015). In that study, Valyear and Frey (2015) did not obtain evidence for the grasp-specific, hand-independent representations we observed here; the authors mainly described hand-specific representations within the PPC.
The difference between the two studies might be related to the adopted methods (the different sensitivity of repetition suppression and MVPA) and/or to the adopted experimental design, as our non-visually guided task might engage a specific set of action representations. That said, the two results are not mutually exclusive: there might be a strong effector-specific gradient within the PPC, as shown by Valyear and Frey (2015) and by a long tradition of neurophysiological, neuropsychological, and neuroimaging studies (Andersen et al. 2014), whereas an effector-independent functional gradient within the PPC might also exist, as suggested by recent studies (Heed et al. 2011; Leoné et al. 2014). Overall, our results suggest that intraparietal regions (aIPS, left pIPS) host overlapping representations of concrete action exemplars and of goals at an intermediate, effector-dependent level for both hands, but also at a more general, effector-independent level. Within these cortical sites, this organization might allow sharing of information across all three levels. Brain regions having this property could support local computations across different levels of the hierarchy, permitting the flexible remapping of goals across effectors and means, for example, changing the hand adopted to grasp the same object with a different wrist orientation. MVPA Searchlight Level 1: Concrete Action Encoding (Within Effector and Orientation) Significant decoding for concrete actions (level 1, see Fig. 2, upper panel) was evident within a widespread set of frontal, parietal and temporal areas for each hand separately (Figs S4 and S5). These regions encoded concrete effector-dependent actions, that is, specific exemplars defined by a combination of wrist orientation and action type. Our results for the RH are well in agreement with previous human and monkey studies (Fattori et al. 2010, 2017; Vesia and Crawford 2012; Turella and Lingnau 2014; Gallivan and Culham 2015; Janssen and Scherberger 2015; Breveglieri et al. 2016), and with previous MVPA neuroimaging investigations (Gallivan et al. 2011b, 2013b, 2013c; Fabbri et al. 2014; Ariani et al. 2015). As for the left hand, there are few neurophysiological and neuroimaging investigations on action representations for this effector (for exceptions, see Fabbri et al. 2010; Gallivan et al. 2013a, 2013b; Valyear and Frey 2015), so it is difficult to make direct comparisons. Qualitatively, however, results for the two hands seem comparable and mirror-symmetric, with maximum decoding within the contralateral M1. Level 2: Effector-Dependent Goal Encoding (Within Effector and Across Orientation) The intermediate level of action encoding (level 2, see Fig. 2, middle panel) was identified through MVPA cross-decoding as an effector-dependent goal representation, generalizing across wrist orientation. We identified this type of encoding for the two effectors separately (Figs S6 and S7). For the left hand (Fig. S6), there was a lateralization of significant decoding in the frontal cortex, mainly within the contralateral (right) M1, somatosensory and premotor cortices. Moreover, there was bilateral significant decoding within the superior and inferior parietal cortex. Within the temporal lobe, there was also significant decoding within the posterior lateral occipitotemporal cortex (LOTC) of the right hemisphere. For the RH (Fig.
S7), a mirror-symmetric and more widespread pattern of results was evident within the frontal cortex, with significant decoding within the left M1, premotor, and somatosensory cortices. Bilateral significant decoding was evident within the superior and inferior parietal lobule encompassing the intraparietal sulcus. For the temporal lobe, the bilateral posterior part of the LOTC showed significant decoding. Regions of Convergence for Concrete Action and Effector-Dependent Goal Encoding (Conjunction of Levels 1 and 2) To identify cortical sites with a convergence of concrete action and effector-dependent goal encoding (levels 1 and 2), we performed a conjunction analysis, separately for each hand (Figs S8 and S9). The threshold value for the conjunction maps needs to be interpreted as requiring the two corrected maps to be independently significant at the given P-value. The minimum level of significance (z = 1.6449, P = 0.05) is therefore equivalent to a joint P value of 0.05^2, that is, P = 0.0025. For both effectors, there was a clear lateralization of the conjunction of these representations within a cluster comprising the contralateral premotor, motor, somatosensory, and posterior parietal cortices (see Figs S8 and S9). The results of the searchlight analyses thus confirmed the ROI-based MVPA, which showed a lateralization of the joint representation of levels 1 and 2 for each hand only within the contralateral motor cortices (Fig. 3A). Moreover, we obtained a convergence of levels 1 and 2 within the bilateral posterior parietal cortex and mainly within the right posterior temporal cortex for both effectors (Figs S8 and S9). Level 3: Effector-Independent Goal Encoding (Across Effector and Orientation) The most abstract level of encoding (level 3, see Fig. 2, bottom panel) was identified through MVPA cross-decoding as a goal representation, generalizing across the adopted effector and wrist orientation (Fig. 4A). Within frontal cortex, significant cross-decoding was evident within the right PMv, the left PMv, and PMd. Furthermore, two clusters showed significant decoding within bilateral PPC, within the aIPS and posterior superior parietal lobule (pSPL), spreading into the posterior part of the intraparietal sulcus (pIPS). Within the lateral occipito-temporal cortex, a significant cluster was present within the left anterior part of the posterior middle temporal gyrus. Our results are partially in agreement with recent investigations adopting similar cross-effector decoding (Gallivan et al. 2013a, 2013b; Turella et al. 2016). Figure 4. (A) Effector-independent goal encoding. A z-map assessing the statistical significance of the decoding accuracies is represented. The threshold was set at P < 0.05 TFCE corrected (one-tailed). Upper panel: medial view of the brain. Lower panel: lateral view of the brain. (B) Conjunction analysis for all action representations. The statistical map represents the minimum z values for the conjunction of the five statistical maps (one map for each effector for levels 1 and 2, and one map for level 3). The threshold was set at P < 0.05 TFCE corrected (one-tailed, z = 1.6449).
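A minimal Python sketch of the minimum-statistic conjunction used for the maps in Figures S8-S9 and Figure 4B (voxel-wise minimum z across independently thresholded maps, Nichols et al. 2005) is given below; the map variables are hypothetical placeholders for the corrected searchlight z-maps, and the actual computation was performed with the tools described in the Methods. Because each map is independently thresholded at P = 0.05, a voxel surviving a five-map conjunction corresponds to a joint P of 0.05^5.

import numpy as np

def conjunction(z_maps, z_threshold=1.6449):
    # Voxel-wise minimum z across maps; a voxel is retained only if every map
    # exceeds the threshold, i.e., it is significant in all maps.
    stacked = np.stack(z_maps, axis=0)
    min_z = stacked.min(axis=0)
    return np.where(min_z > z_threshold, min_z, 0.0)

# Example for the five maps entering Figure 4B (hypothetical arrays):
# conj_map = conjunction([z_level1_lh, z_level1_rh, z_level2_lh, z_level2_rh, z_level3])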
Regions of Convergence for All Action and Goal Representations (Conjunction of Levels 1, 2 and 3) To identify additional brain areas in which all three investigated action levels are jointly represented, and that we might have missed in the ROI analysis, we performed a conjunction analysis on the searchlight maps for all three levels of action representations (concrete, level 1; effector-dependent, level 2; and effector-independent, level 3). To this aim, we performed a conjunction analysis of five z-maps: one map for each effector for level 1 (Figs S4 and S5); one map for each effector for level 2 (Figs S6 and S7); and one map for level 3 (Fig. 4A). The threshold value for the conjunction map needs to be interpreted as requiring the five corrected maps to all be independently significant at the given P-value. The minimum level of significance (z = 1.6449, P = 0.05) is therefore equivalent to a joint P value of 0.05^5, that is, P ≈ 0.0000003125. A significant conjunction was evident within three sites in the posterior parietal cortex (Fig. 4B). The first two sites were obtained within the bilateral aIPS. The other cluster was located within the superior parietal lobule (pSPL), spreading into the pIPS. A left lateralization of effector-independent action encoding was also evident in the results of Gallivan et al. (2013a, 2013b, 2013c). Our analysis revealed no significant clusters within the frontal lobe, thus confirming the results of the ROI analysis. Overall, ROI-based MVPA and searchlight analyses suggested a differential role for frontal, parietal, and temporal regions, revealing a complex hierarchical structure of action representations hosted within the human brain. Discussion In this study, we aimed to identify brain areas that represent actions at three increasing levels of abstraction, and areas that jointly accommodate all three levels. We found a different pattern of functional specificity for frontal, parietal, and temporal cortices. Moreover, we identified a convergence of all three levels in parietal cortex. These results support the idea of a complex hierarchical structure of these representations hosted within the human brain, which might be at the basis of the extreme flexibility of our daily behavior: remapping an intended motor goal through the same effector (e.g., by using a different wrist orientation), or even generating a different motor output (adopting another effector). Within the “frontal lobe,” ROI analysis revealed a convergence of concrete action and effector-dependent goal representations (levels 1 and 2) for the contralateral hand within primary motor cortices (see also Figs S8 and S9). M1 might provide the neural substrate for representing a very specific action together with abstract information about its goal, but only at an effector-dependent level. Within the “parietal lobe,” we obtained a convergence of all three action representations (levels 1, 2 and 3), ranging from the representation of hand-specific action exemplars to the representation of effector-independent goals.
The joint coding of all three different representations in bilateral aIPS and left pIPS suggests a role of these regions as a central hub for moving across representations along the hierarchy, from concrete action specification to goals within and across different effectors. Within the left “lateral occipito-temporal lobe,” searchlight analysis showed a more segregated organization, with the left LOTC hosting wrist- and effector-independent action representations, but no convergence with concrete and effector-dependent action encoding (see Fig. 4A,B). We acknowledge that our decoding results might be at least partially determined by the adoption of a non-visually guided, memory-based task (see also Fabbri et al. 2010, 2012; Ariani et al. 2015, 2018; Leo et al. 2016; Turella et al. 2016). Our task imposed specific constraints during movement execution which differ from the requirements of classical open-loop paradigms (where the target object is directly visible during planning, but not during execution) adopted in monkey (e.g., Fattori et al. 2010) and human studies (e.g., Gallivan et al. 2013b). Nevertheless, even considering the uncommon requirements of our paradigm, most of the reported findings, particularly effector-independent encoding (level 3), are in line with previous investigations in the field adopting a classical open-loop paradigm (Gallivan et al. 2013b). In the following sections, we will focus on each specific cortical network to highlight its properties, providing an overview of the implications of our findings for understanding the brain architecture underlying hand-object interactions. Action Encoding in Frontal Regions M1 has been classically considered as the final stage of the parieto-frontal system, with the crucial role of producing motor output. Therefore, the description of concrete action encoding in this area is not surprising, as this encoding is causally required for performing an appropriate action. However, we also obtained a wrist-independent, effector-specific goal representation in M1. The representation of abstract information within M1 appears to be in line with recent fMRI studies (Kadmon Harpaz et al. 2014; Leo et al. 2016). As an example, Kadmon Harpaz et al. (2014) reported that M1 and aIPS encoded the movement of writing a specific letter with the dominant hand independently of the size of the drawing, suggesting a similarity of encoding within these two regions. These findings confirmed that M1 possesses the functional specificity needed to perform actions characterized by specific motor features, but showed that, even at the latest cortical stage of the motor system, there is also information about a more abstract goal, even if only at an effector-dependent level (see also Kakei et al. 1999). Likewise, Leo et al. (2016) showed that M1 represented hand movement information in terms of synergies, that is, meaningful patterns of joint movements/postures, measured with kinematic recordings. This finding provided support for the idea that M1 represents not only concrete aspects of actions, but also more abstract information, as demonstrated in our study. Moreover, brain activity was predictable on the basis of kinematic synergies not only within left M1 (Leo et al. 2016), but also within other regions of the hand motor network, such as contralateral premotor, somatosensory, superior, and inferior parietal regions, together with bilateral PMv and aIPS.
Relevant for our study, the authors showed similar synergy-based encoding for the dominant hand not only within the left M1, but also within bilateral aIPS. These two studies suggest possible similarities between the abstract representation within aIPS and M1. Our study also revealed differences between these two regions. We obtained a functional specialization within contralateral M1, where actions can be represented in flexible terms with respect to their goal (i.e., irrespective of wrist orientation), but only at an effector-dependent level. The parietal lobe contained similar effector-dependent encoding, but crucially hosting sites of convergence within aIPS (and possibly left pIPS), which represented action goals at both an effector-dependent and -independent level (see also Gallivan et al. 2013b, 2013c). Action Encoding in Parietal Regions The PPC has been classically reported to be crucial for planning effector-specific actions comprising specific modules for reaching, saccadic eye movements, and grasping (Andersen and Buneo 2002; Cui and Andersen 2007; Andersen et al. 2014). Our description of concrete action encoding in the PPC perfectly supports this line of research. Recent findings seem to complement our understanding of effector-specificity within the PPC by investigating actions performed with multiple effectors, such as the eyes, the RH and the right foot (Heed et al. 2011, 2016; Leoné et al. 2014). In these studies, PPC regions do not seem to follow only a strict effector-related organization, but could be involved in processing the same type of action information (or function) regardless of the adopted effector (Heed et al. 2011). Moreover, PPC might more likely represent dichotomies of effectors rather than a single specific one (Leoné et al. 2014), so that effector specificity within PPC might be relative and not absolute in nature. This representational organization within PPC could provide the possible neural mechanism for representing an action goal both in terms of the selected effector (e.g., RH) and of its potential alternatives (e.g., right foot and/or eyes). Similarly, our analyses revealed PPC regions (aIPS, left pIPS), where information about the same action goal is encoded jointly for the contra- and ipsilateral hand, and irrespective of which hand will be adopted (see also Gallivan et al. 2013b). Our conjunction results thus extend previous findings (Leoné et al. 2014) to grasping actions performed with the same effector, but considering the two body sides (left vs. right). One might argue that a complementary interpretation of our data could be that these representations are crucial for action and effector selection, providing a cohort of potential movements—characterized by different motor features, such as effector, hand orientation and grip type—among which to select the best candidate to-be-executed. This interpretation is in line with the possible pivotal role of PPC in representing this possible landscape of motor options, emerging from computational models of action selection (Cisek 2006), and empirical data on effector selection (Oliveira et al. 2010; Fitzpatrick et al. 2019). While this interpretation seems plausible, we wish to remark that our data are equally compatible with alternative views according to which the PPC represent abstract (categorical) “outcomes” of decisions (e.g., Freedman and Assad 2011), which are then implemented by selecting the most appropriate motor features. 
From a broader perspective, parietal cortices seem to accommodate a wider set of action representations than frontal regions. This pattern suggests a gradient from a representation of actions that is less bound to the selected effector within parietal cortices to a representation that is more closely tied to a specific effector and its motor features within frontal regions. A similar transition between hand-independent and hand-specific coding has recently been described at the neural level in the monkey during grasp planning, based on recordings from the anterior intraparietal cortex (area AIP) and PMv (area F5) (Michaels and Scherberger 2018). Overall, our results suggest that the level at which actions are encoded differs between frontal and parietal regions. Local computations within PPC regions, such as aIPS, might flexibly remap the same goal to a different effector when required by the task, as recently proposed by other authors (Heed et al. 2011, 2016; Leoné et al. 2014). Considering the likely bidirectional flow of information within parieto-frontal networks (Schaffelhofer and Scherberger 2016; Blohm et al. 2019), the implementation of the remapped motor output would in this situation be made possible by transferring this information to premotor cortices and then to M1, where only effector-dependent goal information is present.

Differences in Action Representations Between Posterior and Anterior Parietal Cortices

The crucial role of the aIPS in representing different types of grasping actions is commonly accepted (Janssen and Scherberger 2015). Lesions within the anterior part of the intraparietal cortex lead to an impairment in grasping both in monkeys (Gallese et al. 1994) and in humans (Binkofski et al. 1998). Our findings provide an important extension of previous results on effector-independent encoding (Gallivan et al. 2013b), demonstrating that this area serves as a convergence zone for representations of action goals within and across the two hands. In addition to aIPS, we observed a similar convergence of all three investigated levels of representation within a more posterior parietal region, the pIPS (and the pSPL in the conjunction analysis). The posterior part of the intraparietal cortex (and the pSPL) has traditionally been assumed to be involved in reaching, representing the direction and/or the spatial position of the target of the performed or planned movement (Andersen and Buneo 2002). However, recent investigations demonstrated that this area is also involved in grasping actions, both in monkeys (Fattori et al. 2009, 2010, 2012; Breveglieri et al. 2018) and in humans (Gallivan et al. 2011a, 2013b, 2013c; Fabbri et al. 2014; Tosoni et al. 2015). Haar et al. (2017) showed that posterior parietal regions, comprising the SPL, encode the direction of the performed movement. In addition, ipsilateral and contralateral movements were represented similarly, according to an intrinsic coordinate system. Fronto-parietal regions thus seem to host a representation of movement direction based on equivalent joint angles, which is invariant to the adopted effector (LH or RH). These findings might suggest an interpretation of our results in pSPL, and possibly also in pIPS, based on the direction of the movement and/or the spatial position of the target.
Note that this interpretation is not compatible with the paradigm adopted in our study, as the spatial position of the target object was the same for movements of the right and the left hand. Moreover, the direction of the movement in intrinsic/joint coordinates was the same for all actions performed with the two hands. This characteristic of our task thus points toward an interpretation of our results in left pSPL and pIPS in terms of a representation of action goals across the two hands, in accordance with the role of these regions in also coding grasp-related information. This representation of action goals might coexist with the encoding of direction in intrinsic/joint coordinates described by Haar et al. (2017). A recent study on tool use (Gallivan et al. 2013c) suggested potentially different roles of pIPS and aIPS in representing action goals. In this investigation, the authors compared grasping and reaching performed either with the RH or with a tool (pliers). pIPS hosted overlapping representations of concrete actions performed with the hand and with the tool, together with a representation of the action goal that was invariant to whether the hand or the tool was used. aIPS showed a different response pattern: it also represented concrete actions for both effectors (hand and tool), but showed no representation of shared goals across these two effectors. A comprehensive interpretation of our results and those of previous studies (Gallivan et al. 2013b, 2013c; Haar et al. 2017) might suggest a representation of action goals within and across hands, but not generalizing to tool action goals, within the anterior part of the PPC, whereas the posterior part of the PPC might host a representation of action goals together with the encoding of movement direction in a common effector-invariant code based on intrinsic coordinates. Different action- and position-related information would be stored in specific pathways of the dorsal stream and might be recruited differently depending on task demands (Galletti and Fattori 2018). While this is an intriguing proposal, it still needs to be tested empirically using a paradigm that dissociates spatial position from action/goal encoding across hands and/or tools.

Action Organization Within Temporal Cortices

Whereas our ROI analysis focused on parieto-frontal regions, the searchlight analysis revealed decoding for different levels of action representations within temporal cortices (Fig. 3A, Figs S4–S9). Of particular interest for the present study is the specific organization of action encoding in the left LOTC (Fig. 3A). Within this region, we obtained significant decoding for the most abstract level of goal encoding, but no overlap with the other levels of the hierarchy. Our observations are in line with recent investigations showing goal encoding within the temporal lobe during planning and execution (Gallivan et al. 2013a; Ariani et al. 2015; Turella et al. 2016). We extended these results by showing a segregated organization, with an abstract (effector-independent) representation of executed actions hosted within the LOTC. These results partly mirror the organization described during action observation (Wurm and Lingnau 2015; Wurm et al. 2016), supporting the crucial role of the LOTC in representing abstract action information across a range of stimuli and tasks (Lingnau and Downing 2015).
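As a schematic illustration of how overlap between the three decoding levels, or its absence as observed in the left LOTC, can be quantified, the snippet below applies a logical conjunction in the spirit of the minimum-statistic approach (Nichols et al. 2005) to three thresholded group-level accuracy maps. The maps, the threshold, and the use of raw accuracies instead of proper test statistics are simplifying assumptions for illustration only.

```python
# Minimal sketch: conjunction over three (placeholder) group-level decoding maps,
# one per level of the hierarchy. A voxel shows "convergence" only if its worst
# (minimum) value across the three maps still exceeds the threshold.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 1000
chance = 0.5  # chance level for pairwise classification

accuracy_maps = np.stack([
    chance + 0.05 * rng.standard_normal(n_voxels),  # level 1: concrete action
    chance + 0.05 * rng.standard_normal(n_voxels),  # level 2: effector-dependent goal
    chance + 0.05 * rng.standard_normal(n_voxels),  # level 3: effector-independent goal
])
threshold = chance + 0.05  # placeholder significance cutoff

min_map = accuracy_maps.min(axis=0)     # minimum across levels, per voxel
convergence_mask = min_map > threshold  # voxels surviving in all three maps

print(f"{int(convergence_mask.sum())} of {n_voxels} voxels show joint coding of all three levels")
```

In the present context, regions such as aIPS would fall inside such a conjunction mask, whereas the left LOTC would contribute only to the level-3 map.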
The LOTC might act as a site of integration for different types of information coming from the dorsal and the ventral streams (Lingnau and Downing 2015), but the exact role of these representations in tasks involving sensorimotor control is still poorly understood (Gallivan and Culham 2015). Before an action is performed, specific subregions within the LOTC and the dorsal stream might exchange information about the properties of the to-be-grasped object, the planned action, and/or its expected sensory consequences (Gallivan and Culham 2015; Zimmermann et al. 2016). During action execution, this information might be used for monitoring and possible online corrections, but its exact contribution to motor control remains difficult to assess. A complementary interpretation relates this decoding to motor imagery of the planned or executed actions, as participants could not see their own movements (Pilgramm et al. 2016; Zabicki et al. 2017). Further MVPA and connectivity studies will be needed to fully clarify the specific function of the temporal cortex in motor control. Indeed, previous connectivity studies provided valuable information for constraining our interpretation of the present data (Grol et al. 2007; Verhagen et al. 2008; Hutchison and Gallivan 2018), and novel task-based connectivity approaches (Di and Biswal 2019) appear promising for further characterizing the brain dynamics of the hand motor network. Nevertheless, our description of abstract representational content has started to reveal the complex structure of the neural organization of action representation hosted within the LOTC, even during motor execution.

Conclusions

Our results extend previous investigations of motor control, demonstrating that executed actions are represented at different levels of abstraction following a hierarchical organization, with a distinct arrangement within frontal, parietal, and temporal cortices. This neural architecture is likely to underlie our ability to interact with the world, orchestrating the performance of specific hand-object interactions. At the same time, it might provide the flexibility needed to react to unexpected changes in the environment, remapping an intended motor goal through the same effector (adopting another type of interaction with an object) or even generating a different motor output (adopting another effector). Finally, the description of these different levels of abstraction might be exploited for translational applications of our work in the field of neuroprosthetics. Indeed, confirmation of the crucial role of the parietal cortex in representing "abstract" action goals emerged from a recent neurophysiological investigation in humans (Aflalo et al. 2015), in which a tetraplegic patient controlled an external robotic hand by means of neural signals recorded from anterior intraparietal and superior parietal cortices (similar to the regions localized in our study). This finding recommends MVPA of fMRI data as a valuable guide for identifying the most meaningful cortical targets for implanting multi-electrode arrays to drive brain-machine interfaces.

Funding

Provincia Autonoma di Trento and the Fondazione Caritro; "Futuro in Ricerca 2013" grant (FIRB 2013, project RBFR132BKP to L.T.) awarded by MIUR; German Research Foundation Heisenberg-Professorship Grant (Li 2840/2-1 to A.L.).

Notes

The authors would like to thank Dr Paolo Ferrari for technical support.
Conflict of Interest: The authors declare no competing financial interests.

References

Aflalo T, Kellis S, Klaes C, Lee B, Shi Y, Pejsa K, Shanfield K, Hayes-Jackson S, Aisen M, Heck C, et al. 2015. Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science. 348:906–910.
Andersen RA, Andersen KN, Hwang EJ, Hauschild M. 2014. Optic ataxia: from Balint's syndrome to the parietal reach region. Neuron. 81:967–983.
Andersen RA, Buneo CA. 2002. Intentional maps in posterior parietal cortex. Annu Rev Neurosci. 25:189–220.
Ariani G, Oosterhof NN, Lingnau A. 2018. Time-resolved decoding of planned delayed and immediate prehension movements. Cortex. 99:330–345.
Ariani G, Wurm MF, Lingnau A. 2015. Decoding internally and externally driven movement plans. J Neurosci. 35:14160–14171.
Barany D, Della-Maggiore V, Viswanathan S, Cieslak M, Grafton ST. 2014. Feature interactions enable decoding of sensorimotor transformations for goal-directed movement. J Neurosci. 34:6860–6873.
Benjamini Y, Hochberg Y. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc. 57:289–300.
Beurze SM, de Lange FP, Toni I, Medendorp WP. 2007. Integration of target and effector information in the human brain during reach planning. J Neurophysiol. 97:188–199.
Binkofski F, Dohle C, Posse S, Stephan KM, Hefter H, Seitz RJ, Freund HJ. 1998. Human anterior intraparietal area subserves prehension: a combined lesion and functional MRI activation study. Neurology. 50:1253–1259.
Blohm G, Alikhanian H, Gaetz W, Goltz HC, DeSouza JFX, Cheyne DO, Crawford JD. 2019. Neuromagnetic signatures of the spatiotemporal transformation for manual pointing. NeuroImage. 197:306–319.
Boynton GM, Engel SA, Glover GH, Heeger DJ. 1996. Linear systems analysis of functional magnetic resonance imaging in human V1. J Neurosci. 16:4207–4221.
Breveglieri R, Bosco A, Galletti C, Passarelli L, Fattori P. 2016. Neural activity in the medial parietal area V6A while grasping with or without visual feedback. Sci Rep. 6:28893.
Breveglieri R, De Vitis M, Bosco A, Galletti C, Fattori P. 2018. Interplay between grip and vision in the monkey medial parietal lobe. Cereb Cortex. 28:2028–2042.
Chang SWC, Dickinson AR, Snyder LH. 2008. Limb-specific representation for reaching in the posterior parietal cortex. J Neurosci. 28:6128–6140.
Chang SWC, Snyder LH. 2012. The representations of reach endpoints in posterior parietal cortex depend on which hand does the reaching. J Neurophysiol. 107:2352–2365.
Cisek P. 2006. Integrated neural processes for defining potential actions and deciding between them: a computational model. J Neurosci. 26:9761–9770.
Cui H, Andersen RA. 2007. Posterior parietal cortex encodes autonomously selected motor plans. Neuron. 56:552–559.
Culham JC, Cavina-Pratesi C, Singhal A. 2006. The role of parietal cortex in visuomotor control: what have we learned from neuroimaging? Neuropsychologia. 44:2668–2684.
Culham JC, Valyear KF. 2006. Human parietal cortex in action. Curr Opin Neurobiol. 16:205–212.
Di X, Biswal BB. 2019. Toward task connectomics: examining whole-brain task modulated connectivity in different task domains. Cereb Cortex. 29:1572–1583.
Fabbri S, Caramazza A, Lingnau A. 2010. Tuning curves for movement direction in the human visuomotor system. J Neurosci. 30:13488–13498.
Fabbri S, Caramazza A, Lingnau A. 2012. Distributed sensitivity for movement amplitude in directionally tuned neuronal populations. J Neurophysiol. 107:1845–1856.
Fabbri S, Strnad L, Caramazza A, Lingnau A. 2014. Overlapping representations for grip type and reach direction. NeuroImage. 94:138–146.
Fattori P, Breveglieri R, Bosco A, Gamberini M, Galletti C. 2017. Vision for prehension in the medial parietal cortex. Cereb Cortex. 27:1149–1163.
Fattori P, Breveglieri R, Marzocchi N, Filippini D, Bosco A, Galletti C. 2009. Hand orientation during reach-to-grasp movements modulates neuronal activity in the medial posterior parietal area V6A. J Neurosci. 29:1928–1936.
Fattori P, Breveglieri R, Raos V, Bosco A, Galletti C. 2012. Vision for action in the macaque medial posterior parietal cortex. J Neurosci. 32:3221–3234.
Fattori P, Raos V, Breveglieri R, Bosco A, Marzocchi N, Galletti C. 2010. The dorsomedial pathway is not just for reaching: grasping neurons in the medial parieto-occipital cortex of the macaque monkey. J Neurosci. 30:342–349.
Filimon F. 2010. Human cortical control of hand movements: parietofrontal networks for reaching, grasping, and pointing. Neuroscientist. 16:388–407.
Fitzpatrick AM, Dundon NM, Valyear KF. 2019. The neural basis of hand choice: an fMRI investigation of the posterior parietal interhemispheric competition model. NeuroImage. 185:208–221.
Freedman DJ, Assad JA. 2011. A proposed common neural mechanism for categorization and perceptual decisions. Nat Neurosci. 14:143–146.
Gallese V, Murata A, Kaseda M, Niki N, Sakata H. 1994. Deficit of hand preshaping after muscimol injection in monkey parietal cortex. Neuroreport. 5:1525–1529.
Galletti C, Fattori P. 2018. The dorsal visual stream revisited: stable circuits or dynamic pathways? Cortex. 98:203–217.
Gallivan JP, Chapman CS, McLean DA, Flanagan JR, Culham JC. 2013a. Activity patterns in the category-selective occipitotemporal cortex predict upcoming motor actions. Eur J Neurosci. 38:2408–2424.
Gallivan JP, Culham JC. 2015. Neural coding within human brain areas involved in actions. Curr Opin Neurobiol. 33:141–149.
Gallivan JP, McLean DA, Flanagan JR, Culham JC. 2013b. Where one hand meets the other: limb-specific and action-dependent movement plans decoded from preparatory signals in single human frontoparietal brain areas. J Neurosci. 33:1991–2008.
Gallivan JP, McLean DA, Smith FW, Culham JC. 2011a. Decoding effector-dependent and effector-independent movement intentions from human parieto-frontal brain activity. J Neurosci. 31:17149–17168.
Gallivan JP, McLean DA, Valyear KF, Culham JC. 2013c. Decoding the neural mechanisms of human tool use. eLife. 2:e00425.
Gallivan JP, McLean DA, Valyear KF, Pettypiece CE, Culham JC. 2011b. Decoding action intentions from preparatory brain activity in human parieto-frontal networks. J Neurosci. 31:9599–9610.
Grafton ST. 2010. The cognitive neuroscience of prehension: recent developments. Exp Brain Res. 204:475–491.
Grol MJ, Majdandzic J, Stephan KE, Verhagen L, Dijkerman HC, Bekkering H, Verstraten FAJ, Toni I. 2007. Parieto-frontal connectivity during visually guided grasping. J Neurosci. 27:11877–11887.
Haar S, Dinstein I, Shelef I, Donchin O. 2017. Effector-invariant movement encoding in the human motor system. J Neurosci. 37:1663–1617.
Haar S, Donchin O, Dinstein I. 2015. Dissociating visual and motor directional selectivity using visuomotor adaptation. J Neurosci. 35:6813–6821.
Heed T, Beurze SM, Toni I, Röder B, Medendorp WP. 2011. Functional rather than effector-specific organization of human posterior parietal cortex. J Neurosci. 31:3066–3076.
Heed T, Leone FTM, Toni I, Medendorp WP. 2016. Functional versus effector-specific organization of the human posterior parietal cortex: revisited. J Neurophysiol. 116:1885–1899.
Hutchison RM, Gallivan JP. 2018. Functional coupling between frontoparietal and occipitotemporal pathways during action and perception. Cortex. 98:8–27.
Janssen P, Scherberger H. 2015. Visual guidance in control of grasping. Annu Rev Neurosci. 38:150403170110009.
Kadmon Harpaz N, Flash T, Dinstein I. 2014. Scale-invariant movement encoding in the human motor system. Neuron. 81:452–461.
Kakei S, Hoffman DS, Strick PL. 1999. Muscle and movement representations in the primary motor cortex. Science. 285:2136–2139.
Karnath H-O, Perenin M-T. 2005. Cortical control of visually guided reaching: evidence from patients with optic ataxia. Cereb Cortex. 15:1561–1569.
Krasovsky A, Gilron R, Yeshurun Y, Mukamel R. 2014. Differentiating intended sensory outcome from underlying motor actions in the human brain. J Neurosci. 34:15446–15454.
Kriegeskorte N, Bandettini P. 2007. Combining the tools: activation- and information-based fMRI analysis. NeuroImage. 38:666–668.
Leo A, Handjaras G, Bianchi M, Marino H, Gabiccini M, Guidi A, Scilingo EP, Pietrini P, Bicchi A, Santello M, et al. 2016. A synergy-based hand control is encoded in human motor cortical areas. eLife. 5:1–32.
Leoné FTM, Heed T, Toni I, Medendorp WP. 2014. Understanding effector selectivity in human posterior parietal cortex by combining information patterns and activation measures. J Neurosci. 34:7102–7112.
Lingnau A, Downing PE. 2015. The lateral occipitotemporal cortex in action. Trends Cogn Sci. 19:268–277.
Michaels JA, Scherberger H. 2018. Population coding of grasp and laterality-related information in the macaque fronto-parietal network. Sci Rep. 8:1710.
Nelissen K, Fiave PA, Vanduffel W. 2017. Decoding grasping movements from the parieto-frontal reaching circuit in the nonhuman primate. Cereb Cortex. 465:1–15.
Nichols T, Brett M, Andersson J, Wager T, Poline JB. 2005. Valid conjunction inference with the minimum statistic. NeuroImage. 25:653–660.
Oliveira FTP, Diedrichsen J, Verstynen T, Duque J, Ivry RB. 2010. Transcranial magnetic stimulation of posterior parietal cortex affects decisions of hand choice. Proc Natl Acad Sci U S A. 107:17751–17756.
Oosterhof NN, Connolly AC, Haxby JV. 2016. CoSMoMVPA: multi-modal multivariate pattern analysis of neuroimaging data in Matlab/GNU Octave. Front Neuroinform. 10:27.
Oosterhof NN, Tipper SP, Downing PE. 2012a. Visuo-motor imagery of specific manual actions: a multi-variate pattern analysis fMRI study. NeuroImage. 63:262–271.
Oosterhof NN, Tipper SP, Downing PE. 2012b. Viewpoint (in)dependence of action representations: an MVPA study. J Cogn Neurosci. 24:975–989.
Oosterhof NN, Wiestler T, Downing PE, Diedrichsen J. 2011. A comparison of volume-based and surface-based multi-voxel pattern analysis. NeuroImage. 56:593–600.
Pilgramm S, de Haas B, Helm F, Zentgraf K, Stark R, Munzert J, Krüger B. 2016. Motor imagery of hand actions: decoding the content of motor imagery from brain activity in frontal and parietal motor areas. Hum Brain Mapp. 37:81–93.
Rizzolatti G, Camarda R, Fogassi L, Gentilucci M, Luppino G, Matelli M. 1988. Functional organization of inferior area 6 in the macaque monkey. Exp Brain Res. 71:491–507.
Rizzolatti G, Cattaneo L, Fabbri-Destro M, Rozzi S. 2014. Cortical mechanisms underlying the organization of goal-directed actions and mirror neuron-based action understanding. Physiol Rev. 94:655–706.
Rizzolatti G, Luppino G, Matelli M. 1998. The organization of the cortical motor system: new concepts. Electroencephalogr Clin Neurophysiol. 106:283–296.
Schaffelhofer S, Scherberger H. 2016. Object vision to hand action in macaque parietal, premotor, and motor cortices. eLife. 5:1–24.
Smith SM, Nichols TE. 2009. Threshold-free cluster enhancement: addressing problems of smoothing, threshold dependence and localisation in cluster inference. NeuroImage. 44:83–98.
Tosoni A, Pitzalis S, Committeri G, Fattori P, Galletti C, Galati G. 2015. Resting-state connectivity and functional specialization in human medial parieto-occipital cortex. Brain Struct Funct. 220:3307–3321.
Tucciarelli R, Turella L, Oosterhof NN, Weisz N, Lingnau A. 2015. MEG multivariate analysis reveals early abstract action representations in the lateral occipitotemporal cortex. J Neurosci. 35:16034–16045.
Turella L, Lingnau A. 2014. Neural correlates of grasping. Front Hum Neurosci. 8:686.
Turella L, Tucciarelli R, Oosterhof NN, Weisz N, Rumiati R, Lingnau A. 2016. Beta band modulations underlie action representations for movement planning. NeuroImage. 136:197–207.
Valyear KF, Frey SH. 2015. Human posterior parietal cortex mediates hand-specific planning. NeuroImage. 114:226–238.
Verhagen L, Dijkerman HC, Grol MJ, Toni I. 2008. Perceptuo-motor interactions during prehension movements. J Neurosci. 28:4726–4735.
Vesia M, Crawford JD. 2012. Specialization of reach function in human posterior parietal cortex. Exp Brain Res. 221:1–18.
Wurm MF, Ariani G, Greenlee MW, Lingnau A. 2016. Decoding concrete and abstract action representations during explicit and implicit conceptual processing. Cereb Cortex. 26:3390–3401.
Wurm MF, Lingnau A. 2015. Decoding actions at different levels of abstraction. J Neurosci. 35:7727–7735.
Zabicki A, De Haas B, Zentgraf K, Stark R, Munzert J, Krüger B. 2017. Imagined and executed actions in the human motor system: testing neural similarity between execution and imagery of actions with a multivariate approach. Cereb Cortex. 27:4523–4536.
Zaitsev M, Hennig J, Speck O. 2004. Point spread function mapping with parallel imaging techniques and high acceleration factors: fast, robust, and flexible method for echo-planar imaging distortion correction. Magn Reson Med. 52:1156–1166.
Zimmermann M, Verhagen L, de Lange FP, Toni I. 2016. The extrastriate body area computes desired goal states during action planning. eNeuro. 3:1–13.

© The Author(s) 2020. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permission@oup.com
This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model)
TI - Hierarchical Action Encoding Within the Human Brain
JF - Cerebral Cortex
DO - 10.1093/cercor/bhz284
DA - 2020-05-14
UR - https://www.deepdyve.com/lp/oxford-university-press/hierarchical-action-encoding-within-the-human-brain-bmmKbgv6oC
DP - DeepDyve
ER -