Waving goodbye to contrast: self-generated hand movements attenuate visual sensitivity

Madis Vasser, Laurène Vuillaume, Axel Cleeremans, Jaan Aru

Neuroscience of Consciousness, Volume 2019, Issue 1. doi:10.1093/nc/niy013

Abstract
It is well known that the human brain continuously predicts the sensory consequences of its own body movements, which typically results in sensory attenuation. Yet, the extent and exact mechanisms underlying sensory attenuation are still debated. To explore this issue, we asked participants to decide which of two visual stimuli was of higher contrast in a virtual reality situation where one of the stimuli could appear behind the participants' invisible moving hand or not. Over two experiments, we measured the effects of such "virtual occlusion" on first-order sensitivity and on metacognitive monitoring. Our findings show that self-generated hand movements reduced the apparent contrast of the stimulus. This result can be explained by the active inference theory. Moreover, sensory attenuation seemed to affect only first-order sensitivity and not (second-order) metacognitive judgments of confidence.

Introduction
Imagine having a drink in a pub. Your eyes fixate on the glass and your arm stretches to grab it. You are not bothered at all by the appearance of this long elongated shape (your arm) in your visual field. But now consider the same situation where a similar long elongated shape moves towards your beverage—be it a snake, or the arm of a colleague. In this case, you would immediately notice this potentially threatening moving object. Hence, it seems that the movement of our own body parts is processed differently from that of objects of the external world, many of which may nevertheless exhibit similar properties. In particular, it seems that the movement of our own body parts is not so salient, i.e. it captures less attention than the movement of external objects. In the present work, we ask in which sense the movements of our own body are processed differently from those of external objects. The brain's ability to predict and attenuate the various sensory consequences of the movements of its own body is well known. However, the exact mechanisms underlying sensory attenuation are still debated (Blakemore et al. 1998; Bays et al. 2006; Brown et al. 2013; Clark 2015). Active inference, or prediction error minimization theory (Friston 2010; Hohwy 2013; Clark 2015), posits that the brain is constantly predicting sensory input. Predictions are then continuously compared with sensory input in such a way that only prediction errors are propagated further, with the overall computational goal of minimizing prediction error in the long term. Accordingly, performing actions is an efficient way of minimizing prediction errors by changing the sensory data so as to fit the predictions (Friston 2010; Hohwy 2013; Clark 2015). The active inference theory suggests that movements are elicited by predicting the sensory consequences of the movement (e.g. that the hand will be moving in the visual field). These predictions drive behavior so that the organism performs the movements leading to the predicted state. However, the predicted consequences (e.g. the hand will be moving towards the glass) are not in agreement with the current sensory data (where the hand is still in a resting position). The active inference theory proposes that this mismatch between the predictions and the sensory data is resolved through withdrawal of attention from the current sensory input, resulting in sensory attenuation (for a longer treatment of this proposal see Brown et al. 2013; Clark 2015).
Hence, according to active inference, sensory attenuation is a necessary counterpart of movement. This proposed mechanism of sensory attenuation can explain several findings in the literature that are difficult to understand under the classic efference copy view (e.g. Bays et al. 2006; Voss et al. 2008; van Doorn et al. 2014, 2015; Laak et al. 2017). Recent work with virtual reality (VR) devices has brought direct support for active inference by demonstrating that attention is withdrawn from the area of the visual field where one's own hand is currently moving (Laak et al. 2017). Participants were slower to detect stimulus changes (both movement and color) in the condition where the change occurred behind their precisely tracked hand in VR. Importantly, participants themselves did not see their virtual hand, as it was rendered invisible. Thus, the effect was caused by prediction rather than by visual occlusion. Motor predictions also attenuate brain activity caused by corresponding visual feedback (Limanowski et al. 2018). In the work by Laak et al. (2017), which was designed to investigate active inference, the effect of lowered attention was observed through longer reaction times—an indirect measure of the quality of perception. In the current article, we sought to extend these previous results to probe the effect of self-generated movement prediction on perception more directly. We based our study on the experimental approach used by Carrasco et al. (2004). In their paradigm, two Gabor patches with different orientations are presented, one cued and the other not, and the participants are asked to report the orientation of the Gabor patch with the stronger contrast. Through this experimental approach, the authors could demonstrate that covert attention enhances the perceived contrast of an attended stimulus (Carrasco et al. 2004). We reasoned that if (i) attention is withdrawn from the part of the visual field where one's hand is moving (Laak et al. 2017) and (ii) the deployment of attention affects perceived contrast (Carrasco et al. 2004), then the subjective contrast of objects in the region of the visual field where the hand is moving should be reduced. In addition to first-order sensitivity, we also sought to investigate the effect of self-generated movements on higher-order processes such as metacognition, that is, the ability to monitor and control one's own mental states (Koriat 2007). To come back to the pub setting, in addition to grabbing your beer, it is quite important to be sure that it really is yours, and that you are not actually stealing your neighbor's beer, for instance. Metacognition is a crucial part of our daily life and a critical aspect of decision making in different domains (Metcalfe and Shimamura 1996; Fleming et al. 2012). In recent years, metacognition has become a prominent topic of investigation. However, to our knowledge, no previous study has directly addressed the influence of sensory attenuation on metacognition, and studies focusing on the interplay between attention and metacognition have so far yielded mixed results (Wilimzig et al. 2008; Kanai et al. 2010; Rahnev et al. 2011; Sherman et al. 2015). Here, we measured the quality of metacognitive monitoring by asking participants to rate their confidence in their response on each trial, and by assessing the relationship between objective performance and subjective confidence (Galvin et al. 2003).
This allowed us to explore our second research question: do self-generated movements also alter metacognitive accuracy? To address these questions, we conducted two VR experiments in which participants performed a visual two-alternative forced choice task coupled with moving their hand to overlap one of the target stimuli (Fig. 1). Crucially, while the hand was tracked via sensors, the hand avatar was not shown to the participants. This made it possible to assess the extent to which visual expectations coupled to self-generated movement result in sensory attenuation.
Figure 1. (A) Approximate example of the Gabor patches used as stimuli in the study. The pairs varied in contrast, frequency, and orientation. The black dot in the middle is the gaze fixation point. (B) Physical setup of the experiment, with both the rest and raised hand positions and the approximate movement trajectory visualized. Reproduced with permission from Laak et al. (2017). (C) General design of a single trial. Participants were instructed to perform a trained hand movement when the fixation cross changed color. This movement triggered the appearance of the two stimuli, which were shown for 133 ms. The task consisted in reporting the orientation of the Gabor patch with the higher contrast. In Experiment 2, participants were additionally requested to report how confident they were in their decision on a scale from 1 (very unsure) to 4 (very sure). Note that participants' hand was completely invisible to them. Hand outlines and Gabor patch sizes in the figure are illustrative.
In Experiment 1, we probed the first-order effects of hand movements on contrast judgements. Experiment 2 aimed to replicate Experiment 1 with a new group of participants and to additionally probe the extent to which self-generated movements also influence metacognitive accuracy. In Experiment 2, participants thus also reported their confidence in their decisions on every trial. A control condition in which the visual targets never appeared at locations behind the moving hand was also carried out. This allowed us to clarify whether the attenuation effect extends to metacognitive ability or not.
Materials and Methods
Experimental setup
Our experimental setup was very similar to that of Laak et al. (2017). The hardware consisted of an Oculus Rift CV1 VR headset combined with a Leap Motion hand-tracking device mounted directly on the goggles.
This allowed us to have absolute control over the visual environment the participant perceived in the lab and also made it possible to record the relative transforms and parameters of the participant's hands (position, velocity, and orientation). Crucially, however, the hand avatar was not rendered in the virtual world. Thus, wherever we mention the "target behind the hand," the hand in question was completely invisible to the participant wearing the headset. As in Carrasco et al. (2004), the visual stimuli consisted of Gabor patches with different contrasts, frequencies, and orientations, shown in horizontally placed pairs (Fig. 1A). For the control condition of Experiment 2, the pairs were positioned vertically to avoid any effect of hand movement on contrast judgements. Importantly, the participants' task was orthogonal to the location of the stimuli (and thus to their hand movement), as their task was to report the orientation of the patch that seemed higher in contrast (see Carrasco et al. 2004) rather than to report its location (Experiment 1). In Experiment 2, participants were additionally requested to report how confident they were in their decision. Stimulus presentation was triggered by the participants' pre-trained hand movement. The stimuli were shown behind the hand only when the hand was moving upwards and the center of the palm hit a predefined virtual target rectangle measuring 6° in width and 3° in height. Responses were registered with a left or right mouse click to indicate the orientation of the higher-contrast Gabor, and with the mouse scroll wheel to select a confidence rating from 1 to 4, followed by a mouse click to confirm the selected rating. We switched the hands used for movement and responses in the middle of the experiment so as to balance possible preferences for one hand over the other. The stimulus marked as behind the hand was always on the side of the hand movement; a correctly executed hand movement covered only one of the Gabors. Participants were instructed to keep their gaze on the fixation point (a black dot in the middle of the field of view). As in Carrasco et al. (2004), one of the two Gabor patches was always displayed with a set contrast (Standard), while the other (Target) was of lower, identical, or higher contrast. In both experiments, we compared the percentage of trials reported to be higher in contrast when the target appeared behind the invisible hand vs. when it did not. The temporal and spatial distribution of the targets in the different conditions was kept constant in both experiments relative to the participants' hand movements.
Apparatus
Participants were seated behind a table, with a standard computer mouse under one hand and an empty mouse pad under the other so as to promote consistent hand movements (Fig. 1B). Participants were fitted with an Oculus Rift CV1 (Oculus VR, LLC) VR headset with a refresh rate of 90 Hz, a field of view of approximately 100°, and precise, low-latency positional tracking. The Leap Motion Controller (Leap Motion, Inc.) with the Orion SDK (version 3.1.2), mounted on the headset, was used to track hand movements. The Leap Motion sensor tracks the position, velocity, and orientation of hands and fingers with low latency and an average position accuracy of 1.2 mm (Weichert et al. 2013), with a field of view of 150° horizontally and 120° vertically. The experiments were conducted in a quiet and dark room.
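For illustration, the stimulus-triggering rule described above under Experimental setup (the palm center entering a virtual 6° × 3° rectangle during an upward movement) can be sketched as follows. This is a schematic example only: the experiment itself ran inside the VR engine, and the variable names, the coordinate convention (degrees of visual angle relative to the rectangle center), and the velocity check are our assumptions rather than the original implementation.

```r
# Schematic sketch (not the original implementation) of the presentation trigger:
# the palm centre had to enter a virtual rectangle 6 degrees wide and 3 degrees
# high while the hand was moving upwards. Coordinates are assumed to be in
# degrees of visual angle relative to the rectangle centre; names are hypothetical.
palm_triggers_stimuli <- function(palm_x, palm_y, palm_vy,
                                  rect_width = 6, rect_height = 3) {
  inside    <- abs(palm_x) <= rect_width / 2 & abs(palm_y) <= rect_height / 2
  moving_up <- palm_vy > 0          # positive vertical velocity = upward movement
  inside & moving_up
}

# Example: a palm sample 1 degree to the right of and 0.5 degrees above the
# rectangle centre, moving upwards, would trigger the 133 ms presentation.
palm_triggers_stimuli(palm_x = 1, palm_y = 0.5, palm_vy = 0.2)   # TRUE
```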
Stimuli
The default virtual environment consisted of a uniform gray background with a small black fixation dot (0.7° in diameter) in the middle of the field of view, directly in front of the participant. The Gabor patches that appeared during hand movements were approximately 4° in diameter from the perspective of the subject. Patches were generated only when the hand moved over a predefined rectangular target area in the virtual environment. The distance of the patches from the central fixation point ranged between 3° and 9° to allow for some hand movement error along the horizontal axis. As in Carrasco et al. (2004), we varied the cycles per degree (CPD) of the gratings to avoid adaptation to the stimuli, using 2 CPD and 4 CPD stimuli. The different contrasts and presentation times were chosen according to an extensive pilot study. The final contrasts corresponded to the following Michelson contrast values: 0.2, 0.24, 0.3, 0.36, and 0.45 (Michelson contrast = (Lmax − Lmin)/(Lmax + Lmin), where Lmax and Lmin are the maximum and minimum luminance of the stimulus). The standard value used in all trials was 0.3. Pairs of patches were shown for 133 ms on each trial, after which participants reported the orientation of the patch that seemed to have more contrast. In Experiment 2, participants were additionally asked to report their confidence in their answer on a scale from 1 (very unsure) to 4 (very sure). Participants did not receive any feedback about their responses during the experiment.
Participants
Participants were recruited through university lists and social media. A total of 54 healthy participants with normal or corrected-to-normal vision took part in the two experiments: 8 in the first (2 females, mean age 25) and 46 in the second (27 females, mean age 23). In the second experiment, 30 participants were assigned to the experimental group and 16 to the control group. All participants read and signed informed consent forms and participated in the experiments voluntarily. The VR experiments conducted in this study were approved by the Ethics Committee of the University of Tartu (Estonia) and the Ethics Committee of the Psychology Department of the Université Libre de Bruxelles (Belgium).
Procedure
Instructions to the participants
The experiment was introduced to the participants as a study on attention. The hand-tracking device was described as "a device that simply detects the general direction of hand movements that is needed to start each trial" in order to conceal the true aim of the experiment. After signing an informed consent form, participants completed training on the required hand movement and on discriminating the Gabors. The exact protocol for hand movement training is explained elsewhere (Laak et al. 2017). Participants were informed that the Gabor patches would be triggered by upward movements of the hand.
Experiment 1
Each participant performed a total of 800 trials, of which 40% showed the Gabor with the stronger contrast behind the hand, 40% showed it at a location not behind the hand, and 20% had equal contrast (standard vs. standard). The conditions were balanced and randomized for each participant, and each participant was exposed to all of the conditions. After every 100 trials, participants took a short 10 s rest. The general design of a single trial for both experiments is shown in Fig. 1C.
When participants missed the target area for the correct hand movement on several consecutive trials, they were verbally guided by the experimenter to improve their hand movement. After the experiment, participants answered control questions about compliance with the instructions and were debriefed about the background and purposes of the study.
Experiment 2
In order to investigate the effects of sensory attenuation on metacognition, participants were randomly assigned to either an experimental group or a control group. Each participant performed a total of 400 trials in the experimental group and 300 trials in the control group. The procedure for the experimental group was similar to that of Experiment 1. In the control group, however, the pair of Gabor patches was shown vertically around the fixation dot instead of horizontally, so that neither stimulus appeared in the visual area where the hand was moving. In both groups, participants were additionally asked to rate their confidence in their decision on a scale from 1 (very unsure) to 4 (very sure).
Data preprocessing
Data preprocessing and analysis were performed in R (version 3.1.2; R Core Team 2015) using the afex package (Singmann et al. 2015), the BayesFactor package with its default medium prior (Morey et al. 2015), and the ggplot2 package (Wickham 2009). We discarded trials in which the hand movement was not within the allowed constraints. In total, 56% of trials were rejected from Experiment 1 and 34% from Experiment 2. This was due to the extreme precision necessary to ensure that hand movements were in exactly the same region of the visual field as the intended stimulus. Next, we excluded participants who failed to show an increase in the percentage of higher-contrast judgements between the two lower and the two higher test contrast values (suggesting that the participant may have answered randomly). After these steps, 6 participants remained in Experiment 1 and 44 in Experiment 2.
Statistical analysis
For statistical testing, we used within-subject repeated-measures analysis of variance (ANOVA) to test for mean differences in the percentage of trials reported to be of higher contrast across conditions. Degrees of freedom were corrected using the Greenhouse–Geisser method. Welch's t-tests were used to test for differences in accuracy, confidence, and metacognitive ability between the experimental group and the control group in Experiment 2 (the equal-contrast trials were not included in this analysis, since accuracy cannot be computed for them). Metacognitive ability was estimated through the type II area under the receiver operating characteristic curve (AROC), which is computed from the rates of correct and incorrect responses at each confidence level (Kornbrot 2006). A second approach to evaluating metacognitive ability was to fit a mixed logistic regression model of accuracy with group, confidence, and their interaction as fixed effects and a random intercept per participant; the confidence regression slope was taken as an indicator of metacognitive ability (see Siedlecka et al. 2016). We also used Bayesian statistics to assess whether the data favored the alternative or the null hypothesis, using the default medium prior of the BayesFactor R package (Morey et al. 2015). This is especially relevant for interpreting non-significant P-values obtained with conventional statistics (Dienes 2014).
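The following is a minimal sketch of these analyses, not the original analysis script. It assumes a long-format data frame agg with one row per participant, condition, and contrast level, and a per-participant summary subj; all object and column names are hypothetical. The repeated-measures ANOVA uses the afex package (which applies the Greenhouse–Geisser correction to within-subject factors by default), the between-group comparison uses R's default (Welch) two-sample t.test, and the Bayes factor uses the BayesFactor package's default medium prior.

```r
# Minimal sketch of the statistical tests described above (not the original
# analysis script); 'agg' and 'subj' and their columns are hypothetical.
library(afex)          # repeated-measures ANOVA
library(BayesFactor)   # Bayes factors with the default "medium" prior

# Within-subject ANOVA on the percentage of "higher contrast" reports;
# aov_ez applies the Greenhouse-Geisser correction to within factors by default.
fit <- aov_ez(id = "participant", dv = "pct_higher", data = agg,
              within = c("condition", "contrast"))   # condition: behind hand vs not
fit

# Welch's t-test (R's default) comparing, e.g., metacognitive ability (AROC)
# between the experimental and the control group of Experiment 2.
exp_aroc  <- subj$aroc[subj$group == "experimental"]
ctrl_aroc <- subj$aroc[subj$group == "control"]
t.test(exp_aroc, ctrl_aroc)

# Corresponding Bayes factor with the default medium prior.
ttestBF(x = exp_aroc, y = ctrl_aroc)
```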
Bayes Factors (BFs) above 1 indicate evidence for the alternative hypothesis, whereas BFs below 1 indicate evidence for the null hypothesis.
Results
Experiment 1
Using a motion-tracking device in combination with VR technology, we were able to investigate whether self-generated movements influence visual perception despite participants never seeing their hand (Fig. 1). Participants performed a two-alternative forced choice task where they had to report the orientation of the Gabor patch with the higher contrast. Crucially for the experiments, one of the Gabor patches was located in the same visual area as the self-generated hand movement. The main results are illustrated in Fig. 2, which suggests that, for all contrast differences, target items appearing behind the hand are less often reported to be of higher contrast than targets appearing at the other location, which is indicative of sensory attenuation. A within-subjects ANOVA showed a main effect of contrast [F(1.37, 6.85) = 48.12, P < 0.001, ηp² = 0.80] and, importantly, also a strong effect of whether the stimulus appeared behind the invisible hand or not [F(1, 5) = 15.07, P = 0.01, ηp² = 0.58]. We found no interaction between condition and contrast [F(1.79, 8.97) = 3.1, P = 0.10, ηp² = 0.07]. This suggests that apparent contrast is indeed reduced when a stimulus is shown in the area of the visual field where the hand is currently moving.
Figure 2. Results of Experiment 1. The X axis shows the value of the test contrast; the Y axis denotes the percentage of trials reported as higher in contrast. The blue line shows the target stimulus appearing behind the hand, the red line the target stimulus not behind the hand.
Experiment 2
The first goal of Experiment 2 was to replicate Experiment 1 in another laboratory. Additionally, Experiment 2 aimed at exploring the extent to which sensory attenuation influences metacognitive processes. The design of Experiment 2 was thus identical to that of Experiment 1, except that participants were also requested to report the confidence in their judgement on every trial, using a scale ranging from 1 (very unsure) to 4 (very sure). Experiment 2 also included a control condition in which targets never appeared behind the moving hand.
Experimental group
Replication of Experiment 1
We first sought to replicate our findings from Experiment 1. The same analysis confirmed these results, revealing a main effect of contrast [F(1.57, 45.39) = 212.10, P < 10⁻⁴, ηp² = 0.66] and a significant difference in the percentage of "higher contrast" responses between the "behind the hand" condition and the "not behind the hand" condition [F(1, 29) = 11.13, P = 0.002, ηp² = 0.18]. There was again no interaction between condition and contrast [F(3.60, 104.27) = 1.01, P = 0.40, ηp² = 0.005]. As in Experiment 1, this indicates that apparent contrast is reduced when the stimulus is shown in the same visual area as the self-generated movement.
Confidence
An analysis of variance revealed, as expected, a main effect of contrast on confidence [F(3.24, 93.99) = 55.49, P < 10⁻⁴, ηp² = 0.17], as well as a significant difference in confidence ratings between the "behind the hand" condition and the "not behind the hand" condition [F(1, 29) = 6.17, P = 0.02, ηp² = 0.004]. The interaction between contrast and condition was also significant [F(2.05, 59.42) = 10.12, P = 0.0001, ηp² = 0.05] (Fig. 3). This suggests that participants were able to adjust their confidence according to the perceived contrasts.
Figure 3. Mean confidence ratings in Experiment 2 (experimental group) as a function of condition (blue line: target stimulus behind the hand; red line: target stimulus not behind the hand). The percentages of trials reported as higher in contrast in the two conditions were similar to those of Experiment 1.
In order to evaluate the effect of self-generated movements on metacognitive ability, we added a control group. Indeed, in the experimental group one of the stimuli (the test contrast or the standard contrast) was always behind the hand movement, and thus one of the two contrasts always appeared reduced. To investigate the effect of sensory attenuation on metacognitive ability, it was therefore necessary to compare the experimental group to a control group in which both stimuli were shown in a different visual area from that of the self-generated hand movement. To do so, in the control group the stimuli were presented vertically around the fixation point instead of horizontally. This made it possible to use exactly the same procedure as in the experimental group, but with neither stimulus presented in the visual area behind which the hand was moving.
Control group
We first verified that participants in the control group performed the task correctly. Analysis of variance revealed a main effect of contrast on the percentage of stimuli reported to be higher in contrast [F(1.45, 18.87) = 133.71, P < 10⁻⁴, ηp² = 0.90] and on confidence ratings [F(3.01, 39.17) = 19.24, P < 10⁻⁴, ηp² = 0.27].
Comparison between the experimental group and the control group
We found no effect of group on type I sensitivity [d′: t(26.95) = −0.36, P = 0.72] or criterion [t(34.56) = 0.93, P = 0.36], and no effect of group on confidence ratings either [t(30.57) = 0.21, P = 0.84].
Metacognition
Metacognitive accuracy as measured with the AROC did not differ between groups [t(21.54) = −0.49, P = 0.63, BF = 0.35] (Fig. 4), nor did type II bias [BROC: t(35.40) = 0.68, P = 0.50, BF = 0.36]. The mixed logistic regressions between accuracy and confidence, with the regression slope taken as an indicator of metacognitive ability (see Siedlecka et al. 2016), yielded similar results (no difference between the control group and the experimental group slopes: estimate = −0.02, z = −0.38, P = 0.70).
Figure 4. The mean AROC in Experiment 2 for the control group and the experimental group.
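To make the two metacognition measures reported above concrete, the sketch below shows how a type II AROC can be computed from trial-level confidence and accuracy, and how the mixed logistic regression can be fitted. The trial-level data frame and its column names are hypothetical, and the use of lme4::glmer is our assumption; the text above does not specify which package was used for the mixed model.

```r
# Sketch of the two metacognition measures (illustrative only). Assumes
# trial-level data 'trials' with columns: participant, group, confidence (1-4),
# accuracy (0 = incorrect, 1 = correct); all names are hypothetical.

# Type II AROC: hit rate = P(confidence >= k | correct),
# false-alarm rate = P(confidence >= k | incorrect); area by the trapezoidal rule.
type2_aroc <- function(confidence, accuracy, ratings = 1:4) {
  hits <- sapply(ratings, function(k) mean(confidence[accuracy == 1] >= k))
  fas  <- sapply(ratings, function(k) mean(confidence[accuracy == 0] >= k))
  x <- c(fas, 0)                    # curve runs from (1, 1) down to (0, 0)
  y <- c(hits, 0)
  sum((x[-length(x)] - x[-1]) * (y[-length(y)] + y[-1]) / 2)
}

# Per-participant AROC for one group, e.g. the experimental group.
with(subset(trials, group == "experimental"), type2_aroc(confidence, accuracy))

# Mixed logistic regression of accuracy on group, confidence and their
# interaction, with a random intercept per participant (here via lme4::glmer).
library(lme4)
m <- glmer(accuracy ~ group * confidence + (1 | participant),
           family = binomial, data = trials)
summary(m)   # the confidence slope indexes metacognitive ability
```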
Discussion
In two independent experiments, we observed that (invisible) self-generated movements influence visual sensitivity while leaving metacognitive accuracy intact. This suggests that perception is directly influenced by one's expectations about the sensory consequences of one's own movements. In our paradigm, visual targets that would normally be occluded by our own moving limbs were perceptually attenuated even though these targets were fully visible. This was achieved by combining VR goggles and a hand-tracking device, which allowed us to present stimuli in the same visual area as the hand while keeping the hand invisible to the participants (Laak et al. 2017). This novel paradigm allowed us to investigate sensory attenuation in the visual domain, which until recently was often overlooked due to the technical limitations associated with the challenge of removing visual sensory feedback. After initiating the hand movement, participants had to indicate the orientation of the Gabor patch with higher contrast. We observed that the apparent contrast of the stimulus behind the invisible hand was significantly reduced. We replicated these results in an independent group of participants and assessed whether metacognitive ability would also be affected by our experimental manipulation. In order to measure metacognitive accuracy, participants were additionally required to report their confidence in their response on each trial. There was no difference in metacognitive accuracy between the experimental group and the control group, suggesting that participants in the experimental group were able to adjust their subjective confidence to their objective performance. Indeed, even though the contrast of one of the stimuli was reduced through sensory attenuation, they could judge their confidence in their decision equally well. That is, if a low test contrast was attenuated they were more confident in their decision—they could tell that it was easier—and, similarly, if a high test contrast was attenuated they were less confident in their decision. This is in line with previous work by Kanai et al. (2010), who showed that metacognitive ability was preserved when attention was diverted using several methods, as well as with the study by Sherman et al. (2015), who found no difference in metacognitive ability between a full-attention and a diverted-attention condition in a perceptual task. Furthermore, being able to tell that there is sensory attenuation might be a crucial cue for recognizing an action as internally rather than externally triggered. This has been emphasized in studies of patients with schizophrenia, who show deficits in attenuating the sensory consequences of their actions, which in turn gives rise to difficulties in discriminating self-generated actions from externally triggered events (Blakemore et al. 2003; Shergill et al. 2005; Fletcher and Frith 2009). Sensory attenuation thus appears to influence only first-order processes. The main results confirm those of Laak et al. (2017) but go a step further by indicating that self-generated movements do not only affect reaction times through attenuated attention; they also influence visual sensitivity per se. Our results thus offer support for the active inference account, which posits sensory attenuation of visual processing due to sensory precision being reduced during movement (Friston 2010; Hohwy 2013; Clark 2015).
Alternative explanations to the present findings
Active inference can readily explain our present results: in order to move, the precision of sensory data that would conflict with the predicted outcome has to be reduced, and this is done by withdrawing attention from the parts of the visual field where the hand is currently moving. The consequence of withdrawn attention is that the Gabor that would lie on the path of the moving hand (if the hand were visible) is seen with less contrast. According to the active inference theory, sensory attenuation is the consequence of withdrawing attention from the sensory data coming from the specific parts of the visual field where the hand is moving. However, in principle one could also explain the present findings with a more general account of attentional suppression in space. In particular, one could claim that when a limb is moving, visual spatial attention is withdrawn more generally, i.e. not only from the trajectory along which the hand is moving. From our present results we know that it cannot be a suppression of the whole visual field (otherwise we could not obtain differences between the two conditions), but it could be claimed, for example, that attention is withdrawn from the entire hemifield in which the hand is currently moving. We think that there are several reasons to doubt such a general suppression. First, although we did not test for spatial specificity in this study, in previous work with a similar setup (Laak et al. 2017; Experiment 2) we observed that attention does not seem to be withdrawn from locations that are in the same visual hemifield as the hand movement but not directly on the movement path. Second, such a general suppression would be disadvantageous: for example, if one is reaching for an object, this general attentional suppression would lead to the withdrawal of attention from the reach target. This would be unreasonable, and in fact experimental data have demonstrated that attention is enhanced around reach targets (Rolfs et al. 2013). The last finding might prompt the question of whether it is in conflict with the present results: how can reach targets be attentionally prioritized while the hand movement trajectory is attenuated? Active inference theory can explain this, as enhancing the precision of the target is a separate process that can occur independently of, and together with, the reduction of precision in those parts of the visual field where the hand is moving. The more classic efference copy theory gives a more specific alternative explanation of the present results than the general suppression account. Efference copy theory suggests that sensory input that is predicted by one's own motor command is canceled. Such forward models are learned over the lifetime and can be used to explain sensory attenuation (e.g. Blakemore et al. 1998). However, this theory also has trouble explaining the present results. In particular, participants cannot have learned over their lifetime that a hand movement predicts the appearance of a Gabor. In principle, one could suggest that the association between hand movements and the appearance of Gabors could have been learned over the course of the experiment, but this cannot explain the difference between the two conditions (as all hand movements were followed by the presentation of both Gabors). More generally, the efference copy theory cannot account for several findings in the field that are readily explained by the active inference theory (for an overview see Brown et al. 2013; Clark 2015).
Hence, overall, active inference offers the most comprehensive explanation of sensory attenuation, both for the current findings and for the variety of results available in the literature.
Limitations and further questions
Due to the limitations of the hardware used, it was impossible to record participants' eye movements during the experiments. Therefore, no technical guarantee can be given that participants maintained stable gaze on the fixation point throughout the experiment. To mitigate this, we presented targets with varying contrast on both sides of the visual field, so participants had no incentive to prefer one side over the other. The participants were also instructed to keep their eyes on the fixation point during the trials, and all verbally complied. Future research could benefit from the next generation of VR headsets and add-on devices that allow eye tracking and are rapidly becoming available at a reasonable price (e.g. FOVE Inc. 2018; Tobii 2018). Prospective investigations are also required to examine more precisely how our findings generalize over different categories of stimuli or different stimulus feature manipulations, as these aspects have been shown to interact with attention and metacognition (Stein and Peelen 2017; Matthews et al. 2018). In addition, the precise influence of evidence reliability remains to be elucidated in more extensive work, as it has also been shown to affect confidence judgments and metacognition (Boldt et al. 2017; Bang and Fleming 2018; Denison et al. 2018). Using this paradigm, one could further investigate the characteristics of visual attenuation caused by self-generated hand movements. Indeed, it would be interesting to compare our results with a static version, in which participants simply hold their hand at the target position the whole time. According to the active inference theory, we would expect the attenuation to disappear in the absence of movement. One could also investigate the role of agency and the intention to act by removing the self-generated component and having the experimenter move the hand of the participant (Limanowski et al. 2018). One could also test whether subjects who are less prone to sensory attenuation exhibit diminished agency judgements. Furthermore, we would expect attenuation to build up with stronger expectations of hand movement over time (see Bays et al. 2006; Voss et al. 2008). The role of proprioception could be tested as well, by applying vibration to the arm tendon in order to induce proprioceptive noise and disrupt these signals. Similarly, inducing a rubber hand illusion (Botvinick and Cohen 1998) has also been shown to lead to sensory attenuation (Burin et al. 2017, 2018). Using this paradigm in that context could make it possible to better understand the role of body ownership and to assess the extent to which the illusion could shift the effect of the self-generated movement to another visual area. Finally, the perceptual consequences of one's own hand movements have been established over decades of learning and cannot be "unlearned" with only a few hundred trials. It would be interesting to test how sensory attenuation develops, for example, for a newly learned tool.
Conclusions
We conducted two experiments showing that self-generated hand movements reduce the subjective contrast of objects in the part of the visual field where the hand is directly moving, but do not alter metacognitive monitoring.
The present experimental paradigm provides a novel way to study sensory attenuation and demonstrates the usefulness of modern VR tools for investigating fundamental questions about the computations that run our lives.
Acknowledgments
We thank Tõnis Kristian Koppel for invaluable help with programming and the reviewers for improving our manuscript.
Funding
This work was supported by PUT1476 and IUT20-40 of the Estonian Research Council and a European Research Council Advanced Grant RADICAL to A.C. A.C. is a Research Director with the F.R.S.-FNRS (Belgium). J.A. was also supported by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement no. 799411.
Data availability
All data and code are available from the authors upon request.
References
Bang D, Fleming SM. Distinct encoding of decision confidence in human medial prefrontal cortex. Proc Natl Acad Sci USA 2018;115:6082–7.
Bays PM, Flanagan JR, Wolpert DM. Attenuation of self-generated tactile sensations is predictive, not postdictive. PLoS Biol 2006;4:e28.
Blakemore SJ, Wolpert DM, Frith CD. Central cancellation of self-produced tickle sensation. Nat Neurosci 1998;1:635.
Blakemore SJ, Oakley DA, Frith CD. Delusions of alien control in the normal brain. Neuropsychologia 2003;41:1058–67.
Boldt A, De Gardelle V, Yeung N. The impact of evidence reliability on sensitivity and bias in decision confidence. J Exp Psychol Hum Percept Perform 2017;43:1520.
Botvinick M, Cohen J. Rubber hands 'feel' touch that eyes see. Nature 1998;391:756.
Brown H, Adams RA, Parees I et al. Active inference, sensory attenuation and illusions. Cogn Process 2013;14:411–27.
Burin D, Pyasik M, Salatino A et al. That's my hand! Therefore, that's my willed action: how body ownership acts upon conscious awareness of willed actions. Cognition 2017;166:164–73.
Burin D, Pyasik M, Ronga I et al. "As long as that is my hand, that willed action is mine": timing of agency triggered by body ownership. Conscious Cogn 2018;58:186–92.
Carrasco M, Ling S, Read S. Attention alters appearance. Nat Neurosci 2004;7:308.
Clark A. Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford: Oxford University Press, 2015.
Denison RN, Adler WT, Carrasco M et al. Humans incorporate attention-dependent uncertainty into perceptual decisions and confidence. Proc Natl Acad Sci USA 2018;115:11090–5.
Dienes Z. Using Bayes to get the most out of non-significant results. Front Psychol 2014;5:781.
Fleming SM, Dolan RJ. The neural basis of metacognitive ability. Philos Trans R Soc Lond B Biol Sci 2012;367:1338–49.
Fletcher PC, Frith CD. Perceiving is believing: a Bayesian approach to explaining the positive symptoms of schizophrenia. Nat Rev Neurosci 2009;10:48.
FOVE Inc. Eye Tracking VR dev kit. 2018. https://www.getfove.com (17 June 2018, date last accessed).
Friston K. The free-energy principle: a unified brain theory? Nat Rev Neurosci 2010;11:127.
Galvin SJ, Podd JV, Drga V et al. Type 2 tasks in the theory of signal detectability: discrimination between correct and incorrect decisions. Psychon Bull Rev 2003;10:843–76.
Hohwy J. The Predictive Mind. Oxford: Oxford University Press, 2013.
Kanai R, Walsh V, Tseng CH. Subjective discriminability of invisibility: a framework for distinguishing perceptual and attentional failures of awareness. Conscious Cogn 2010;19:1045–57.
Koriat A. Metacognition and consciousness. In: Zelazo PD, Moscovitch M, Thompson E (eds), The Cambridge Handbook of Consciousness. Cambridge, UK: Cambridge University Press, 2007, 289–325.
Kornbrot DE. Signal detection theory, the approach of choice: model-based and distribution-free measures and evaluation. Percept Psychophys 2006;68:393–414.
Laak KJ, Vasser M, Uibopuu OJ et al. Attention is withdrawn from the area of the visual field where the own hand is currently moving. Neurosci Conscious 2017;3. doi:10.1093/nc/niw02.
Limanowski J, Sarasso P, Blankenburg F. Different responses of the right superior temporal sulcus to visual movement feedback during self-generated vs. externally generated hand movements. Eur J Neurosci 2018;47:314–20.
Matthews J, Schröder P, Kaunitz L et al. Conscious access in the near absence of attention: critical extensions on the dual-task paradigm. Philos Trans R Soc B 2018;373:20170352.
Metcalfe J, Shimamura AP. Metacognition. Cambridge, MA: MIT Press, 1996.
Morey RD, Rouder JN, Jamil T et al. Package 'BayesFactor'. 2015. http://cran.r-project.org/web/packages/BayesFactor/BayesFactor.pdf (10 June 2015, date last accessed).
R Core Team. A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing, 2015. https://www.R-project.org/.
Rahnev D, Maniscalco B, Graves T et al. Attention induces conservative subjective biases in visual perception. Nat Neurosci 2011;14:1513.
Rolfs M, Lawrence BM, Carrasco M. Reach preparation enhances visual performance and appearance. Philos Trans R Soc B 2013;368:20130057.
Shergill SS, Samson G, Bays PM et al. Evidence for sensory prediction deficits in schizophrenia. Am J Psychiatry 2005;162:2384–6.
Sherman MT, Seth AK, Barrett AB et al. Prior expectations facilitate metacognition for perceptual decision. Conscious Cogn 2015;35:53–65.
Siedlecka M, Paulewicz B, Wierzchoń M. But I was so sure! Metacognitive judgments are less accurate given prospectively than retrospectively. Front Psychol 2016;7:218.
Singmann H, Bolker B, Westfall J et al. afex: Analysis of Factorial Experiments. R package version 0.13-145. 2015. http://CRAN.R-project.org/package=afex.
Stein T, Peelen MV. Object detection in natural scenes: independent effects of spatial and category-based attention. Atten Percept Psychophys 2017;79:738–52.
Tobii AB. Tobii Pro VR Integration. 2018. https://www.tobiipro.com/product-listing/vr-integration/ (17 June 2018, date last accessed).
Van Doorn G, Hohwy J, Symmons M. Can you tickle yourself if you swap bodies with someone else? Conscious Cogn 2014;23:1–11.
Van Doorn G, Paton B, Howell J et al. Attenuated self-tickle sensation even under trajectory perturbation. Conscious Cogn 2015;36:147–53.
Voss M, Ingram JN, Wolpert DM et al. Mere expectation to move causes attenuation of sensory signals. PLoS One 2008;3:e2866.
Weichert F, Bachmann D, Rudak B et al. Analysis of the accuracy and robustness of the Leap Motion Controller. Sensors 2013;13:6380–93.
Wickham H. ggplot2: Elegant Graphics for Data Analysis. New York: Springer, 2009.
Wilimzig C, Tsuchiya N, Fahle M et al. Spatial attention increases performance but not subjective confidence in a discrimination task. J Vis 2008;8:7.
© The Author(s) 2019. Published by Oxford University Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.