Visual–Olfactory Interactions: Bimodal Facilitation and Impact on the Subjective Experience

Abstract

Odors are inherently ambiguous and therefore susceptible to redundant sensory and contextual information. The identification of an odor object relies largely on visual input. Thus far, it is unclear whether visual and olfactory stimuli are indeed integrated at an early perceptual stage and what role the congruence between the visual and olfactory inputs plays. Previous studies on visual–olfactory interaction used either congruent or incongruent information, leaving open whether nuances of visual–olfactory congruence shape perception differently. We aimed to answer 1) whether visual–olfactory information is integrated at early stages of processing, 2) whether visual–olfactory congruence is a gradual or dichotomous phenomenon, and 3) whether visual information influences bimodal stimulus evaluation and odor identity. We found a bimodal response time speedup that is consistent with parallel processing according to race models. Subjectively, pleasantness of bimodal stimuli increased with increasing congruence, and orange images biased odor composition toward orange. Visual–olfactory congruence was perceived in gradual and distinct categories, consistent with the notion that congruence is a gradual phenomenon. Together, the data provide evidence for bimodal facilitation consistent with parallel processing of the visual and olfactory stimuli, and show that visual–olfactory interactions influence various levels of the subjective experience.

Keywords: congruence, cross-modal interaction, multisensory integration, odor perception, visual–olfactory interaction

Introduction

The identification of odor objects relies more heavily on input from other sensory channels or on context information than the identification of visual stimuli (Cain 1979). For example, it is difficult to identify an object (e.g., a banana) by its smell alone. This inherent ambiguity makes odors particularly prone to benefit from, and to integrate, additional information. Indeed, less effective unisensory stimuli are more likely to be integrated and to benefit from multisensory integration (Meredith and Stein 1983), and they are also more tolerant of stimulus asynchronies (Krueger Fister et al. 2015). Cross-modal interactions between visual and olfactory sensory cues have been demonstrated to operate in both directions. For instance, odors can modulate visual perception and performance in visual tasks, particularly by influencing the saliency of concordant visual objects during the attentional blink (Robinson et al. 2013), binocular rivalry (Zhou et al. 2010, 2012), visual search (Chen et al. 2013), and eye movements (Seo et al. 2010). Odor perception and identification have also been shown to improve through verbal labels (Herz and von Clef 2001; Bensafi et al. 2014), as well as visual (Morrot et al. 2001; Gottfried and Dolan 2003; Seigneuric et al. 2010), gustatory (Green et al. 2012), and auditory (Seo et al. 2014) sensory information. Visual information can also support the formation of odor object identity for previously unknown odors via cross-modal associative learning (Qu et al. 2016). So far, it remains unknown whether visual and olfactory information are integrated at early sensory stages (bottom up), or whether visual–olfactory interactions take place at later evaluative stages, allowing for top-down influences (De Meo et al. 2015). Notably, these models are likely not mutually exclusive (Talsma 2015).
Measurement of response times (RTs) has proven a useful tool for examining early-stage multisensory interaction. When comparing participants' RTs in a unisensory setting with performance in a multisensory setting, one typically finds that RTs are systematically reduced as more target stimuli are added; for example, participants respond faster in a bimodal than in a unimodal setting if instructed to respond to any of the presented stimuli, regardless of modality. However, this multisensory gain can be the product of the temporal overlap between multiple response processes. Assuming that the modality processed more quickly determines the RT, and further assuming that RTs vary randomly from trial to trial, "slow" processing in one channel could co-occur with "fast" processing in the other, effectively leading to an observed speedup of mean RT (Raab 1962). Models assuming this kind of competitive processing are typically called "race models," and propose that whichever channel finishes processing first "wins the race" and determines the observed RT. These models are defined by parallel processing channels that do not pool information at any pre-decisional stage. Raab's (1962) model demanded stochastically independent channel processing speeds. Miller (1982) later proposed an upper performance bound for parallel processing models such as Raab's, assuming perfect negative correlation between channel processing speeds, typically called the race model inequality (RMI) or Miller bound. Similarly, a lower performance limit assuming perfect positive correlation has been proposed (Grice bound; Grice et al. 1984a, 1984b). Performance in tasks with bimodal sensory input that exceeds the Miller bound provides evidence for information pooling across channels, or multimodal integration; performance below the Grice bound suggests debilitation below unimodal performance. Bimodal performance falling between the 2 bounds can be accounted for by race models and is typically assumed to provide neither sufficient proof of integration nor definite evidence against it. The estimation of a workload capacity coefficient based on RTs provides a measure of a participant's ability to process multiple stimuli simultaneously (Townsend and Ashby 1978, 1983; Townsend and Eidels 2011), and allows conclusions about performance changes as task demands are varied, for example, when participants respond to a bimodal stimulus compared with unimodal stimuli. While multisensory response facilitation has been studied in depth for bimodal stimuli involving the non-chemical senses of vision, audition, and touch (Gielen et al. 1983; Miller 1986; Forster et al. 2002; Diederich and Colonius 2004), very few studies have investigated bimodal integration of the chemical senses, and these have yielded inconsistent results. While response facilitation has been observed for congruent olfactory–gustatory stimuli (Veldhuizen et al. 2010), incongruent olfactory–gustatory stimuli yielded response debilitation (Shepard et al. 2015). In the visual–olfactory domain, response facilitation that could be explained by parallel processing was observed in an object identification task (Höchenberger et al. 2015). Late-stage sensory interactions, sometimes referred to as cross-modal interactions (De Meo et al. 2015), reflect top-down cognitive control.
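The logic of statistical facilitation is easy to demonstrate with a toy simulation. The following sketch is purely illustrative, with assumed RT distributions rather than empirical ones: drawing trial-wise RTs from two stochastically independent channels and letting the faster channel determine the response lowers the mean of the observed "bimodal" RTs below the faster unimodal mean, without any pooling of information.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_trials = 100_000

# Assumed, purely illustrative unimodal RT distributions (ms)
rt_visual = rng.normal(loc=500, scale=120, size=n_trials)
rt_olfactory = rng.normal(loc=780, scale=200, size=n_trials)

# Race: the faster channel determines the observed bimodal RT on each trial
rt_bimodal = np.minimum(rt_visual, rt_olfactory)

print(f"mean V:  {rt_visual.mean():.0f} ms")
print(f"mean O:  {rt_olfactory.mean():.0f} ms")
print(f"mean VO: {rt_bimodal.mean():.0f} ms")  # faster than both unimodal means
```

The speedup here arises purely from trial-to-trial variability in the two channels; the Miller and Grice bounds discussed above delimit how large (or small) such race-model gains can be.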
Of particular interest for odor-related sensory integration are modulations of the subjective experience and of hedonic evaluation, given the unique direct link between the olfactory and limbic systems (Courtiol and Wilson 2017). Accordingly, the few studies that have examined hedonic evaluation in the context of multisensory interaction involve olfaction. Among those studies, semantic congruence, that is, how well 2 stimuli match, has been shown to enhance pleasantness for individual components of a bimodal stimulus. For example, sound can enhance odor pleasantness (Seo et al. 2014), and odors can enhance both image (Yamada et al. 2014) and taste (Schifferstein and Verlegh 1996; Small et al. 2004) pleasantness, as well as the pleasantness of bimodal olfactory–gustatory stimuli (Amsellem and Ohla 2016). While congruence is commonly approached as a dichotomous phenomenon, and experimental designs focus on comparing congruent and incongruent stimuli (Gottfried and Dolan 2003), congruence perception is more complex, comprising several gradual levels, and congruence influences pleasantness systematically, at least for olfactory–gustatory stimuli (Amsellem and Ohla 2016). Whether this is also the case for visual–olfactory stimuli has yet to be demonstrated. Given the intrinsic ambiguity of odors, visual information helps disambiguate olfactory percepts by biasing odor object perception toward the visual context (Degel and Köster 1998, 1999; Morrot et al. 2001). The role of congruence between the visual and olfactory inputs in these biases remains to be elucidated. The aims of this study were to explore visual–olfactory interactions at early sensory and late perceptual evaluation levels and to answer 1) whether visual–olfactory information is indeed integrated at early stages of processing, 2) whether visual–olfactory congruence is a gradual or dichotomous phenomenon, and 3) whether visual information influences bimodal stimulus evaluation and odor identity.

Materials and methods

Participants

Twenty-one healthy participants reporting no smelling or tasting disorders took part in the study. Two participants' datasets were excluded due to technical problems during data collection; data from 5 participants were excluded because of 50% or more missing responses in at least 1 condition. Data from 14 participants (2 left-handed, 6 men, mean age = 27.9 years, SD = 2.7) are reported here. Participants gave written informed consent and received compensatory payment. The study was conducted in compliance with the revised Declaration of Helsinki and approved by the ethics committee of the German Psychological Society.

Stimuli and apparatus

Odorants (O) were orange (SAFC, CAS 8008-57-9, 15%) and chicken (Takasago, TAK 120580, 10%) fragrances in mineral oil (Acros Organics, CAS 8042-47-5), and 3 mixtures of both odorants representing a perceptual half–half (60% chicken/40% orange), a dominant orange (30% chicken/70% orange), and a dominant chicken (70% chicken/30% orange) odor. The orange and chicken odors were isointense, and the mixture proportions were validated in a previous study (Amsellem and Ohla 2016). Odorants were delivered birhinally for 1000 ms at a flow rate of 1.5 L/min with 0.25 L/min unscented background air for each nostril through an olfactometer (Lundström et al. 2010), which embeds the odorants in a stream of unscented air at the same flow rate, thereby minimizing any tactile cue from switching the valves.
To reduce irritation of the nasal mucosa, we reduced the flow of unscented air to 0.25 L/min during part of the interstimulus interval, from 500 ms after odor offset to 1000–2000 ms before presentation of the subsequent odor. Visual stimuli (V) were 3 images each of chicken and orange from the Food-pics database (Blechert et al. 2014) (nos. 200, 245, 301, 315, 365, 546). Orange and chicken images were similarly pleasant (t(13) = 2.02, P = 0.064). Images were scaled to gray to remove color cues and presented on a TFT monitor with a resolution of 1680 × 1050 pixels at an eye distance of approximately 60 cm, corresponding to an object size of approximately 12° of visual angle. The monitor refresh rate was 60 Hz. Bimodal stimuli (VO) were all possible combinations of the 5 odor and 2 image categories (Figure 1B). The combination resulted in 5 levels of VO congruence: incongruent (e.g., orange odor and chicken image), intermediary incongruent, intermediary, intermediary congruent, and congruent (e.g., chicken odor and chicken image; Amsellem and Ohla 2016). In VO trials, the valves of the olfactometer were opened 200 ms before visual stimulation to ensure physical simultaneity of the stimuli; a schematic timing sketch follows below. The timing was determined with photo-ionization detection and has been used successfully in previous studies (Höchenberger et al. 2015; Amsellem and Ohla 2016).

Figure 1. Experimental paradigm and design matrix. (A) Participants viewed a green cross-hair (dashed line) for 1–2 s before the onset of either of the following stimuli: odor (O), image (V), or bimodal odor–image combination (VO), which lasted 1 s. Participants were to perform a speeded detection task and delayed ratings for stimulus intensity, pleasantness, bimodal congruence, and odor composition. (B) Bimodal stimuli were presented with different degrees of congruence. Images were taken from the Food-pics database (Blechert et al. 2014).

Participants were presented images alone (V), odors alone (O), and all possible combinations of visual–olfactory stimuli (VO) in pseudo-random order. Each stimulus category was repeated 15 times: 5 odor categories, 2 image categories, and 10 bimodal combinations, amounting to a total of 255 trials.

Procedure

Participants were seated in a sound-attenuated testing booth in front of a 24" TFT screen displaying the instructions and visual analogue rating scales (VAS). Prior to testing, participants received instructions on the use of the response box and the rating scales. Participants were instructed to press the central button of the response box (Serial Response Box, Psychology Software Tools, Inc.) as soon as they detected a stimulus, irrespective of the modality. For bimodal trials, they were then to report whether the visual, olfactory, or both modalities drove the response (driving-modality response); due to technical difficulties, these responses were not prompted or collected for 2 participants.
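Returning to the bimodal stimulus timing described above, the 200 ms valve lead can be summarized in pseudo-code. The sketch is purely illustrative: `olfactometer` and `display` stand for hypothetical device interfaces, not the control software actually used in the study.

```python
import time

ODOR_LEAD = 0.200  # s; valves open 200 ms before the image (photo-ionization calibrated)
STIM_DUR = 1.000   # s; both stimuli last 1 s

def present_bimodal_trial(olfactometer, display, odor, image):
    """Illustrative VO trial timing: the odor valves lead by 200 ms so that
    both stimuli reach the participant at approximately the same time."""
    olfactometer.open_valves(odor)   # hypothetical API; t = -200 ms re image onset
    time.sleep(ODOR_LEAD)
    display.show(image)              # hypothetical API; t = 0 ms: image onset
    time.sleep(STIM_DUR - ODOR_LEAD)
    olfactometer.close_valves()      # valves were open for 1 s in total
    time.sleep(ODOR_LEAD)
    display.clear()                  # image was shown for 1 s in total
```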
Intensity was described as representing the strength of the stimulus, that is, how strong (or weak) the stimulus was. Pleasantness was described as representing how pleasant (or unpleasant or neutral) the stimulus was. Composition was described as representing the perceived proportions of orange and chicken on a continuum ranging from orange only to chicken only. For bimodal VO stimuli, congruence was defined as representing the degree of the visual–olfactory match (from mismatch to perfect match). Participants practiced the task and the ratings and were familiarized with all stimuli before testing. Each trial started with the color change of a cross-hair, from black to green, on the screen 1000–2000 ms before stimulus presentation. The cross-hair cued participants to inhale and to pay close attention to both (olfactory and visual) sensory channels; it remained on the screen during stimulus presentation and was replaced by the rating scales, presented in random order, 300 ms after the stimulation ended. Participants were to perform the detection task during stimulus presentation. Responses were recorded from stimulus onset to 500 ms after offset. During O and VO trials, participants rated stimulus intensity, pleasantness, and perceived odor composition. During VO trials, participants additionally rated the congruence of the bimodal stimuli. During V trials, participants rated stimulus intensity only. For all ratings, participants were to move the cursor of a computer mouse along a visual analogue scale, which recorded rating values from 0 to 100. The scales were anchored with numeric values and labels, that is, 0 "no sensation" and 10 "extremely intense" for intensity, −5 "very unpleasant" and +5 "extremely pleasant" for pleasantness, and 0 "incongruent" and 10 "congruent" for bimodal congruence. The composition (dominant olfactory tone) of each odorant was rated on a scale spanning from "orange" (left) to "chicken" (right). Each scale was displayed on the screen for up to 4000 ms. The scales appeared in a random order to minimize halo effects (Thorndike 1920) and were interleaved with a blank screen for 800 ms. On average, a trial lasted 20 s. The experiment lasted no more than 2 hours (see Figure 1). In order to minimize auditory cues from the olfactometer and air flow, participants were presented brown noise via isolating in-ear headphones.

Analysis

First, we removed all trials in which participants failed to provide stimulus ratings. Specifically, we dropped trials without a pleasantness rating (28 trials), O trials without an intensity rating (3 trials), O and VO trials without a composition rating (9 trials), and VO trials without a congruence rating (9 trials). Further, we assumed that only O trials with intensity ratings greater than 5 had been perceived, and consequently excluded trials with lower ratings from analysis (88 trials). Lastly, trials with RTs less than 150 ms or greater than 1500 ms were removed (324 trials). In total, 461 trials (12.9% of the data) were excluded from analysis.

Response times

Mean RTs for the V (RT_V), O (RT_O), and VO (RT_VO) conditions and corresponding standard deviations (SDs) were calculated separately for each participant. Further, we estimated the respective empirical cumulative RT distribution functions (CDFs; F_O, F_V, F_VO; Ulrich et al. 2007).
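A minimal sketch of these preprocessing steps follows, assuming trial data in a pandas DataFrame with hypothetical column names (condition, rt in ms, and rating columns where NaN marks a missing response); this is not the authors' analysis code.

```python
import numpy as np
import pandas as pd

def exclude_trials(trials: pd.DataFrame) -> pd.DataFrame:
    """Apply the exclusion criteria described above (column names are assumed)."""
    is_o = trials["condition"].eq("O")
    is_vo = trials["condition"].eq("VO")
    drop = (
        ((is_o | is_vo) & trials["pleasantness"].isna())   # missing pleasantness rating
        | (is_o & trials["intensity"].isna())              # O trial without intensity rating
        | ((is_o | is_vo) & trials["composition"].isna())  # missing composition rating
        | (is_vo & trials["congruence"].isna())            # VO trial without congruence rating
        | (is_o & (trials["intensity"] <= 5))              # odor presumably not perceived
        | (trials["rt"] < 150) | (trials["rt"] > 1500)     # implausible response times
    )
    return trials.loc[~drop]

def ecdf(rts):
    """Empirical cumulative RT distribution function F(t) for one condition."""
    x = np.sort(np.asarray(rts))
    return lambda t: np.searchsorted(x, t, side="right") / x.size
```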
Based on these data, we derived different performance benchmarks to test for multisensory interaction during bimodal stimulation, namely mean multisensory response enhancement (MRE), upper and lower bounds for the RT distributions under the assumption of parallel processing (Miller and Grice bounds, respectively), RT distributions under the assumption of stochastically independent visual and olfactory channels (UCIP model), and workload capacity coefficients. The MRE measures relative changes in mean processing time of multisensory compared with unisensory stimuli. Originally introduced by Meredith and Stein (1986) for application to individual neurons, it can also be used with RT data derived from behavioral experiments (Diederich and Colonius 2004). In a visual–olfactory setting, the formula takes the form

$$\mathrm{MRE} = \frac{\min(RT_V,\, RT_O) - RT_{VO}}{\min(RT_V,\, RT_O)}$$

Positive values suggest facilitation and negative values imply debilitation. MRE was calculated for each participant separately and submitted to a 1-sample t-test against zero. Next, we calculated model predictions based on RT CDFs. Raab's (1962) model assumes parallel processing in stochastically independent channels (Stevenson et al. 2014). This model is also known as the UCIP model (unlimited capacity, independent, parallel; Townsend and Wenger 2004) and is defined as

$$F_{VO}(t) = F_{\mathrm{UCIP}}(t) = F_V(t) + F_O(t) - F_V(t) \times F_O(t)$$

The RT CDFs $F_{\{V,O,VO\}}(t)$ depict the empirical probability of observing a response until time t after stimulus onset. Deviations from this model can be indicative of a violation of the stochastic-independence assumption, and suggest either positive or negative correlation between channel processing speeds. In fact, performance exceeding the model prediction has frequently been witnessed in psychophysical studies. Miller (1982) noted that "[instead] of independence […] there is a consistent negative correlation between detections of signals on different channels." He proposed a new upper performance limit, assuming perfect negative correlation between channel processing times, commonly called the race model inequality (RMI) or Miller bound (Miller 1982; Colonius 1990). It is calculated as the sum of both unimodal RT CDFs:

$$F_{VO}(t) \leq F_{\mathrm{Miller}}(t) = F_V(t) + F_O(t)$$

The Miller bound is reached only if participants "detect one of the signals but miss the other one" (Miller 1982), yielding a net speedup of observed bimodal responses. If the bound is exceeded, performance cannot be explained by parallel processing anymore, and this finding is typically interpreted as evidence for coactivation or neural integration. Likewise, a lower performance limit was suggested (Grice et al. 1984a, 1984b). It assumes a perfect positive correlation between channel processing times (Colonius 1990). Accordingly, the Grice bound is reached only if processing is either slow or fast in both channels simultaneously. The Grice bound is defined as the faster of the 2 unimodal channels at every time point:

$$F_{VO}(t) \geq F_{\mathrm{Grice}}(t) = \max[F_V(t),\, F_O(t)]$$

All empirical CDFs and derived model predictions were evaluated at 10 equidistantly spaced points (the 5th, 15th, …, 95th percentiles) and averaged across participants (Vincent 1912; Ratcliff 1979). F_VO was compared with the Miller and Grice bounds using 1-tailed paired t-tests, and with the UCIP model prediction using 2-tailed paired t-tests at each of the percentiles. The significance level α was set a priori to 0.05, and all P-values were corrected for multiple testing following the Holm–Bonferroni procedure.
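These benchmark computations can be expressed compactly. Below is a minimal numpy sketch, not the published analysis code: it evaluates the empirical CDFs and the three model predictions at the times where the bimodal distribution reaches the 5th, 15th, …, 95th percentiles; one such vector per participant would then be averaged across participants and compared as described above.

```python
import numpy as np

def mre(rt_v, rt_o, rt_vo):
    """Multisensory response enhancement from mean RTs; positive = facilitation."""
    fastest_unimodal = min(np.mean(rt_v), np.mean(rt_o))
    return (fastest_unimodal - np.mean(rt_vo)) / fastest_unimodal

def ecdf_at(rts, times):
    """Empirical CDF of an RT sample, evaluated at the given time points."""
    x = np.sort(np.asarray(rts))
    return np.searchsorted(x, times, side="right") / x.size

def race_benchmarks(rt_v, rt_o, rt_vo, probs=np.arange(0.05, 1.0, 0.1)):
    """Observed bimodal CDF and UCIP/Miller/Grice predictions, evaluated at the
    percentile-based time points of the bimodal distribution."""
    t = np.quantile(rt_vo, probs)           # evaluation time points
    f_v, f_o = ecdf_at(rt_v, t), ecdf_at(rt_o, t)
    f_vo = ecdf_at(rt_vo, t)
    f_ucip = f_v + f_o - f_v * f_o          # independent parallel channels
    f_miller = np.minimum(f_v + f_o, 1.0)   # upper bound: perfect negative correlation
    f_grice = np.maximum(f_v, f_o)          # lower bound: perfect positive correlation
    return f_vo, f_ucip, f_miller, f_grice
```

The per-percentile group comparisons could then use, for example, `scipy.stats.ttest_rel` on the participant-wise values, with Holm–Bonferroni correction applied to the resulting P-values.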
Capacity coefficients were calculated from the logarithm of the unimodal (S_V(t) and S_O(t)) and bimodal (S_VO(t)) survivor functions, S(t) = 1 − F(t) (Townsend and Nozawa 1995):

$$C(t) = \frac{\log[S_{VO}(t)]}{\log[S_V(t) \times S_O(t)]}$$

We then calculated mean capacity coefficients for all participants and submitted them to a 1-sample t-test against 1 to test for significant deviations from unlimited-capacity processing, that is, from the UCIP model. Note that this performance benchmark, a capacity coefficient of 1, is predicted if all assumptions of the UCIP model mentioned above are met. Townsend and Eidels (2011) proposed a way to transform the Miller and Grice bounds to capacity space for straightforward model comparisons, yielding the inequalities

$$C(t) \leq C_{\mathrm{Miller}}(t) = \frac{\log[S_V(t) + S_O(t) - 1]}{\log[S_V(t) \times S_O(t)]}$$

and

$$C(t) \geq C_{\mathrm{Grice}}(t) = \frac{\log\{\min[S_V(t),\, S_O(t)]\}}{\log[S_V(t) \times S_O(t)]}$$

These boundaries were calculated for all participants separately to help us obtain a more comprehensive impression of individual performance dynamics (a computational sketch of these quantities follows at the end of this section).

Perceptual ratings

For all analyses, parametric statistical tests were applied, as ratings were distributed normally according to the Kolmogorov–Smirnov test (all P > 0.05). Greenhouse–Geisser correction was applied for violations of sphericity; uncorrected degrees of freedom and corrected P-values are reported. For all statistical tests, the significance level α was set a priori to 0.05. First, we tested for differences in odor pleasantness and intensity with 2 repeated-measures ANOVAs with the factor odor (5 levels: orange, dominant orange, perceptual half–half, dominant chicken, and chicken), followed by paired-samples Student's t-tests when a main effect was found. To test whether participants perceived degrees of congruence in the bimodal stimuli and whether congruence modulated the pleasantness of the VO stimuli, we submitted pleasantness ratings to a repeated-measures ANOVA with the factor congruence (5 levels: incongruent, intermediary incongruent, intermediary, intermediary congruent, and congruent), followed by paired-samples Student's t-tests when a main effect was present. Participants reported the perceived proportions of orange and chicken odor in O and VO trials. The scale ranged from orange (0) to chicken (100); accordingly, low values indicate more orange and high values indicate more chicken. To test the general influence of visual information on reported odor composition, irrespective of odor, we normalized ratings for each odor category and each participant, collapsed the ratings across all odors for each participant, and performed pairwise Student's t-tests (orange vs. none, chicken vs. none, orange vs. chicken). Then, to explore specific influences of images on perceived odor composition, we submitted odor composition ratings for each odor category to a 2-way ANOVA with the factors odor (5 levels: orange, dominant orange, perceptual half–half, dominant chicken, and chicken) and image (3 levels: orange, none, chicken), followed by paired-samples Student's t-tests when a main effect was present.
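To close the analysis description, here is a matching sketch of the workload capacity computation defined in the Response times subsection above, under the same caveats: illustrative names, not the published code, and a simplified treatment of time points where a survivor function reaches 0 or 1 and the ratios become undefined.

```python
import numpy as np

def capacity_coefficients(rt_v, rt_o, rt_vo, times):
    """C(t) = log S_VO(t) / log[S_V(t) * S_O(t)], plus the Miller and Grice
    bounds transformed to capacity space (Townsend and Eidels 2011)."""
    def survivor_at(rts):
        x = np.sort(np.asarray(rts))
        return 1.0 - np.searchsorted(x, times, side="right") / x.size

    s_v, s_o, s_vo = survivor_at(rt_v), survivor_at(rt_o), survivor_at(rt_vo)
    with np.errstate(divide="ignore", invalid="ignore"):
        denom = np.log(s_v * s_o)
        c = np.log(s_vo) / denom                        # workload capacity C(t)
        c_miller = np.log(s_v + s_o - 1.0) / denom      # upper bound in capacity space
        c_grice = np.log(np.minimum(s_v, s_o)) / denom  # lower bound in capacity space
    return c, c_miller, c_grice  # NaN/inf where the functions are undefined
```

Averaging `c` over a window of `times` per participant and testing the means against 1 (e.g., with `scipy.stats.ttest_1samp`) corresponds to the group-level test described above.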
Results

Response times

Mean RTs and corresponding SDs for all participants are summarized in Table 1. On average, RT_VO were the fastest (492 ms), RT_V were marginally slower (507 ms), and RT_O were markedly slower (777 ms). To assess bimodal performance gains based on mean RTs, we calculated MRE, that is, the relative difference between RT_VO and the shorter of the mean unimodal RTs. The average MRE was 1.6% and not significantly different from zero (t(13) = 0.50, P = 0.63). This finding may be due to the considerable variability across participants (see Table 1), with values ranging from 23.6% (strong facilitation, participant 412) to −30.9% (marked debilitation, participant 695; Supplementary Figure 1). The difference in mean RTs, RT_V − RT_O, was calculated to assess the time difference between perceived V and O stimulus onsets. Responses to V stimuli were between 52 and 525 ms faster than responses to O stimuli.

Table 1. Means and SDs of RTs and capacity coefficients, and MRE. The V−O columns give the differences between the unimodal means and SDs.

| Participant | V mean (ms) | V SD | O mean (ms) | O SD | VO mean (ms) | VO SD | V−O mean | V−O SD | MRE (%) | C(t) mean | C(t) SD |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 412 | 647 | 255 | 773 | 237 | 494 | 160 | −126 | 18 | 23.6 | 1.47 | 0.59 |
| 474 | 486 | 151 | 811 | 192 | 478 | 160 | −325 | −41 | 1.7 | 1.07 | 0.13 |
| 500 | 414 | 144 | 858 | 321 | 388 | 139 | −443 | −177 | 6.4 | 1.07 | 0.18 |
| 589 | 790 | 248 | 842 | 261 | 684 | 218 | −52 | −13 | 13.4 | 0.94 | 0.18 |
| 631 | 561 | 396 | 695 | 276 | 532 | 260 | −133 | 120 | 5.3 | 0.68 | 0.15 |
| 645 | 341 | 87 | 557 | 161 | 367 | 91 | −217 | −73 | −7.7 | 0.66 | 0.15 |
| 663 | 475 | 255 | 747 | 343 | 453 | 148 | −272 | −88 | 4.8 | 0.83 | 0.36 |
| 695 | 493 | 259 | 674 | 287 | 645 | 304 | −182 | −28 | −30.9 | 0.38 | 0.08 |
| 698 | 480 | 211 | 803 | 285 | 464 | 180 | −323 | −74 | 3.2 | 0.92 | 0.18 |
| 707 | 428 | 110 | 769 | 235 | 426 | 120 | −341 | −125 | 0.4 | 1.02 | 0.13 |
| 763 | 471 | 177 | 868 | 250 | 469 | 196 | −397 | −73 | 0.4 | 1.00 | 0.28 |
| 791 | 413 | 147 | 938 | 321 | 435 | 167 | −525 | −174 | −5.2 | 0.84 | 0.28 |
| 812 | 554 | 149 | 709 | 325 | 499 | 165 | −155 | −176 | 10.0 | 0.89 | 0.24 |
| 828 | 542 | 188 | 833 | 316 | 555 | 207 | −291 | −128 | −2.3 | 0.71 | 0.22 |
| Average | 507 | 198 | 777 | 272 | 492 | 180 | −270 | −74 | 1.6 | 0.89 | 0.22 |

For fast responses, the mean RT distribution functions F_VO and F_V were largely superimposed (Figure 2A), indicating that the bimodal responses were driven by the visual constituent alone. At around the 45th percentile, F_VO shifted left of F_V, separating the two curves. This divergence occurred slightly after the time at which the fastest olfactory responses were observed; participants started benefiting from the bimodal stimulation and consistently responded faster than in the unimodal conditions.

Figure 2. RT distributions and race model predictions. (A) Mean cumulative RT distributions (CDFs). The V distribution was located left of the O distribution, reflecting faster visual than olfactory responses. The VO distribution closely followed V up to the 45th percentile and shifted left for greater RTs. Note that this divergence occurred slightly after the time the first olfactory responses were observed. (B) Comparison of the observed bimodal CDF to model predictions. The bimodal distribution never violated the Miller bound and was always shifted right of the UCIP model prediction, even reaching the Grice bound at some percentiles.
The observed bimodal RT distribution F_VO never reached the Miller bound (Figure 2B). Instead, it was shifted right of the UCIP model; however, this shift was not significant at any percentile (2-tailed t-tests, all P > 0.05; Table 2). The Grice bound was violated at the 5th to 45th percentiles; again, these violations were not significant (1-tailed t-tests, all P > 0.05; Table 2). This rightward shift relative to the UCIP model could, nevertheless, be indicative of a positive correlation between visual and olfactory channel processing times, and thus limited-capacity processing.

Table 2. Paired comparison of observed bimodal CDFs to model predictions at specific percentiles

| Proportion of responses | VO vs. Miller bound: t(13) | P | VO vs. UCIP model: t(13) | P | VO vs. Grice bound: t(13) | P |
|---|---|---|---|---|---|---|
| 0.05 | 1.58 | 0.14 | 1.58 | 0.76 | 0.99 | 1.00 |
| 0.15 | 2.04 | 0.14 | 1.98 | 0.69 | 0.19 | 1.00 |
| 0.25 | 1.96 | 0.14 | 1.63 | 0.76 | 0.02 | 1.00 |
| 0.35 | 2.13 | 0.14 | 1.83 | 0.73 | 0.27 | 1.00 |
| 0.45 | 2.21 | 0.14 | 1.81 | 0.73 | 0.53 | 1.00 |
| 0.55 | 2.02 | 0.14 | 1.36 | 0.76 | −0.85 | 1.00 |
| 0.65 | 2.30 | 0.13 | 1.30 | 0.76 | −1.01 | 1.00 |
| 0.75 | 2.70 | 0.07 | 1.15 | 0.76 | −2.11 | 0.24 |
| 0.85 | 3.36 | 0.02 | 1.48 | 0.76 | −2.37 | 0.17 |
| 0.95 | 4.85 | <0.01 | 1.90 | 0.72 | −1.18 | 1.00 |

Tests against the Miller and Grice bounds were 1-tailed. All P-values were Holm–Bonferroni corrected for multiple testing.

Workload capacity dynamics varied greatly between participants (Figure 3; Supplementary Figure 2): some appeared to benefit from bimodal stimulation and exhibited super-capacity processing, indicated by capacity coefficients greater than 1 over extended periods of time, while others suffered from debilitation and exhibited limited-capacity processing, indicated by capacity coefficients less than 1 over extended periods of time. Most participants, however, exhibited dynamics involving intermittent super- and limited-capacity processing as well as periods of unlimited-capacity processing. Mean capacity ranged from 0.38 to 1.47 with a global average of 0.89, indicating moderately limited-capacity processing (Table 1). However, the deviation from the UCIP model was not significant (1-sample t-test against 1; t(13) = −1.60, P = 0.12).

Figure 3. Workload capacity coefficients. Solid lines represent individual participants; the bold horizontal line depicts the reference UCIP model (unlimited capacity). Values greater than 1 suggest super-capacity processing, and values less than 1 indicate limited-capacity processing.

Perceptual ratings

Odors were overall moderately intense (range: 44.31–68.39) and moderately pleasant (range: 38.83–67.92). Intensity and pleasantness ratings differed significantly between odorants (intensity: F(4,52) = 23.06, P < 0.001, η_p² = 0.639, Figure 4A; pleasantness: F(4,52) = 12.38, P < 0.001, η_p² = 0.488, Figure 4B). Non-mixed odorants (i.e., orange and chicken) were on average more intense than the odor mixtures (t(13) = 6.04, P < 0.001). Pleasantness decreased as the proportion of chicken increased, as indicated by significant paired-samples t-tests (Table 3).

Table 3. Paired comparisons between odor categories for intensity, pleasantness, and composition ratings

| Comparison | Intensity: t(13) | P | Pleasantness: t(13) | P | Composition: t(13) | P |
|---|---|---|---|---|---|---|
| Orange vs. dom. orange | 3.158 | 0.008 | 3.026 | 0.01 | −5.037 | <0.001 |
| Orange vs. percept. half–half | 1.016 | 0.328 | 4.406 | <0.001 | −7.833 | <0.001 |
| Orange vs. dom. chicken | 0.306 | 0.764 | 4.834 | <0.001 | −14.507 | <0.001 |
| Orange vs. chicken | −7.045 | <0.001 | 3.775 | 0.002 | −22.515 | <0.001 |
| Dom. orange vs. percept. half–half | −2.631 | 0.021 | 4.081 | 0.001 | −5.394 | <0.001 |
| Dom. orange vs. dom. chicken | −2.033 | 0.063 | 4.438 | <0.001 | −11.541 | <0.001 |
| Dom. orange vs. chicken | −7.129 | <0.001 | 3.269 | 0.006 | −13.975 | <0.001 |
| Percept. half–half vs. dom. chicken | −0.469 | 0.647 | 2.551 | 0.024 | −5.69 | <0.001 |
| Percept. half–half vs. chicken | −6.608 | <0.001 | 2.165 | 0.05 | −10.391 | <0.001 |
| Dom. chicken vs. chicken | −5.569 | <0.001 | 1.291 | 0.219 | −6.975 | <0.001 |

Dom., dominant; percept., perceptual.

Figure 4. Perceptual ratings. Odor intensity (A) and odor pleasantness (B) for each odor category. Congruence (C) and pleasantness (D) ratings for bimodal visual–olfactory stimuli. Box-plots represent the distribution of the means; the median and mean of the distributions are represented by the horizontal line and diamond, respectively; whiskers represent 1.5 times the interquartile range (IQR); dots indicate outliers. Inc., incongruent; Interm. inc., intermediary incongruent; Interm., intermediary; Interm. con., intermediary congruent; Con., congruent.
VO stimuli were moderately pleasant (range: 49.18–58.10). Participants perceived congruence differences among the VO stimuli, as indicated by a significant main effect of congruence (F(4,52) = 95.07, P < 0.001, η_p² = 0.880; Figure 4C) and significant pairwise tests (Table 4). Furthermore, the designed congruence levels modulated VO pleasantness (F(4,52) = 3.737, P = 0.01, η_p² = 0.223; Figure 4D).

Table 4. Paired comparisons between congruence categories for congruence and pleasantness ratings

| Comparison | Congruence: t(13) | P | Pleasantness: t(13) | P |
|---|---|---|---|---|
| Inc. vs. Interm. Inc. | −9.932 | <0.001 | −1.296 | 0.186 |
| Inc. vs. Interm. | −11.517 | <0.001 | −1.489 | 0.16 |
| Inc. vs. Interm. Con. | −9.889 | <0.001 | −2.323 | <0.05 |
| Inc. vs. Con. | −12.771 | <0.001 | −3.002 | <0.05 |
| Interm. Inc. vs. Interm. | −6.053 | <0.001 | −0.733 | 0.476 |
| Interm. Inc. vs. Interm. Con. | −6.707 | <0.001 | −1.372 | 0.193 |
| Interm. Inc. vs. Con. | −9.981 | <0.001 | −2.183 | <0.05 |
| Interm. vs. Interm. Con. | −4.694 | <0.001 | −0.287 | 0.778 |
| Interm. vs. Con. | −9.187 | <0.001 | −1.558 | 0.143 |
| Interm. Con. vs. Con. | −7.239 | <0.001 | −1.914 | 0.078 |

Inc., incongruent; Interm. Inc., intermediary incongruent; Interm., intermediary; Interm. Con., intermediary congruent; Con., congruent.

The perceived composition of the 5 odors, when presented alone, followed the designed proportions and replicated our previous findings (Amsellem and Ohla 2016): participants reported the ratio of the odor components (orange and chicken) to vary gradually, resulting in a main effect of odor (F(4,52) = 136.0, P < 0.001, η_p² = 0.913; Figure 5B, middle bars) and significant pairwise comparisons (Table 3). These effects persisted when odors were presented with any image (all t(13) < −4.705, P < 0.001). In order to assess the general effects of visual information on odor composition independent of odor category, we compared composition ratings for each image category, orange and chicken, with the no-image control. We found that orange images yielded a significant shift in odor composition toward orange compared with the no-image control (orange vs. none: t(13) = −2.5958, P < 0.05, Figure 5A). In contrast, chicken images did not influence perceived odor composition compared with the no-image control (chicken vs. none: t(13) = 0.30, P = 0.77, Figure 5A).

Figure 5. Odor composition ratings aggregated across odor categories after participant-level z-score normalization (A) and for each odor category separately (B). Box-plots represent distributions of means; the median and mean of the distributions are represented by the horizontal line and diamond, respectively; whiskers represent 1.5 times the interquartile range (IQR); dots indicate outliers. DomO, dominant orange; Percept50, perceptual half–half; DomC, dominant chicken.

We then explored these effects for each odor category and observed that the presence of an image modulated odor composition differentially, resulting in a significant odor–image interaction (F(8,104) = 4.383, P < 0.001, η_p² = 0.252, Figure 5B) and a main effect of odor category (F(4,52) = 146.935, P < 0.001, η_p² = 0.919; Figure 5B), without a significant main effect of image category (F(2,26) = 2.18, P = 0.13, η_p² = 0.144, Figure 5B). Pairwise comparisons are summarized in Table 5.

Table 5. Paired comparisons between image types for odor composition ratings

| Odor | Image comparison | t(13) | P |
|---|---|---|---|
| Orange | Orange vs. none | −1.259 | 0.23 |
| Orange | Chicken vs. none | 3.357 | 0.005 |
| Orange | Chicken vs. orange | −3.871 | 0.002 |
| Dominant orange | Orange vs. none | −0.01 | 0.992 |
| Dominant orange | Chicken vs. none | 1.657 | 0.122 |
| Dominant orange | Chicken vs. orange | −1.328 | 0.207 |
| Perceptual half–half | Orange vs. none | −2.896 | 0.013 |
| Perceptual half–half | Chicken vs. none | −2.748 | 0.017 |
| Perceptual half–half | Chicken vs. orange | −0.771 | 0.455 |
| Dominant chicken | Orange vs. none | −0.776 | 0.452 |
| Dominant chicken | Chicken vs. none | −1.77 | 0.1 |
| Dominant chicken | Chicken vs. orange | 0.853 | 0.409 |
| Chicken | Orange vs. none | −1.717 | 0.11 |
| Chicken | Chicken vs. none | −2.506 | 0.026 |
| Chicken | Chicken vs. orange | 0.629 | 0.54 |

Discussion

The aims of this study were to explore visual–olfactory interactions at early sensory and late perceptual evaluation levels. Specifically, we aimed to answer 1) whether visual–olfactory information is integrated at early stages of processing, 2) whether visual–olfactory congruence is a gradual or dichotomous phenomenon, and 3) whether visual information influences bimodal stimulus evaluation and odor identity. Previous studies on VO interaction employed mostly congruent, or congruent and incongruent, stimuli (Gottfried and Dolan 2003; Dematte et al. 2006), thereby limiting the concept of congruence to a dichotomy. By combining images with carefully designed odor mixtures, we found that participants perceived different nuances of congruence for bimodal stimuli, suggesting that VO congruence is experienced gradually rather than dichotomously, that is, as either congruence or incongruence, in line with our previous finding for odor–taste combinations (Amsellem and Ohla 2016). Congruence between the sensory inputs systematically influenced pleasantness such that the least congruent combinations yielded the lowest and the most congruent combinations the highest pleasantness ratings.
Similar findings have previously been observed for bimodal odor–taste stimuli (Schifferstein and Verlegh 1996; Small et al. 2004; Amsellem and Ohla 2016), indicating a modality-independent link between congruence and pleasantness. Participants evaluated odor composition with high accuracy, suggesting that they were able to decompose the odorants into their components (Laing et al. 1984; Laing and Francis 1989; Jinks and Laing 2001; Amsellem and Ohla 2016), which resulted in the perception of distinct sensory entities. Odor composition perception was influenced by visual information: in the presence of orange images, odor composition ratings were shifted toward the orange component, an effect that was observed for all odor categories irrespective of VO congruence. This finding is in line with the notion that odor identification critically depends on additional information, as odors in isolation are notoriously ambiguous (Cain 1979). Identification and discrimination of odors in the absence of additional information are difficult (Davis 1981), and color cues (Zellner et al. 1991), verbal labels (Cain et al. 1998; Herz and von Clef 2001), and images (Gottfried and Dolan 2003) have been shown to improve odor perception. Surprisingly, and in contrast, the presence of chicken images yielded no consistent effects on odor composition ratings. It is possible that the higher intensity and associated reduced pleasantness of the chicken odor compared with the orange odor contributed to this finding. Although orange and chicken images were iso-pleasant, the corresponding odors were not; thus, it is possible that the "negative" chicken odor was less likely to be enhanced in the odor mixture. Alternatively, the relative ambiguity of chicken compared with orange, as chicken may be interpreted as different meats while orange is more unequivocal, may have reduced the potential of chicken images to enhance the chicken odor. Assuming stronger ambiguity of the chicken images, one could hypothesize that participants relied more on the odors alone than on the VO combination to assess odor composition. In this case, participants would have already extracted sufficient information from the odors alone to perform the rating. Moreover, the large perceptual distance between orange and chicken rendered the task relatively easy. It remains to be tested whether participants would rely more on visual information if the odorants were perceptually closer, for example, 2 fruits of the same category (e.g., berries). In contrast to our previous findings (Amsellem and Ohla 2016), participants rated the orange odor as more pleasant and less intense than the chicken odor. The greater intrinsic typicality of the orange odor compared with the more ambiguous meat fragrance of chicken could, at least in part, account for the reported differences in pleasantness, in line with the notion that familiar odors are more pleasant than unfamiliar ones (Delplanque et al. 2008). Our finding of reduced odor intensity for the mixtures compared with the pure orange and chicken odors most likely reflects mixture-related attenuation (Berglund and Olsson 1993; Cain et al. 1995) and replicates previous findings using the same odorants (Amsellem and Ohla 2016). In line with the assumption that combined sampling for multiple target stimuli leads to a speedup and a narrower RT distribution compared with individual sampling for single stimuli (Raab 1962), we found faster responses and smaller RT variability for bimodal stimuli compared with their unimodal constituents.
However, comparisons between mean bimodal and the fastest unimodal responses, as indexed by MRE, revealed no significant bimodal speedup, which could be due to large interindividual differences. Similarly, we found no negative correlation between MRE and the difference between perceived V and O stimulus onsets, indicating that perceived onset asynchronies between V and O stimuli alone cannot fully explain interindividual differences in visual–olfactory MRE in our experimental setting. This argument is further corroborated by the observation that less effective stimuli are more tolerant of stimulus asynchronies (Höchenberger et al. 2015; Krueger Fister et al. 2015). Analysis of RT distributions revealed that VO responses were mostly driven by the visual modality, as indicated by the superimposed CDFs F_VO and F_V up to the 45th percentile. After that point, F_VO deviated toward faster responses, that is, where F_V and F_O started to overlap, supporting the assumption that facilitation is greatest when stimulus constituents are perceived simultaneously, as has already been demonstrated in experiments using visual–olfactory (Höchenberger et al. 2015), audio–visual (Hershenson 1962), and audio–visual–tactile stimuli (Diederich and Colonius 2004). However, the mean bimodal RT CDF did not violate the Miller bound, indicating that the observed bimodal improvement could be the result of statistical facilitation alone (Miller 1982). Instead, F_VO was shifted right of the UCIP model in the direction of the Grice bound. Although this shift was not statistically significant, it could be indicative of a positive correlation between visual and olfactory channel processing times, as the Grice bound indicates "perfect" positive correlation (Colonius 1990). This suggests that, in each trial, the processing speed of either the visual or the olfactory channel determines, to some extent, the processing speed of the other channel. Note that the Miller and Grice bounds merely provide boundaries for an entire group of parallel processing models, called race models. Violation of either boundary leads to a rejection of all possible race models, including the UCIP model, which is a race model assuming stochastically independent channel processing times. Individual statistical comparisons between the experimentally collected data and different models provide useful information to further classify the underlying processing architecture. Our interpretation of dependent processing is further substantiated by the capacity analysis, which revealed slightly limited-capacity processing during bimodal stimulation, that is, performance dropped below the prediction assuming parallel processing in stochastically independent channels (Townsend and Nozawa 1995; Townsend and Wenger 2004; Townsend and Eidels 2011). An alternative, though not mutually exclusive, interpretation would allow for some degree of information exchange (cross talk) between channels according to a dynamic model of "interactive" parallel processing (Townsend and Nozawa 1995). Such a system may operate at limited capacity in the case of negative channel interactions, while super-capacity is associated with positive interactions. Within this framework, one could speculate that several participants were affected by inhibitory cross-channel interactions in bimodal trials.
Notably, race models critically rely on the (implicit) assumption of context invariance, which demands that processing in an individual channel be identical in the unimodal and the multimodal setting (Ashby and Townsend 1986; Colonius 1990; Townsend and Wenger 2004). Therefore, violations of context invariance are a prerequisite for violations of the Miller and Grice bounds. However, context invariance "implies unlimited capacity", and "violation of stochastic independence in real-time systems often leads to a failure of context invariance" (Townsend and Wenger 2004). Considering the deviations from unlimited-capacity processing and the shift of the bimodal RT distribution in the direction of the Grice bound, which is associated with a positive correlation of channel processing speeds and thus implies a violation of stochastic independence, one could speculate that the context invariance assumption was violated by at least some participants. Mordkoff and Yantis (1991) found that presentation statistics (contingencies) favoring redundant-target trials in a visual go/no-go task led to violations of the Miller bound. Although the contingencies in this study were biased toward bimodal stimuli (approximately 60% bimodal trials), we did not observe race model violations. This further corroborates the absence of facilitatory cross-channel interactions between the visual and olfactory modalities, in line with our previous findings that revealed parallel processing of visual and olfactory information in an object identification task (Höchenberger et al. 2015). Together, these findings indicate that bimodal visual–olfactory processing follows the same mechanism at early (detection) and later (identification) perceptual stages.

Although we cannot rule out that visual–olfactory integration took place at early levels of sensory processing, as the bimodal CDFs were always located between the Miller and Grice bounds, this view is not supported by the known physiological prerequisites for early sensory integration, that is, monosynaptic connections between the primary visual and olfactory cortices or bimodal visual–olfactory neurons within these areas. For auditory–visual information, early sensory integration is grounded in multisensory neurons in the superior colliculus (Wallace and Stein 1997) along with direct (Falchier et al. 2002; Budinger and Scheich 2009; Cappe et al. 2009) and indirect (van den Brink et al. 2014) connections between the auditory and visual sensory cortices. Late auditory–visual interactions, however, are likely established via the pooling of sensory information in heteromodal areas such as the superior temporal sulcus, intraparietal sulcus, parieto-occipital cortex, posterior insula, as well as prefrontal and premotor areas (Calvert et al. 2000; Calvert 2001). Similarly, the perirhinal cortex has been proposed as a prime candidate and processing hub for visual–olfactory information exchange (Qu et al. 2016) because of its numerous reciprocal connections, particularly with the inferior temporal cortex, which is involved in object perception (Grill-Spector and Weiner 2014) and in associating sensory representations; moreover, the rhinal cortex, a subdivision, is critical for the association of flavor with visual food objects in monkeys (Parker and Gaffan 1998) and for olfactory–visual associative learning in humans (Qu et al. 2016). While activation of this network may not give rise to early sensory integration reflected in simple RTs, it likely contributes to later visual–olfactory interactions.
Conclusion

The present data yielded a bimodal VO RT speedup that is consistent with parallel processing according to race models. While these models do not refute the possibility of integration, our data, together with the fact that direct connections between visual and olfactory areas as well as bimodal neurons have not been discovered yet, render VO integration at the earliest level of the processing cascade unlikely. At later, evaluative levels of processing, however, bimodal interactions exhibited effects on the pleasantness of bimodal stimuli as well as on odor identity, suggesting dual, early and late, visual–olfactory interactions similar to audio–visual interactions.

Supplementary data

Supplementary data can be found at Chemical Senses online.

Funding

This research was conducted with institutional budget.

Acknowledgments

The authors thank Andrea Katschak for help with data acquisition.

References

Amsellem S, Ohla K. 2016. Perceived odor-taste congruence influences intensity and pleasantness differently. Chem Senses. 41:677–684.
Ashby FG, Townsend JT. 1986. Varieties of perceptual independence. Psychol Rev. 93:154–179.
Bensafi M, Croy I, Phillips N, Rouby C, Sezille C, Gerber J, Small DM, Hummel T. 2014. The effect of verbal context on olfactory neural responses. Hum Brain Mapp. 35:810–818.
Berglund B, Olsson MJ. 1993. Odor-intensity interaction in binary mixtures. J Exp Psychol Hum Percept Perform. 19:302–314.
Blechert J, Meule A, Busch NA, Ohla K. 2014. Food-pics: an image database for experimental research on eating and appetite. Front Psychol. 5:617.
Budinger E, Scheich H. 2009. Anatomical connections suitable for the direct processing of neuronal information of different modalities via the rodent primary auditory cortex. Hear Res. 258:16–27.
Cain WS. 1979. To know with the nose: keys to odor identification. Science. 203:467–470.
Cain WS, de Wijk R, Lulejian C, Schiet F, See LC. 1998. Odor identification: perceptual and semantic dimensions. Chem Senses. 23:309–326.
Cain WS, Schiet FT, Olsson MJ, de Wijk RA. 1995. Comparison of models of odor interaction. Chem Senses. 20:625–637.
Calvert GA. 2001. Crossmodal processing in the human brain: insights from functional neuroimaging studies. Cereb Cortex. 11:1110–1123.
Calvert GA, Campbell R, Brammer MJ. 2000. Evidence from functional magnetic resonance imaging of crossmodal binding in the human heteromodal cortex. Curr Biol. 10:649–657.
Cappe C, Morel A, Barone P, Rouiller EM. 2009. The thalamocortical projection systems in primate: an anatomical support for multisensory and sensorimotor interplay. Cereb Cortex. 19:2025–2037.
Chen A, Deangelis GC, Angelaki DE. 2013. Functional specializations of the ventral intraparietal area for multisensory heading discrimination. J Neurosci. 33:3567–3581.
Colonius H. 1990. Possibly dependent probability summation of reaction time. J Math Psychol. 34:253–275.
Courtiol E, Wilson DA. 2017. The olfactory mosaic: bringing an olfactory network together for odor perception. Perception. 46:320–332.
Davis RG. 1981. The role of nonolfactory context cues in odor identification. Percept Psychophys. 30:83–89.
Degel J, Köster EP. 1998. Implicit memory for odors: a possible method for observation. Percept Mot Skills. 86:943–952.
Degel J, Köster EP. 1999. Odors: implicit memory and performance effects. Chem Senses. 24:317–325.
Delplanque S, Grandjean D, Chrea C, Aymard L, Cayeux I, Le Calvé B, Velazco MI, Scherer KR, Sander D. 2008. Emotional processing of odors: evidence for a nonlinear relation between pleasantness and familiarity evaluations. Chem Senses. 33:469–479.
Dematte ML, Sanabria D, Spence C. 2006. Cross-modal associations between odors and colors. Chem Senses. 31:531–538.
De Meo R, Murray MM, Clarke S, Matusz PJ, Soto-Faraco S, Wallace MT. 2015. Top-down control and early multisensory processes: chicken vs. egg. Front Integr Neurosci. 9:17.
Diederich A, Colonius H. 2004. Bimodal and trimodal multisensory enhancement: effects of stimulus onset and intensity on reaction time. Percept Psychophys. 66:1388–1404.
Falchier A, Clavagnier S, Barone P, Kennedy H. 2002. Anatomical evidence of multimodal integration in primate striate cortex. J Neurosci. 22:5749–5759.
Forster B, Cavina-Pratesi C, Aglioti SM, Berlucchi G. 2002. Redundant target effect and intersensory facilitation from visual-tactile interactions in simple reaction time. Exp Brain Res. 143:480–487.
Gielen SC, Schmidt RA, Van den Heuvel PJ. 1983. On the nature of intersensory facilitation of reaction time. Percept Psychophys. 34:161–168.
Gottfried JA, Dolan RJ. 2003. The nose smells what the eye sees: crossmodal visual facilitation of human olfactory perception. Neuron. 39:375–386.
Green BG, Nachtigal D, Hammond S, Lim J. 2012. Enhancement of retronasal odors by taste. Chem Senses. 37:77–86.
Grice GR, Canham L, Boroughs JM. 1984a. Combination rule for redundant information in reaction time tasks with divided attention. Percept Psychophys. 35:451–463.
Grice GR, Canham L, Gwynne JW. 1984b. Absence of a redundant-signals effect in a reaction time task with divided attention. Percept Psychophys. 36:565–570.
Grill-Spector K, Weiner KS. 2014. The functional architecture of the ventral temporal cortex and its role in categorization. Nat Rev Neurosci. 15:536–548.
Hershenson M. 1962. Reaction time as a measure of intersensory facilitation. J Exp Psychol. 63:289–293.
Herz RS, von Clef J. 2001. The influence of verbal labeling on the perception of odors: evidence for olfactory illusions? Perception. 30:381–391.
Höchenberger R, Busch NA, Ohla K. 2015. Nonlinear response speedup in bimodal visual-olfactory object identification. Front Psychol. 6:1477.
Jinks A, Laing DG. 2001. The analysis of odor mixtures by humans: evidence for a configurational process. Physiol Behav. 72:51–63.
Krueger Fister J, Stevenson RA, Nidiffer AR, Barnett ZP, Wallace MT. 2015. Stimulus intensity modulates multisensory temporal processing. Neuropsychologia. 88:92–100.
Laing DG, Francis GW. 1989. The capacity of humans to identify odors in mixtures. Physiol Behav. 46:809–814.
Laing DG, Panhuber H, Willcox ME, Pittman EA. 1984. Quality and intensity of binary odor mixtures. Physiol Behav. 33:309–319.
Lundström JN, Gordon AR, Alden EC, Boesveldt S, Albrecht J. 2010. Methods for building an inexpensive computer-controlled olfactometer for temporally-precise experiments. Int J Psychophysiol. 78:179–189.
Meredith MA, Stein BE. 1983. Interactions among converging sensory inputs in the superior colliculus. Science. 221:389–391.
Meredith MA, Stein BE. 1986. Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. J Neurophysiol. 56:640–662.
Miller J. 1982. Divided attention: evidence for coactivation with redundant signals. Cogn Psychol. 14:247–279.
Miller J. 1986. Timecourse of coactivation in bimodal divided attention. Percept Psychophys. 40:331–343.
Mordkoff JT, Yantis S. 1991. An interactive race model of divided attention. J Exp Psychol Hum Percept Perform. 17:520–538.
Morrot G, Brochet F, Dubourdieu D. 2001. The color of odors. Brain Lang. 79:309–320.
Parker A, Gaffan D. 1998. Memory after frontal/temporal disconnection in monkeys: conditional and non-conditional tasks, unilateral and bilateral frontal lesions. Neuropsychologia. 36:259–271.
Qu LP, Kahnt T, Cole SM, Gottfried JA. 2016. De novo emergence of odor category representations in the human brain. J Neurosci. 36:468–478.
Raab DH. 1962. Statistical facilitation of simple reaction times. Trans N Y Acad Sci. 24:574–590.
Ratcliff R. 1979. Group reaction time distributions and an analysis of distribution statistics. Psychol Bull. 86:446–461.
Robinson AK, Mattingley JB, Reinhard J. 2013. Odors enhance the salience of matching images during the attentional blink. Front Integr Neurosci. 7:77.
Schifferstein HN, Verlegh PW. 1996. The role of congruency and pleasantness in odor-induced taste enhancement. Acta Psychol (Amst). 94:87–105.
Seigneuric A, Durand K, Jiang T, Baudouin JY, Schaal B. 2010. The nose tells it to the eyes: crossmodal associations between olfaction and vision. Perception. 39:1541–1554.
Seo HS, Lohse F, Luckett CR, Hummel T. 2014. Congruent sound can modulate odor pleasantness. Chem Senses. 39:215–228.
Seo HS, Roidl E, Müller F, Negoias S. 2010. Odors enhance visual attention to congruent objects. Appetite. 54:544–549.
Shepard TG, Veldhuizen MG, Marks LE. 2015. Response times to gustatory-olfactory flavor mixtures: role of congruence. Chem Senses. 40:565–575.
Small DM, Voss J, Mak YE, Simmons KB, Parrish T, Gitelman D. 2004. Experience-dependent neural integration of taste and smell in the human brain. J Neurophysiol. 92:1892–1903.
Stevenson RA, Ghose D, Fister JK, Sarko DK, Altieri NA, Nidiffer AR, Kurela LR, Siemann JK, James TW, Wallace MT. 2014. Identifying and quantifying multisensory integration: a tutorial review. Brain Topogr. 27:707–730.
Talsma D. 2015. Predictive coding and multisensory integration: an attentional account of the multisensory mind. Front Integr Neurosci. 9:19.
Thorndike EL. 1920. A constant error in psychological ratings. J Appl Psychol. 4:25–29.
Townsend JT, Ashby FG. 1978. Methods of modeling capacity in simple processing systems. In: Castellan J, Restle F, editors. Cognitive theory. Vol. 3. Hillsdale (NJ): Erlbaum Associates. p. 199–239.
Townsend JT, Ashby FG. 1983. Stochastic modeling of elementary psychological processes. New York (NY): Cambridge University Press.
Townsend JT, Eidels A. 2011. Workload capacity spaces: a unified methodology for response time measures of efficiency as workload is varied. Psychon Bull Rev. 18:659–681.
Townsend JT, Nozawa G. 1995. Spatio-temporal properties of elementary perception: an investigation of parallel, serial, and coactive theories. J Math Psychol. 39:321–359.
Townsend JT, Wenger MJ. 2004. A theory of interactive parallel processing: new capacity measures and predictions for a response time inequality series. Psychol Rev. 111:1003–1035.
Ulrich R, Miller J, Schröter H. 2007. Testing the race model inequality: an algorithm and computer programs. Behav Res Methods. 39:291–302.
van den Brink RL, Cohen MX, van der Burg E, Talsma D, Vissers ME, Slagter HA. 2014. Subcortical, modality-specific pathways contribute to multisensory processing in humans. Cereb Cortex. 24:2169–2177.
Veldhuizen MG, Shepard TG, Wang MF, Marks LE. 2010. Coactivation of gustatory and olfactory signals in flavor perception. Chem Senses. 35:121–133.
Vincent SB. 1912. The functions of the vibrissae in the behavior of the white rat. Anim Behav Monogr. 1:1–84.
Wallace MT, Stein BE. 1997. Development of multisensory neurons and multisensory integration in cat superior colliculus. J Neurosci. 17:2429–2444.
Yamada Y, Sasaki K, Kunieda S, Wada Y. 2014. Scents boost preference for novel fruits. Appetite. 81:102–107.
Zellner DA, Bartoli AM, Eckard R. 1991. Influence of color on odor identification and liking ratings. Am J Psychol. 104:547–561.
Zhou W, Jiang Y, He S, Chen D. 2010. Olfaction modulates visual perception in binocular rivalry. Curr Biol. 20:1356–1358.
Zhou W, Zhang X, Chen J, Wang L, Chen D. 2012. Nostril-specific olfactory modulation of visual perception in binocular rivalry. J Neurosci. 32:17225–17229.

© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

Measurement of response times (RTs) has proven to be a useful tool for examining early-stage multisensory interaction.
When comparing participants' RTs in a unisensory setting with their performance in a multisensory setting, one typically finds that RTs are systematically reduced as more target stimuli are added; for example, participants respond faster in a bimodal than in a unimodal setting when instructed to respond to any of the presented stimuli, regardless of modality. However, this multisensory gain can be the product of the temporal overlap between multiple response processes. Assuming that the more quickly processed modality determines the RT, and further assuming that RTs vary randomly from trial to trial, "slow" processing in one channel could co-occur with "fast" processing in the other, effectively leading to an observed speedup of mean RT (Raab 1962). Models assuming this kind of competitive processing are typically called "race models" and propose that whichever channel finishes processing first "wins the race" and determines the observed RT. These models are defined by parallel processing channels that do not pool information at any pre-decisional stage. The model by Raab (1962) demanded stochastically independent channel processing speeds. Miller (1982) later proposed an upper performance bound for parallel processing models such as Raab's, assuming perfect negative correlation between channel processing speeds, typically called the race model inequality (RMI) or Miller bound. Similarly, a lower performance limit assuming perfect positive correlation has been proposed (Grice bound; Grice et al. 1984a, 1984b). Bimodal performance that exceeds the Miller bound provides evidence for information pooling across channels, or multimodal integration; performance below the Grice bound suggests debilitation below unimodal performance. Bimodal performance falling between the 2 bounds can be accounted for by race models and is typically assumed to provide neither sufficient proof of integration nor definite evidence against it. The estimation of a workload capacity coefficient based on RTs provides a measure of a participant's ability to process multiple stimuli simultaneously (Townsend and Ashby 1978, 1983; Townsend and Eidels 2011) and allows conclusions about performance changes as the task demands are varied, for example, when participants have to respond to a bimodal stimulus compared with responding to unimodal stimuli.

While multisensory response facilitation has been studied in depth for bimodal stimuli involving the non-chemical senses, that is, vision, audition, and touch (Gielen et al. 1983; Miller 1986; Forster et al. 2002; Diederich and Colonius 2004), very few studies have investigated bimodal integration of the chemical senses, and those with inconsistent results. Whereas response facilitation has been observed for congruent olfactory–gustatory stimuli (Veldhuizen et al. 2010), incongruent olfactory–gustatory stimuli yielded response debilitation (Shepard et al. 2015). Similarly, response facilitation that could be explained by parallel processing was observed in a visual–olfactory object identification task (Höchenberger et al. 2015).

Late-stage sensory interactions, sometimes referred to as cross-modal interactions (De Meo et al. 2015), reflect top-down cognitive control. Of particular interest for odor-related sensory integration are modulations of the subjective experience and hedonic evaluation, given the unique direct link between the olfactory and limbic systems (Courtiol and Wilson 2017).
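To make the notion of statistical facilitation concrete, the following minimal simulation (not the authors' code; the RT distributions and parameters are purely illustrative) shows how a race between 2 independent channels speeds up mean bimodal RTs without any integration taking place:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000

# Illustrative unimodal RT distributions (ms); the means are loosely
# inspired by typical visual and olfactory detection latencies.
rt_v = rng.normal(loc=510, scale=120, size=n_trials)  # visual channel
rt_o = rng.normal(loc=780, scale=200, size=n_trials)  # olfactory channel

# Race model: on each bimodal trial the faster channel wins and
# determines the observed RT; no information is pooled across channels.
rt_vo = np.minimum(rt_v, rt_o)

print(f"mean RT V:  {rt_v.mean():6.1f} ms, SD {rt_v.std():5.1f} ms")
print(f"mean RT O:  {rt_o.mean():6.1f} ms, SD {rt_o.std():5.1f} ms")
print(f"mean RT VO: {rt_vo.mean():6.1f} ms, SD {rt_vo.std():5.1f} ms")
# The bimodal mean is smaller (and the SD typically narrower) than for
# the faster channel alone, purely through statistical facilitation
# (Raab 1962).
```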
Accordingly, the few studies that have examined hedonic evaluation in the context of multisensory interaction involve olfaction. Among those studies, semantic congruence, that is, how well 2 stimuli match, has been shown to enhance pleasantness for individual components of a bimodal stimulus. For example, sound can enhance odor pleasantness (Seo et al. 2014), and odors can enhance both image (Yamada et al. 2014) and taste (Schifferstein and Verlegh 1996; Small et al. 2004) pleasantness, as well as the pleasantness of bimodal olfactory–gustatory stimuli (Amsellem and Ohla 2016). While congruence is commonly approached as a dichotomous phenomenon, with experimental designs comparing congruent and incongruent stimuli (Gottfried and Dolan 2003), congruence perception is more complex, comprising several gradual levels, and congruence influences pleasantness systematically, at least for olfactory–gustatory stimuli (Amsellem and Ohla 2016). Whether this is also the case for visual–olfactory stimuli has yet to be demonstrated. Given the intrinsic ambiguity of odors, visual information helps disambiguate olfactory percepts by biasing odor object perception toward the visual context (Degel and Köster 1998, 1999; Morrot et al. 2001). The role of the congruence between visual and olfactory inputs in these biases remains to be elucidated.

The aims of this study were to explore visual–olfactory interactions at early sensory and late perceptual evaluation levels and to answer 1) whether visual–olfactory information is indeed integrated at early stages of processing, 2) whether visual–olfactory congruence is a gradual or dichotomous phenomenon, and 3) whether visual information influences bimodal stimulus evaluation and odor identity.

Materials and methods

Participants

Twenty-one healthy participants reporting no smelling or tasting disorders took part in the study. Two participants' datasets were excluded due to technical problems during data collection; data from 5 participants were excluded because of 50% or more missing responses in at least 1 condition. Data from 14 participants (2 left-handed, 6 men, mean age = 27.9 years, SD = 2.7) are reported here. Participants gave written informed consent and received compensatory payment. The study was conducted in compliance with the revised Declaration of Helsinki and approved by the ethics committee of the German Psychological Society.

Stimuli and apparatus

Odorants (O) were orange (SAFC, CAS 8008-57-9, 15%) and chicken (Takasago, TAK 120580, 10%) fragrances in mineral oil (Acros Organics, CAS 8042-47-5) and 3 mixtures of both odorants representing a perceptual half–half (60% chicken/40% orange), a dominant orange (30% chicken/70% orange), and a dominant chicken (70% chicken/30% orange) odor. The orange and chicken odors were isointense, and the mixture proportions were validated in a previous study (Amsellem and Ohla 2016). Odorants were delivered birhinally for 1000 ms at a flow rate of 1.5 L/min with 0.25 L/min unscented background air for each nostril through an olfactometer (Lundström et al. 2010), which embeds the odorants in a stream of unscented air at the same flow rate, thereby minimizing tactile cues from switching the valves. To reduce irritation of the nasal mucosa, we reduced the flow of unscented air to 0.25 L/min during part of the interstimulus interval, from 500 ms after odor offset to 1000–2000 ms before presentation of the subsequent odor.
Visual stimuli (V) were 3 images each of chicken and orange from the Food-pics database (Blechert et al. 2014) (nos. 200, 245, 301, 315, 365, 546). Orange and chicken images were similarly pleasant (t13 = 2.02, P = 0.064). Images were converted to grayscale to remove color cues and presented on a TFT monitor with a resolution of 1680 × 1050 pixels at an eye distance of approximately 60 cm, corresponding to an object size of approximately 12° of visual angle. The monitor refresh rate was 60 Hz. Bimodal stimuli (VO) were all possible combinations of the 5 odor and 2 image categories (Figure 1B). The combination resulted in 5 levels of VO congruence: incongruent (e.g., orange odor and chicken image), intermediary incongruent, intermediary, intermediary congruent, and congruent (e.g., chicken odor and chicken image; Amsellem and Ohla 2016). In VO trials, the valves of the olfactometer were opened 200 ms before visual stimulation to ensure physical simultaneity of the stimuli. The timing was determined with photo-ionization detection and has been used successfully in previous studies (Höchenberger et al. 2015; Amsellem and Ohla 2016).

Figure 1. Experimental paradigm and design matrix. (A) Participants viewed a green cross-hair (dashed line) for 1–2 s before the onset of either of the following stimuli: odor (O), image (V), or bimodal odor–image combination (VO), which lasted 1 s. Participants were to perform a speeded detection task and delayed ratings for stimulus intensity, pleasantness, bimodal congruence, and odor composition. (B) Bimodal stimuli were presented with different degrees of congruence. Images were taken from the Food-pics database (Blechert et al. 2014).

Participants were presented images alone (V), odors alone (O), and all possible combinations of visual–olfactory stimuli (VO) in pseudo-random order. Each stimulus category was repeated 15 times: 5 odor categories, 2 image categories, and 10 bimodal combinations, amounting to a total of 255 trials.
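For illustration, this trial structure can be written down as a short script (a sketch only; a plain shuffle stands in for whatever pseudo-randomization constraints were actually applied):

```python
import random

random.seed(1)

odors = ["orange", "dom_orange", "half_half", "dom_chicken", "chicken"]
images = ["orange", "chicken"]

# 5 unimodal odor, 2 unimodal image, and 10 bimodal categories...
conditions = (
    [("O", o, None) for o in odors]
    + [("V", None, v) for v in images]
    + [("VO", o, v) for o in odors for v in images]
)

# ...each repeated 15 times yields the 255 trials of the session.
trials = conditions * 15
random.shuffle(trials)
assert len(trials) == 255
```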
Procedure

Participants were seated in a sound-attenuated testing booth in front of a 24" TFT screen displaying the instructions and visual analogue rating scales (VAS). Prior to testing, participants received instructions on the use of the response box and the rating scales. Participants were instructed to press the central button of the response box (Serial Response Box, Psychology Software Tools, Inc.) as soon as they detected a stimulus, irrespective of the modality. For bimodal trials, they were then to report whether the visual, the olfactory, or both modalities drove the response (driving-modality response); due to technical difficulties, these responses were not prompted or collected for 2 participants. Intensity was described as representing the strength of the stimulus, that is, how strong (or weak) the stimulus was. Pleasantness was described as representing how pleasant (or unpleasant or neutral) the stimulus was. Composition was described as representing the perceived proportions of orange and chicken on a continuum ranging from orange only to chicken only. For bimodal VO stimuli, congruence was defined as representing the degree of the visual–olfactory match (from mismatch to perfect match). Participants practiced the task and the ratings and were familiarized with all stimuli before testing.

Each trial started with the color change of a cross-hair on the screen, from black to green, 1000–2000 ms before stimulus presentation. The cross-hair cued participants to inhale and to pay close attention to both (olfactory and visual) sensory channels; it remained on the screen during stimulus presentation and was replaced by the rating scales, presented in random order, 300 ms after the stimulation ended. Participants were to perform the detection task during stimulus presentation. Responses were recorded from stimulus onset to 500 ms after offset. During O and VO trials, participants rated stimulus intensity, pleasantness, and perceived odor composition. During VO trials, participants additionally rated the congruence of the bimodal stimuli. During V trials, participants rated stimulus intensity only. For all ratings, participants were to move the cursor of a computer mouse along a visual analogue scale, which recorded rating values from 0 to 100. The scales were anchored with numeric values and labels, that is, 0 "no sensation" and 10 "extremely intense" for intensity, −5 "very unpleasant" and +5 "extremely pleasant" for pleasantness, and 0 "incongruent" and 10 "congruent" for bimodal congruence. The composition (dominant olfactory tone) of each odorant was rated on a scale spanning from "orange" (left) to "chicken" (right). Each scale was displayed on the screen for up to 4000 ms. The scales appeared in a random order to minimize halo effects (Thorndike 1920) and were interleaved with a blank screen for 800 ms. On average, a trial lasted 20 s. The experiment lasted no more than 2 hours (see Figure 1). In order to minimize auditory cues from the olfactometer and air flow, participants were presented brown noise via isolating in-ear phones.

Analysis

First, we removed all trials in which participants failed to provide stimulus ratings. Specifically, we dropped trials without a pleasantness rating (28 trials), O trials without an intensity rating (3 trials), O and VO trials without a composition rating (9 trials), and VO trials without a congruence rating (9 trials). Further, we assumed that only O trials with intensity ratings greater than 5 had been perceived and consequently excluded trials with lower ratings from the analysis (88 trials). Lastly, trials with RTs less than 150 ms or greater than 1500 ms were removed (324 trials). In total, 461 trials (12.9% of the data) were excluded from analysis.
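These exclusion steps map onto a simple filter; the sketch below uses pandas with a hypothetical long-format trial table (file and column names are assumptions for illustration, not the authors' actual data layout):

```python
import pandas as pd

# Hypothetical trial table; columns assumed: modality, rt, intensity,
# pleasantness (one row per trial).
df = pd.read_csv("trials.csv")

# Drop trials without the required ratings.
df = df.dropna(subset=["pleasantness", "intensity"])

# Drop O trials whose odor was likely not perceived (intensity <= 5).
perceived = ~((df["modality"] == "O") & (df["intensity"] <= 5))

# Drop anticipatory (< 150 ms) and overly slow (> 1500 ms) responses.
rt_ok = df["rt"].between(150, 1500)

clean = df[perceived & rt_ok]
```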
Response times

Mean RTs for the V (RTV), O (RTO), and VO (RTVO) conditions and corresponding standard deviations (SDs) were calculated separately for each participant. Further, we estimated the respective empirical cumulative RT distribution functions (CDFs; FO, FV, FVO; Ulrich et al. 2007). Based on these data, we derived different performance benchmarks to test for multisensory interaction during bimodal stimulation, namely mean multisensory response enhancement (MRE), upper and lower bounds for the RT distributions under the assumption of parallel processing (Miller and Grice bounds, respectively), RT distributions under the assumption of stochastically independent visual and olfactory channels (UCIP model), and workload capacity coefficients.

The MRE measures relative changes in mean processing time of multisensory compared with unisensory stimuli. Originally introduced by Meredith and Stein (1986) for the application with individual neurons, it can also be used with RT data derived from behavioral experiments (Diederich and Colonius 2004). In a visual–olfactory setting, the formula takes the form

$$\mathrm{MRE} = \frac{\min(RT_V, RT_O) - RT_{VO}}{\min(RT_V, RT_O)}$$

Positive values suggest facilitation and negative values imply debilitation. MRE was calculated for each participant separately and submitted to a 1-sample t-test against zero.

Next, we calculated model predictions based on RT CDFs. Raab's (1962) model assumes parallel processing in stochastically independent channels (Stevenson et al. 2014). This model is also known as the UCIP model (unlimited capacity, independent, parallel; Townsend and Wenger 2004) and is defined as

$$F_{VO}(t) = F_{UCIP}(t) = F_V(t) + F_O(t) - F_V(t) \times F_O(t)$$

The RT CDFs $F_{\{V,O,VO\}}(t)$ depict the empirical probability of observing a response until time t after stimulus onset. Deviations from this model can be indicative of a violation of the stochastic-independence assumption, and suggest either positive or negative correlation between channel processing speeds. In fact, performance exceeding the model prediction has frequently been witnessed in psychophysical studies. Miller (1982) noted that, "[instead] of independence [...] there is a consistent negative correlation between detections of signals on different channels." He proposed a new upper performance limit, assuming perfect negative correlation between channel processing times, commonly called the race model inequality (RMI) or Miller bound (Miller 1982; Colonius 1990). It is calculated as the sum of both unimodal RT CDFs:

$$F_{VO}(t) \leq F_{Miller}(t) = F_V(t) + F_O(t)$$

The Miller bound is reached only if participants "detect one of the signals but miss the other one" (Miller 1982), yielding a net speedup of observed bimodal responses. If the bound is exceeded, performance cannot be explained with parallel processing anymore, and this finding is typically interpreted as evidence for coactivation or neural integration. Likewise, a lower performance limit was suggested (Grice et al. 1984a, 1984b). It assumes a perfect positive correlation between channel processing times (Colonius 1990). Accordingly, the Grice bound is reached only if processing is either slow or fast in both channels simultaneously. The Grice bound is defined as the faster of the 2 unimodal channels at every time point:

$$F_{VO}(t) \geq F_{Grice}(t) = \max[F_V(t), F_O(t)]$$

All empirical CDFs and derived model predictions were evaluated at 10 equidistantly spaced points (the 5th, 15th, ..., 95th percentiles) and averaged across participants (Vincent 1912; Ratcliff 1979). FVO was compared with the Miller and Grice bounds using 1-tailed paired t-tests, and with the UCIP model prediction using 2-tailed paired t-tests at each of the percentiles. The significance level α was set a priori to 0.05, and all P-values were corrected for multiple testing following the Holm–Bonferroni procedure.

Capacity coefficients were calculated from the logarithm of the unimodal ($S_V(t)$ and $S_O(t)$) and bimodal ($S_{VO}(t)$) survivor functions, $S(t) = 1 - F(t)$ (Townsend and Nozawa 1995):

$$C(t) = \frac{\log[S_{VO}(t)]}{\log[S_V(t) \times S_O(t)]}$$

We then calculated mean capacity coefficients for all participants and submitted them to a 1-sample t-test against 1 to test for significant deviations from unlimited-capacity processing.
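The following sketch shows how these benchmarks can be derived from raw RT samples (function and variable names are mine; edge cases such as survivor values of 0 or 1, where C(t) is undefined, are not handled):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of an RT sample, evaluated at time points t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_benchmarks(rt_v, rt_o, t):
    """UCIP prediction and Miller/Grice bounds from unimodal RTs."""
    fv, fo = ecdf(rt_v, t), ecdf(rt_o, t)
    f_ucip = fv + fo - fv * fo            # independent parallel channels
    f_miller = np.minimum(fv + fo, 1.0)   # upper bound (RMI), capped at 1
    f_grice = np.maximum(fv, fo)          # lower bound
    return f_ucip, f_miller, f_grice

def capacity(rt_v, rt_o, rt_vo, t):
    """Workload capacity coefficient C(t); valid where 0 < S(t) < 1."""
    s_v = 1 - ecdf(rt_v, t)
    s_o = 1 - ecdf(rt_o, t)
    s_vo = 1 - ecdf(rt_vo, t)
    return np.log(s_vo) / np.log(s_v * s_o)
```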
Note that this performance benchmark, a capacity coefficient of 1, is predicted if all assumptions of the UCIP model mentioned above are met. Townsend and Eidels (2011) proposed a way to transform the Miller and Grice bounds to capacity space for straightforward model comparisons, yielding the inequalities

$$C(t) \leq C_{Miller}(t) = \frac{\log[S_V(t) + S_O(t) - 1]}{\log[S_V(t) \times S_O(t)]}$$

and

$$C(t) \geq C_{Grice}(t) = \frac{\log\{\min[S_V(t), S_O(t)]\}}{\log[S_V(t) \times S_O(t)]}$$

These boundaries were calculated for all participants separately to help us obtain a more comprehensive impression of individual performance dynamics. Average capacity coefficients were submitted to a 1-sample t-test against 1 to test for significant deviations from the UCIP model.

Perceptual ratings

For all analyses, parametric statistical tests were applied because ratings were distributed normally according to the Kolmogorov–Smirnov test (all P > 0.05). Greenhouse–Geisser correction was applied for violations of sphericity; uncorrected degrees of freedom and corrected P-values are reported. For all statistical tests, the significance level α was set a priori to 0.05. First, we tested for differences in odor pleasantness and intensity with 2 repeated-measures ANOVAs with the factor odor (5 levels: orange, dominant orange, perceptual half–half, dominant chicken, and chicken), followed by paired-samples Student's t-tests when a main effect was found. To test whether participants perceived degrees of congruence in the bimodal stimuli and whether congruence modulated the pleasantness of the VO stimuli, we submitted pleasantness ratings to a repeated-measures ANOVA with the factor congruence (5 levels: incongruent, intermediary incongruent, intermediary, intermediary congruent, and congruent), followed by paired-samples Student's t-tests when a main effect was present. Participants reported the perceived proportions of orange and chicken odor in O and VO trials. The scale ranged from orange (0) to chicken (100); accordingly, low values indicate more orange and larger values indicate more chicken. To test the general influence of visual information on reported odor composition, irrespective of odor, we normalized the ratings for each odor category and each participant, collapsed the ratings across all odors for each participant, and performed pairwise Student's t-tests (orange vs. none, chicken vs. none, orange vs. chicken). Then, to explore specific influences of images on perceived odor composition, we submitted the odor composition ratings for each odor category to a 2-way ANOVA with the factors odor (5 levels: orange, dominant orange, perceptual half–half, dominant chicken, and chicken) and image (3 levels: orange, none, chicken), followed by paired-samples Student's t-tests when a main effect was present.

Results

Response times

Mean RTs and corresponding SDs for all participants are summarized in Table 1. On average, RTVO were the fastest (492 ms), RTV were marginally slower (507 ms), and RTO were markedly slower (777 ms). To assess bimodal performance gains based on mean RTs, we calculated the MRE, that is, the relative difference between RTVO and the shorter of the mean unimodal RTs. The average MRE was 1.6% and not significantly different from zero (t13 = 0.50, P = 0.63). This finding may be due to the considerable variability across participants (see Table 1), with values ranging from 23.6% (strong facilitation, participant 412) to −30.9% (excessive debilitation, participant 695; Supplementary Figure 1).
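As a worked example, using the values for participant 412 from Table 1:

$$\mathrm{MRE}_{412} = \frac{\min(647,\,773) - 494}{\min(647,\,773)} = \frac{647 - 494}{647} \approx 0.236 = 23.6\,\%$$

Note that the group value of 1.6% is the mean of the individual MREs, not the MRE of the group-mean RTs, which would come out at roughly (507 − 492)/507 ≈ 3%.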
The difference in mean RTs, RTV − RTO, was calculated to assess the time difference between the perceived V and O stimulus onsets. Responses to V stimuli were between 52 and 525 ms faster than responses to O stimuli.

Table 1. Means and SDs of RTs (ms) and capacity coefficients, and MRE

Participant   V Mean   V SD   O Mean   O SD   VO Mean   VO SD   V−O Mean   V−O SD   MRE (%)   C(t) Mean   C(t) SD
412           647      255    773      237    494       160     −126       18        23.6      1.47        0.59
474           486      151    811      192    478       160     −325       −41       1.7       1.07        0.13
500           414      144    858      321    388       139     −443       −177      6.4       1.07        0.18
589           790      248    842      261    684       218     −52        −13       13.4      0.94        0.18
631           561      396    695      276    532       260     −133       120       5.3       0.68        0.15
645           341      87     557      161    367       91      −217       −73       −7.7      0.66        0.15
663           475      255    747      343    453       148     −272       −88       4.8       0.83        0.36
695           493      259    674      287    645       304     −182       −28       −30.9     0.38        0.08
698           480      211    803      285    464       180     −323       −74       3.2       0.92        0.18
707           428      110    769      235    426       120     −341       −125      0.4       1.02        0.13
763           471      177    868      250    469       196     −397       −73       0.4       1.00        0.28
791           413      147    938      321    435       167     −525       −174      −5.2      0.84        0.28
812           554      149    709      325    499       165     −155       −176      10.0      0.89        0.24
828           542      188    833      316    555       207     −291       −128      −2.3      0.71        0.22
Average       507      198    777      272    492       180     −270       −74       1.6       0.89        0.22
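The group-level distribution functions discussed next were obtained by percentile-wise averaging across participants (the Vincentizing step described in the Methods); a minimal sketch, assuming per-participant RT arrays are available:

```python
import numpy as np

def vincentize(rt_per_subject, percentiles=np.arange(5, 100, 10)):
    """Vincent-average RT distributions (Vincent 1912; Ratcliff 1979):
    compute each participant's RT quantiles at fixed percentiles,
    then average the quantiles percentile-wise across participants."""
    q = np.array([np.percentile(rts, percentiles) for rts in rt_per_subject])
    return percentiles, q.mean(axis=0)

# Usage (rt_vo_per_subject: a list of per-participant RT arrays, assumed):
# pcts, mean_quantiles = vincentize(rt_vo_per_subject)
```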
For fast responses, the mean RT distribution functions FVO and FV were largely superimposed (Figure 2A), indicating that the bimodal responses were driven by the visual constituent alone. At around the 45th percentile, FVO shifted left of FV, separating both curves. This diversion occurred slightly after the time at which the fastest olfactory responses were observed; participants started benefiting from the bimodal stimulation and consistently responded faster than in the unimodal conditions.

Figure 2. RT distributions and race model predictions. (A) Mean cumulative RT distributions (CDFs). The V distribution was located left of the O distribution, reflecting faster visual than olfactory responses. The VO distribution closely followed V up to the 45th percentile and shifted left for greater RTs. Note that this diversion occurred slightly after the time the first olfactory responses were observed. (B) Comparison of the observed bimodal CDF to model predictions. The bimodal distribution never violated the Miller bound and was always shifted right of the UCIP model prediction, even reaching the Grice bound at some percentiles.
The observed bimodal RT distribution FVO never reached the Miller bound (Figure 2B). Instead, it was shifted right of the UCIP model; however, this shift was not significant at any percentile (2-tailed t-tests, all P > 0.05; Table 2). The Grice bound was violated at the 5th to 45th percentiles; again, these violations were not significant (1-tailed t-tests, all P > 0.05; Table 2). This rightward shift relative to the UCIP model could, nevertheless, be indicative of a positive correlation between visual and olfactory channel processing times, and thus of limited-capacity processing.

Table 2. Paired comparisons of the observed bimodal CDFs to model predictions at specific percentiles

Proportion of responses   VO vs. Miller bound      VO vs. UCIP model      VO vs. Grice bound
                          t13       P              t13       P            t13       P
0.05                      1.58      0.14           1.58      0.76         0.99      1.00
0.15                      2.04      0.14           1.98      0.69         0.19      1.00
0.25                      1.96      0.14           1.63      0.76         0.02      1.00
0.35                      2.13      0.14           1.83      0.73         0.27      1.00
0.45                      2.21      0.14           1.81      0.73         0.53      1.00
0.55                      2.02      0.14           1.36      0.76         −0.85     1.00
0.65                      2.30      0.13           1.30      0.76         −1.01     1.00
0.75                      2.70      0.07           1.15      0.76         −2.11     0.24
0.85                      3.36      0.02           1.48      0.76         −2.37     0.17
0.95                      4.85      <0.01          1.90      0.72         −1.18     1.00

Tests against the Miller and Grice bounds were 1-tailed. All P-values were Holm–Bonferroni corrected for multiple testing.
Workload capacity dynamics varied greatly between participants (Figure 3; Supplementary Figure 2): some appeared to benefit from bimodal stimulation and exhibited super-capacity processing, indicated by capacity coefficients greater than 1 over extended periods of time, while others suffered from debilitation and exhibited limited-capacity processing, indicated by capacity coefficients less than 1 over extended periods of time. Most participants, however, exhibited dynamics involving intermittent super- and limited-capacity processing as well as periods of unlimited-capacity processing. Mean capacity ranged from 0.38 to 1.47 with a global average of 0.89, indicating moderately limited-capacity processing (Table 1). However, the deviation from the UCIP model was not significant (1-sample t-test against 1; t13 = −1.60, P = 0.12).

Figure 3. Workload capacity coefficients. Solid lines represent individual participants; the bold horizontal line depicts the reference UCIP model (unlimited capacity). Values greater than 1 suggest super-capacity processing, and values less than 1 indicate limited-capacity processing.

Perceptual ratings

Odors were overall moderately intense (range: 44.31–68.39) and moderately pleasant (range: 38.83–67.92). Intensity and pleasantness ratings differed significantly between odorants (intensity: F4,52 = 23.06, P < 0.001, ηp2 = 0.639, Figure 4A; pleasantness: F4,52 = 12.38, P < 0.001, ηp2 = 0.488, Figure 4B). Non-mixed odorants (i.e., orange and chicken) were on average more intense than the odor mixtures (t13 = 6.04, P < 0.001). Pleasantness decreased as the proportion of chicken increased, as indicated by significant paired-samples t-tests (Table 3).

Table 3. Paired comparisons between odor categories for intensity, pleasantness, and composition ratings

Comparison                              Intensity            Pleasantness        Composition
                                        t13       P          t13      P          t13        P
Orange vs. Dom. orange                  3.158     0.008      3.026    0.01       −5.037     <0.001
Orange vs. Percept. half–half           1.016     0.328      4.406    <0.001     −7.833     <0.001
Orange vs. Dom. chicken                 0.306     0.764      4.834    <0.001     −14.507    <0.001
Orange vs. Chicken                      −7.045    <0.001     3.775    0.002      −22.515    <0.001
Dom. orange vs. Percept. half–half      −2.631    0.021      4.081    0.001      −5.394     <0.001
Dom. orange vs. Dom. chicken            −2.033    0.063      4.438    <0.001     −11.541    <0.001
Dom. orange vs. Chicken                 −7.129    <0.001     3.269    0.006      −13.975    <0.001
Percept. half–half vs. Dom. chicken     −0.469    0.647      2.551    0.024      −5.69      <0.001
Percept. half–half vs. Chicken          −6.608    <0.001     2.165    0.05       −10.391    <0.001
Dom. chicken vs. Chicken                −5.569    <0.001     1.291    0.219      −6.975     <0.001
Figure 4. Perceptual ratings. Odor intensity (A) and odor pleasantness (B) for each odor category. Congruence (C) and pleasantness (D) ratings for bimodal visual–olfactory stimuli.
VO stimuli were moderately pleasant (range: 49.18–58.10). Participants perceived differences in congruence between the VO stimuli, as indicated by a significant main effect of congruence (F4,52 = 95.07, P < 0.001, ηp² = 0.880; Figure 4C) and significant pairwise tests (Table 4). Furthermore, the designed congruence levels modulated VO pleasantness (F4,52 = 3.737, P = 0.01, ηp² = 0.223; Figure 4D).

Table 4. Paired comparisons between congruence categories for congruence and pleasantness ratings

| Comparison | Congruence, t13 (P) | Pleasantness, t13 (P) |
|---|---|---|
| Inc. vs. Interm. Inc. | −9.932 (<0.001) | −1.296 (0.186) |
| Inc. vs. Interm. | −11.517 (<0.001) | −1.489 (0.16) |
| Inc. vs. Interm. Con. | −9.889 (<0.001) | −2.323 (<0.05) |
| Inc. vs. Con. | −12.771 (<0.001) | −3.002 (<0.05) |
| Interm. Inc. vs. Interm. | −6.053 (<0.001) | −0.733 (0.476) |
| Interm. Inc. vs. Interm. Con. | −6.707 (<0.001) | −1.372 (0.193) |
| Interm. Inc. vs. Con. | −9.981 (<0.001) | −2.183 (<0.05) |
| Interm. vs. Interm. Con. | −4.694 (<0.001) | −0.287 (0.778) |
| Interm. vs. Con. | −9.187 (<0.001) | −1.558 (0.143) |
| Interm. Con. vs. Con. | −7.239 (<0.001) | −1.914 (0.078) |

Inc., incongruent; Interm. Inc., intermediary incongruent; Interm., intermediary; Interm. Con., intermediary congruent; Con., congruent.
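The main effects reported here (e.g., congruence: F4,52) correspond to a within-subject ANOVA with 14 participants and 5 levels. A rough sketch of such an analysis using hypothetical long-format data and the `AnovaRM` class from statsmodels; the data layout, column names, and toy values are our own assumptions:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
levels = ["inc", "interm_inc", "interm", "interm_con", "con"]

# Hypothetical data: one mean congruence rating per participant and level,
# with ratings rising along the designed congruence gradient
df = pd.DataFrame(
    [{"participant": p, "congruence": lvl,
      "rating": 30 + 10 * i + rng.normal(0, 8)}
     for p in range(14) for i, lvl in enumerate(levels)]
)

# 14 subjects x 5 within-subject levels -> F with (4, 52) degrees of
# freedom, matching the degrees of freedom reported in the text
res = AnovaRM(df, depvar="rating", subject="participant",
              within=["congruence"]).fit()
print(res.anova_table)
```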
The perceived composition of the 5 odors, when presented alone, followed the designed proportions and replicated our previous findings (Amsellem and Ohla 2016): participants reported the ratio of the odor components (orange and chicken) to vary gradually, resulting in a main effect of odor (F4,52 = 136.0, P < 0.001, ηp² = 0.913; Figure 5B, middle bars) and significant pairwise comparisons (Table 3). These effects persisted when odors were presented with any image (all t13 < −4.705, P < 0.001). To assess the general effect of visual information on odor composition independent of odor category, we compared composition ratings for each image category, orange and chicken, with the no-image control. Orange images yielded a significant shift of odor composition toward orange compared with the no-image control (orange vs. none: t13 = −2.5958, P < 0.05; Figure 5A). In contrast, chicken images did not influence perceived odor composition compared with the no-image control (chicken vs. none: t13 = 0.30, P = 0.77; Figure 5A).

Figure 5. Odor composition ratings aggregated across odor categories after participant-level z-score normalization (A) and for each odor category separately (B). Box plots represent distributions of means; the median and mean of the distributions are represented by the horizontal line and diamond, respectively; whiskers represent 1.5 times the interquartile range (IQR); dots indicate outliers. DomO, dominant orange; Percept50, perceptual half–half; DomC, dominant chicken.
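The aggregation in Figure 5A rests on a participant-level z-score normalization before pooling across odor categories. A minimal sketch of how such a normalization and the image-vs.-no-image contrast could look; the data layout, column names, and toy effect size are assumptions on our part:

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel

# Hypothetical long format: one composition rating (0 = pure orange ...
# 100 = pure chicken) per trial, with the accompanying image condition
rng = np.random.default_rng(2)
rows = []
for p in range(14):
    for image in ["none", "orange", "chicken"]:
        shift = -3 if image == "orange" else 0   # toy orange-image bias
        rows += [{"participant": p, "image": image,
                  "composition": rng.normal(50 + shift, 10)}
                 for _ in range(20)]
df = pd.DataFrame(rows)

# z-score within each participant so ratings are comparable across people
df["z"] = df.groupby("participant")["composition"].transform(
    lambda x: (x - x.mean()) / x.std(ddof=1))

# per-participant condition means, then paired t-tests (df = 13)
means = df.groupby(["participant", "image"])["z"].mean().unstack()
print(ttest_rel(means["orange"], means["none"]))
print(ttest_rel(means["chicken"], means["none"]))
```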
We then explored these effects for each odor category and observed that the presence of an image modulated odor composition differentially, resulting in a significant odor–image interaction (F8,104 = 4.383, P < 0.001, ηp² = 0.252; Figure 5B) and a main effect of odor category (F4,52 = 146.935, P < 0.001, ηp² = 0.919; Figure 5B), without a significant main effect of image category (F2,26 = 2.18, P = 0.13, ηp² = 0.144; Figure 5B). Pairwise comparisons are summarized in Table 5.

Table 5. Paired comparisons between image types for odor composition ratings

| Odor | Image comparison | t13 | P |
|---|---|---|---|
| Orange | Orange vs. none | −1.259 | 0.23 |
| Orange | Chicken vs. none | 3.357 | 0.005 |
| Orange | Chicken vs. orange | −3.871 | 0.002 |
| Dominant orange | Orange vs. none | −0.01 | 0.992 |
| Dominant orange | Chicken vs. none | 1.657 | 0.122 |
| Dominant orange | Chicken vs. orange | −1.328 | 0.207 |
| Perceptual half–half | Orange vs. none | −2.896 | 0.013 |
| Perceptual half–half | Chicken vs. none | −2.748 | 0.017 |
| Perceptual half–half | Chicken vs. orange | −0.771 | 0.455 |
| Dominant chicken | Orange vs. none | −0.776 | 0.452 |
| Dominant chicken | Chicken vs. none | −1.77 | 0.1 |
| Dominant chicken | Chicken vs. orange | 0.853 | 0.409 |
| Chicken | Orange vs. none | −1.717 | 0.11 |
| Chicken | Chicken vs. none | −2.506 | 0.026 |
| Chicken | Chicken vs. orange | 0.629 | 0.54 |

Discussion

The aims of this study were to explore visual–olfactory interactions at early sensory and late perceptual evaluation levels. Specifically, we aimed to answer 1) whether visual–olfactory information is integrated at early stages of processing, 2) whether visual–olfactory congruence is a gradual or dichotomous phenomenon, and 3) whether visual information influences bimodal stimulus evaluation and odor identity. Previous studies on VO interaction employed mostly congruent stimuli, or congruent and incongruent stimuli (Gottfried and Dolan 2003; Dematte et al. 2006), thereby limiting the concept of congruence to a dichotomy. By combining images with carefully designed odor mixtures, we found that participants perceived different nuances of congruence for bimodal stimuli, suggesting that VO congruence is experienced gradually rather than dichotomously, that is, as either congruent or incongruent, in line with our previous finding for odor–taste combinations (Amsellem and Ohla 2016). Congruence between the sensory inputs systematically influenced pleasantness such that the least congruent combinations yielded the lowest, and the most congruent combinations the highest, pleasantness ratings.
Similar findings have been previously observed for bimodal odor–taste stimuli (Schifferstein and Verlegh 1996; Small et al. 2004; Amsellem and Ohla 2016), indicating a modality-independent link between congruence and pleasantness.

Participants evaluated odor composition with high accuracy, suggesting that they were able to decompose the odorants into their components (Laing et al. 1984; Laing and Francis 1989; Jinks and Laing 2001; Amsellem and Ohla 2016), which resulted in the perception of distinct sensory entities. Odor composition perception was influenced by visual information: in the presence of orange images, odor composition ratings were shifted toward the orange component, an effect that was observed for all odor categories irrespective of VO congruence. This finding is in line with the notion that odor identification critically depends on additional information, as odors in isolation are notoriously ambiguous (Cain 1979). Identification and discrimination of odors in the absence of additional information is difficult (Davis 1981), and color cues (Zellner et al. 1991), verbal labels (Cain et al. 1998; Herz and von Clef 2001), and images (Gottfried and Dolan 2003) have been shown to improve odor perception. Surprisingly, and in contrast, the presence of chicken images yielded no consistent effects on odor composition ratings. It is possible that the higher intensity and associated reduced pleasantness of the chicken odor compared with the orange odor contributed to this finding. Although orange and chicken images were iso-pleasant, the corresponding odors were not; thus, it is possible that the "negative" chicken odor is less likely to be enhanced in the odor mixture. Alternatively, the relative ambiguity of chicken compared with orange, as chicken may be interpreted as different meats whereas orange is more unequivocal, may have reduced the potential of chicken images to enhance the chicken odor. Assuming stronger ambiguity of the chicken images, one could hypothesize that participants relied more on the odors alone than on the VO combination to assess odor composition. In this case, participants would have already extracted sufficient information from the odors alone to perform the rating. Similarly, the large perceptual distance between orange and chicken rendered the task relatively easy. It remains to be tested whether participants would rely more on visual information if the odorants were perceptually closer, for example, 2 fruits of the same category (e.g., berries).

In contrast to our previous findings (Amsellem and Ohla 2016), participants rated the orange odor as more pleasant and less intense than the chicken odor. The greater intrinsic typicality of the orange odor compared with the more ambiguous meat fragrance of chicken could, at least in part, account for the reported differences in pleasantness, in line with the notion that familiar odors are more pleasant than unfamiliar ones (Delplanque et al. 2008). Our finding of reduced odor intensity for the mixtures compared with the pure orange and chicken odors most likely reflects mixture-related attenuation (Berglund and Olsson 1993; Cain et al. 1995) and replicates previous findings using the same odorants (Amsellem and Ohla 2016).

In line with the assumption that combined sampling of multiple target stimuli leads to a speedup and a narrower RT distribution compared with individual sampling of single stimuli (Raab 1962), we found faster responses and smaller RT variability for bimodal stimuli compared with their unimodal constituents.
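Raab's (1962) account can be made concrete with a few lines of simulation: drawing bimodal responses as the minimum of two independent unimodal processing times already produces a faster mean and a narrower distribution, without any cross-modal interaction. The distributions below are purely illustrative, not fits to the present data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative unimodal processing-time distributions (ms)
rt_v = rng.gamma(20.0, 22.0, n)   # visual channel, mean ~440 ms
rt_o = rng.gamma(20.0, 26.0, n)   # olfactory channel, mean ~520 ms

# Race model: the response is triggered by whichever channel finishes
# first; no integration, no channel cross talk
rt_vo = np.minimum(rt_v, rt_o)

for name, rt in [("V", rt_v), ("O", rt_o), ("VO (race)", rt_vo)]:
    print(f"{name:>9}: mean = {rt.mean():6.1f} ms, sd = {rt.std():5.1f} ms")
# VO comes out faster and less variable: statistical facilitation alone
```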
However, comparisons between mean bimodal and the fastest unimodal responses, as indicated by MRE, revealed no significant bimodal speedup, which could be due to large interindividual differences. Similarly, we found no negative correlation between MRE and perceived V and O stimulus onsets, indicating that perceived onset asynchronies between V and O stimuli alone cannot fully explain interindividual differences in visual–olfactory MRE in our experimental setting. This argument is further corroborated by the observation that less effective stimuli are more tolerant of stimulus asynchronies (Höchenberger et al. 2015; Krueger Fister et al. 2015).

Analysis of RT distributions revealed that VO responses were mostly driven by the visual modality, as indicated by the superimposed CDFs FVO and FV, up to the 45th percentile. After that point, FVO deviated toward faster responses, that is, where FV and FO started to overlap, supporting the assumption that facilitation is greatest when stimulus constituents are perceived simultaneously, as has been demonstrated in experiments using visual–olfactory (Höchenberger et al. 2015), audio–visual (Hershenson 1962), and audio–visual–tactile stimuli (Diederich and Colonius 2004). However, the mean bimodal RT CDF did not violate the Miller bound, indicating that the observed bimodal improvement could be the result of statistical facilitation alone (Miller 1982). Instead, FVO was shifted to the right of the UCIP model, in the direction of the Grice bound. Although this shift was not statistically significant, it could be indicative of a positive correlation between visual and olfactory channel processing times, as the Grice bound corresponds to "perfect" positive correlation (Colonius 1990). This suggests that, in each trial, the processing speed of either the visual or the olfactory channel determines, to some extent, the processing speed of the other channel. Note that the Miller and Grice bounds merely provide boundaries for an entire group of parallel processing models, called race models. Violation of either boundary leads to a rejection of all possible race models, including the UCIP model, which is a race model assuming stochastically independent channel processing times. Individual statistical comparisons between the experimentally collected data and different models provide useful information to further classify the underlying processing architecture.

Our interpretation of dependent processing is further substantiated by the capacity analysis, which revealed slightly limited-capacity processing during bimodal stimulation; that is, performance dropped below the prediction assuming parallel processing in stochastically independent channels (Townsend and Nozawa 1995; Townsend and Wenger 2004; Townsend and Eidels 2011). An alternative, though not mutually exclusive, interpretation would allow for some degree of information exchange (cross talk) between channels according to a dynamic model of "interactive" parallel processing (Townsend and Nozawa 1995). Such a system may operate at limited capacity in the case of negative channel interactions, whereas super-capacity is associated with positive interactions. Within this framework, one could speculate that numerous participants were affected by inhibitory cross-channel interactions in bimodal trials.
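The reference distributions discussed above can be computed directly from the unimodal CDFs (Miller 1982; Colonius 1990; see Ulrich et al. 2007 for a complete testing algorithm). A compact sketch under our own naming conventions:

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical CDF F(t) = P(RT <= t) evaluated on a common time grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def race_model_references(rt_v, rt_o, t_grid):
    """Miller bound: min(1, F_V + F_O), the upper limit for all race models.
    UCIP model:   F_V + F_O - F_V * F_O, an independent parallel race.
    Grice bound:  max(F_V, F_O), the lower limit, corresponding to a
                  perfect positive correlation of channel processing times."""
    f_v, f_o = ecdf(rt_v, t_grid), ecdf(rt_o, t_grid)
    miller = np.minimum(f_v + f_o, 1.0)
    ucip = f_v + f_o - f_v * f_o
    grice = np.maximum(f_v, f_o)
    return miller, ucip, grice

# An observed bimodal CDF above the Miller bound at any t would falsify
# all race models; one below the Grice bound would indicate paradoxically
# slow bimodal responses. Here, F_VO remained between the two bounds.
```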
Notably, race models critically rely on the (implicit) assumption of context invariance, which demands that processing in an individual channel is identical in the unimodal and the multimodal settings (Ashby and Townsend 1986; Colonius 1990; Townsend and Wenger 2004). Violations of context invariance are therefore a prerequisite for violations of the Miller and Grice bounds. However, context invariance "implies unlimited capacity", and "violation of stochastic independence in real-time systems often leads to a failure of context invariance" (Townsend and Wenger 2004). Considering the deviations from unlimited-capacity processing and the shift of the bimodal RT distribution in the direction of the Grice bound, which is associated with a positive correlation of channel processing speeds and thus implies a violation of stochastic independence, one could speculate that the context invariance assumption was violated by at least some participants.

Mordkoff and Yantis (1991) found that presentation statistics (contingencies) favoring redundant-target trials in a visual go/no-go task led to violations of the Miller bound. Although the contingencies in this study were biased toward bimodal stimuli (approximately 60% bimodal trials), we did not observe race model violations. This further corroborates the absence of facilitatory cross-channel interactions between the visual and olfactory modalities, in line with our previous findings that revealed parallel processing of visual and olfactory information in an object identification task (Höchenberger et al. 2015). Together, these findings indicate that bimodal visual–olfactory processing follows the same mechanism at early (detection) and later (identification) perceptual stages.

Although we cannot rule out that visual–olfactory integration may have taken place at early levels of sensory processing, as the bimodal CDFs were always located between the Miller and Grice bounds, this view is not substantiated by the known physiological prerequisites for early sensory integration, that is, monosynaptic connections between the primary visual and olfactory cortices or bimodal visual–olfactory neurons within these areas. For auditory–visual information, early sensory integration is grounded in multisensory neurons in the superior colliculus (Wallace and Stein 1997) along with direct (Falchier et al. 2002; Budinger and Scheich 2009; Cappe et al. 2009) and indirect (van den Brink et al. 2014) connections between the auditory and visual sensory cortices. Late auditory–visual interactions, however, are likely established via pooling of sensory information in heteromodal areas such as the superior temporal sulcus, intraparietal sulcus, parieto-occipital cortex, and posterior insula, as well as prefrontal and premotor areas (Calvert et al. 2000; Calvert 2001). Similarly, the perirhinal cortex has been proposed as a prime candidate and processing hub for visual–olfactory information exchange (Qu et al. 2016) because of its numerous reciprocal connections, particularly with the inferior temporal cortex, which is involved in object perception (Grill-Spector and Weiner 2014) and in associating sensory representations. Moreover, the rhinal cortex, a subdivision, is critical for the association of flavor with visual food objects in monkeys (Parker and Gaffan 1998) and for olfactory–visual associative learning in humans (Qu et al. 2016). While activation of this network may not give rise to early sensory integration reflected in simple RTs, it likely contributes to later visual–olfactory interactions.
Conclusion

The present data yielded a bimodal VO RT speedup that is consistent with parallel processing according to race models. While these models do not refute the possibility of integration, our data, together with the fact that direct connections between visual and olfactory areas as well as bimodal visual–olfactory neurons have not yet been discovered, render VO integration at the earliest level of the processing cascade unlikely. At later, evaluative levels of processing, however, bimodal interactions influenced the pleasantness of bimodal stimuli as well as odor identity, suggesting dual, early and late, visual–olfactory interactions similar to audio–visual interactions.

Supplementary data

Supplementary data can be found at Chemical Senses online.

Funding

This research was funded by the institutional budget.

Acknowledgments

The authors thank Andrea Katschak for help with data acquisition.

References

Amsellem S, Ohla K. 2016. Perceived odor-taste congruence influences intensity and pleasantness differently. Chem Senses. 41:677–684.
Ashby FG, Townsend JT. 1986. Varieties of perceptual independence. Psychol Rev. 93:154–179.
Bensafi M, Croy I, Phillips N, Rouby C, Sezille C, Gerber J, Small DM, Hummel T. 2014. The effect of verbal context on olfactory neural responses. Hum Brain Mapp. 35:810–818.
Berglund B, Olsson MJ. 1993. Odor-intensity interaction in binary mixtures. J Exp Psychol Hum Percept Perform. 19:302–314.
Blechert J, Meule A, Busch NA, Ohla K. 2014. Food-pics: an image database for experimental research on eating and appetite. Front Psychol. 5:617.
Budinger E, Scheich H. 2009. Anatomical connections suitable for the direct processing of neuronal information of different modalities via the rodent primary auditory cortex. Hear Res. 258:16–27.
Cain WS. 1979. To know with the nose: keys to odor identification. Science. 203:467–470.
Cain WS, de Wijk R, Lulejian C, Schiet F, See LC. 1998. Odor identification: perceptual and semantic dimensions. Chem Senses. 23:309–326.
Cain WS, Schiet FT, Olsson MJ, de Wijk RA. 1995. Comparison of models of odor interaction. Chem Senses. 20:625–637.
Calvert GA. 2001. Crossmodal processing in the human brain: insights from functional neuroimaging studies. Cereb Cortex. 11:1110–1123.
Calvert GA, Campbell R, Brammer MJ. 2000. Evidence from functional magnetic resonance imaging of crossmodal binding in the human heteromodal cortex. Curr Biol. 10:649–657.
Cappe C, Morel A, Barone P, Rouiller EM. 2009. The thalamocortical projection systems in primate: an anatomical support for multisensory and sensorimotor interplay. Cereb Cortex. 19:2025–2037.
Chen A, Deangelis GC, Angelaki DE. 2013. Functional specializations of the ventral intraparietal area for multisensory heading discrimination. J Neurosci. 33:3567–3581.
Colonius H. 1990. Possibly dependent probability summation of reaction time. J Math Psychol. 34:253–275.
Courtiol E, Wilson DA. 2017. The olfactory mosaic: bringing an olfactory network together for odor perception. Perception. 46:320–332.
Davis RG. 1981. The role of nonolfactory context cues in odor identification. Percept Psychophys. 30:83–89.
Degel J, Köster EP. 1998. Implicit memory for odors: a possible method for observation. Percept Mot Skills. 86:943–952.
Degel J, Köster EP. 1999. Odors: implicit memory and performance effects. Chem Senses. 24:317–325.
Delplanque S, Grandjean D, Chrea C, Aymard L, Cayeux I, Le Calvé B, Velazco MI, Scherer KR, Sander D. 2008. Emotional processing of odors: evidence for a nonlinear relation between pleasantness and familiarity evaluations. Chem Senses. 33:469–479.
Dematte ML, Sanabria D, Spence C. 2006. Cross-modal associations between odors and colors. Chem Senses. 31:531–538.
De Meo R, Murray MM, Clarke S, Matusz PJ, Soto-Faraco S, Wallace MT. 2015. Top-down control and early multisensory processes: chicken vs. egg. Front Integr Neurosci. 9:17.
Diederich A, Colonius H. 2004. Bimodal and trimodal multisensory enhancement: effects of stimulus onset and intensity on reaction time. Percept Psychophys. 66:1388–1404.
Falchier A, Clavagnier S, Barone P, Kennedy H. 2002. Anatomical evidence of multimodal integration in primate striate cortex. J Neurosci. 22:5749–5759.
Forster B, Cavina-Pratesi C, Aglioti SM, Berlucchi G. 2002. Redundant target effect and intersensory facilitation from visual-tactile interactions in simple reaction time. Exp Brain Res. 143:480–487.
Gielen SC, Schmidt RA, Van den Heuvel PJ. 1983. On the nature of intersensory facilitation of reaction time. Percept Psychophys. 34:161–168.
Gottfried JA, Dolan RJ. 2003. The nose smells what the eye sees: crossmodal visual facilitation of human olfactory perception. Neuron. 39:375–386.
Green BG, Nachtigal D, Hammond S, Lim J. 2012. Enhancement of retronasal odors by taste. Chem Senses. 37:77–86.
Grice GR, Canham L, Boroughs JM. 1984a. Combination rule for redundant information in reaction time tasks with divided attention. Percept Psychophys. 35:451–463.
Grice GR, Canham L, Gwynne JW. 1984b. Absence of a redundant-signals effect in a reaction time task with divided attention. Percept Psychophys. 36:565–570.
Grill-Spector K, Weiner KS. 2014. The functional architecture of the ventral temporal cortex and its role in categorization. Nat Rev Neurosci. 15:536–548.
Hershenson M. 1962. Reaction time as a measure of intersensory facilitation. J Exp Psychol. 63:289–293.
Herz RS, von Clef J. 2001. The influence of verbal labeling on the perception of odors: evidence for olfactory illusions? Perception. 30:381–391.
Höchenberger R, Busch NA, Ohla K. 2015. Nonlinear response speedup in bimodal visual-olfactory object identification. Front Psychol. 6:1477.
Jinks A, Laing DG. 2001. The analysis of odor mixtures by humans: evidence for a configurational process. Physiol Behav. 72:51–63.
Krueger Fister J, Stevenson RA, Nidiffer AR, Barnett ZP, Wallace MT. 2015. Stimulus intensity modulates multisensory temporal processing. Neuropsychologia. 88:92–100.
Laing DG, Francis GW. 1989. The capacity of humans to identify odors in mixtures. Physiol Behav. 46:809–814.
Laing DG, Panhuber H, Willcox ME, Pittman EA. 1984. Quality and intensity of binary odor mixtures. Physiol Behav. 33:309–319.
Lundström JN, Gordon AR, Alden EC, Boesveldt S, Albrecht J. 2010. Methods for building an inexpensive computer-controlled olfactometer for temporally-precise experiments. Int J Psychophysiol. 78:179–189.
Meredith MA, Stein BE. 1983. Interactions among converging sensory inputs in the superior colliculus. Science. 221:389–391.
Meredith MA, Stein BE. 1986. Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. J Neurophysiol. 56:640–662.
Miller J. 1982. Divided attention: evidence for coactivation with redundant signals. Cogn Psychol. 14:247–279.
Miller J. 1986. Timecourse of coactivation in bimodal divided attention. Percept Psychophys. 40:331–343.
Mordkoff JT, Yantis S. 1991. An interactive race model of divided attention. J Exp Psychol Hum Percept Perform. 17:520–538.
Morrot G, Brochet F, Dubourdieu D. 2001. The color of odors. Brain Lang. 79:309–320.
Parker A, Gaffan D. 1998. Memory after frontal/temporal disconnection in monkeys: conditional and non-conditional tasks, unilateral and bilateral frontal lesions. Neuropsychologia. 36:259–271.
Qu LP, Kahnt T, Cole SM, Gottfried JA. 2016. De novo emergence of odor category representations in the human brain. J Neurosci. 36:468–478.
Raab DH. 1962. Statistical facilitation of simple reaction times. Trans N Y Acad Sci. 24:574–590.
Ratcliff R. 1979. Group reaction time distributions and an analysis of distribution statistics. Psychol Bull. 86:446–461.
Robinson AK, Mattingley JB, Reinhard J. 2013. Odors enhance the salience of matching images during the attentional blink. Front Integr Neurosci. 7:77.
Schifferstein HN, Verlegh PW. 1996. The role of congruency and pleasantness in odor-induced taste enhancement. Acta Psychol (Amst). 94:87–105.
Seigneuric A, Durand K, Jiang T, Baudouin JY, Schaal B. 2010. The nose tells it to the eyes: crossmodal associations between olfaction and vision. Perception. 39:1541–1554.
Seo HS, Lohse F, Luckett CR, Hummel T. 2014. Congruent sound can modulate odor pleasantness. Chem Senses. 39:215–228.
Seo HS, Roidl E, Müller F, Negoias S. 2010. Odors enhance visual attention to congruent objects. Appetite. 54:544–549.
Shepard TG, Veldhuizen MG, Marks LE. 2015. Response times to gustatory-olfactory flavor mixtures: role of congruence. Chem Senses. 40:565–575.
Small DM, Voss J, Mak YE, Simmons KB, Parrish T, Gitelman D. 2004. Experience-dependent neural integration of taste and smell in the human brain. J Neurophysiol. 92:1892–1903.
Stevenson RA, Ghose D, Fister JK, Sarko DK, Altieri NA, Nidiffer AR, Kurela LR, Siemann JK, James TW, Wallace MT. 2014. Identifying and quantifying multisensory integration: a tutorial review. Brain Topogr. 27:707–730.
Talsma D. 2015. Predictive coding and multisensory integration: an attentional account of the multisensory mind. Front Integr Neurosci. 9:19.
Thorndike EL. 1920. A constant error in psychological ratings. J Appl Psychol. 4:25–29.
Townsend JT, Ashby FG. 1978. Methods of modeling capacity in simple processing systems. In: Castellan J, Restle F, editors. Cognitive theory. Vol. 3. Hillsdale (NJ): Erlbaum Associates. p. 199–239.
Townsend JT, Ashby FG. 1983. Stochastic modeling of elementary psychological processes. New York (NY): Cambridge University Press.
Townsend JT, Eidels A. 2011. Workload capacity spaces: a unified methodology for response time measures of efficiency as workload is varied. Psychon Bull Rev. 18:659–681.
Townsend JT, Nozawa G. 1995. Spatio-temporal properties of elementary perception: an investigation of parallel, serial, and coactive theories. J Math Psychol. 39:321–359.
Townsend JT, Wenger MJ. 2004. A theory of interactive parallel processing: new capacity measures and predictions for a response time inequality series. Psychol Rev. 111:1003–1035.
Ulrich R, Miller J, Schröter H. 2007. Testing the race model inequality: an algorithm and computer programs. Behav Res Methods. 39:291–302.
van den Brink RL, Cohen MX, van der Burg E, Talsma D, Vissers ME, Slagter HA. 2014. Subcortical, modality-specific pathways contribute to multisensory processing in humans. Cereb Cortex. 24:2169–2177.
Veldhuizen MG, Shepard TG, Wang MF, Marks LE. 2010. Coactivation of gustatory and olfactory signals in flavor perception. Chem Senses. 35:121–133.
Vincent SBS. 1912. The functions of the vibrissae in the behavior of the white rat. Anim Behav Monogr. 1:1–84.
Wallace MT, Stein BE. 1997. Development of multisensory neurons and multisensory integration in cat superior colliculus. J Neurosci. 17:2429–2444.
Yamada Y, Sasaki K, Kunieda S, Wada Y. 2014. Scents boost preference for novel fruits. Appetite. 81:102–107.
Zellner DA, Bartoli AM, Eckard R. 1991. Influence of color on odor identification and liking ratings. Am J Psychol. 104:547–561.
Zhou W, Jiang Y, He S, Chen D. 2010. Olfaction modulates visual perception in binocular rivalry. Curr Biol. 20:1356–1358.
Zhou W, Zhang X, Chen J, Wang L, Chen D. 2012. Nostril-specific olfactory modulation of visual perception in binocular rivalry. J Neurosci. 32:17225–17229.

© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com