Multisensory mechanisms for perceptual disambiguation. A classification image study on the stream–bounce illusion


Publisher: Brill
Copyright: © Koninklijke Brill NV, Leiden, The Netherlands
Subject: Oral Presentation
ISSN: 2213-4794
eISSN: 2213-4808
DOI: 10.1163/22134808-000S0068

Abstract

Sensory information is inherently ambiguous, and a given signal can in principle correspond to an infinite number of states of the world. A primary task for the observer is therefore to disambiguate sensory information and accurately infer the actual state of the world. Here, we take the stream–bounce illusion as a tool to investigate perceptual disambiguation from a cue-integration perspective, and we explore how humans gather and combine sensory information to resolve ambiguity. In a classification task, we presented two bars moving in opposite directions along the same trajectory and meeting at the centre, and we asked observers to classify such ambiguous displays as streaming or bouncing. Stimuli were embedded in dynamic audiovisual noise so that, through a reverse-correlation analysis, we could estimate the perceptual templates used for the classification. These templates, the classification images, describe the spatiotemporal statistical properties of the noise that are selectively associated with either percept. Our results demonstrate that the features of both the visual and the auditory noise, and interactions thereof, strongly biased the final percept towards streaming or bouncing. Computationally, participants’ performance is explained by a model involving a matching stage, in which the perceptual system cross-correlates the sensory signals with internal templates, and an integration stage, in which the matching estimates are linearly combined to determine the final percept. These results demonstrate that observers apply MLE-like integration principles to categorical stimulus properties (stream/bounce decisions), just as they do to continuous estimates (object size, position, etc.). Finally, the time-course of the classification images reveals that most of the decisional weight for disambiguation is assigned to information gathered before the physical crossing of the stimuli, highlighting the predictive nature of perceptual disambiguation.
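To make the reverse-correlation analysis and the two-stage (matching plus integration) model concrete, here is a minimal Python/NumPy sketch. The array shapes, variable names (vis_noise, aud_noise, responses), synthetic data, and weights are illustrative assumptions; this is a sketch of the general technique, not the authors' actual analysis pipeline.

# Minimal sketch: classification images via reverse correlation, followed by a
# matching stage (cross-correlation with the template) and a linear
# integration stage. All shapes, names, and data below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_space, n_time = 2000, 16, 20                       # assumed sampling grid
vis_noise = rng.standard_normal((n_trials, n_space, n_time))   # visual noise fields
aud_noise = rng.standard_normal((n_trials, n_time))            # auditory noise tracks
responses = rng.integers(0, 2, n_trials).astype(bool)          # True = "bounce" report

# --- Classification images (reverse correlation) ---------------------------
# Average the noise over trials grouped by the observer's report; the
# difference of the two means is the spatiotemporal classification image.
ci_visual = vis_noise[responses].mean(0) - vis_noise[~responses].mean(0)
ci_audio = aud_noise[responses].mean(0) - aud_noise[~responses].mean(0)

# --- Matching stage ---------------------------------------------------------
# Cross-correlate (inner product) each trial's noise with the internal
# template; here the empirical classification image stands in for the template.
match_v = np.tensordot(vis_noise, ci_visual, axes=([1, 2], [0, 1]))
match_a = aud_noise @ ci_audio

# --- Integration stage ------------------------------------------------------
# Linearly combine the unimodal match values (MLE-like weighting) and map the
# result through a logistic link to a "bounce" probability. The weights are
# placeholders; in practice they would be fitted to the observer's responses.
w_v, w_a, bias = 0.6, 0.4, 0.0
decision_var = w_v * match_v + w_a * match_a + bias
p_bounce = 1.0 / (1.0 + np.exp(-decision_var))
predicted_bounce = p_bounce > 0.5

A time-resolved variant of the same computation, estimating the classification image separately for each time bin, would expose how the decisional weight is distributed before and after the crossing point, in line with the predictive time-course described in the abstract.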

Journal

Multisensory Research (continuation of Seeing & Perceiving from 2013), Brill

Published: Jan 1, 2013

Keywords: Perceptual ambiguity; stream–bounce illusion; classification image; computational modeling
