Visual perception as retrospective Bayesian decoding from high- to low-level features

Stephanie Ding, Christopher Cueva, Misha Tsodyks, and Ning Qian, Proc. Natl. Acad. Sci. USA, 2017, 114: E9115-E9124.
Download the full paper (PDF file) and the Supporting Information (PDF file).

Abstract

When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low-to-high-level hierarchy as encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive and have not been compared quantitatively with human perception. Moreover, observers often inspect different parts of a scene sequentially to form an overall percept, suggesting that perceptual decoding requires working memory; yet few models consider how working-memory properties may affect the decoding hierarchy. We probed the decoding hierarchy by comparing absolute judgments of single orientations with relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when the ordinal judgment was used to retrospectively decode memory representations of the absolute orientations, striking aspects of the absolute judgments, including the correlation and the forward/backward aftereffects between the two reported orientations in a trial, were explained. We propose that the brain prioritizes the decoding of higher-level features because they are more behaviorally relevant, and because they are more invariant and categorical and thus easier to specify and maintain in noisy working memory, and that the more reliable higher-level decoding constrains the less reliable lower-level decoding.
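To make the core idea concrete, the following is a minimal sketch of retrospective Bayesian decoding, not the paper's actual model or parameters: it assumes Gaussian memory noise, a flat prior over orientations, and Monte Carlo evaluation of the truncated posterior; the noise level, sample count, and function name are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper).
sigma_m = 5.0   # Gaussian memory noise (deg) on each stored orientation
n_mc = 100_000  # Monte Carlo samples for the conditional posterior

def retrospective_decode(m1, m2):
    """Decode absolute orientations conditioned on the ordinal judgment.

    m1, m2: noisy memory samples of the two orientations (deg).
    Returns (ordinal, est1, est2): the ordinal report and two absolute
    estimates that are self-consistent with that report.
    """
    # Higher-level decoding first: which orientation appears larger?
    ordinal = np.sign(m2 - m1)

    # Lower-level decoding second: with a flat prior, the posterior over
    # each orientation given its memory is Gaussian around the memory;
    # restrict it to the region consistent with the ordinal report.
    t1 = rng.normal(m1, sigma_m, n_mc)
    t2 = rng.normal(m2, sigma_m, n_mc)
    keep = np.sign(t2 - t1) == ordinal
    return ordinal, t1[keep].mean(), t2[keep].mean()

# Example: with nearby memories, the truncation shifts the two absolute
# estimates apart, so the reports within a trial become correlated.
ordinal, e1, e2 = retrospective_decode(m1=44.0, m2=46.0)
print(ordinal, round(e1, 2), round(e2, 2))
```

In this sketch, the truncation of the joint posterior is what couples the two absolute estimates: conditioning on the ordinal report both correlates them and shifts them in a repulsion-like way, qualitatively matching the within-trial correlation and aftereffects described in the abstract.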
