A supramodal and conceptual representation of subsecond time revealed with perceptual learning of temporal interval discrimination

Ying Zi Xiong, Shu Chen Guan, Cong Yu

Research output: Contribution to journal › Article › peer-review

Abstract

Subsecond time perception has been frequently attributed to modality-specific timing mechanisms, which would predict no cross-modal transfer of temporal perceptual learning. In fact, perceptual learning of temporal interval discrimination (TID) reportedly shows either no cross-modal transfer, or asymmetric transfer from audition to vision, but not vice versa. However, here we demonstrate complete cross-modal transfer of auditory and visual TID learning using a double training paradigm. Specifically, visual TID learning transfers to and optimizes auditory TID when the participants also receive exposure to the auditory temporal interval by practicing a functionally orthogonal near-threshold tone frequency discrimination task at the same trained interval. Auditory TID learning also transfers to and optimizes visual TID with additional practice of an orthogonal near-threshold visual contrast discrimination task at the same trained interval. Practicing these functionally orthogonal tasks per se has no impact on TID thresholds. We interpret the transfer results as indications of a supramodal representation of subsecond time. Moreover, because TID learning shows complete transfer between modalities with vastly different temporal precisions, the subsecond time representation must be conceptual. Double training may refine this supramodal and conceptual subsecond time representation and connect it to a new sense to improve time perception.

Original language: English (US)
Article number: 10668
Journal: Scientific Reports
Volume: 12
Issue number: 1
DOIs
State: Published - Dec 2022

Bibliographical note

Funding Information:
This research was supported by a Ministry of Science and Technology, China grant 2022ZD0204601, a Natural Science Foundation of China grant 31230030, and funds from the Center for Life Sciences, Peking University.

Publisher Copyright:
© 2022, The Author(s).

PubMed: MeSH publication types

  • Journal Article
  • Research Support, Non-U.S. Gov't

