How Visual Word Decoding and Context-Driven Auditory Semantic Integration Contribute to Reading Comprehension: A Test of Additive vs. Multiplicative Models

Yu Li, Hongbing Xing, Linjun Zhang, Hua Shu, Yang Zhang

Research output: Contribution to journal › Article › peer-review


Abstract

Theories of reading comprehension emphasize decoding and listening comprehension as two essential components. The current study aimed to investigate how Chinese character decoding and context-driven auditory semantic integration contribute to reading comprehension in Chinese middle school students. Seventy-five middle school students were tested. Context-driven auditory semantic integration was assessed with speech-in-noise tests in which the fundamental frequency (F0) contours of spoken sentences were either kept natural or acoustically flattened, with the latter requiring greater reliance on contextual information. Statistical modeling with hierarchical regression was conducted to examine the contributions of Chinese character decoding and context-driven auditory semantic integration to reading comprehension. Performance in Chinese character decoding and auditory semantic integration scores with the flattened (but not natural) F0 sentences significantly predicted reading comprehension. Furthermore, the contributions of these two factors to reading comprehension were better fitted by an additive model than by a multiplicative model. These findings indicate that reading comprehension in middle schoolers is associated not only with character decoding but also with the listening ability to make better use of sentential context for semantic integration in a severely degraded speech-in-noise condition. The results add to our understanding of the multi-faceted nature of reading comprehension in children. Future research could further address the age-dependent development and maturation of reading skills by examining and controlling other important cognitive variables, and could apply neuroimaging techniques such as functional magnetic resonance imaging and electrophysiology to reveal the neural substrates and oscillatory patterns underlying the contribution of auditory semantic integration and the observed additive model to reading comprehension.
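
As an illustration of the additive vs. multiplicative comparison described in the abstract, the sketch below shows one way such a hierarchical regression and model comparison could be set up in Python with statsmodels. This is a minimal sketch, not the authors' analysis code: the variable names (decoding, integration, reading) and the simulated data are hypothetical placeholders, and the fit comparison via an R-squared-change F-test and AIC is one common approach rather than the specific procedure reported in the paper.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data; the study tested 75 students.
rng = np.random.default_rng(0)
n = 75
df = pd.DataFrame({
    "decoding": rng.normal(size=n),      # hypothetical z-scored character decoding score
    "integration": rng.normal(size=n),   # hypothetical z-scored flattened-F0 integration score
})
df["reading"] = 0.5 * df["decoding"] + 0.4 * df["integration"] + rng.normal(scale=0.5, size=n)

# Hierarchical step: does semantic integration add predictive power beyond decoding alone?
step1 = smf.ols("reading ~ decoding", data=df).fit()
additive = smf.ols("reading ~ decoding + integration", data=df).fit()
f_stat, p_value, df_diff = additive.compare_f_test(step1)  # tests the R-squared change

# Multiplicative model: reading predicted by the product of the two components.
multiplicative = smf.ols("reading ~ I(decoding * integration)", data=df).fit()

# Compare model fit; a lower AIC (or higher R-squared) favors that model.
print("R-squared change F-test:", f_stat, p_value)
print("Additive AIC:", additive.aic, "Multiplicative AIC:", multiplicative.aic)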
Original language: English (US)
Article number: 830
Journal: Brain Sciences
Volume: 11
Issue number: 7
DOIs
State: Published - Jun 23 2021

Bibliographical note

Funding Information:
Funding: This research was supported by grants from the Humanities and Social Sciences Fund of the Ministry of Education of China (20YJCZH079) and the United International College Research Foundation (R202011, R72021207) to Y.L., and the Social Science Fund of Beijing (17YYA004), the Discipline Team Support Program (JC201901), and the Science Foundation of Beijing Language and Culture University (Fundamental Research Funds for the Central Universities) (18PT09) to L.Z. Y.Z. was additionally funded by the Brain Imaging Grant from the College of Liberal Arts, University of Minnesota.

Publisher Copyright:
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.

Keywords

  • Chinese character decoding
  • Speech-in-noise recognition
  • Reading comprehension
  • Natural F0 contours
  • Flattened F0 contours

PubMed: MeSH publication types

  • Journal Article

