Abstract
Just as the visual system parses complex scenes into identifiable objects, the auditory system must organize sound elements scattered in frequency and time into coherent "streams." Current neurocomputational theories of auditory streaming rely on tonotopic organization of the auditory system to explain the observation that sequential, spectrally distant sound elements tend to form separate perceptual streams. Here, we show that spectral components that are well separated in frequency are no longer heard as separate streams if presented synchronously rather than consecutively. In contrast, responses from neurons in primary auditory cortex of ferrets show that both synchronous and asynchronous tone sequences produce comparably segregated responses along the tonotopic axis. The results argue against tonotopic separation per se as a neural correlate of stream segregation. Instead, we propose a computational model of stream segregation that can account for the data by using temporal coherence as the primary criterion for predicting stream formation.
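To make the temporal-coherence idea concrete, the sketch below (not the authors' published model) groups frequency channels into streams according to how correlated their amplitude envelopes are over time: synchronous tones yield high coherence and fuse into one stream, while alternating tones yield low or negative coherence and segregate. The tone timing, envelope sampling rate, and grouping threshold are illustrative assumptions.

```python
# Minimal sketch of temporal-coherence-based stream segregation.
# Assumptions (not from the paper): rectangular envelopes, 1 kHz envelope
# sampling, Pearson correlation as the coherence measure, threshold 0.5.
import numpy as np

def tone_envelope(onsets, duration, total, fs):
    """Rectangular on/off amplitude envelope for one frequency channel."""
    env = np.zeros(int(total * fs))
    for t in onsets:
        env[int(t * fs):int((t + duration) * fs)] = 1.0
    return env

def coherence(env_a, env_b):
    """Temporal coherence as the correlation of two channel envelopes."""
    return np.corrcoef(env_a, env_b)[0, 1]

fs = 1000.0                 # envelope sampling rate (Hz), assumed
tone_dur, period, total = 0.1, 0.2, 2.0
a_onsets = np.arange(0.0, total, period)

# Two tonotopically distant channels, "A" and "B".
a_env = tone_envelope(a_onsets, tone_dur, total, fs)
b_sync = tone_envelope(a_onsets, tone_dur, total, fs)                # B synchronous with A
b_alt = tone_envelope(a_onsets + period / 2, tone_dur, total, fs)    # B alternating with A

for label, b_env in [("synchronous", b_sync), ("alternating", b_alt)]:
    c = coherence(a_env, b_env)
    streams = 1 if c > 0.5 else 2   # illustrative grouping rule
    print(f"{label}: coherence = {c:+.2f} -> predicted streams = {streams}")
```

Running this prints a coherence near +1 (one stream) for the synchronous case and near -1 (two streams) for the alternating case, mirroring the perceptual result described in the abstract, whereas a purely tonotopic criterion would predict two streams in both cases.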
Original language | English (US) |
---|---|
Pages (from-to) | 317-329 |
Number of pages | 13 |
Journal | Neuron |
Volume | 61 |
Issue number | 2 |
DOIs | |
State | Published - Jan 29 2009 |
Bibliographical note
Funding Information: We thank Cynthia Hunter for assistance in collecting the psychophysical data, and Pingbo Yin and Stephen David for assistance with physiological recordings. We also thank three anonymous reviewers for their valuable comments. This work was supported by grants from the National Institute on Deafness and Other Communication Disorders (R01 DC 07657) and the National Institute on Aging, through the Collaborative Research in Computational Neuroscience program (R01 AG 02757301).