Brain-optimized deep neural network models of human visual areas learn non-hierarchical representations

Ghislain St-Yves, Emily J. Allen, Yihan Wu, Kendrick Kay, Thomas Naselaris

Research output: Contribution to journal › Article › peer-review

Abstract

Deep neural networks (DNNs) optimized for visual tasks learn representations that align layer depth with the hierarchy of visual areas in the primate brain. One interpretation of this finding is that hierarchical representations are necessary to accurately predict brain activity in the primate visual system. To test this interpretation, we optimized DNNs to directly predict brain activity measured with fMRI in human visual areas V1-V4. We trained a single-branch DNN to predict activity in all four visual areas jointly, and a multi-branch DNN to predict each visual area independently. Although it was possible for the multi-branch DNN to learn hierarchical representations, only the single-branch DNN did so. This result shows that hierarchical representations are not necessary to accurately predict human brain activity in V1-V4, and that DNNs that encode brain-like visual representations may differ widely in their architecture, ranging from strict serial hierarchies to multiple independent branches.
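The two architectures contrasted in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the layer sizes, voxel counts, and random weights are all hypothetical stand-ins for trained parameters. The single-branch model attaches a readout for each visual area to successive depths of one shared stack, while the multi-branch model gives each area its own independent stack, so nothing forces it to learn a shared hierarchy.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def make_layer(n_in, n_out):
    # Random weights stand in for trained parameters (illustration only).
    return rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_out))

N_PIX = 64                        # flattened stimulus size (hypothetical)
N_HID = 32                        # hidden width (hypothetical)
AREAS = ["V1", "V2", "V3", "V4"]
N_VOX = {a: 10 for a in AREAS}    # voxels per area (hypothetical)

# Single-branch model: one shared stack; each area reads out from a
# successively deeper layer, so depth is aligned with the visual hierarchy.
shared = [make_layer(N_PIX, N_HID)] + [make_layer(N_HID, N_HID) for _ in range(3)]
readout_single = {a: make_layer(N_HID, N_VOX[a]) for a in AREAS}

def predict_single_branch(x):
    preds, h = {}, x
    for area, W in zip(AREAS, shared):
        h = relu(h @ W)                          # go one layer deeper
        preds[area] = h @ readout_single[area]   # deeper layer -> higher area
    return preds

# Multi-branch model: four fully independent stacks, one per area, so no
# representation is shared and no hierarchy is imposed across areas.
branches = {a: [make_layer(N_PIX, N_HID), make_layer(N_HID, N_HID)]
            for a in AREAS}
readout_multi = {a: make_layer(N_HID, N_VOX[a]) for a in AREAS}

def predict_multi_branch(x):
    preds = {}
    for a in AREAS:
        h = x
        for W in branches[a]:
            h = relu(h @ W)
        preds[a] = h @ readout_multi[a]
    return preds

stim = rng.normal(size=(5, N_PIX))   # 5 hypothetical stimuli
single = predict_single_branch(stim)
multi = predict_multi_branch(stim)
```

In both cases each area's prediction is a (stimuli × voxels) matrix; the architectural difference is only in whether the feature stacks feeding those readouts are shared or independent.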

Original language: English (US)
Article number: 3329
Journal: Nature Communications
Volume: 14
Issue number: 1
DOIs
State: Published - Dec 2023

Bibliographical note

Funding Information:
This work was supported by NSF CRCNS grants IIS-1822683 (K.K.) and IIS-1822929 (T.N.).

Publisher Copyright:
© 2023, This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply.

Center for Magnetic Resonance Research (CMRR) tags

  • BFC

PubMed: MeSH publication types

  • Journal Article
  • Research Support, U.S. Gov't, Non-P.H.S.
