Humans can see and name thousands of distinct object and action categories, so it is unlikely that each category is represented in a distinct brain area. A more efficient scheme would be to represent categories as locations in a continuous semantic space mapped smoothly across the cortical surface. To search for such a space, we used fMRI to measure human brain activity evoked by natural movies. We then used voxelwise models to examine the cortical representation of 1,705 object and action categories. The first few dimensions of the underlying semantic space were recovered from the fit models by principal components analysis. Projection of the recovered semantic space onto cortical flat maps shows that semantic selectivity is organized into smooth gradients that cover much of visual and nonvisual cortex. Furthermore, both the recovered semantic space and the cortical organization of the space are shared across different individuals.
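The analysis pipeline sketched above (fit a regression model per voxel, then recover the leading semantic dimensions by principal components analysis on the fitted weights) can be illustrated with synthetic data. This is a minimal sketch, not the authors' code: it assumes ridge regression for the voxelwise models and SVD-based PCA, and all array sizes and variable names are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the study): 500 time points,
# 50 category features, 200 voxels.
n_time, n_cats, n_voxels = 500, 50, 200
X = rng.standard_normal((n_time, n_cats))         # stimulus category features over time
true_w = rng.standard_normal((n_cats, n_voxels))  # hypothetical true voxel tuning
Y = X @ true_w + 0.5 * rng.standard_normal((n_time, n_voxels))  # voxel responses

# Voxelwise regularized (ridge) regression: one weight vector per voxel,
# solved for all voxels at once.
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_cats), X.T @ Y)  # (n_cats, n_voxels)

# PCA across voxels' weight vectors: the leading left singular vectors
# are the first few dimensions of the shared semantic space.
W_centered = W - W.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(W_centered, full_matrices=False)
semantic_dims = U[:, :4]  # first few principal axes in category space
print(semantic_dims.shape)  # (50, 4)
```

Each voxel's position along these principal axes can then be projected onto a cortical flat map to visualize the smooth semantic gradients the abstract describes.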
Bibliographical note

Funding Information:
The work was supported by grants from the National Eye Institute (EY019684) and from the Center for Science of Information (CSoI), an NSF Science and Technology Center, under grant agreement CCF-0939370. A.G.H. was also supported by the William Orr Dingwall Neurolinguistics Fellowship.

We thank Natalia Bilenko and Tolga Çukur for helping with fMRI data collection, Neil Thompson for assistance with the WordNet analysis, and Tom Griffiths and Sonia Bishop for discussions regarding the manuscript.

Author contributions: A.G.H., S.N., and J.L.G. conceived and designed the experiment. A.G.H., S.N., and A.T.V. collected the fMRI data. A.T.V. and Tolga Çukur customized and optimized the fMRI pulse sequence. A.T.V. did brain flattening and localizer analysis. A.G.H. tagged the movies. S.N. and A.G.H. analyzed the data. A.G.H. and J.L.G. wrote the paper.