Abstract
In some contexts, human learning greatly exceeds what the sparsity of the available data seems to allow, while in others it can fall short despite vast amounts of data. This apparent contradiction has led to separate explanations: humans are equipped either with background knowledge that enhances their learning or with suboptimal mechanisms that hinder it. Here, we reconcile these findings by recognising that learners can be uncertain about two structural properties of their environment: (1) whether there is a single generative model or multiple models switching across time; and (2) how stochastic the generative models are. We show that optimal learning under such structural uncertainty entails learning trade-offs: for example, a prior for determinism fosters fast initial learning but renders learners susceptible to low asymptotic performance when faced with high model stochasticity. Our results reveal the existence of optimal paths to not-learning and reconcile, within a coherent framework, phenomena previously considered disparate.
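To make the trade-off described above concrete, here is a minimal sketch of a two-state Bayesian filter. It is not the paper's model: the environment (a single, never-switching generative model whose outcomes point to the correct option with probability `true_q`) and the learner's parameters (`assumed_q` as a stand-in for a prior for determinism, `assumed_h` as a stand-in for a belief that the generative model can switch) are illustrative assumptions. The qualitative point it reproduces is that a learner assuming near-deterministic outcomes commits quickly when the world really is near-deterministic, but in a highly stochastic world it keeps reinterpreting noise as model switches and its asymptotic accuracy suffers.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_learner(true_q, assumed_q, assumed_h, n_trials=5000):
    """Two-state Bayesian filter (illustrative sketch, not the paper's model).
    A hidden state says which of two options currently pays off; outcomes
    reveal it with probability true_q. The learner filters with its own
    structural assumptions: assumed_q (how noise-free it thinks outcomes are,
    i.e. a prior for determinism) and assumed_h (how often it thinks the
    generative model switches, i.e. assumed volatility)."""
    belief = np.array([0.5, 0.5])                    # posterior over the two hidden states
    T = np.array([[1 - assumed_h, assumed_h],
                  [assumed_h, 1 - assumed_h]])       # assumed switching dynamics
    true_state = 0                                   # the environment never actually switches
    belief_on_truth = np.zeros(n_trials)
    correct_choice = np.zeros(n_trials, dtype=bool)
    for t in range(n_trials):
        correct_choice[t] = belief.argmax() == true_state
        belief_on_truth[t] = belief[true_state]
        # The outcome points to the true state with probability true_q
        observed = true_state if rng.random() < true_q else 1 - true_state
        lik = np.where(np.arange(2) == observed, assumed_q, 1 - assumed_q)
        belief = lik * (T @ belief)                  # predict (hazard), then update
        belief /= belief.sum()
    return belief_on_truth[:20].mean(), correct_choice[1000:].mean()

# A determinism-assuming learner commits quickly when the world really is
# near-deterministic, but in a highly stochastic world it keeps reading noise
# as model switches and its asymptotic accuracy drops.
for true_q in (0.95, 0.65):                          # near-deterministic vs stochastic world
    for assumed_q in (0.95, 0.65):                   # strong vs weak prior for determinism
        early, late = run_learner(true_q, assumed_q, assumed_h=0.05)
        print(f"true q={true_q}, assumed q={assumed_q}: "
              f"early belief in truth={early:.2f}, asymptotic accuracy={late:.2f}")
```

Running the sketch prints early confidence in the true state and asymptotic choice accuracy for the four combinations of true and assumed determinism, which is enough to see the fast-initial-learning versus low-asymptote pattern.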
Original language | English (US)
---|---
Pages | 3117-3124
Number of pages | 8
State | Published - 2022
Externally published | Yes
Event | 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022 - Toronto, Canada
Duration | Jul 27 2022 → Jul 30 2022
Conference
Conference | 44th Annual Meeting of the Cognitive Science Society: Cognitive Diversity, CogSci 2022 |
---|---
Country/Territory | Canada |
City | Toronto |
Period | 7/27/22 → 7/30/22 |
Bibliographical note
Funding Information: We want to thank Aaron Cochrane for useful comments on the manuscript, and Amanda Yung for help with programming the tasks. This work was supported and funded by: a Swiss National Science Foundation grant (100014_15906/1) to DB; the Luxembourg National Research Fund (ATTRACT/2016/ID/11242114/DIGILEARN) to PCL; the INTER Mobility/2017-2/ID/11765868/ULALA grant to PCL and PS; the Office of Naval Research grant N00014-17-1-2049 to CSG; and the Office of Naval Research MURI grant N00014-07-1-0937 to DB.
Publisher Copyright:
© 2022 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY)
Keywords
- background knowledge
- optimal learning
- prior for determinism
- structural uncertainty
- volatility