Structure learning in human sequential decision-making

Daniel Acuña, Paul R Schrater

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

8 Scopus citations


We use graphical models and structure learning to explore how people learn policies in sequential decision-making tasks. Studies of sequential decision-making in humans frequently find suboptimal performance relative to an ideal actor that knows the graph model generating reward in the environment. We argue that the learning problem humans face also involves learning the graph structure for reward generation in the environment. We formulate the structure learning problem using mixtures of reward models, and solve the optimal action selection problem using Bayesian reinforcement learning. We show that structure learning in one- and two-armed bandit problems produces many of the qualitative behaviors deemed suboptimal in previous studies. Our argument is supported by the results of experiments demonstrating that humans can rapidly learn and exploit new reward structure.
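To make the "mixture of reward models" idea concrete, here is a minimal illustrative sketch (not the authors' code, and the specific hypotheses and priors are assumptions for the example): a two-armed Bernoulli bandit learner that maintains a posterior over two candidate reward structures, H1 (arms have independent reward probabilities) and H2 (arms share one reward probability), using Beta(1, 1) priors and the marginal likelihood of the observed success/failure counts to weight the structures.

```python
import math

def log_ml(s, f):
    # Marginal likelihood of s successes, f failures under a Beta(1,1) prior:
    # ∫ p^s (1-p)^f dp = B(s+1, f+1) = s! f! / (s+f+1)!
    return math.lgamma(s + 1) + math.lgamma(f + 1) - math.lgamma(s + f + 2)

def structure_posterior(counts):
    """counts = [(s1, f1), (s2, f2)], per-arm success/failure counts.
    Returns (P(H1 | data), P(H2 | data)) with a uniform prior over structures."""
    (s1, f1), (s2, f2) = counts
    log_h1 = log_ml(s1, f1) + log_ml(s2, f2)   # H1: independent arms
    log_h2 = log_ml(s1 + s2, f1 + f2)          # H2: shared reward rate
    m = max(log_h1, log_h2)                    # subtract max for stability
    w1, w2 = math.exp(log_h1 - m), math.exp(log_h2 - m)
    return w1 / (w1 + w2), w2 / (w1 + w2)

def predicted_means(counts):
    """Model-averaged predicted reward probability for each arm."""
    p_h1, p_h2 = structure_posterior(counts)
    (s1, f1), (s2, f2) = counts
    mean = lambda s, f: (1 + s) / (2 + s + f)  # Beta posterior mean
    pooled = mean(s1 + s2, f1 + f2)
    return [p_h1 * mean(s1, f1) + p_h2 * pooled,
            p_h1 * mean(s2, f2) + p_h2 * pooled]

# Demo: very different arm outcomes shift belief toward the
# independent-arms structure H1.
print(structure_posterior([(20, 0), (0, 20)]))
print(predicted_means([(20, 0), (0, 20)]))
```

An agent acting greedily (or via Thompson sampling) on these model-averaged means exploits whichever structure the data supports; the full Bayesian reinforcement-learning treatment in the paper additionally plans over the value of information from future observations.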

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 21 - Proceedings of the 2008 Conference
Number of pages: 8
State: Published - Dec 1 2009
Event: 22nd Annual Conference on Neural Information Processing Systems, NIPS 2008 - Vancouver, BC, Canada
Duration: Dec 8 2008 - Dec 11 2008


