Reward-Mediated, Model-Free Reinforcement-Learning Mechanisms in Pavlovian and Instrumental Tasks Are Related

Neema Moin Afshar, François Cinotti, David Martin, Mehdi Khamassi, Donna J. Calu, Jane R. Taylor, Stephanie M. Groman

Research output: Contribution to journal › Article › peer-review


Abstract

Model-free and model-based computations are argued to distinctly update the action values that guide decision-making. It is not known, however, whether the model-free and model-based reinforcement-learning mechanisms recruited in operant-based instrumental tasks parallel those engaged by Pavlovian-based behavioral procedures. Recent computational work has suggested that individual differences in the attribution of incentive salience to reward-predictive cues, that is, sign- and goal-tracking behaviors, are also governed by variation in the model-free and model-based value representations that guide behavior. It is also unclear whether the systems characterized computationally with model-free and model-based algorithms are conserved across tasks within individual animals. In the current study, we used a within-subject design to assess sign-tracking and goal-tracking behaviors with a Pavlovian conditioned-approach task and then characterized behavior in an instrumental multistage decision-making (MSDM) task in male rats. We hypothesized that Pavlovian and instrumental learning are driven by common reinforcement-learning mechanisms. Our data confirm that sign-tracking behavior was associated with greater reward-mediated, model-free reinforcement learning and that it was also linked to model-free reinforcement learning in the MSDM task. Computational analyses revealed that Pavlovian model-free updating was correlated with model-free reinforcement learning in the MSDM task. These data provide key insights into the computational mechanisms mediating associative learning, with potential implications for both normal and abnormal states.
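The model-free updating referred to in the abstract is conventionally formalized as a delta-rule (temporal-difference) value update driven by a reward prediction error. The sketch below is purely illustrative of that general form, not the authors' fitted model; the learning rate and reward sequence are hypothetical.

```python
# Minimal sketch of a model-free (delta-rule) value update of the kind
# commonly fit to Pavlovian and instrumental choice data. Illustrative
# only; alpha and the reward sequence are hypothetical, not parameters
# estimated in the study.

def model_free_update(value, reward, alpha):
    """Move a cached value toward the received reward.

    value  : current cached value of the cue or action
    reward : outcome on this trial (e.g., 1 = rewarded, 0 = omitted)
    alpha  : learning rate in [0, 1]
    """
    prediction_error = reward - value  # reward prediction error
    return value + alpha * prediction_error

# Example: cached value of a reward-predictive cue over four trials
v = 0.0
for r in [1, 1, 0, 1]:
    v = model_free_update(v, r, alpha=0.5)
print(v)  # prints 0.6875
```

Because the update depends only on the cached value and the obtained reward, with no internal model of task structure, it captures the "model-free" component that the study compares across the Pavlovian and MSDM tasks.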

Original language: English (US)
Pages (from-to): 458-471
Number of pages: 14
Journal: Journal of Neuroscience
Volume: 43
Issue number: 3
DOI: 10.1523/JNEUROSCI.1113-22.2022
State: Published - Jan 18 2023

Bibliographical note

Funding Information:
This work was supported by National Institutes of Health-National Institute on Drug Abuse Grants DA041480 (J.R.T.), DA043443 (J.R.T.), DA051598 (S.M.G.), and DA043533 (D.J.C.); McKnight Foundation Memory and Cognitive Disorders Award (D.J.C.); and the State of Connecticut Department of Mental Health and Addiction Services through its support of the Ribicoff Laboratories. We thank Matthew Roesch for leading discussions that made this collaborative work a possibility.

The authors declare no competing financial interests. Correspondence should be addressed to Stephanie M. Groman at sgroman@umn.edu.

Publisher Copyright:
Copyright © 2023 the authors.

Keywords

  • computational psychiatry
  • decision-making
  • incentive salience
  • model-based learning
  • model-free learning

PubMed: MeSH publication types

  • Journal Article
  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

