Two sides of the same coin: Beneficial and detrimental consequences of range adaptation in human reinforcement learning

Sophie Bavard, Aldo Rustichini, Stefano Palminteri

Research output: Contribution to journal › Article › peer-review

21 Scopus citations

Abstract

Evidence suggests that economic values are rescaled as a function of the range of the available options. Although locally adaptive, range adaptation has been shown to lead to suboptimal choices, particularly notable in reinforcement learning (RL) situations when options are extrapolated from their original context to a new one. Range adaptation can be seen as the result of an adaptive coding process aiming at increasing the signal-to-noise ratio. However, this hypothesis leads to a counterintuitive prediction: Decreasing task difficulty should increase range adaptation and, consequently, extrapolation errors. Here, we tested the paradoxical relation between range adaptation and performance in a large sample of participants performing variants of an RL task, where we manipulated task difficulty. Results confirmed that range adaptation induces systematic extrapolation errors and is stronger when decreasing task difficulty. Last, we propose a range-adapting model and show that it is able to parsimoniously capture all the behavioral results.
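The range-adapting model described in the abstract can be illustrated with a minimal sketch: a delta-rule Q-learner in which each outcome is rescaled by running estimates of the context's reward range before learning. All names and the exact update rules below are illustrative assumptions, not the paper's actual model specification.

```python
def range_adapted_q_learning(trials, alpha=0.3, alpha_range=0.3):
    """Sketch of range adaptation in reinforcement learning (assumed form).

    `trials` is a list of (reward_option0, reward_option1) tuples for a
    two-option context. Outcomes are normalized to a running estimate of
    the context's reward range, so learned values live on a relative scale.
    """
    q = [0.5, 0.5]           # subjective option values on the normalized scale
    r_min, r_max = 0.0, 1.0  # running estimates of the context's reward range
    choices = []
    for r0, r1 in trials:
        choice = 0 if q[0] >= q[1] else 1  # greedy choice, for simplicity
        reward = (r0, r1)[choice]
        # Update range estimates toward the observed extremes (delta rule).
        r_max += alpha_range * (max(reward, r_max) - r_max)
        r_min += alpha_range * (min(reward, r_min) - r_min)
        # Rescale the outcome to the estimated range before learning.
        span = max(r_max - r_min, 1e-6)
        normalized = (reward - r_min) / span
        q[choice] += alpha * (normalized - q[choice])
        choices.append(choice)
    return q, choices
```

Because the learned values are relative to the context's range, an option's value no longer reflects its absolute reward, which is how this kind of model produces the extrapolation errors the abstract describes when options are transferred to a new context.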

Original language: English (US)
Article number: eabe0340
Journal: Science Advances
Volume: 7
Issue number: 14
DOIs
State: Published - Apr 2021

Bibliographical note

Publisher Copyright:
Copyright © 2021 The Authors, some rights reserved.

