The Drift-Diffusion Model (DDM) is the prevalent computational model of the speed-accuracy trade-off in decision making. The DDM provides an explanation of behavior by optimally balancing reaction times and error rates. However, when applied to value-based decision making, the DDM makes the stark prediction that reaction times depend only on the relative utility difference between the options and not on absolute utility magnitudes. This prediction runs counter to evidence that reaction times decrease with higher utility magnitude. Here, we ask if and how it could be optimal for reaction times to show this observed pattern. We study an algorithmic framework that balances the cost of delaying rewards against the utility of obtained rewards. We find that the functional form of the cost of delay plays a key role, with the empirically observed pattern becoming optimal under multiplicative discounting. We add to the empirical literature by testing whether utility magnitude affects reaction times using a novel methodology that does not rely on functional form assumptions for the subjects' utilities. Our results advance the understanding of how and why reaction times are sensitive to the magnitude of rewards.
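The "stark prediction" above can be illustrated with a minimal simulation (not from the paper; a hedged sketch). In the standard DDM, the drift rate is proportional to the utility *difference* between the options, so two option pairs with the same difference but different magnitudes yield the same predicted reaction times:

```python
import random

def simulate_ddm_rt(drift, threshold=1.0, dt=0.001, noise=1.0, seed=0):
    """Mean first-passage time of a drift-diffusion process.

    Evidence x accumulates drift*dt plus Gaussian noise each step
    until it crosses +threshold or -threshold (a decision).
    """
    rng = random.Random(seed)
    n_trials = 2000
    total_t = 0.0
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            t += dt
        total_t += t
    return total_t / n_trials

# Two option pairs with the same utility difference (u1 - u2 = 1)
# but very different magnitudes. Because drift depends only on the
# difference, the DDM predicts identical mean reaction times, contrary
# to the evidence (cited in the abstract) that RTs fall with magnitude.
rt_low_magnitude = simulate_ddm_rt(drift=1.0, seed=1)   # utilities (1, 0)
rt_high_magnitude = simulate_ddm_rt(drift=1.0, seed=2)  # utilities (5, 4)
```

Here `simulate_ddm_rt` and the drift-equals-difference assumption are illustrative choices, not the authors' model specification; the point is only that absolute magnitudes cancel out of the standard DDM's RT prediction.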
Original language: English (US)
State: Published - Dec 27 2019
PubMed (MeSH) publication type: Journal Article