Title: Loss Aversion Correlates With the Propensity to Deploy Model-Based Control
Authors: Solway, Alec; Lohrenz, Terry; Montague, P. Read
Journal: Frontiers in Neuroscience, Volume 13
Type: Article - Refereed
Date issued: 2019-09-06
Date accessioned/available: 2019-09-30
Handle: http://hdl.handle.net/10919/94145
DOI: https://doi.org/10.3389/fnins.2019.00915

Abstract: Reward-based decision making is thought to be driven by at least two different types of decision systems: a simple stimulus–response cache-based system which embodies the common-sense notion of "habit," for which model-free reinforcement learning serves as a computational substrate, and a more deliberate, prospective, model-based planning system. Previous work has shown that loss aversion, a well-studied measure of how much more on average individuals weigh losses relative to gains during decision making, is reduced when participants take all possible decisions and outcomes into account, including future ones, relative to when they myopically focus on the current decision. Model-based control offers a putative mechanism for implementing such foresight. Using a well-powered data set (N = 117) in which participants completed two different tasks designed to measure each of the two quantities of interest, and four models of choice data for these tasks, we found consistent evidence of a relationship between loss aversion and model-based control, but in the direction opposite to that expected based on previous work: loss aversion had a positive relationship with model-based control. We did not find evidence for a relationship between either decision system and risk aversion, a related aspect of subjective utility.

Keywords: reinforcement learning; model-based; planning; neuroeconomics; subjective utility; loss aversion
Language: en-US
License: Creative Commons Attribution 3.0 United States
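For readers unfamiliar with the construct, loss aversion as measured in studies like this one is commonly formalized via a prospect-theory-style value function (Kahneman and Tversky); the sketch below uses standard notation and is illustrative, not drawn from this article's specific models:

```latex
% Prospect-theory value function with loss-aversion coefficient \lambda
% (\lambda > 1 means losses loom larger than equivalent gains).
% A simplified linear form often used in gamble-acceptance tasks:
v(x) =
\begin{cases}
x, & x \geq 0 \\
\lambda x, & x < 0
\end{cases}
% e.g., with \lambda = 2, a 50/50 gamble to win $10 or lose $6
% has subjective value 0.5(10) + 0.5(2 \cdot -6) = -1 < 0,
% so it is rejected despite a positive expected value of $2.
```

Empirical estimates of λ typically fall between 1.5 and 2.5; the article's finding is that higher λ correlates positively with the degree of model-based control.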