Title: Neural signature of fictive learning signals in a sequential investment task
Authors: Lohrenz, Terry; McCabe, Kevin; Camerer, Colin F.; Montague, P. Read
Institution: Virginia Tech
Date issued: 2007-04-13
Date deposited: 2017-09-26
Handle: http://hdl.handle.net/10919/79420
Type: Article - Refereed
Journal: PNAS, Volume 104, Issue 22
DOI: https://doi.org/10.1073/pnas.0608842104
Language: en-US
Rights: In Copyright
Keywords: counterfactual signals; decision-making; neuroeconomics; reinforcement learning

Abstract: Reinforcement learning models now provide principled guides for a wide range of reward learning experiments in animals and humans. One key learning (error) signal in these models is experiential and reports ongoing temporal differences between expected and experienced reward. However, these same abstract learning models also accommodate the existence of another class of learning signal that takes the form of a fictive error encoding ongoing differences between experienced returns and returns that "could-have-been-experienced" if decisions had been different. These observations suggest the hypothesis that, for all real-world learning tasks, one should expect the presence of both experiential and fictive learning signals. Motivated by this possibility, we used a sequential investment game and fMRI to probe ongoing brain responses to both experiential and fictive learning signals generated throughout the game. Using a large cohort of subjects (n = 54), we report that fictive learning signals strongly predict changes in subjects' investment behavior and correlate with fMRI signals measured in dopaminoceptive structures known to be involved in valuation and choice.
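The distinction the abstract draws between experiential and fictive error signals can be sketched in code. The following is an illustrative toy model, not the authors' analysis: it assumes a simplified investment round in which the agent commits a fraction `b` of capital and the market moves by `r`, so the hindsight-optimal choice is to be fully invested when the market rises and fully out when it falls. The function names and parameters are hypothetical.

```python
def experiential_error(expected, experienced):
    """Temporal-difference-style signal: experienced minus expected reward."""
    return experienced - expected


def fictive_error(b, r):
    """Difference between the return that could have been obtained under the
    hindsight-optimal allocation and the return actually experienced, given a
    bet fraction b in [0, 1] and a market return r."""
    best = max(r, 0.0)   # all-in if the market rose, all-out if it fell
    actual = b * r       # return actually experienced this round
    return best - actual


# Example round: the agent invested 30% and the market rose 10%.
print(round(experiential_error(expected=0.02, experienced=0.03), 2))  # 0.01
print(round(fictive_error(b=0.3, r=0.10), 2))  # 0.07 (gains foregone)
```

In this toy formulation the fictive error is zero only when the agent's allocation matches the hindsight-optimal one; the paper's hypothesis is that a signal of this counterfactual kind, alongside the standard experiential TD error, drives changes in investment behavior.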