evidence accumulation; preferential choice; fMRI; reward; cognitive modeling; EEG; reinforcement learning
Fontanesi Laura, Palminteri Stefano, Lebreton Maël (2019), Decomposing the effects of context valence and feedback information on speed and accuracy during reinforcement learning: a meta-analytical approach using diffusion decision modeling, in Cognitive, Affective, & Behavioral Neuroscience, 19(3), 490-502.
Fontanesi Laura, Gluth Sebastian, Spektor Mikhail S., Rieskamp Jörg (2019), A reinforcement learning diffusion decision model for value-based decisions, in Psychonomic Bulletin & Review.
Busemeyer Jerome R., Gluth Sebastian, Rieskamp Jörg, Turner Brandon M. (2019), Cognitive and Neural Bases of Multi-Attribute, Multi-Alternative, Value-based Decisions, in Trends in Cognitive Sciences, 23(3), 251-263.
Spektor Mikhail S., Gluth Sebastian, Fontanesi Laura, Rieskamp Jörg (2019), How similarity between choice options affects decisions from experience: The accentuation-of-differences model, in Psychological Review, 126(1), 52-88.
Gluth Sebastian, Spektor Mikhail S., Rieskamp Jörg (2018), Value-based attentional capture affects multi-alternative decision making, in eLife, 7, e39659.
Gluth Sebastian, Rieskamp Jörg (2017), Variability in behavior that cognitive models do not explain can be linked to neuroimaging data, in Journal of Mathematical Psychology, 76, 104-116.
Gluth Sebastian, Hotaling Jared M., Rieskamp Jörg (2017), The Attraction Effect Modulates Reward Prediction Errors and Intertemporal Choices, in The Journal of Neuroscience, 37(2), 371-382.
Many decisions can benefit from learning. For instance, physicians improve their diagnoses on the basis of previous experience. Learning helps to avoid mistakes and is therefore central to personal and societal development. In the last two decades, cognitive neuroscience has greatly advanced our understanding of the neural mechanisms underlying human learning and decision making (e.g., Glimcher et al., 2009). However, the two questions of how decisions emerge and how decisions are improved by learning have so far been addressed mostly independently of each other. The proposed research project aims to connect neuropsychological models of learning and decision making in order to improve our understanding of both phenomena and to integrate them into a comprehensive theory of decision making.

Among psychological and economic theories of decision making, sequential sampling models (SSMs) offer a precise description of the cognitive process underlying decisions and are strongly supported by empirical evidence (Busemeyer and Townsend, 1993; Fehr and Rangel, 2011; Ratcliff and Rouder, 1998). The principle of SSMs is that evidence about the available choice options is accumulated over time until an internal threshold of required evidence is met and a decision is made. A particular strength of SSMs is their ability to predict conjointly how and how quickly people decide. Neural representations of evidence accumulation have been found in cortical areas including the medial prefrontal cortex (Gluth, Rieskamp, & Büchel, 2012; Hare et al., 2011) and the intraparietal sulcus (Basten et al., 2010; Gold and Shadlen, 2007).

The learning-based improvement of decisions, on the other hand, is captured by reinforcement learning (RL) models (Sutton and Barto, 1998). At its core, RL uses the difference between expected and actual outcomes (i.e., the prediction error) to adapt future predictions and choices.
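The accumulation-to-threshold principle of SSMs can be illustrated with a minimal simulation of a diffusion decision model; parameter values here are illustrative choices, not estimates from the cited studies:

```python
import random

def simulate_ddm(drift, threshold, noise=1.0, dt=0.001, max_t=5.0):
    """One trial of a simple diffusion decision model (DDM).

    Evidence x starts at zero and accumulates in small time steps with
    mean rate `drift` plus Gaussian noise, until it crosses +threshold
    (upper response, coded 1) or -threshold (lower response, coded 0).
    Returns (choice, response_time), so the model predicts conjointly
    how and how quickly a decision is made.
    """
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5  # noise scaled to the step size
    while abs(x) < threshold and t < max_t:
        x += drift * dt + random.gauss(0.0, sd)
        t += dt
    return (1 if x >= threshold else 0), t

random.seed(1)
trials = [simulate_ddm(drift=1.5, threshold=1.0) for _ in range(500)]
accuracy = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```

A positive drift rate (evidence favoring the upper response) yields mostly upper-boundary choices, and raising the threshold trades speed for accuracy, which is the signature prediction of this model class.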
On the neural level, the prediction error has been linked to phasic firing of midbrain dopamine neurons (Schultz et al., 1997; Tobler et al., 2005). As outlined above, the existing literature targets only one of the two phenomena of learning and decision making at a time. SSMs are tested under stationary conditions to obtain stable estimates of model parameters, such as the height of the decision threshold. To derive choice probabilities, RL models use an exponential (softmax) choice rule that is oblivious to the underlying decision mechanism and unable to predict response times. The goal of this proposal is therefore to extend the scope of both approaches: RL will be used to model the change of SSM parameters in dynamic decision environments, and SSMs will be applied to RL scenarios to predict both how and when decisions are made. Neuroimaging techniques, including electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), will be used to corroborate the cognitive modeling results and to understand how adaptive decision processes are implemented in the human brain.
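The proposed combination of the two approaches can be sketched as a hybrid in which learned values set the drift rate of a diffusion process and the obtained reward updates the chosen option's value via the prediction error (in the spirit of the reinforcement learning diffusion decision model of Fontanesi et al., 2019). All parameter values and the two-option reward structure below are illustrative assumptions, not estimates from the project:

```python
import random

def rl_ddm_trial(q, alpha, drift_scale, threshold, dt=0.001, noise=1.0):
    """One trial of a minimal RL-DDM hybrid.

    The drift rate is proportional to the learned value difference
    q[0] - q[1] (drift_scale is an illustrative mapping parameter).
    After the diffusion process terminates, the chosen option's value
    is updated with the prediction error, so the drift rate -- and
    hence speed and accuracy -- changes across trials as learning
    proceeds.
    """
    drift = drift_scale * (q[0] - q[1])  # value difference drives evidence
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5
    while abs(x) < threshold and t < 5.0:
        x += drift * dt + random.gauss(0.0, sd)
        t += dt
    choice = 0 if x >= threshold else 1
    # assumed task: option 0 yields a higher mean reward than option 1
    reward = random.gauss(1.0 if choice == 0 else 0.0, 0.5)
    q[choice] += alpha * (reward - q[choice])  # prediction-error update
    return choice, t

random.seed(2)
q = [0.0, 0.0]
trials = [rl_ddm_trial(q, alpha=0.1, drift_scale=3.0, threshold=1.0)
          for _ in range(300)]
```

Early trials have a near-zero drift rate (slow, random choices); as the value estimates separate, the drift rate grows, producing faster and more accurate choices of the better option. This is the kind of joint choice-and-response-time prediction that the standard softmax choice rule cannot deliver.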