Project description
Training and experience can lead to long-lasting improvements in our ability to make decisions based on ambiguous sensory or probabilistic information (e.g., learning to diagnose a noisy X-ray image or to bet on the stock market). These two processes are referred to as perceptual and probabilistic/reward learning, respectively. Despite considerable efforts to uncover the neural systems involved, perceptual and reward learning have largely been studied in separate lines of research that invoke divergent learning mechanisms. The primary aim of this proposal is to develop a unified framework that integrates these lines of research and to establish the extent to which they share a common computational and neurobiological basis. Specifically, we will test the proposition that both the perceptual and reward systems can be understood within a common framework of reward maximization, whereby a domain-general, reinforcement-guided learning mechanism, based on separate prediction-error representations, facilitates future actions and adaptive behavior.

To offer a comprehensive spatiotemporal characterization of the relevant networks and their computational principles, we will adopt a state-of-the-art multimodal neuroimaging approach that fuses simultaneously acquired EEG and fMRI data via machine-learning-inspired multivariate single-trial analysis techniques and computational modelling. The project's ultimate goal is to achieve a level of neuronal and mechanistic understanding that extends beyond what could be inferred from either modality in isolation. We will do so by exploiting endogenous trial-by-trial electrophysiological variability to build parametric fMRI predictors that offer explanatory power beyond what stimulus- or behaviorally derived predictors alone can achieve, allowing us to go beyond what has previously been reported in the literature.
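To make the kind of reinforcement-guided mechanism we have in mind concrete, the following is a minimal sketch of a delta-rule (Rescorla-Wagner-style) learner, in which trial-by-trial prediction errors drive updates of an expected value. The learning rate, initial value, and simulated 70%-reward schedule are illustrative assumptions for exposition, not parameters of the proposed experiments.

```python
import numpy as np

def delta_rule(outcomes, alpha=0.1, v0=0.5):
    # Rescorla-Wagner-style update: the expectation moves toward each
    # outcome in proportion to the prediction error.
    # alpha (learning rate) and v0 (initial value) are illustrative choices.
    v = v0
    values = np.empty(len(outcomes))
    prediction_errors = np.empty(len(outcomes))
    for i, r in enumerate(outcomes):
        prediction_errors[i] = r - v        # prediction error (delta)
        v += alpha * prediction_errors[i]   # reinforcement-guided update
        values[i] = v
    return values, prediction_errors

# Simulated probabilistic reward schedule (70% reward), purely illustrative.
rng = np.random.default_rng(0)
rewards = rng.binomial(1, 0.7, size=100).astype(float)
values, pes = delta_rule(rewards)
```

In this scheme the trial-by-trial prediction errors (pes) are the quantities of interest: it is their putative neural representations, rather than the outcomes themselves, that the proposal seeks to localize in both the perceptual and reward domains.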
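Likewise, the EEG-informed fMRI analysis can be sketched as follows: single-trial EEG amplitudes (e.g., discriminant-component amplitudes from a multivariate classifier) serve as parametric modulators of event regressors, which are then convolved with a canonical hemodynamic response function. The function names, the double-gamma HRF approximation, and all parameter values below are hypothetical stand-ins for exposition, not the project's actual pipeline.

```python
import numpy as np
from scipy.stats import gamma

def spm_hrf(tr, duration=32.0):
    # Double-gamma approximation to the canonical HRF (illustrative
    # stand-in; a real analysis would use the HRF of the chosen package).
    t = np.arange(0.0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.sum()

def eeg_parametric_regressor(onsets, eeg_amplitudes, n_scans, tr):
    # Mean-center the single-trial EEG modulator so the regressor carries
    # only trial-to-trial variability beyond the mean evoked response.
    mod = np.asarray(eeg_amplitudes, dtype=float)
    mod -= mod.mean()
    stick = np.zeros(n_scans)
    for onset, m in zip(onsets, mod):
        stick[int(round(onset / tr))] += m
    # Convolve with the HRF and truncate to the scan series length.
    return np.convolve(stick, spm_hrf(tr))[:n_scans]

# Hypothetical usage: trial onsets (s) and single-trial EEG amplitudes.
rng = np.random.default_rng(1)
tr, n_scans = 2.0, 200
onsets = np.arange(10.0, 380.0, 12.0)
eeg_amp = rng.normal(size=onsets.size)
regressor = eeg_parametric_regressor(onsets, eeg_amp, n_scans, tr)
```

The design choice this sketch illustrates is the one stated above: because the modulator is endogenous electrophysiological variability rather than a stimulus- or behaviorally derived quantity, the resulting predictor can explain fMRI variance that stimulus- and behavior-based regressors alone cannot.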