Project description
To interact effectively with a complex, dynamic, multisensory world (e.g. traffic), the brain needs to transform the barrage of incoming signals into a coherent percept. This requires it to solve the causal inference, or binding, problem: deciding which signals come from common sources and integrating them accordingly. Doing so exactly (i.e. optimally) is computationally intractable for all but the simplest laboratory scenes. How the brain computes approximate solutions for realistic scenes under resource constraints is unknown.
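To make the scale of the problem concrete, consider the standard two-signal formulation from the causal inference literature (a textbook illustration, not this project's model): given a visual signal x_V and an auditory signal x_A, an ideal observer weighs a common-cause structure (C = 1) against an independent-causes structure (C = 2),

p(C{=}1 \mid x_V, x_A) = \frac{p(x_V, x_A \mid C{=}1)\, p(C{=}1)}{p(x_V, x_A \mid C{=}1)\, p(C{=}1) + p(x_V, x_A \mid C{=}2)\, p(C{=}2)}.

With N signals, however, the hypothesis space is the set of all partitions of the signals into causes, whose size is the Bell number B(N): B(2) = 2, but already B(10) = 115,975, so exact inference cannot scale to realistic scenes.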
This ambitious interdisciplinary project combines statistical, computational, behavioural and neuroimaging (3/7T-fMRI, MEG/EEG, TMS) methods to determine how, and how well, the brain solves the causal inference problem in progressively richer multisensory environments.
The key hypothesis is that observers compute approximate solutions by sequentially selecting subsets of signals for perceptual integration, via attentional and active-sensing mechanisms guided by the perceptual tasks they are executing, their prior expectations about the world's causal structure, and bottom-up salience maps. I will build parallel normative and approximate Bayesian models and transformer network models of these processes, and combine them with behavioural and neuroimaging data to unravel the neurocomputational mechanisms.
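As a minimal sketch of the kind of normative building block involved (assuming the standard two-signal Gaussian generative model with model averaging, as in Körding et al., 2007; the parameter names and values below are illustrative, not the project's):

import numpy as np

def causal_inference(x_v, x_a, sig_v=1.0, sig_a=2.0, sig_p=10.0, p_common=0.5):
    """Model-averaged visual location estimate and posterior p(C=1 | x_v, x_a).

    Assumes Gaussian sensory noise (sig_v, sig_a), a zero-mean Gaussian
    prior over source locations (sig_p), and prior p(C=1) = p_common.
    """
    vv, va, vp = sig_v**2, sig_a**2, sig_p**2

    # Likelihood of a common cause: the shared source location integrated out.
    denom1 = vv * va + vv * vp + va * vp
    like_c1 = np.exp(-0.5 * ((x_v - x_a)**2 * vp + x_v**2 * va + x_a**2 * vv)
                     / denom1) / (2 * np.pi * np.sqrt(denom1))

    # Likelihood of independent causes: each source integrated out separately.
    denom2 = (vv + vp) * (va + vp)
    like_c2 = np.exp(-0.5 * (x_v**2 / (vv + vp) + x_a**2 / (va + vp))) \
              / (2 * np.pi * np.sqrt(denom2))

    # Posterior probability that the two signals share a cause.
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Optimal location estimates under each causal structure
    # (reliability-weighted averages; prior mean = 0).
    s_c1 = (x_v / vv + x_a / va) / (1 / vv + 1 / va + 1 / vp)
    s_v_c2 = (x_v / vv) / (1 / vv + 1 / vp)

    # Model averaging: weight each structure's estimate by its posterior.
    s_v = post_c1 * s_c1 + (1 - post_c1) * s_v_c2
    return s_v, post_c1

# Nearby signals are largely integrated, distant ones segregated:
print(causal_inference(x_v=1.0, x_a=2.0))   # high p(C=1), fused estimate
print(causal_inference(x_v=1.0, x_a=15.0))  # low p(C=1), mostly vision-only

The project's models must go beyond this two-signal case, approximating the same computation over many signals under attentional and resource constraints.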
The project will develop a novel computational and neuromechanistic account of causal inference in more realistic multisensory scenes, addressing fundamental questions about binding, inference and probabilistic computations. By bringing lab research closer to the real world, it will radically alter our perspective: shifting from near-optimal passive perception in simple scenes to active information gathering in the service of approximate solutions in richer, more realistic scenes. It has the potential to inspire new AI algorithms and to drive transformative insights into the perceptual difficulties that older and clinical populations face in the real world.