Project description
The process by which humans form increasingly complex internal representations from sensory stimuli, based on their available knowledge, has traditionally been investigated by a somewhat disconnected set of approaches under labels such as feature detection, grouping, figure-ground segregation, chunking, categorization, and learning. We explore the idea that general unconscious mechanisms of this pattern-formation process may operate similarly and repeatedly across levels of complexity, from basic segmentation to complex chunking-based learning, and across modalities, such as vision (typically associated with spatial processing) and audition (typically associated with temporal processing).

To test this, our work will follow four principles. First, we will focus (a) on the auditory and visual modalities, (b) with space and time as the two dimensions of comparison, and (c) on the two endpoints of the complexity scale: at one end, pure tones in audition and single squares in vision, easily handled by instantaneous perceptual biases, and at the other end, complex time- and space-varying patterns that require learning. Second, we will perform the equivalent version of each experiment in both modalities to check the generality of the mechanisms that establish the representations of the input. Third, we will use the same testing method across all experiments, measuring the behavioral sensitivity induced by the resulting representation to quantify the representation's structure and its EEG correlates. Fourth, we will interpret our results by incorporating a new hypothesis about humans' internal representations into a Hierarchical Bayesian Model, so that we can derive further individual- and population-specific predictions to inspire subsequent empirical work.

This integrated approach will contribute to a deeper understanding of the general principles of representation formation and of its malfunctioning in special populations.