Self-interpretability of human cognition: How reportable knowledge emerges in learning
REPORT-IT project information
Project duration: 23 months
Start date: 2024-08-01
End date: 2026-07-31
Participation deadline
No participation deadline.
Project description
Current artificial intelligence (AI) surpasses human-level performance on a vast range of tasks. However, its decision processes are opaque, a limitation known as the AI interpretability problem. Humans, on the other hand, can verbally describe their decision processes and strategies. The accuracy of these reports varies, especially in complex environments; yet people often produce reasonably accurate explanations for their decisions, thereby allowing knowledge transfer in society. The mechanisms by which accurate verbal reports are generated, however, remain unclear.

The main research objective of the REPORT-IT project is therefore to study how humans generate adequate reportable knowledge while learning through experience. Inspired by recent findings on metacognition (i.e., insight into one's own cognition) and cognition-emotion interaction, I will test the novel hypothesis that metacognition and learning-related affect support the emergence of reportable knowledge. In two experiments modeling complex learning environments (an implicit category learning task and a probabilistic reward learning task), I will track the development of metacognition, affect, and reportable knowledge over time. This will allow me to evaluate the temporal relationships between these components and to predict the emergence of reportable knowledge.

In the final step of the project, I will study the behavior of deep neural networks (DNNs) on the same tasks and test whether DNNs can generate the temporal patterns of metacognition and affect observed in humans. The REPORT-IT project thereby combines my expertise in implicit learning and affective science with the host institute's (University of Amsterdam) expertise in the neuroscience of consciousness and DNNs. In this way, REPORT-IT will contribute to understanding how people generate reportable knowledge and, at the same time, provide new approaches for explainable AI.
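The probabilistic reward learning task mentioned above is commonly operationalized as a multi-armed bandit. The following toy simulation is a minimal sketch of that general paradigm, not the project's actual design: all function names and parameters are illustrative assumptions. A delta-rule learner picks between two options with different reward probabilities, and the absolute difference between its value estimates is used as a crude stand-in for a graded, trackable internal signal of the kind the project proposes to measure over trials.

```python
import random

def run_bandit(n_trials=1000, p_reward=(0.8, 0.2), alpha=0.1, eps=0.1, seed=0):
    """Simulate a two-armed probabilistic reward task with a simple
    delta-rule (Rescorla-Wagner-style) learner.

    Returns the final value estimates for both arms and, per trial,
    a crude 'evidence' proxy: |Q(arm 0) - Q(arm 1)|.
    Hypothetical example code; parameters are illustrative only.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]          # value estimate for each arm
    evidence = []           # per-trial separation of the estimates
    for _ in range(n_trials):
        # epsilon-greedy choice: mostly exploit, occasionally explore
        if rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = 0 if q[0] >= q[1] else 1
        # probabilistic binary reward
        r = 1.0 if rng.random() < p_reward[a] else 0.0
        # prediction-error (delta-rule) update
        q[a] += alpha * (r - q[a])
        evidence.append(abs(q[0] - q[1]))
    return q, evidence

q, evidence = run_bandit()
print(q, evidence[-1])
```

With these settings the learner's estimate for the richer arm ends up clearly above the poorer one, and the evidence proxy grows over trials, illustrating how a graded internal quantity can be logged trial by trial alongside choices, which is the kind of time-resolved tracking the project applies to metacognition, affect, and reportable knowledge.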