acoustic SCene ANalysis for Detecting Living Entities
SCANDLE project information
Project leader
UNIVERSITY OF PLYMOUTH
TRL
4-5
Project budget
3M€
Participation deadline
No participation deadline.
Project description
Analysing fine motor activity in the articulatory structures of humans or animals, in combination with the sounds they emit, yields information about their intentions and likely future actions. In this project we propose to develop a cognitive acoustic scene analysis system able to synthesise composite representations of animate entities and their behaviour by integrating information from active and passive sound signatures, i.e. from actively self-generated (sonar) sounds and from passively received sounds emitted by those entities. The system is an acoustic analogue of a camera-based visual scene analysis system, particularly suited to detecting the presence and characterising the behaviour of living entities in the environment.

This highly innovative proposal builds upon fundamental research on perceptual organisation in natural systems, recent advances in models of auditory processing, and technological developments in ultra-low-power distributed neuromorphic systems and state-of-the-art micro-sonar technology. The biologically inspired architecture and processing mechanisms of the proposed system support autonomous, real-time, context-dependent operation, allowing it to parse complex mixtures of sounds into meaningful units and categorise them. We propose novel methodologies for evaluating the emergence of representations in autonomous systems, and for communicating the ongoing internal state of the system to human observers.

Work on the project will significantly advance scientific understanding of auditory perceptual organisation, technological development of neuromorphic systems, and the potential impact of artificial cognitive systems. Successful achievement of our objectives will result in a groundbreaking proof-of-concept cognitive acoustic scene analysis system capable of robust operation in real-world environments and suitable for deployment in situations where visual information may be unavailable, unobtainable or even undesirable.
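To make the fusion idea concrete: the abstract describes combining an active signature (the echo of a self-generated sonar ping) with a passive signature (sounds the entity itself emits) into a single "composite representation". The sketch below is purely illustrative and is not the project's neuromorphic architecture; the function names, band counts and synthetic signals are all hypothetical, and a crude log-energy band spectrum stands in for the far richer auditory models the project proposes.

```python
import numpy as np

def spectral_features(signal: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Coarse log-energy band spectrum, a stand-in for a richer auditory front end."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(power, n_bands)
    return np.log1p(np.array([b.sum() for b in bands]))

def composite_representation(echo: np.ndarray, ambient: np.ndarray) -> np.ndarray:
    """Fuse an active signature (sonar echo) and a passive signature (sound
    emitted by the entity) into one feature vector by simple concatenation."""
    return np.concatenate([spectral_features(echo), spectral_features(ambient)])

# Toy usage with synthetic 48 kHz signals.
sr = 48_000
t = np.linspace(0.0, 0.05, int(0.05 * sr), endpoint=False)
echo = np.exp(-60.0 * t) * np.sin(2 * np.pi * 20_000 * t)          # decaying sonar return
ambient = 0.1 * np.random.default_rng(0).standard_normal(t.shape)  # passively received sound
features = composite_representation(echo, ambient)
print(features.shape)  # (32,): 16 active bands + 16 passive bands
```

In a downstream step such a fused vector could be fed to a classifier that categorises the detected entity, which is the role the abstract assigns to the system's perceptual-organisation and categorisation stages.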