Engineering voice-based models and interfaces for enhancing the speech therapy of minimally-verbal children with autism and their communication
About 30% of children with autism are minimally verbal (MV), meaning they communicate mainly through nonverbal vocalizations (i.e., vocalizations that do not have typical verbal content). Vocalizations often have self-consistent phonetic content and vary in tone, pitch, and duration depending on the individual's emotional state or intended communication. While vocalizations carry important affective and communicative information and are comprehensible to close caregivers, they are often poorly understood by those who do not know the communicator well. An improved understanding of nonverbal vocalizations could shed light on the cognitive, social, and emotional mechanisms associated with MV children with autism. Moreover, it could lead to new therapeutic interventions for these children based on advanced voice-based technology for Augmentative and Alternative Communication (AAC), i.e., for communicating without using words.
This MSCA project spans the research fields of audio signal processing (in particular, the perception of children's vocalizations) and human-computer interaction (in particular, voice-based interfaces for speech therapy). It aims to advance the understanding of MV autistic children's vocalizations and to exploit the obtained knowledge to create advanced voice-based interfaces that enhance their therapeutic interventions. During the outgoing phase at MIT, the work will focus on identifying and implementing machine learning algorithms for classifying children's vocalizations. The core strategy is to leverage the unique knowledge of caregivers who have long-term acquaintance with MV children with autism and can recognize the meaning of their vocalizations. In the return phase at POLIMI, the project will focus on designing, developing, and empirically validating a voice-based AAC prototype for children's speech therapy through a participatory-design process involving end users, their caregivers, and autism experts.
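To make the supervised-classification idea concrete, the following is a minimal illustrative sketch, not the project's actual pipeline: vocalization clips labelled by caregivers are summarized with MFCC statistics and fed to a random-forest baseline. The directory layout, label names (e.g., "request", "protest", "delight"), and model choice are all hypothetical assumptions for illustration.

    # Illustrative sketch only: caregiver-labelled vocalization classification.
    # Assumes clips are stored as vocalizations/<caregiver_label>/<clip>.wav.
    from pathlib import Path

    import librosa
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    DATA_DIR = Path("vocalizations")  # hypothetical dataset root

    def clip_features(path: Path) -> np.ndarray:
        """Summarize a clip as the mean and std of its MFCCs over time."""
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    features, labels = [], []
    for label_dir in DATA_DIR.iterdir():        # one folder per caregiver label
        for clip in label_dir.glob("*.wav"):
            features.append(clip_features(clip))
            labels.append(label_dir.name)       # e.g. "request", "protest", "delight"

    X_train, X_test, y_train, y_test = train_test_split(
        np.array(features), np.array(labels),
        test_size=0.2, stratify=labels, random_state=0
    )
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))

In practice the project's algorithms would be chosen and validated against caregiver-provided labels rather than this fixed baseline; the sketch only shows the overall train-on-caregiver-labels structure.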
Seleccionando "Aceptar todas las cookies" acepta el uso de cookies para ayudarnos a brindarle una mejor experiencia de usuario y para analizar el uso del sitio web. Al hacer clic en "Ajustar tus preferencias" puede elegir qué cookies permitir. Solo las cookies esenciales son necesarias para el correcto funcionamiento de nuestro sitio web y no se pueden rechazar.
Cookie settings
Nuestro sitio web almacena cuatro tipos de cookies. En cualquier momento puede elegir qué cookies acepta y cuáles rechaza. Puede obtener más información sobre qué son las cookies y qué tipos de cookies almacenamos en nuestra Política de cookies.
Son necesarias por razones técnicas. Sin ellas, este sitio web podría no funcionar correctamente.
Son necesarias para una funcionalidad específica en el sitio web. Sin ellos, algunas características pueden estar deshabilitadas.
Nos permite analizar el uso del sitio web y mejorar la experiencia del visitante.
Nos permite personalizar su experiencia y enviarle contenido y ofertas relevantes, en este sitio web y en otros sitios web.