Innovating Works

SensifAI

Funded
Understanding Videos Automatically with the SensifAI Deep Learning Technology
Google is valued at $1 trillion because it managed to make text searchable. However, 80% of internet traffic consists of videos, audio and images, and they are not searchable. Making videos searchable is extremely challenging, which is why most video tagging is still done manually and the results of automated video recognition remain limited. Mobile video recognition is also starting to emerge.

SensifAI has developed cutting-edge audio-visual deep-learning technology, trained on millions of videos, that recognizes audio and video content and tags it accurately. SensifAI automatically tags videos, images and audio, making them searchable, and can be customized for a range of use cases. We believe our approach to contextual video analysis is unique and at the leading edge, as it recognizes scenes, actions, celebrities, landmarks, logos, music genre, mood and emotion, and speech. SensifAI delivers the video recognition technology in the cloud via the Amazon Web Services Marketplace, and it can also be embedded on devices such as smartphones (by OEMs). Our software recently became available on the Amazon Web Services Marketplace, where we follow a unit-based pricing model ranging from €0.01/minute for recognizing landmarks, objects, celebrities and unsafe content in images to €0.05/minute for general tagging and action/sport recognition. SensifAI bvba was founded by three alumni and scientists from MIT, ETH Zurich and KU Leuven, who accumulated experience in audio-visual data processing through involvement in many international projects.
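As an illustration of the unit-based pricing above, here is a minimal cost-estimation sketch. The per-minute rates are taken from the listing; the tier names and the helper function are hypothetical, not part of SensifAI's actual API.

```python
# Hypothetical cost estimator for the unit-based pricing described above.
# Rates (EUR per minute) come from the listing; tier names are illustrative.
RATES_EUR_PER_MINUTE = {
    "landmarks_objects_celebrities_unsafe": 0.01,  # €0.01/min tier
    "general_tagging_action_sport": 0.05,          # €0.05/min tier
}

def estimate_cost(minutes: float, tier: str) -> float:
    """Return the estimated processing cost in euros for `minutes` of content."""
    return round(minutes * RATES_EUR_PER_MINUTE[tier], 2)

# Example: a 90-minute video run through general tagging.
print(estimate_cost(90, "general_tagging_action_sport"))  # 4.5
```

Under this model, processing cost scales linearly with content duration, so a customer can budget directly from total minutes of media per tier.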
Imagine a day when the 30 million visually impaired Europeans use a wearable camera equipped with software that automatically describes their surroundings by recognizing the semantic content of the captured video, including descriptions of the scene, objects and activities. Similarly, imagine a technology through which the 119 million aurally impaired people use a wearable microphone equipped with software that helps them.
31/05/2019
71K€
Project duration: 6 months
Start date: 2018-11-28
End date: 2019-05-31

Funding line: awarded

The H2020 funding body notified the award of the project on 2019-05-31.
Target funding line: the project was financed through the following grant:
Budget: the total project budget amounts to €71K.
Project leader
SENSIFAI: no description or corporate purpose has been specified for this company.
Technology profile: TRL 4-5