Understanding human action from unstructured 3D point clouds using deep learning methods
Participation deadline
No participation deadline.
Project description
Human action recognition and forecasting are integral to autonomous robotic systems that require human-robot interaction, as well as to other engineering problems. Action recognition is typically achieved with video data and deep learning methods. However, related tasks such as classification have shown that it is often beneficial to additionally use 3D data, namely 3D point clouds sampled on the surfaces of the objects and agents in the scene. Unfortunately, existing human action recognition methods are somewhat limited, motivating the following research.

In this project, we describe a new class of algorithms for 3D human action recognition and forecasting using a deep learning-based approach. Our approach is novel in that it extends a recent body of work on action recognition from the 2D to the 3D domain, which is particularly challenging due to the unstructured, unordered and permutation-invariant nature of 3D point clouds. Our algorithms combine the global and local statistical properties of 3D point clouds with a 3D convolutional neural network to devise a novel multi-modal representation of human action. This representation is inherently robust to spatial changes in the 3D domain, unlike previous works that rely on 2D projections.

In practice, deep learning methods allow us to learn an inference model from real-world examples. A common methodology for action recognition includes creating an annotated dataset, training an inference model and testing its generalization. Our research objectives cover all of these tasks and propose novel methods to tackle them. Overall, the proposed research offers a new point of view on these long-standing problems and, together with the vast related work in other domains, may bridge the gap to a generalizable, effective and efficient 3D human action recognition and forecasting machinery. The resulting algorithms may be used in several scientific and engineering domains, such as human-robot interaction, among other applications.
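To illustrate the permutation-invariance challenge mentioned above, the following is a minimal numpy sketch (with hypothetical random weights, not the project's actual network): each point passes through a shared transformation, and a symmetric pooling operation (max over the point axis) produces a global descriptor that is unchanged when the points are reordered.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_feature(points, weights):
    """Permutation-invariant global descriptor of a point cloud.

    Every point is mapped through the same (shared) linear layer with a
    ReLU, then max-pooled over the point axis. Because max() is a
    symmetric function, reordering the input points cannot change the
    resulting feature vector.
    """
    per_point = np.maximum(points @ weights, 0.0)  # shared layer + ReLU
    return per_point.max(axis=0)                   # symmetric pooling

# Toy point cloud: 128 points in 3D; weights are an illustrative stand-in
# for a learned layer.
points = rng.normal(size=(128, 3))
weights = rng.normal(size=(3, 16))

f1 = global_feature(points, weights)
f2 = global_feature(points[rng.permutation(128)], weights)
print(np.allclose(f1, f2))  # prints True: descriptor ignores point order
```

A naive 2D projection or fixed point ordering would not have this property, which is one reason architectures for raw point clouds rely on symmetric aggregation rather than treating the points as an ordered sequence.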