Project description
"Deep learning approaches, mostly in the form of convolutional neural networks (CNNs), have taken the field of computer vision by storm. While the progress in recent years has been astounding, it would be dangerous to believe that important problems in computer vision are close to being solved. Many canonical deep networks for vision tasks ranging from image understanding to 3D reconstruction or motion estimation perform incredibly well ""on dataset"", i.e.~in the very setting in which they have been trained. The generalization to novel, related scenarios is still lacking, however. Moreover, large amounts of labeled data are required for training, which are not available in all potential application areas. In addition, the majority of deep networks in computer vision show deficiencies in terms of explainability. That is, the role of network components is often opaque and most deep networks in vision do not output reliable quantifications of the uncertainty of the prediction, limiting the comprehension by users. In this project, we aim to significantly advance deep networks in computer vision toward improved robustness and explainability. To that end, we will investigate structured network architectures, probabilistic methods, and hybrid generative/discriminative models, all with the goal of increasing robustness and gaining explainability. This is accompanied by research on how to assess robustness and aspects of explainability via appropriate datasets and metrics. While we aim to develop a toolbox that is as independent of specific tasks as possible, the work program is grounded in concrete vision problems to monitor progress. We specifically consider the challenges of 3D scene analysis from images and video, including tasks such as panoptic segmentation, 3D reconstruction, and motion estimation. We expect the project to have significant impact in applications of computer vision where robustness is key, data is limited, and user trust is paramount."