DeepInternal project information
Project duration: 66 months
Start date: 2018-04-24
End date: 2023-10-31
Participation deadline: none.
Project description
Unsupervised visual inference can often be performed by exploiting the internal redundancy inside a single visual datum (an image or a video). The strong repetition of patches inside a single image/video provides a powerful data-specific prior for solving a variety of vision tasks in a blind manner: (i) Blind in the sense that sophisticated unsupervised inferences can be made with no prior examples or training; (ii) Blind in the sense that complex ill-posed Inverse-Problems can be solved, even when the forward degradation is unknown.
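As a concrete illustration of how internal patch recurrence can serve as a data-specific prior, the minimal sketch below denoises a single grayscale image using nothing but similar patches found elsewhere in the same image, in the spirit of non-local-means filtering. It is a simplified sketch rather than the project's method; the function name denoise_by_internal_recurrence and the patch, search and h parameters are illustrative assumptions.

```python
import numpy as np

def denoise_by_internal_recurrence(img, patch=7, search=21, h=0.1):
    """Denoise a single grayscale image in [0, 1] using only its own patch redundancy."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(H):
        for j in range(W):
            ref = padded[i:i + patch, j:j + patch]            # reference patch around (i, j)
            # Restrict the search to a window around (i, j) for tractability.
            i0, i1 = max(0, i - search // 2), min(H - 1, i + search // 2)
            j0, j1 = max(0, j - search // 2), min(W - 1, j + search // 2)
            weights, centers = [], []
            for u in range(i0, i1 + 1):
                for v in range(j0, j1 + 1):
                    cand = padded[u:u + patch, v:v + patch]   # candidate patch from the same image
                    d2 = np.mean((ref - cand) ** 2)           # patch dissimilarity
                    weights.append(np.exp(-d2 / h ** 2))      # similar patches get larger weights
                    centers.append(img[u, v])
            out[i, j] = np.average(centers, weights=weights)  # redundancy-weighted estimate
    return out
```

No external examples or training are involved: the only prior is the repetition of similar patches inside the image itself, which is exactly the internal redundancy described above.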
While the above fully unsupervised approach achieved impressive results, it relies on internal data alone, and hence cannot enjoy the "wisdom of the crowd" which Deep-Learning (DL) so wisely extracts from external collections of images, yielding state-of-the-art (SOTA) results. Nevertheless, DL requires huge amounts of training data, which restricts its applicability. Moreover, some internal image-specific information, which is clearly visible, remains unexploited by today's DL methods. One such example is shown in Fig. 1.
We propose to combine the power of these two complementary approaches, unsupervised Internal Data Recurrence and Deep Learning, to obtain the best of both worlds. If successful, this will have several important outcomes, including:
• A wide range of low-level & high-level inferences (image & video).
• A continuum between Internal & External training – a platform to explore theoretical and practical tradeoffs between amount of available training data and optimal Internal-vs-External training.
• Totally unsupervised DL when no training data are available (a minimal sketch of such internal training follows this list).
• Supervised DL with only modest amounts of training data.
• New applications, disciplines, and domains enabled by the unified approach.
• A platform for substantial progress in video analysis (which has so far lagged behind due to its strong reliance on exhaustive supervised training data).
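The unsupervised and low-data settings in the list above hinge on training image-specific networks from internal data. The sketch below shows one way such internal training can look, in the spirit of zero-shot super-resolution: a small CNN is trained only on low-/high-resolution pairs derived from the single test image, so no external dataset is involved. The architecture, hyper-parameters, and names (TinySRNet, train_on_single_image) are illustrative assumptions, not the project's actual algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRNet(nn.Module):
    """A deliberately small image-to-image CNN (illustrative architecture)."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # predict a residual correction to the input

def train_on_single_image(img, scale=2, steps=500, lr=1e-3):
    """img: (1, 1, H, W) tensor in [0, 1]. Returns an image upscaled by `scale`."""
    net = TinySRNet()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        # Build a training pair internally: the image itself plays the role of
        # "high-res", and its downscaled-then-upscaled copy plays the "low-res" input.
        lo = F.interpolate(img, scale_factor=1 / scale, mode="bicubic",
                           align_corners=False)
        lo_up = F.interpolate(lo, size=img.shape[-2:], mode="bicubic",
                              align_corners=False)
        loss = F.l1_loss(net(lo_up), img)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # At test time, apply the image-specific network to the upscaled test input.
    with torch.no_grad():
        test_in = F.interpolate(img, scale_factor=scale, mode="bicubic",
                                align_corners=False)
        return net(test_in).clamp(0, 1)
```

A practical implementation would sample random crops and augmentations of the internal pairs rather than reusing the full image at every step, but the essential point stands: all "training data" comes from the test input itself.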