Project description
Image captioning is the process of mapping a visual scene to a short textual description. Automating this process is vital for many computer applications, including information retrieval from visual data, computerized assistance for visually impaired people, and automatic tour guiding. State-of-the-art captioning systems are limited by their heavy reliance on visual content alone: the generated captions are often purely descriptive and miss important information that is needed to understand the image.

This PoC project develops a captioning tool for knowledge-intensive areas such as Geography, Radiology, and Art History, where captions need to include information that cannot be extracted from the images alone. It builds on results of the ROCKY ERC AdG project, whose innovative captioning system integrates external knowledge into the captioning process while retaining standard image-captioning methods: a deep convolutional neural network (CNN) for image understanding and a Transformer network for language generation (an illustrative sketch of such an architecture is given below). Thanks to this external knowledge integration, the ROCKY captioning prototype gets substantially closer to human-generated captions than standard captioning systems that do not take external knowledge into account.

This PoC project will build on that result by implementing a knowledge-aware captioning system that scales to practical use. The project examines the feasibility of the ROCKY captioning method for Medical Imaging and Art History and implements it for one of these domains as a use case. The project will engage with experts in these domains, specify a practical captioning system, implement it as an open-source tool, and test it in realistic situations. The anticipated value of this effort lies in a general method that allows one open-source platform to serve multiple purposes, and thus to be cost-effectively adjusted to the needs of different domains.
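
To make the architecture concrete, the following is a minimal sketch, in PyTorch, of a knowledge-aware encoder-decoder captioner of the kind described above: a CNN backbone encodes the image, embeddings of retrieved external knowledge are projected into the same representation space, and a Transformer decoder attends over both while generating the caption. This is an illustration under assumed names and dimensions (KnowledgeAwareCaptioner, a ResNet-50 backbone, 768-dimensional knowledge vectors), not the ROCKY implementation; positional encodings and the knowledge-retrieval step are omitted for brevity.

# Minimal sketch of a knowledge-aware CNN + Transformer captioner.
# All module names and dimensions are illustrative, not the ROCKY system.
import torch
import torch.nn as nn
import torchvision.models as models

class KnowledgeAwareCaptioner(nn.Module):
    def __init__(self, vocab_size, d_model=512, n_heads=8, n_layers=6):
        super().__init__()
        # CNN backbone for image understanding (classifier head removed).
        cnn = models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(cnn.children())[:-2])  # -> (B, 2048, 7, 7)
        self.visual_proj = nn.Linear(2048, d_model)
        # Projection for external knowledge embeddings, e.g. facts retrieved
        # from a domain knowledge base and already encoded as 768-d vectors.
        self.knowledge_proj = nn.Linear(768, d_model)
        # Transformer decoder for language generation.
        self.token_emb = nn.Embedding(vocab_size, d_model)
        decoder_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, knowledge_vecs, caption_tokens):
        # images: (B, 3, 224, 224); knowledge_vecs: (B, K, 768);
        # caption_tokens: (B, T) ground-truth or previously generated tokens.
        feats = self.backbone(images)                        # (B, 2048, 7, 7)
        feats = feats.flatten(2).transpose(1, 2)             # (B, 49, 2048)
        visual_mem = self.visual_proj(feats)                 # (B, 49, d_model)
        knowledge_mem = self.knowledge_proj(knowledge_vecs)  # (B, K, d_model)
        # The decoder attends jointly over visual and knowledge tokens.
        memory = torch.cat([visual_mem, knowledge_mem], dim=1)
        tgt = self.token_emb(caption_tokens)
        T = caption_tokens.size(1)
        causal_mask = torch.triu(
            torch.full((T, T), float("-inf"), device=tgt.device), diagonal=1
        )
        out = self.decoder(tgt, memory, tgt_mask=causal_mask)
        return self.lm_head(out)                             # (B, T, vocab_size)

In a design of this kind, adapting the system to a new domain mainly means changing the knowledge source fed into the knowledge projection (for example, a radiology knowledge base versus an art-historical one), which is what would allow a single open-source platform to be adjusted cost-effectively to different domains.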