
MultiMT

Funded
Multi-modal Context Modelling for Machine Translation
Automatically translating human language has been a long sought-after goal in the field of Natural Language Processing (NLP). Machine Translation (MT) can significantly lower communication barriers, with enormous potential for positive social and economic impact. The dominant paradigm is Statistical Machine Translation (SMT), which learns to translate from human-translated examples.

Human translators have access to a number of contextual cues beyond the actual segment to translate, for example images associated with the text and related documents. SMT systems, however, completely disregard any form of non-textual context and make little or no reference to the wider surrounding textual content. This results in translations that miss relevant information or convey incorrect meaning. Such issues drastically affect reading comprehension and may make translations useless. This is especially critical for user-generated content such as social media posts, which are often short and contain non-standard language, but it applies to a wide range of text types.

The novel and ambitious idea in this proposal is to devise methods and algorithms to exploit global multi-modal information for context modelling in SMT. This will require a significantly disruptive approach, with new ways to acquire multilingual multi-modal representations and new machine learning and inference algorithms that can process rich context models. The focus will be on three context types: global textual content from the document and related texts; visual cues from images; and metadata including topic, date, author and source. As test beds, two challenging user-generated datasets will be used: Twitter posts and product reviews. This highly interdisciplinary research proposal draws expertise from NLP, Computer Vision and Machine Learning, and claims that appropriate modelling of multi-modal context is key to achieving a new breakthrough in SMT, regardless of language pair and text type.
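The abstract names three context types to be fused with the source segment: global textual content, visual cues from images, and metadata. Purely as an illustrative aid, and not as the project's actual method, the minimal Python sketch below shows one way a segment-plus-context record could be represented and combined into a single context vector; the class names, the toy hashing "embedding", and the concatenation scheme are all assumptions made for illustration.

```python
# Illustrative sketch only: NOT the MultiMT project's actual model.
# It schematically represents a source segment together with the three
# context types named in the abstract (document text, image features,
# metadata) and fuses them into one vector a translation model could
# condition on. All names and the fusion scheme are assumptions.

from dataclasses import dataclass, field
from typing import Dict, List
import hashlib
import math


@dataclass
class MultiModalSegment:
    """A source segment plus the three context types from the abstract."""
    source_text: str                                            # segment to translate
    document_text: str = ""                                     # global textual content
    image_features: List[float] = field(default_factory=list)   # visual cues (e.g. CNN features)
    metadata: Dict[str, str] = field(default_factory=dict)      # topic, date, author, source


def _hash_embed(text: str, dim: int = 8) -> List[float]:
    """Toy text 'embedding' via feature hashing (stand-in for a real encoder)."""
    vec = [0.0] * dim
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def fuse_context(segment: MultiModalSegment, dim: int = 8) -> List[float]:
    """Concatenate textual, visual and metadata context into one vector."""
    doc_vec = _hash_embed(segment.document_text, dim)
    img_vec = (segment.image_features + [0.0] * dim)[:dim]   # pad/truncate to dim
    meta_vec = _hash_embed(" ".join(f"{k}={v}" for k, v in segment.metadata.items()), dim)
    return doc_vec + img_vec + meta_vec                       # conditioning vector


if __name__ == "__main__":
    seg = MultiModalSegment(
        source_text="What a goal!",
        document_text="Live updates from the football match",
        image_features=[0.12, 0.80, 0.05],
        metadata={"topic": "sports", "source": "twitter", "date": "2016-03-10"},
    )
    print(len(fuse_context(seg)), "context dimensions")
```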
Project duration: 69 months
Start date: 2016-03-10
End date: 2021-12-31

Funding line: awarded

The H2020 body notified the award of the project on 2021-12-31.
Target funding line. The project was funded through the following grant:
ERC-StG-2015: ERC Starting Grant
Call closed 9 years ago
Budget. The total project budget amounts to €1M.
Project leader
IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND ME...
Technology profile: TRL 4-5