Multi-modal Context Modelling for Machine Translation
Automatically translating human language has been a long sought-after goal in the field of Natural Language Processing (NLP). Machine Translation (MT) can significantly lower communication barriers, with enormous potential for positive social and economic impact. The dominant paradigm is Statistical Machine Translation (SMT), which learns to translate from human-translated examples.
When translating, human translators have access to a number of contextual cues beyond the segment itself, for example images associated with the text and related documents. SMT systems, however, completely disregard any form of non-textual context and make little or no use of the wider surrounding textual content. This results in translations that miss relevant information or convey incorrect meaning. Such issues drastically affect reading comprehension and may render translations useless. This is especially critical for user-generated content such as social media posts -- which are often short and contain non-standard language -- but applies to a wide range of text types.
The novel and ambitious idea in this proposal is to devise methods and algorithms to exploit global multi-modal information for context modelling in SMT. This will require a significantly disruptive approach, with new ways of acquiring multilingual multi-modal representations and new machine learning and inference algorithms that can process rich context models. The focus will be on three context types: global textual content from the document and related texts; visual cues from images; and metadata including topic, date, author and source. As test beds, two challenging user-generated datasets will be used: Twitter posts and product reviews.
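As a purely illustrative sketch of how these three context types could enter a log-linear SMT framework, the snippet below reranks candidate translations with toy feature functions for textual, visual and metadata context. All class and function names are hypothetical placeholders chosen for this example; they are not the representations or algorithms proposed here.

```python
# Illustrative sketch only (hypothetical names, not the proposal's system):
# rerank SMT hypotheses with extra feature functions derived from the three
# context types named above (document text, image, metadata).

from dataclasses import dataclass, field


@dataclass
class Context:
    """Multi-modal context attached to a source segment."""
    document_text: str = ""                                      # surrounding document / related texts
    image_features: list[float] = field(default_factory=list)    # e.g. a precomputed image descriptor
    metadata: dict[str, str] = field(default_factory=dict)       # topic, date, author, source


def textual_context_score(hypothesis: str, ctx: Context) -> float:
    """Toy feature: fraction of hypothesis tokens also found in the document context."""
    doc_vocab = set(ctx.document_text.lower().split())
    tokens = hypothesis.lower().split()
    return sum(t in doc_vocab for t in tokens) / len(tokens) if tokens else 0.0


def visual_context_score(hypothesis: str, ctx: Context) -> float:
    """Stand-in for an image-grounded score (e.g. hypothesis-image similarity in a
    joint embedding space); here it only signals whether image context exists."""
    return 0.5 if ctx.image_features else 0.0


def metadata_score(hypothesis: str, ctx: Context) -> float:
    """Toy feature: reward hypotheses whose wording matches the topic keyword, if any."""
    topic = ctx.metadata.get("topic", "").lower()
    return 1.0 if topic and topic in hypothesis.lower() else 0.0


def rerank(hypotheses: list[tuple[str, float]], ctx: Context,
           weights: tuple[float, float, float, float] = (1.0, 0.3, 0.2, 0.1)) -> list[tuple[str, float]]:
    """Log-linear combination of the baseline SMT score with the context features."""
    w_base, w_txt, w_img, w_meta = weights
    scored = [
        (hyp,
         w_base * base_score
         + w_txt * textual_context_score(hyp, ctx)
         + w_img * visual_context_score(hyp, ctx)
         + w_meta * metadata_score(hyp, ctx))
        for hyp, base_score in hypotheses
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

In the research envisaged by the proposal, such context features would be learned representations rather than hand-crafted heuristics; the sketch only shows one place where context signals could be injected, namely hypothesis scoring.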
This highly interdisciplinary research proposal draws on expertise from NLP, Computer Vision and Machine Learning, and claims that appropriate modelling of multi-modal context is key to achieving a new breakthrough in SMT, regardless of language pair and text type.
Seleccionando "Aceptar todas las cookies" acepta el uso de cookies para ayudarnos a brindarle una mejor experiencia de usuario y para analizar el uso del sitio web. Al hacer clic en "Ajustar tus preferencias" puede elegir qué cookies permitir. Solo las cookies esenciales son necesarias para el correcto funcionamiento de nuestro sitio web y no se pueden rechazar.
Cookie settings
Nuestro sitio web almacena cuatro tipos de cookies. En cualquier momento puede elegir qué cookies acepta y cuáles rechaza. Puede obtener más información sobre qué son las cookies y qué tipos de cookies almacenamos en nuestra Política de cookies.
Son necesarias por razones técnicas. Sin ellas, este sitio web podría no funcionar correctamente.
Son necesarias para una funcionalidad específica en el sitio web. Sin ellos, algunas características pueden estar deshabilitadas.
Nos permite analizar el uso del sitio web y mejorar la experiencia del visitante.
Nos permite personalizar su experiencia y enviarle contenido y ofertas relevantes, en este sitio web y en otros sitios web.