Multimodal Machine Translation through Visuals and Speech

Umut Sulubacak, Ozan Caglayan, Stig-Arne Grönroos, Aku Rouhe, Desmond Elliott, Lucia Specia, Jörg Tiedemann

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Multimodal machine translation involves drawing information from more than one modality, based on the assumption that the additional modalities will contain useful alternative views of the input data. The most prominent tasks in this area are spoken language translation, image-guided translation, and video-guided translation, which exploit the audio, image, and video modalities, respectively. These tasks are distinguished from their monolingual counterparts of speech recognition, image captioning, and video captioning by the requirement that models generate outputs in a different language. This survey reviews the major data resources for these tasks, the evaluation campaigns concentrated around them, the state of the art in end-to-end and pipeline approaches, as well as the challenges in performance evaluation. The paper concludes with a discussion of directions for future research in these areas: the need for more expansive and challenging datasets, for targeted evaluations of model performance, and for multimodality in both the input and output space.
Original language: English
Journal: Machine Translation
Number of pages: 34
ISSN: 0922-6567
Publication status: Submitted - 5 Dec 2019
MoE publication type: A1 Journal article-refereed

Fields of Science

  • 113 Computer and information sciences
  • Natural language processing
  • Machine translation
  • Multimodal

Cite this

Sulubacak, U., Caglayan, O., Grönroos, S-A., Rouhe, A., Elliott, D., Specia, L., & Tiedemann, J. (2019). Multimodal Machine Translation through Visuals and Speech. Manuscript submitted for publication.