Transfer learning and subword sampling for asymmetric-resource one-to-many neural translation

Stig-Arne Grönroos, Sami Virpioja, Mikko Kurimo

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

There are several approaches for improving neural machine translation for low-resource languages: monolingual data can be exploited via pretraining or data augmentation; parallel corpora on related language pairs can be used via parameter sharing or transfer learning in multilingual models; and subword segmentation and regularization techniques can be applied to ensure high coverage of the vocabulary. We review these approaches in the context of an asymmetric-resource one-to-many translation task, in which the two target languages are related, one being a very low-resource and the other a higher-resource language. We test various methods on three artificially restricted translation tasks: English to Estonian (low-resource) and Finnish (high-resource), English to Slovak and Czech, and English to Danish and Swedish; and on one real-world task, Norwegian to North Sámi and Finnish. The experiments show positive effects especially for scheduled multi-task learning, the denoising autoencoder, and subword sampling.
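Two of the techniques the abstract highlights lend themselves to short illustrations. Subword sampling (subword regularization) draws a different segmentation of the same sentence on each training pass, so the model sees varied subword decompositions of the same data. The sketch below uses SentencePiece's unigram sampling API as one widely used implementation; the paper's own segmentation method and the model path "joint.model" are assumptions, not taken from this record.

    import sentencepiece as spm

    # Hypothetical model path; assumes a unigram SentencePiece model
    # has already been trained on the target-language data.
    sp = spm.SentencePieceProcessor(model_file="joint.model")

    sent = "transfer learning helps low-resource translation"
    for _ in range(3):
        # enable_sampling=True draws a new segmentation on every call;
        # nbest_size=-1 samples from the full segmentation lattice, and
        # alpha controls how peaked the sampling distribution is.
        print(sp.encode(sent, out_type=str, enable_sampling=True,
                        nbest_size=-1, alpha=0.1))

A denoising autoencoder trains the decoder to reconstruct a clean sentence from a corrupted copy of it, which lets monolingual target-language data contribute to training. Below is a generic sketch of the kind of noise function used for this purpose, combining word dropout with local shuffling; the exact noise model and parameter values in the paper may differ.

    import random

    def add_noise(tokens, drop_prob=0.1, max_shuffle_distance=3):
        """Corrupt a token sequence for denoising autoencoder training
        (illustrative defaults, not the paper's exact noise model)."""
        # Word dropout: delete each token with probability drop_prob,
        # but never return an empty sequence.
        kept = [t for t in tokens if random.random() > drop_prob]
        if not kept:
            kept = [random.choice(tokens)]
        # Local shuffle: jitter each position by a small random offset
        # and re-sort, so tokens move at most a few slots.
        keys = [i + random.uniform(0, max_shuffle_distance)
                for i in range(len(kept))]
        return [t for _, t in sorted(zip(keys, kept))]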
Original language: English
Journal: Machine Translation
Volume: 34
Pages (from-to): 251-286
Number of pages: 36
ISSN: 0922-6567
DOIs
Publication status: Published - 30 Jan 2021
MoE publication type: A1 Journal article-refereed

Fields of Science

  • 113 Computer and information sciences
  • 6121 Languages
