The Tatoeba Translation Challenge - Realistic Data Sets for Low Resource and Multilingual MT

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-reviewed

Abstract

This paper describes the development of a new benchmark for machine translation that provides training and test data for thousands of language pairs covering over 500 languages, together with tools for creating state-of-the-art translation models from that collection. The main goal is to trigger the development of open translation tools and models with a much broader coverage of the world's languages. Using the package, it is possible to work on realistic low-resource scenarios, avoiding the artificially reduced setups that are common when demonstrating zero-shot or few-shot learning. For the first time, this package provides a comprehensive collection of diverse data sets in hundreds of languages, with systematic language and script annotation and data splits that extend the narrow coverage of existing benchmarks. Together with the data release, we also provide a growing number of pre-trained baseline models for individual language pairs and selected language groups.
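The abstract describes per-pair training and test splits of aligned source/target sentences. As a minimal sketch of how such a release might be consumed, the following reads a pair of line-aligned plain-text files (optionally gzip-compressed) into sentence pairs; the file names `train.src`/`train.trg` and the exact layout are assumptions for illustration, not the package's documented format.

```python
import gzip
from pathlib import Path
from tempfile import TemporaryDirectory

def read_parallel(src_path, trg_path):
    """Read two line-aligned files (optionally .gz) into (source, target) pairs."""
    def lines(path):
        opener = gzip.open if str(path).endswith(".gz") else open
        with opener(path, "rt", encoding="utf-8") as f:
            return [line.rstrip("\n") for line in f]
    src, trg = lines(src_path), lines(trg_path)
    if len(src) != len(trg):
        raise ValueError("misaligned parallel files")
    return list(zip(src, trg))

# Tiny demonstration with hypothetical file names and made-up example sentences.
with TemporaryDirectory() as d:
    Path(d, "train.src").write_text("Hello.\nThank you.\n", encoding="utf-8")
    Path(d, "train.trg").write_text("Hei.\nKiitos.\n", encoding="utf-8")
    pairs = read_parallel(Path(d, "train.src"), Path(d, "train.trg"))
```

Keeping the source and target sides in separate line-aligned files is a common convention in MT data releases, since it lets the same reader handle training, development, and test splits uniformly.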
Original language: English
Title of host publication: Proceedings of the Fifth Conference on Machine Translation
Editors: Loïc Barrault [et al.]
Number of pages: 9
Place of publication: Stroudsburg
Publisher: The Association for Computational Linguistics
Publication date: 1 Nov 2020
Pages: 1174-1182
ISBN (electronic): 978-1-948087-81-0
Publication status: Published - 1 Nov 2020
MoE publication type: A4 Article in conference proceedings
Event: The 2020 Conference on Empirical Methods in Natural Language Processing - [Virtual conference]
Duration: 16 Nov 2020 - 20 Nov 2020
https://2020.emnlp.org/

Fields of Science

  • 6121 Languages
  • 113 Computer and information sciences
