The Tatoeba Translation Challenge - Realistic Data Sets for Low Resource and Multilingual MT

Research output: Chapter in book/report/conference proceeding · Conference contribution · Scientific · Peer-reviewed

Abstract

This paper describes the development of a new benchmark for machine translation that provides training and test data for thousands of language pairs covering over 500 languages, together with tools for creating state-of-the-art translation models from that collection. The main goal is to trigger the development of open translation tools and models with much broader coverage of the world's languages. Using the package, it is possible to work on realistic low-resource scenarios, avoiding the artificially reduced setups that are common when demonstrating zero-shot or few-shot learning. For the first time, this package provides a comprehensive collection of diverse data sets in hundreds of languages, with systematic language and script annotation and data splits that extend the narrow coverage of existing benchmarks. Together with the data release, we also provide a growing number of pre-trained baseline models for individual language pairs and selected language groups.
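The abstract mentions systematic language and script annotation. As a rough illustration, releases of this kind typically label data with an ISO 639-3 language code optionally suffixed with an ISO 15924 script tag (e.g. `zho_Hant` for Chinese in Traditional script). The helper below is a hypothetical sketch of parsing such a label, not part of the released tools:

```python
# Hypothetical sketch: split a label of the form
# "<ISO 639-3 code>[_<ISO 15924 script>]" into its parts.
# The exact label format used by the release may differ.
def split_label(label: str):
    """Return (language, script), where script is None if absent."""
    lang, _, script = label.partition("_")
    return lang, script or None

print(split_label("zho_Hant"))  # ('zho', 'Hant')
print(split_label("eng"))      # ('eng', None)
```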
Original language: English
Title of host publication: Proceedings of the Fifth Conference on Machine Translation
Editors: Loïc Barrault [et al.]
Number of pages: 9
Place of publication: Stroudsburg
Publisher: The Association for Computational Linguistics
Publication date: 1 Nov 2020
Pages: 1174-1182
ISBN (electronic): 978-1-948087-81-0
Status: Published - 1 Nov 2020
MoE publication type: A4 Article in a conference publication
Event: The 2020 Conference on Empirical Methods in Natural Language Processing - [Virtual conference]
Duration: 16 Nov 2020 - 20 Nov 2020
https://2020.emnlp.org/

Fields of science

  • 6121 Languages
  • 113 Computer and information sciences
