Dialect-to-Standard Normalization: A Large-Scale Multilingual Evaluation

Research output: Chapter in book/report/conference proceeding › Conference contribution › Scientific › Peer review


Text normalization methods have been commonly applied to historical language or user-generated content, but less often to dialectal transcriptions. In this paper, we introduce dialect-to-standard normalization (i.e., mapping phonetic transcriptions from different dialects to the orthographic norm of the standard variety) as a distinct sentence-level character transduction task and provide a large-scale analysis of dialect-to-standard normalization methods. To this end, we compile a multilingual dataset covering four languages: Finnish, Norwegian, Swiss German and Slovene. For the two largest corpora, we provide three different data splits corresponding to different use cases for automatic normalization. We evaluate the most successful sequence-to-sequence model architectures proposed for text normalization tasks using different tokenization approaches and context sizes. We find that a character-level Transformer trained on sliding windows of three words works best for Finnish, Swiss German and Slovene, whereas the pre-trained ByT5 model using full sentences obtains the best results for Norwegian. Finally, we perform an error analysis to evaluate the effect of different data splits on model performance.
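To make the sliding-window setup concrete, the following is a minimal sketch of how three-word windows might be built from a dialectal sentence before character-level normalization. The window boundary handling and the toy sentence are assumptions for illustration only, not the paper's exact preprocessing.

```python
def three_word_windows(tokens):
    """Build a sliding window of up to three words around each token.

    For position i, the window covers tokens[i-1 : i+2]; sentence-edge
    positions simply receive shorter windows. Each word is normalized
    with one word of context on either side (illustrative assumption).
    """
    windows = []
    for i in range(len(tokens)):
        left = max(0, i - 1)
        windows.append((tokens[i], tokens[left : i + 2]))
    return windows


# Toy dialect-like input (invented example, not from the dataset)
sentence = "mää meen kotia".split()
for word, context in three_word_windows(sentence):
    print(word, "->", " ".join(context))
```

In this setup, a character-level sequence-to-sequence model would receive the window as its source string and predict the standard orthographic form of the center word.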
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2023
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Number of pages: 15
Publisher: The Association for Computational Linguistics
Publication date: 1 Dec 2023
ISBN (electronic): 979-8-89176-061-5
Status: Published - 1 Dec 2023
MoE publication type: A4 Article in a conference publication
Event: Conference on Empirical Methods in Natural Language Processing, Singapore
Duration: 6 Dec 2023 - 10 Dec 2023


  • 113 Computer and information sciences
  • 6121 Languages
