Dialect-to-Standard Normalization: A Large-Scale Multilingual Evaluation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


Text normalization methods have commonly been applied to historical language or user-generated content, but less often to dialectal transcriptions. In this paper, we introduce dialect-to-standard normalization (i.e., mapping phonetic transcriptions from different dialects to the orthographic norm of the standard variety) as a distinct sentence-level character transduction task and provide a large-scale analysis of dialect-to-standard normalization methods. To this end, we compile a multilingual dataset covering four languages: Finnish, Norwegian, Swiss German and Slovene. For the two biggest corpora, we provide three different data splits corresponding to different use cases for automatic normalization. We evaluate the most successful sequence-to-sequence model architectures proposed for text normalization tasks using different tokenization approaches and context sizes. We find that a character-level Transformer trained on sliding windows of three words works best for Finnish, Swiss German and Slovene, whereas the pre-trained ByT5 model using full sentences obtains the best results for Norwegian. Finally, we perform an error analysis to evaluate the effect of different data splits on model performance.
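The sliding-window setup mentioned in the abstract can be sketched roughly as follows. This is an illustrative sketch only, not the authors' preprocessing code: the character-tokenization scheme, the boundary marker `_`, the window stride of one word, and the example dialect sentence are all assumptions made for demonstration.

```python
# Sketch (assumed details, not the paper's actual pipeline): building
# character-level source sequences from sliding windows of three words,
# the setup reported as best for Finnish, Swiss German and Slovene.

def char_tokenize(text):
    """Split a string into space-separated characters,
    marking word boundaries with the token '_'."""
    return " ".join("_" if ch == " " else ch for ch in text)

def sliding_windows(words, size=3):
    """Yield overlapping windows of `size` consecutive words (stride 1)."""
    for i in range(max(1, len(words) - size + 1)):
        yield words[i:i + size]

# Hypothetical dialectal transcription (invented placeholder, not from the dataset).
dialect = "mie oon nyt täälä"

for window in sliding_windows(dialect.split(), size=3):
    print(char_tokenize(" ".join(window)))
```

Each printed line is one character-level source sequence that a sequence-to-sequence model could map to its standard-orthography counterpart; overlapping windows give the model limited cross-word context without requiring full-sentence inputs.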
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2023
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Number of pages: 15
Place of publication: Stroudsburg
Publisher: The Association for Computational Linguistics
Publication date: 1 Dec 2023
ISBN (Electronic): 979-8-89176-061-5
Publication status: Published - 1 Dec 2023
MoE publication type: A4 Article in conference proceedings
Event: Conference on Empirical Methods in Natural Language Processing, Singapore
Duration: 6 Dec 2023 – 10 Dec 2023

Fields of Science

  • 113 Computer and information sciences
  • 6121 Languages
