Morphologically motivated word classes for very large vocabulary speech recognition of Finnish and Estonian

Matti Varjokallio, Sami Virpioja, Mikko Kurimo

Research output: Contribution to journal › Article › Scientific › Peer-reviewed


We study class-based n-gram and neural network language models for very large vocabulary speech recognition of two morphologically rich languages: Finnish and Estonian. Due to morphological processes such as derivation, inflection and compounding, the models need to be trained with vocabulary sizes of several million word types. Class-based language modelling is in this case a powerful approach for alleviating data sparsity and reducing the computational load. For a very large vocabulary, bigram statistics may not be an optimal way to derive the classes. We thus study using the output of a morphological analyzer to derive efficient word classes. We show that efficient classes can be learned by refining the morphological classes into smaller equivalence classes using merging, splitting and exchange procedures with suitable constraints. This type of classification can improve the results, particularly when the language model training data is not very large. We also extend the previous analyses by rescoring the hypotheses obtained from a very large vocabulary recognizer using class-based neural network language models. We show that despite the fixed vocabulary, carefully constructed classes for word-based language models can in some cases result in lower error rates than subword-based unlimited vocabulary language models.
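As background to the abstract, a class-based bigram model factors the word probability as p(w_i | w_{i-1}) = p(c(w_i) | c(w_{i-1})) * p(w_i | c(w_i)), where c(.) maps each word to its class. The sketch below is a minimal, hypothetical illustration of this factorization, not the paper's implementation: the word-to-class map stands in for classes that would come from a morphological analyzer, and the toy Finnish corpus and maximum-likelihood estimates are assumptions for the example.

```python
import math
from collections import Counter

# Hypothetical word-to-class map; in the paper, classes are derived by
# refining morphological analyzer output, not hand-assigned like this.
word2class = {"talo": "NOUN", "talossa": "NOUN", "iso": "ADJ", "isossa": "ADJ"}

# Toy training corpus of tokenized sentences (illustrative only).
corpus = [["iso", "talo"], ["isossa", "talossa"], ["iso", "talossa"]]

class_bigrams = Counter()   # count(c_{i-1}, c_i)
history_counts = Counter()  # count of a class appearing as bigram history
class_counts = Counter()    # count(c), denominator of p(w | c)
word_counts = Counter()     # count(w)

for sent in corpus:
    classes = [word2class[w] for w in sent]
    for c1, c2 in zip(classes, classes[1:]):
        class_bigrams[(c1, c2)] += 1
        history_counts[c1] += 1
    for w, c in zip(sent, classes):
        word_counts[w] += 1
        class_counts[c] += 1

def logprob(prev_word: str, word: str) -> float:
    """Maximum-likelihood class-bigram log-probability of `word` given `prev_word`.

    p(w | w_prev) = p(c(w) | c(w_prev)) * p(w | c(w)).
    """
    c_prev, c = word2class[prev_word], word2class[word]
    p_class = class_bigrams[(c_prev, c)] / history_counts[c_prev]
    p_word = word_counts[word] / class_counts[c]
    return math.log(p_class * p_word)
```

Because probabilities are estimated over classes rather than individual words, every word in a class shares the same transition statistics, which is what makes the approach attractive for multi-million-word Finnish and Estonian vocabularies.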
Original language: English
Article number: 101141
Journal: Computer Speech and Language
Number of pages: 19
Publication status: Published - Mar 2021
MoE publication type: A1 Journal article-refereed

Fields of Science

  • Language modelling
  • Class-based language models
  • Morphologically rich languages
  • 6121 Languages