On the differences between BERT and MT encoder spaces and how to address them in translation tasks

Research output: Article in book/report/conference proceedings › Conference article › Scientific › Peer-reviewed

Abstract

Various studies show that pretrained language models such as BERT cannot straightforwardly replace encoders in neural machine translation (NMT), despite their enormous success in other tasks. This is all the more surprising given the similarity of the two architectures. This paper sheds light on the embedding spaces they create, comparing them with average cosine similarity, contextuality metrics, and measures of representational similarity, and reveals that BERT and NMT encoder representations look significantly different from one another. To address this issue, we propose a supervised transformation from one space into the other, using explicit alignment and fine-tuning. Our results demonstrate the need for such a transformation to improve the applicability of BERT in MT.
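One of the comparison measures named in the abstract, average cosine similarity, can be illustrated with a minimal sketch. The helper below and the toy matrices standing in for BERT and NMT-encoder token embeddings are illustrative assumptions, not the paper's actual setup; in practice the rows would be contextual embeddings extracted for the same tokens from the two models.

```python
import numpy as np

def avg_cosine_similarity(reps_a, reps_b):
    """Mean cosine similarity between corresponding rows of two
    (n_tokens, dim) representation matrices."""
    a = reps_a / np.linalg.norm(reps_a, axis=1, keepdims=True)
    b = reps_b / np.linalg.norm(reps_b, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))

# Toy stand-ins for BERT and NMT-encoder embeddings of the same 100 tokens
rng = np.random.default_rng(0)
bert_like = rng.normal(size=(100, 768))
nmt_like = rng.normal(size=(100, 768))

print(avg_cosine_similarity(bert_like, bert_like))  # identical spaces: 1.0
print(avg_cosine_similarity(bert_like, nmt_like))   # unrelated spaces: near 0
```

A low average similarity between representations of the same tokens is one symptom of the mismatch the paper describes; the proposed alignment and fine-tuning aim to move the two spaces closer together.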
Original language: English
Title: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop
Editors: Jad Kabbara, Haitao Lin, Amandalynne Paullada, Jannis Vamvas
Number of pages: 11
Place of publication: Stroudsburg
Publisher: The Association for Computational Linguistics
Publication date: Aug 2021
Pages: 337-347
ISBN (print): 978-1-954085-55-8
DOI / permanent links
Status: Published - Aug 2021
OKM publication type: A4 Article in conference proceedings
Event: The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021) - Bangkok [Online event]
Duration: 5 Aug 2021 - 6 Aug 2021
Conference number: 59/11

Fields of science

  • 113 Computer and information sciences
  • 6121 Languages

Cite this