Human Evaluation of Creative NLG Systems: An Interdisciplinary Survey on Recent Papers

Research output: Chapter in book/report/conference proceeding › Conference article › Scientific › peer-reviewed

Abstract

We survey human evaluation in papers presenting work on creative natural language generation published at INLG 2020 and ICCC 2020. The most common human evaluation method is a scaled survey, typically on a 5-point scale, though many less common methods exist. The most frequently evaluated parameters are meaning, syntactic correctness, novelty, relevance, and emotional value, among many others. Our guidelines for future evaluation include clearly defining the goal of the generative system, asking questions that are as concrete as possible, testing the evaluation setup, using multiple different evaluation setups, reporting the entire evaluation process and potential biases clearly, and finally analyzing the evaluation results in a more profound way than merely reporting the most typical statistics.
Original language: English
Title: Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)
Editors: Antoine Bosselut [et al.]
Number of pages: 12
Place of publication: Stroudsburg
Publisher: The Association for Computational Linguistics
Publication date: 2021
Pages: 84–95
ISBN (electronic): 978-1-954085-67-1
DOI - permanent links
Status: Published - 2021
OKM publication type: A4 Article in conference proceedings
Event: Workshop on Natural Language Generation, Evaluation, and Metrics - [Online event], Bangkok, Thailand
Duration: 6 Aug 2021 – 6 Aug 2021
Conference number: 1
https://gem-benchmark.com/workshop

Fields of science

  • 6121 Languages
  • 113 Computer and information sciences
