Using Crowdsourced Exercises for Vocabulary Training to Expand ConceptNet

Christos Rodosthenous, Verena Lyding, Federico Sangati, Alexander König, Umair Ul Hassan, Lionel Nicolas, Jolita Horbacauskiene, Anisia Katinskaia, Lavinia Aparaschivei

Research output: Conference materials › Paper › peer-review


In this work, we report on a crowdsourcing experiment conducted using the V-TREL vocabulary trainer, which is accessed via a Telegram chatbot interface, to gather knowledge on word relations suitable for expanding ConceptNet. V-TREL is built on top of a generic architecture implementing the implicit crowdsourcing paradigm in order to offer vocabulary training exercises generated from the commonsense knowledge base ConceptNet and -- in the background -- to collect and evaluate the learners' answers to extend ConceptNet with new words. In the experiment, about 90 university students learning English at C1 level, based on the Common European Framework of Reference for Languages (CEFR), trained their vocabulary with V-TREL over a period of 16 calendar days. The experiment allowed us to gather more than 12,000 answers from learners on different question types. In this paper we present the experimental setup and its outcome in detail, which indicates the potential of our approach both for crowdsourcing data and for fostering vocabulary skills.
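The exercise-generation step described above can be sketched in simplified form: given knowledge-base triples of the shape (relation, start, end), a multiple-choice item asks the learner for the word related to a target via a given relation. The triples and the `make_exercise` helper below are illustrative assumptions, not the actual V-TREL implementation or experiment data.

```python
import random

# Illustrative ConceptNet-style triples (relation, start, end).
# Hand-picked examples for the sketch, not data from the experiment.
EDGES = [
    ("IsA", "dog", "animal"),
    ("IsA", "cat", "animal"),
    ("UsedFor", "pen", "writing"),
    ("PartOf", "wheel", "car"),
]

def make_exercise(target, relation, edges, seed=0):
    """Turn knowledge-base edges into a multiple-choice vocabulary item:
    the learner must pick the word related to `target` by `relation`."""
    # Correct options: edges matching the target and relation.
    answers = [e for r, s, e in edges if r == relation and s == target]
    # Distractors: end words from unrelated edges.
    distractors = [e for r, s, e in edges if e not in answers]
    options = answers + distractors
    random.Random(seed).shuffle(options)  # fixed seed for reproducibility
    return {
        "prompt": f"Which word is related to '{target}' via {relation}?",
        "answers": answers,
        "options": options,
    }

item = make_exercise("dog", "IsA", EDGES)
```

In the reverse direction (the crowdsourcing side), learner answers not yet present as edges would be candidate new words for ConceptNet, subject to aggregation and evaluation across learners.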
Original language: English
Number of pages: 316
Publication status: Published - 2020
MoE publication type: Not Eligible
Event: Language Resources and Evaluation Conference - [LREC 2020 was cancelled]
Duration: 11 May 2020 - 16 May 2020
Conference number: 12


Conference: Language Resources and Evaluation Conference
Abbreviated title: LREC 2020
Other: The 12th edition of the Language Resources and Evaluation Conference was cancelled due to the COVID-19 pandemic.

Fields of Science

  • 113 Computer and information sciences
