ORCID | DBLP | Scopus | Google Scholar | ResearchGate | Web Of Science | Semantic Scholar | MathSciNet
2020
Rodosthenous, Christos; Lyding, Verena; Sangati, Federico; König, Alexander; ul Hassan, Umair; Nicolas, Lionel; Horbacauskiene, Jolita; Katinskaia, Anisia; Aparaschivei, Lavinia: Using Crowdsourced Exercises for Vocabulary Training to Expand ConceptNet. In: Proceedings of the 12th Language Resources and Evaluation Conference, pp. 307–316, European Language Resources Association, Marseille, France, 2020, ISBN: 979-10-95546-34-4.
Tags: ConceptNet, enetCollect, Language learning
Abstract: In this work, we report on a crowdsourcing experiment conducted with the V-TREL vocabulary trainer, which is accessed via a Telegram chatbot interface, to gather knowledge on word relations suitable for expanding ConceptNet. V-TREL is built on top of a generic architecture implementing the implicit crowdsourcing paradigm in order to offer vocabulary training exercises generated from the commonsense knowledge base ConceptNet and, in the background, to collect and evaluate the learners' answers to extend ConceptNet with new words. In the experiment, about 90 university students learning English at C1 level, based on the Common European Framework of Reference for Languages (CEFR), trained their vocabulary with V-TREL over a period of 16 calendar days. The experiment allowed us to gather more than 12,000 answers from learners on different question types. In this paper we present in detail the experimental setup and the outcome of the experiment, which indicates the potential of our approach both for crowdsourcing data and for fostering vocabulary skills.
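The exercise-generation step described in this abstract (questions derived from ConceptNet relations) can be illustrated with a minimal sketch against ConceptNet's public Web API at api.conceptnet.io. The relation choice, exercise format, and function names below are illustrative assumptions, not the actual V-TREL generation logic.

```python
# Minimal sketch: build a "related word" exercise from ConceptNet's public
# Web API. The exercise format and relation choice are illustrative
# assumptions, not the V-TREL implementation.
import random
import requests

def fetch_related(term, rel="/r/RelatedTo", limit=10):
    """Return English terms linked to `term` by the given ConceptNet relation."""
    resp = requests.get(
        "https://api.conceptnet.io/query",
        params={"start": f"/c/en/{term}", "rel": rel, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    edges = resp.json().get("edges", [])
    return [e["end"]["label"] for e in edges if e["end"].get("language") == "en"]

def make_exercise(term):
    """Build a simple prompt/answer pair asking for a word related to `term`."""
    related = fetch_related(term)
    if not related:
        return None
    return {"prompt": f"Name a word related to '{term}'.", "answer": random.choice(related)}

if __name__ == "__main__":
    print(make_exercise("bicycle"))
```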
2019
Lyding, Verena; Rodosthenous, Christos; Sangati, Federico; ul Hassan, Umair; Nicolas, Lionel; König, Alexander; Horbacauskiene, Jolita; Katinskaia, Anisia: v-trel: Vocabulary Trainer for Tracing Word Relations - An Implicit Crowdsourcing Approach. In: Angelova, Galia; Mitkov, Ruslan; Nikolova, Ivelina; Temnikova, Irina (Eds.): Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2019, pp. 675–684, Varna, Bulgaria, 2019.
Tags: ConceptNet, crowdsourcing, Language learning, Natural Language Processing
Abstract: In this paper, we present our work on developing a vocabulary trainer that uses exercises generated from language resources such as ConceptNet and crowdsources the learners' responses to enrich the language resource. We performed an empirical evaluation of our approach with 60 non-native speakers over two days, which shows that new entries to expand ConceptNet can be gathered efficiently through vocabulary exercises on word relations. We also report on the feedback gathered from the users and from a language-teaching expert, and discuss the potential of the vocabulary trainer application from the user and language learner perspective. The feedback suggests that v-trel has educational potential, although some shortcomings could be identified in its current state.
Rodosthenous, Christos; Lyding, Verena; König, Alexander; Horbacauskiene, Jolita; Katinskaia, Anisia; ul Hassan, Umair; Isaak, Nicos; Sangati, Federico; Nicolas, Lionel: Designing a Prototype Architecture for Crowdsourcing Language Resources. In: Declerck, Thierry; McCrae, John P. (Eds.): Proceedings of the Poster Session of the 2nd Conference on Language, Data and Knowledge (LDK 2019), pp. 17–23, CEUR, 2019.
Tags: Commonsense Knowledge, ConceptNet, crowdsourcing, enetCollect, Knowledge Bases, Language learning, Language Resources, Lexicon
Abstract: We present an architecture for crowdsourcing language resources from language learners and a prototype implementation of it as a vocabulary trainer. The vocabulary trainer relies on lexical resources from the ConceptNet semantic network to generate exercises, while using the learners' answers to improve the resources used for exercise generation.
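The other half of the architecture described here, feeding learners' answers back into the resource, can be sketched as a simple aggregation step. The agreement threshold, record format, and helper names below are assumptions for illustration only, not the prototype's actual design.

```python
# Sketch: aggregate crowdsourced learner answers and flag candidate new
# ConceptNet edges once enough independent learners agree. Threshold and
# record format are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Answer:
    term: str        # prompt word shown to the learner
    relation: str    # targeted ConceptNet relation, e.g. "/r/RelatedTo"
    response: str    # word the learner typed or picked

def candidate_edges(answers, min_agreement=3):
    """Return (term, relation, response) triples produced independently by at
    least `min_agreement` learners, as candidates for extending the resource."""
    counts = Counter((a.term, a.relation, a.response.lower().strip()) for a in answers)
    return [triple for triple, n in counts.items() if n >= min_agreement]

# Usage: collect Answer records from the trainer, then review the returned
# candidates (or check them against the existing knowledge base) before adding.
```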