Publication | Open Access
Evaluation of automatically generated English vocabulary questions
Year: 2017 | Citations: 22 | References: 17
Keywords: Psycholinguistics, Language Learning, Corpus Linguistics, Social Sciences, Natural Language Processing, Computational Linguistics, Language Acquisition, Language Engineering, English Learners, Language Studies, Automated Assessment, Evaluation Experiments, Lexicon, Cognitive Science, Question Answering, Language Technology, Target Word, Lexical Resource, English Vocabulary Questions, Language Comprehension, Linguistics
This paper describes the evaluation experiments for questions created by an automatic question generation system. Given a target word and one of its word senses, the system generates a multiple-choice English vocabulary question that asks for the word closest in meaning to the target word as used in a reading passage. Two kinds of evaluation were conducted, considering two aspects: (1) how well the questions measure English learners' proficiency and (2) how similar they are to human-made questions. The first evaluation is based on responses collected by administering both the machine-generated and the human-made questions to English learners; the second is based on subjective judgements by English teachers. Both evaluations showed that the machine-generated questions reached a level comparable to the human-made questions, both in measuring English proficiency and in similarity to human-made items.
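To make the question format concrete, the following is a minimal sketch of the multiple-choice item structure the abstract describes (a target word in a passage, one correct closest-in-meaning option, and distractors). All function and variable names here are hypothetical illustrations; the paper's actual generation method, sense inventory, and distractor-selection strategy are not reproduced.

```python
# Hypothetical sketch of the closest-in-meaning question format; the
# synonym and distractors are assumed to come from an upstream lexical
# resource, which this sketch does not model.
import random

def build_vocab_question(passage, target, synonym, distractors, seed=0):
    """Assemble a multiple-choice item asking for the word closest in
    meaning to `target` as used in `passage`."""
    choices = distractors + [synonym]
    random.Random(seed).shuffle(choices)  # deterministic shuffle for the demo
    return {
        "stem": f'In the passage "{passage}", the word closest in '
                f'meaning to "{target}" is:',
        "choices": choices,
        "answer": choices.index(synonym),  # index of the correct option
    }

q = build_vocab_question(
    "The committee will convene at noon.",
    target="convene",
    synonym="assemble",
    distractors=["disperse", "decide", "postpone"],
)
```

The dictionary `q` holds the stem, the shuffled options, and the index of the key, which is the minimal information needed to administer such an item to learners, as in the paper's first evaluation.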