An Item Response Theory Approach to Enhance Peer Assessment Effectiveness in Massive Open Online Courses


Minoru Nakayama, Filippo Sciarrone, Marco Temperini, Masaki Uto
International Journal of Distance Education Technologies (IJDET)
Copyright: © 2022 | Volume: 20 | Issue: 1 | Pages: 19
ISSN: 1539-3100 | EISSN: 1539-3119 | EISBN13: 9781799893424 | DOI: 10.4018/IJDET.313639
Cite Article

MLA

Nakayama, Minoru, et al. "An Item Response Theory Approach to Enhance Peer Assessment Effectiveness in Massive Open Online Courses." IJDET, vol. 20, no. 1, 2022, pp. 1-19. http://doi.org/10.4018/IJDET.313639

APA

Nakayama, M., Sciarrone, F., Temperini, M., & Uto, M. (2022). An Item Response Theory Approach to Enhance Peer Assessment Effectiveness in Massive Open Online Courses. International Journal of Distance Education Technologies (IJDET), 20(1), 1-19. http://doi.org/10.4018/IJDET.313639

Chicago

Nakayama, Minoru, et al. "An Item Response Theory Approach to Enhance Peer Assessment Effectiveness in Massive Open Online Courses." International Journal of Distance Education Technologies (IJDET) 20, no. 1 (2022): 1-19. http://doi.org/10.4018/IJDET.313639


Abstract

Massive open online courses (MOOCs) are effective and flexible resources to educate, train, and empower populations. Peer assessment (PA) provides a powerful pedagogical strategy to support educational activities and foster learners' success, even when a very large number of learners is involved. Item response theory (IRT) can model students' features, such as the skill to accomplish a task and the capability to mark tasks. In this paper, the authors investigate the applicability of IRT models to PA in the learning environments of MOOCs. The main goal is to evaluate the relationships between students' IRT parameters (ability, strictness) and PA parameters (number of graders per task, and rating scale). The authors use a dataset simulating a large class (1,000 peers), in which the students' skill at accomplishing a task is drawn from a Gaussian distribution. The IRT analysis of the PA data indicates that the best estimate of peers' ability is obtained when 15 raters per task are used, with a [1,10] rating scale.
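As an illustration only (not the authors' code, nor their exact IRT model), the following Python sketch shows how a simulated PA dataset of this kind could be built: 1,000 peers whose skill is drawn from a Gaussian distribution, each submission graded by 15 peer raters on a [1,10] scale, with a simple logistic rater model whose ability (theta) and strictness (beta) parameters stand in for the IRT quantities discussed in the paper. All variable names, distributions, and noise levels here are assumptions.

import numpy as np

rng = np.random.default_rng(0)

N_PEERS = 1000          # simulated class size
N_RATERS_PER_TASK = 15  # peer raters assigned to each submission
K = 10                  # grades on a [1, 10] rating scale

theta = rng.normal(0.0, 1.0, N_PEERS)  # ability: skill to accomplish the task
beta = rng.normal(0.0, 0.5, N_PEERS)   # strictness: severity when marking peers

def rate(author_theta, rater_beta):
    """Turn the ability/strictness difference into a 1..K grade (logistic link plus noise)."""
    z = author_theta - rater_beta + rng.normal(0.0, 0.3)
    p = 1.0 / (1.0 + np.exp(-z))
    return int(np.clip(round(1 + p * (K - 1)), 1, K))

rows = []  # (author, rater, grade) triples: the raw PA data an IRT analysis would fit
for author in range(N_PEERS):
    others = np.delete(np.arange(N_PEERS), author)
    for rater in rng.choice(others, size=N_RATERS_PER_TASK, replace=False):
        rows.append((author, int(rater), rate(theta[author], beta[rater])))

Feeding the resulting (author, rater, grade) triples to an IRT estimation routine, and repeating the simulation while varying N_RATERS_PER_TASK and K, reproduces the kind of comparison the abstract describes between the number of graders per task, the rating scale, and the quality of the ability estimates.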