Group peer assessment for summative evaluation in a graduate-level statistics course for ecologists

Althea ArchMiller, John Fieberg, J. D. Walker, Noah Holm

Research output: Contribution to journal › Article

3 Scopus citations

Abstract

Peer assessment is often used for formative learning, but few studies have examined the validity of group-based peer assessment for the summative evaluation of course assignments. The present study contributes to the literature by using online technology (the course management system Moodle) to implement structured, summative peer review based on an anchored rubric in an ecological statistics course taught to graduate students. We found that grade discrepancies between students and the instructor were fairly common (60% of assignments), relatively low in value (mean = 3.3 ± 2.5% on assignments that had discrepancies), and proportionally higher for criteria related to interpretation of statistical results and code quality and organisation than for criteria related to the successful completion of analysis or instructional tasks (e.g. fitting particular statistical methods, de-identification of one's submission). Students reported that the peer assessment process increased their exposure to alternative ways of approaching statistical and computational problem-solving, but concerns were raised about the fairness of the process and the effectiveness of the group component. We conclude with some recommendations for implementing peer assessment to maximise student learning and satisfaction.

Original language: English (US)
Pages (from-to): 1208-1220
Number of pages: 13
Journal: Assessment and Evaluation in Higher Education
Volume: 42
Issue number: 8
DOIs
State: Published - Nov 17 2017

Keywords

  • Moodle
  • peer assessment
  • rubrics
  • statistics education