Joyner, D., Ashby, W., Irish, L., Lam, Y., Langston, J., Lupiani, I., Lustig, M., Pettoruto, P., Sheahen, D., Smiley, A., Bruckman, A., & Goel, A. (2016). Graders as Meta-Reviewers: Simultaneously Scaling and Improving Expert Evaluation for Large Online Classrooms. In Proceedings of the Third Annual Conference on Learning at Scale. Edinburgh, United Kingdom.
Large classes, both online and residential, typically require many graders to evaluate students' written work. Some classes attempt to use autograding or peer grading, but both approaches present challenges for assigning grades at for-credit institutions, such as the difficulty of autograding free-response answers and the lack of expert oversight in peer grading. In a large online class at Georgia Tech in Summer 2015, we experimented with a new approach to grading: framing graders as meta-reviewers, charged with evaluating the original work in the context of peer reviews. To evaluate this approach, we conducted a pair of controlled experiments and several qualitative analyses. We found that access to peer reviews improves the perceived quality of the feedback graders provide, without decreasing graders' efficiency and with only a small influence on the grades assigned.