I use peer review a lot in my classes. There are lots of tools out there dedicated to facilitating peer review. Canvas, Coursera, edX, and other learning management systems have their own in-built tools. I use a tool called Peer Feedback, but there’s also PeerStudio, TurnItIn’s PeerMark, Peerceptiv, Peergrade, and probably a half-dozen more platforms that have launched between when I wrote this and when you’re reading this.
First, to get this out of the way: I only use peer review for review, not for grading. There’s significant literature out there on how to design peer review activities that can generate reliable grades, but they require a ton of effort and oversight, not to mention they generally only work in specific fields and at specific levels. So, this isn’t about using peer review to generate grades, but rather just about using it as a learning activity for both writers and recipients.
In my experience, peer review has a mixed reputation. On the one hand, in more systematic surveys, students in my classes generally reflect positively on it. One of my end-of-course survey questions asks, “On a scale of 1 to 7, please rate your agreement with the statement: ‘The Peer Feedback system has improved my experience in this class.’” In my three classes that used peer review last summer, 69%, 54%, and 59% of students selected 5 (“Agree Slightly”) or higher (“Agree” or “Strongly Agree”). Those numbers appear consistent over time: in summer 2020, they were 69%, 55%, and 58%.
At the same time, though, when you look around Reddit and other places where students share their feedback, there’s a general feeling that peer review isn’t helpful. The evidence cited for that feeling is that students don’t feel their classmates put much effort into the peer reviews they write. This has become more pronounced recently as students who previously might have written a short throwaway review instead submit a long, vague, clearly AI-generated one.
Now, first, every semester I go through all the peer reviews submitted in my classes. I generally see them break down into thirds: one-third are truly insightful and substantive, with multiple takeaways for the reader; one-third are a bit light, but contain at least one piece of actionable feedback; and one-third are entirely or borderline non-substantive. A portion of this last third (typically around a third of this third) isn’t even given credit for participating. So I do see that, on average, every student receives some good feedback on every assignment (although that fraction diminishes a bit over time).
But that aside: embedded in this critique is the assumption that the chief value of peer review is in the feedback one receives from one’s classmates. Personally, though, I see four primary pedagogical roles for peer review:
- To see others’ approaches to a similar problem. There is pedagogical value in learning from others’ examples, in seeing how others approach the same task and comparing their approaches to one’s own, and in seeing the space of possible answers to a question or task.
- To alter one’s mindset from generator of content to critic of content. There is a fundamentally different thought process at work when one examines existing work and tries to identify its weaknesses than when one generates content on one’s own. This different process yields different learning outcomes.
- To remind one that one is surrounded by a community of peers. The weekly reminder that there are dozens (or hundreds, in my case) of other students going through the same assignments yields some sense of connection. We can see the converse of this in MOOCs: when I take a MOOC, I’ll often submit an assignment and then have to wait months to receive a peer review. It’s a lonely feeling. Seeing classmates’ work weekly is a reminder that one is part of a broader community.
- To get additional feedback on one’s own work from one’s peers.
This breakdown is why I see tremendous value in using peer review in education even when one might not receive great feedback from one’s peers: three of the four pedagogical benefits of peer review come from the act of giving a review, not from the act of receiving one. In fact, in my upcoming book, I even recommend teachers use generative AI to generate text solely for students to review and critique: not because the AI will learn from their criticism, but because critiquing AI-generated text still captures the first and second pedagogical benefits above.
I can completely understand the frustration that can come from feeling like one is giving great feedback but getting little in return, but the act of giving that good feedback is such a valuable learning activity on its own. This is, similarly, why I grade the peer reviews one submits: not just to give credit for helping one’s classmates, but because a good peer review is a proxy for what one learned from the process of writing it.