Joyner, D. A. (2016). Scaling the human expert’s role in large online programs. In Proceedings of Learning with MOOCs III. Philadelphia, PA.
Eighteen months ago, Udacity transitioned from emphasizing standalone MOOCs to developing comprehensive, vocational, project-based “Nanodegree” programs. With these Nanodegree programs came a challenge: the programs were constructed around a series of partially open-ended projects with considerable leeway for student creativity and exploration. Such projects could not be graded automatically. Peer grading, a staple of many MOOC providers, presented similar issues: while reliable in some contexts (Falchikov & Goldfinch 2000) and a useful learning activity (Li, Liu, & Steckelberg 2010), it fails to provide expert-level formative feedback (Joyner 2016), presents challenges for the reputation and accreditation of for-credit programs (Joyner et al. 2016), and struggles with advanced material.
To address these challenges, Udacity leveraged the freelance economy to inject expert-level feedback in two places throughout Nanodegree programs: through qualified forum mentors and through expert project evaluators. The success of this program has demonstrated the unique role human educators play in delivering education at scale.
Forum mentors are qualified experts in the area of the Nanodegree program, most commonly experienced professionals in the field or exemplary Nanodegree graduates. Mentors are freelancers, paid to assist students via classroom forums. The forum mentor program launched in August 2015, and as a result, the capacity of the Nanodegree programs to supply expert feedback to students has increased tremendously. More importantly, students are exposed to a more varied audience of experienced professionals in their target fields.
Like forum mentors, project reviewers are typically experienced professionals or exemplary Nanodegree graduates, paid per project they evaluate. After acceptance into the project reviewer pool and training on a set of previously-evaluated projects, reviewers may begin selecting projects to review. Reviewers assess projects on a number of criteria and award a passing evaluation to students whose submissions meet expectations. If a submission does not meet expectations, the expert provides written feedback for the student to incorporate before resubmitting. A full description is given in Joyner (2016). At present, this process evaluates thousands of open-ended projects a month, with an average student satisfaction rating of 4.9/5.0 and a median turnaround time of less than two hours.
These systems have allowed Udacity’s Nanodegree programs to scale to over 10,000 enrolled students while still providing every student access to human mentors and evaluators. More importantly, these approaches have led to rich pedagogical benefits. The availability of freelance coaches and reviewers means students receive feedback quickly enough to engage in rapid revision and learning cycles. The expert-level quality of this feedback individualizes the experience for every student. The human element personalizes the experience, emphasizing to students the presence of a real person who cares about their progress. Thus, direct human involvement may play a critical role in scaling online education.
David Joyner is a course developer with Udacity and an instructor with the Georgia Tech Online MS in CS program. Although he works for Udacity, he was not involved in the development or administration of Udacity’s project review system, and instead has evaluated it independently. We are grateful to Yael Goshen, Christine Hall, Oliver Cameron, Shernaz Daver, and Jeannie Hornung for information supplied in support of this work.
Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287-322.
Joyner, D. A. (2016). Expert Evaluation of 300 Projects Per Day. In Proceedings of the Third Annual ACM Conference on Learning at Scale. Edinburgh, Scotland.
Joyner, D. A., Ashby, W., Irish, L., Lam, Y., Langston, J., Lupiani, I., Lustig, M., Pettoruto, P., Sheahen, D., Smiley, A., Bruckman, A., & Goel, A. (2016). Graders as Meta-Reviewers: Simultaneously Scaling and Improving Expert Evaluation for Large Online Classrooms. In Proceedings of the Third Annual ACM Conference on Learning at Scale. Edinburgh, Scotland.
Li, L., Liu, X., & Steckelberg, A. L. (2010). Assessor or assessee: How student learning improves by giving and receiving peer feedback. British Journal of Educational Technology, 41(3), 525-536.