Four Papers Accepted to Learning @ Scale 2016

I am pleased to announce that all four of our submissions to Learning @ Scale 2016 were accepted: one full paper and three short papers. Three of the papers are based on our work in the Georgia Tech OMSCS program:

  • A full paper, titled “Graders as Meta-Reviewers: Simultaneously Scaling and Improving Expert Evaluation for Large Online Classrooms”, on leveraging peer reviews to improve the efficiency and usefulness of expert review and grading. This paper is co-authored by nine OMSCS students (Wade Ashby, Liam Irish, Yeeling Lam, Jacob Langston, Isabel Lupiani, Mike Lustig, Paige Pettoruto, Dana Sheahen, Angela Smiley), as well as myself, Amy Bruckman, and Ashok Goel.
  • A short paper, titled “The Unexpected Pedagogical Benefits of Making Higher Education Accessible”, on the positive effect accessibility has had on the pedagogy in OMSCS classes. This paper is co-authored by myself, Ashok Goel, and Charles Isbell.
  • A short paper, titled “Designing Videos with Pedagogical Strategies: Online Students’ Perceptions of Their Effectiveness”, on effective strategies for designing videos to use in online courses. This paper is co-authored by myself, Chaohua Ou, Ashok Goel, and Daniel Haynes.

The fourth paper, a short paper titled “Expert Evaluation of 300 Projects per Day”, is based instead on Udacity’s evaluation of Nanodegree projects; I am its sole author.

For the abstracts of the papers, check after the jump. Full versions will be available here after the conference.

Graders as Meta-Reviewers: Simultaneously Scaling and Improving Expert Evaluation for Large Online Classrooms

Large classes, both online and residential, typically demand many graders for evaluating students’ written work. Some classes attempt to use autograding or peer grading, but both present challenges for assigning grades at for-credit institutions, such as the difficulty of using autograders to evaluate free-response answers and the lack of expert oversight in peer grading. In a large online class at Georgia Tech in Summer 2015, we experimented with a new approach to grading: framing graders as meta-reviewers, charged with evaluating the original work in the context of peer reviews. To evaluate this approach, we conducted a pair of controlled experiments and a handful of qualitative analyses. We found that having access to peer reviews improves the perceived quality of feedback provided by graders without decreasing the graders’ efficiency and with only a small influence on the grades assigned.

Full Version (available after April 24th, 2016)

The Unexpected Pedagogical Benefits of Making Higher Education Accessible

Many ongoing efforts in online education aim to increase accessibility through affordability and flexibility, but some critics have noted that pedagogy often suffers during these efforts. In contrast, in the low-cost, for-credit Georgia Tech Online Master of Science in Computer Science (OMSCS) program, we have observed that the features that make the program accessible also lead to pedagogical benefits. In this paper, we discuss these pedagogical benefits and draw a causal link between them and the factors that increase the program’s accessibility.

Full Version (available after April 24th, 2016)

Designing Videos with Pedagogical Strategies: Online Students’ Perceptions of Their Effectiveness

Despite the ubiquitous use of videos in online learning and an enormous literature on designing online learning, there has been relatively little research on what pedagogical strategies should be used to make the most of video lessons and what constitutes an effective video for student learning. We experimented with a model incorporating four pedagogical strategies, four instructional phases, and four production guidelines in designing and developing video lessons for an online graduate course. In this paper, we share our experience as well as students’ perceptions of the videos’ effectiveness. We also discuss directions for future research.

Full Version (available after April 24th, 2016)

Expert Evaluation of 300 Projects per Day

In October 2014, one-time MOOC developer Udacity completed its transition from primarily producing massive, open online courses to producing job-focused, project-based microcredentials called “Nanodegree” programs. With this transition came a challenge: whereas MOOCs rely on automated assessment and peer-to-peer grading, project-based microcredentials would only be feasible with expert evaluation. With Udacity aiming to enroll tens of thousands of students at a time, project evaluation became the major obstacle. To address this, Udacity developed a system for hiring external experts as project reviewers. A year later, this system has supported project evaluation on a massive scale: 61,000 projects have been evaluated in 12 months, with 50% evaluated within 2.5 hours (and 88% within 24 hours) of submission. More importantly, students rate the feedback they receive very highly, at 4.8/5.0. In this paper, we discuss the structure of the project review system, including the nature of the projects, the structure of the feedback, and the data described above.

Full Version (available after April 24th, 2016)
