One of the common criticisms of MOOCs is that they cater only to self-driven, self-motivated, self-regulated learners. To address this, we focus on developing courses that serve the rest of learners. But are we focusing on the wrong side of the issue? Instead of developing courses that cater to non-autodidacts, could we instead create more autodidacts in the first place?
This morning, the Bill & Melinda Gates Foundation posted this article.
The article gives the demographics of college students in the United States and paints a very different picture than the popular conception. Some interesting takeaways:
- 25% of students now receive part of their college education online.
- 26% already have children.
- 83% receive some kind of financial aid, and 72% are employed while pursuing their degree.
To keep up with these portions of the college population, it’s clear that higher education needs to get cheaper, more flexible, and more accessible. Massive numbers of students can’t afford to take years off from work to pursue a higher degree, and the inability to do so should not preclude them from world-class education.
It’s important to remember that college is not just a place for students to learn from professors; it is also a place for students to learn from one another. The unique perspective, industry experience, and life experience of these non-traditional students should not simply be accommodated; it should be leveraged.
Educause Review Online posted an analysis from Justin Reich looking at what predicts MOOC completion rates. Many have criticized MOOCs for their low completion rates, while others have suggested completion rates aren’t the best metric by which to evaluate MOOCs’ success. This analysis puts a spotlight on the question: what does predict MOOC completion rate, and what does that tell us about the value of the metric?
The results support both sides. On the one hand, student intent is a powerful predictor of course completion:
The study found that, on average among survey respondents, 22 percent of students who intended to complete a course earned a certificate, compared with 6 percent of students who intended to browse a course.
It stands to reason that when evaluating completion rates, we shouldn’t be including those students who never intended to complete the course in the first place. This supports the argument of those who suggest completion rates are too simple and inaccurate a metric: few if any traditional students start a college course without intending to finish it, yet around half of MOOC students start a course without any firm intent to complete the course.
At the same time, however, 22% is still a rather low percentage. To draw a crude comparison, 75% of college freshmen return for their sophomore year. It’s reasonable to speculate, then, that at least 75% of college students complete the courses they begin, while only 22% of MOOC students who intend to finish a course do so.
Either way, this information gives us some powerful insights into the way we ought to be evaluating MOOC success. Completion rate out of all students who begin a course is clearly not a good metric, but proponents of MOOCs would likely argue that completion rate simply out of students intending to complete a course is not sufficient either. MOOCs typically lack a pay gate: there is less risk to starting a MOOC you may not finish, and less sunk cost to dropping a MOOC you’ve already started.
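To make the distinction between these two metrics concrete, here is a minimal sketch of the calculation. The cohort sizes are hypothetical (an imagined course of 10,000 enrollees split evenly between intenders and browsers); only the 22% and 6% completion figures come from the survey cited above.

```python
def completion_rates(intenders, intender_completers, browsers, browser_completers):
    """Return (raw_rate, intent_adjusted_rate) as fractions.

    raw_rate counts every enrollee in the denominator; the
    intent-adjusted rate restricts both numerator and denominator
    to students who intended to complete the course.
    """
    enrolled = intenders + browsers
    completed = intender_completers + browser_completers
    raw = completed / enrolled
    adjusted = intender_completers / intenders
    return raw, adjusted

# Hypothetical course: 10,000 enrollees, half intending to finish,
# applying the survey's 22% (intenders) and 6% (browsers) figures.
raw, adjusted = completion_rates(
    intenders=5000, intender_completers=1100,   # 22% of intenders
    browsers=5000, browser_completers=300,      # 6% of browsers
)
print(f"raw: {raw:.0%}, intent-adjusted: {adjusted:.0%}")  # raw: 14%, intent-adjusted: 22%
```

The same course looks substantially worse under the raw metric (14%) than under the intent-adjusted one (22%), which is exactly the gap the two sides of the debate are arguing over.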
This morning, Ashok and I presented our talk at the GVU Brown Bag titled “Putting Online Learning and Learning Sciences Together”. You can watch the talk in its entirety here.
In the talk, one of my main reflections is that the student body of the OMS is incredibly engaged, driven, and invested. They’ve really taken ownership of the success of the program in a way that, in my opinion, in-person students don’t. Perhaps I’m wrong about the students, but I can earnestly say that interacting with them has been the most satisfying and stimulating experience of my educational career.
In the Q&A afterward, however, we received a good question: the current OMS students are a highly self-selected group. They were willing to apply and pay to join an experimental (although accredited) program. It’s not surprising that a group of students willing to be the “early adopters” is more engaged and driven to participate.
Over time, however, will this change? On the one hand, it is easy to see how the shine of the early adopters could fade as the program becomes more routine and respected. In my opinion, the early adopters feel (and rightfully so) that they reflect on the program as much as the program reflects on them because they’re the first cohort of students that will advertise the OMSCS’s greatness to the world. Five years from now, however, will the 10,000th student feel as much ownership of the program as the 10th student did? It’s reasonable to think they won’t, and that the students in the OMS will come to resemble in-person students over time.
On the other hand, could it be that the excellence of the OMS students is due not to their status as early adopters, but rather to their demographics and industry experience? The OMS students are largely professionals with families, and given the program’s undergraduate prerequisite, it is reasonable to expect that this will remain the target audience as time goes on. This audience, with its deeper industry and life experience, may be what drives the increased engagement and sense of ownership. These are students who are used to having responsibility and influence rather than simply being pushed through a system to earn a degree, and it is entirely possible that this audience will remain this invested as time goes on.
But there are other possibilities as well. Maybe the excellence of students in the program is actually a product of the medium itself. The asynchronous online interface provides some rich opportunities for students to take ownership of the class. In person, for example, it’s rare for a student to pose a discussion that dominates the class period; after all, the professor typically enters the class with a lesson plan in mind that ought not to be entirely derailed by student questions. Online, proposing a discussion does not take class time away from the lessons. Thus, perhaps it’s simply the online medium that brings out the best in the students. If that’s the case, then the online environment may not only be a useful alternative to traditional education but may even offer some advantages over it.
In our efforts to scale up education to reach more students without investing more resources, we often make the mistake of trading scale for quality. This was one of the initial major criticisms of MOOCs: sure, they’re massive and accessible and free, but they aren’t actually good educational experiences. There’s often no feedback, no assessment, and no interaction. MOOCs might appeal to the most self-driven, self-regulated learners, but to the vast majority of students their usefulness is limited.
One of the things we’ve experimented with in trying to improve the scale of our OMS class is peer-to-peer feedback: students receive their classmates’ assignments, evaluate them, and return feedback. Primarily, this helps with scale by giving students an alternate source of feedback that doesn’t draw on our limited grading resources. It can also help by giving graders some starting information on what to look for in incoming assignments, directing graders’ attention to those students who need more support, and even offloading parts of the grading process onto the students altogether if peer feedback becomes peer grading.
However, the more interesting thing I’ve observed is that while peer feedback may help with scale, it is also a useful activity in and of itself. We’ve observed four pedagogical benefits to peer-to-peer feedback in our OMS class:
- Increased feedback. The most obvious benefit of peer-to-peer feedback is that students receive additional feedback beyond what they would have received from graders. This feedback often also brings other perspectives, views, and ideas, especially in the OMS class where students come from such diverse professional backgrounds. But increased feedback is the least surprising of these benefits; what else do we gain from peer-to-peer feedback?
- Learning by example. We’ve asked students for a lot of feedback in our course. On the topic of peer-to-peer feedback, the top piece of positive commentary we’ve gotten has had nothing to do with the feedback students receive or give, but rather with the value of simply seeing others’ work. Students comment that seeing the way their classmates approach an assignment helps them understand the strengths, weaknesses, and assumptions of their own answer. This provides some implicit feedback without using a second of grader time.
- Learning by teaching. Asking students to give each other feedback also leads to a bit of a role reversal. Students are no longer merely students; they are also teachers, asked to help one another out. This means students must read assignments with an analytical, critical eye, and picking out the flaws, assumptions, and strengths of others’ assignments is a powerful learning exercise in its own right.
- Authenticity. Generally, students in the OMS program are pursuing careers in software development of some kind. Nearly all software development involves working on a team. Team members are constantly asked to evaluate and use one another’s work, as well as take critique and feedback on their own work from other team members. The peer-to-peer feedback exercise is an authentic replication of one of the higher-order skills students in our program will be asked to exercise professionally.
There are, of course, challenges to overcome. Students reflected in our class that they often do not get good feedback from their peers. The amount of time students actually spent giving feedback to their classmates was often dismal. However, the fundamental structure of peer-to-peer feedback has some powerful pedagogical advantages that are worth pursuing; and, as a happy side effect, it may also help us address the pervasive question of scale.
Quick: what’s the hardest thing you’ve ever learned to do?
I’ll speculate that for many of you, the answer is something you learned in college or in your career: perhaps solving advanced differential equations or articulating the complex nuances of deconstructionist literary analysis. For others, it may be something that posed a severe challenge at an earlier level of education; many students struggle with logarithms, comma splices, or Newton’s laws of motion. From my experience as a tutor, I can say with confidence that many of the difficulties I see with advanced math trace back to lingering trouble students encountered with fractions back in middle school.