A few months ago, I started writing course reviews of the various MOOCs I’ve been taking through Coursera and edX. I had a few goals, including helping students find courses to take, helping employers or admissions offices understand the value of these courses, and helping other educators learn and apply lessons to their own classes.
One of my guiding principles, however, was to stay positive: highlight the good things about each course, not the bad. If a course did something particularly poorly, I’d find a different course that addressed that problem well. I wanted these to be positive experiences for all involved, reviews that the individual course developers and instructors would happily share as critical but overall positive descriptions of their course.
I’ve written four of these reviews so far, and I have near-complete drafts of five more, but I can’t bring myself to post them because, simply, I can’t stay positive. The majority of the MOOCs I’ve taken have not been good.
It came to a head with a MOOC I started and finished earlier today. You read that correctly: it was a five-week MOOC that I started and completed in a 45-minute sitting. My usual workflow is to open the assessment, read the questions, make educated guesses about the answers, then watch the lessons to fill in the picture. I use the quizzes to prime myself on what the lessons will be about. But in this recent MOOC, the assessment questions were terrible. There were ten questions, four multiple-choice and six true/false. The answer to every single multiple-choice question was ‘All of the Above’, and the true/false questions included ones as obvious as, “True/False: One benefit of online education is lower academic integrity” (note: this isn’t an actual question, but rather a paraphrasing of the underlying message of the question).
I went on to complete every assessment in the course — 80 total questions — in 37 minutes. I retried one quiz once to get a 10/10 instead of a 9/10. I received a 38.25/40 on the final exam. Admittedly, I have a background in this course’s subject, so I’m at an advantage. However, during one of the quizzes, I read every question to my wife, who has no background in the course’s material. She got every one right, too. And what’s remarkable is that while this was the clearest example I’ve seen of the lack of rigor behind quizzes in most Coursera courses, it isn’t the exception by any means. With a couple of exceptions (Nanotechnology and Nanosensors, Astronomy: Exploring Time & Space, Poetry in America), every course I’ve taken has been largely populated by rather trivial assessment questions that neither encourage nor test learning. And even in those situations where the answers to the assessments are not obvious, the instant retake function prevents any real learning and error correction from taking place.
All those problems, however, only address one function of MOOCs. Without solid assessments, you’ll get out of a MOOC only what you put into it, and Verified Certificates are valuable only as a forcing function to make you do the work. Even if that were the case, one could still see some value in MOOCs: they make the material available online. Completing a MOOC may mean nothing to an employer or admissions office, but the material itself may still be useful to students.
However, a second disappointing trend has emerged: I have taken a couple of MOOCs with radically, and even dangerously, inaccurate information. This was particularly stark in an education-oriented MOOC I took recently. This course talked about how it is incredibly valuable to have your students take a test to discover their learning style, whether visual, auditory, or kinesthetic. It talked about the importance of providing strong extrinsic motivators to students in every class, no matter the subject, no matter the student. It talked about the value of using the Myers-Briggs test. It talked about helping students identify whether they are more left-brained or right-brained.
Any learning scientists reading this can likely share my rage. There is absolutely no literature to support the existence of these learning styles. The Myers-Briggs is not used in any research settings because, similarly, there is no evidence supporting its validity. Significant research has shown that intrinsic motivation is a more effective motivator than extrinsic motivation, and that providing extrinsic motivators decreases intrinsic motivation. There exists no evidence to support the popular conception of left-brained and right-brained thinking. This course, aimed at training teachers to teach, repeatedly advocated unproven, invalid, and even counterproductive methods.
It occurred to me afterwards, however, that I was only able to identify those problems because of my background in educational research. If I lacked that background, I’d be sending out Myers-Briggs tests and learning-styles surveys to my students this semester because I wouldn’t know any better; I would assume that the people putting together a MOOC on teaching actually know about teaching. Clearly, that assumption is false. So, what about the other MOOCs I’ve taken, in which I don’t have any prior background? They could be similarly delivering falsehoods as facts and I would never even know the difference.
This doubt undermines the idea that you get out of a MOOC what you put into it. It is possible to put a lot of work into taking a MOOC and get nothing but false understandings and misconceptions out of it, because the trust we put in the developers is misplaced.
This is troubling to me for a number of reasons, but the major reason is that it doesn’t have to be this way. I would argue that many of the criticisms launched at MOOCs in the past are cases of overgeneralization: they are not inherent flaws with MOOCs as a concept, but rather flaws with the way MOOCs are designed and delivered today. With so many failed MOOCs, it is tempting to jump to the conclusion that MOOCs are a failure, not that those MOOCs are failures.
I refuse to go that far, however. It’s worth reiterating that I don’t work on MOOCs; our OMSCS program at Georgia Tech involves large online classes, but they are neither massive nor open. Udacity famously moved away from MOOCs a couple of years ago. I maintain, however, that the potential remains for creating MOOCs that are as rigorous, comprehensive, and challenging as traditional college classes, while nonetheless improving feedback, collaboration, and community at scale. The problems are not in MOOCs themselves, but simply in the way MOOCs have been designed so far.
There are solutions to these problems. Write better, more rigorous assessments. Vary the questions students receive on retakes. Limit the frequency of retakes. Deliver accurate course material (this one should go without saying). Leverage peer feedback and hybrid grading approaches to break out of overly objective test structures. All the tools are already available, and it’s entirely possible I’ve simply chosen my MOOCs poorly.
For that reason, though, I’m shifting away from writing a general, informational review of every MOOC I take. Instead, in the future, I’ll be specifically highlighting two things: (a) good MOOCs, and (b) specific strong elements of MOOCs. I hope to still highlight the good things that certain courses are doing, as well as which courses are particularly worth taking, but I can no longer say that I can find something good to say about every MOOC I take. When I can complete a “5-week course” in 37 minutes without opening a single piece of course material, it’s safe to say there’s not going to be much positive to say about the course.