I can’t say anything good about most MOOCs.

A few months ago, I started writing course reviews of the various MOOCs I’ve been taking through Coursera and edX. I had a few goals, including helping students find courses to take, helping employers or admissions offices understand the value of these courses, and helping other educators learn and apply lessons to their own classes.

One of my guiding principles, however, was to stay positive: highlight the good things about each course, not the bad. If a course did something particularly poorly, I’d find a different course that addressed that problem well. I wanted these to be positive experiences for all involved, reviews that the individual course developers and instructors would happily share as critical but overall positive descriptions of their course.

I’ve written four of these reviews so far, and I have near-complete drafts of five more, but I can’t bring myself to post them because, simply, I can’t stay positive. The majority of the MOOCs I’ve taken have not been good.

It came to a head with a MOOC I started and finished earlier today. You read that correctly: it was a five-week MOOC that I started and completed in a 45-minute sitting. My usual workflow is to open the assessment, read the questions, make educated guesses about the answers, then watch the lessons to fill in the picture. I use the quizzes to prime myself on what the lessons will be about. But in this recent MOOC, the assessment questions were terrible. There were ten questions, four multiple-choice and six true/false. The answer to every single multiple-choice question was ‘All of the Above’, and the true/false questions included some as obvious as, “True/False: One benefit of online education is lower academic integrity” (note: this isn’t an actual question, but rather a paraphrasing of the underlying message of the question).

I went on to complete every assessment in the course — 80 total questions — in 37 minutes. I retried one quiz once to get a 10/10 instead of a 9/10. I received a 38.25/40 on the final exam. Admittedly, I have a background in this course’s subject, so I’m at an advantage. However, during one of the quizzes, I read every question to my wife, who has no background in the course’s material. She got every one right, too. And what’s remarkable is that while this was the clearest example I’ve seen of the lack of rigor behind quizzes in most Coursera courses, it is by no means an exception. With a couple of exceptions (Nanotechnology and Nanosensors, Astronomy: Exploring Time & Space, Poetry in America), every course I’ve taken has been largely populated by rather trivial assessment questions that neither encourage nor test learning. And even in those situations where the answers to the assessments are not obvious, the instant retake function prevents any real learning and error correction from taking place.

All those problems, however, only concern one function of MOOCs. Without solid assessments, the best we can say is that you’ll get out of a MOOC what you put into it; Verified Certificates are then valuable only as a forcing function to make you do the work. Even if that were the case, one could still see some value in MOOCs: they make the material available online. Completing a MOOC may mean nothing to an employer or admissions office, but the material may still be useful to students.

However, a second disappointing trend has emerged: I have taken a couple of MOOCs with radically, and even dangerously, inaccurate information. This was particularly acute in an education-oriented MOOC I took recently. This course talked about how it is incredibly valuable to have your students take a test to discover their learning style, whether visual, auditory, or kinesthetic. It talked about the importance of providing strong extrinsic motivators to students in every class, no matter the subject, no matter the student. It talked about the value of using the Myers-Briggs test. It talked about helping students identify whether they are more left-brained or right-brained.

Any learning scientist reading this likely shares my rage. There is absolutely no literature supporting the existence of these learning styles. The Myers-Briggs, similarly, is not used in research settings because there is no evidence supporting its validity. Significant research has shown that intrinsic motivation is a more effective motivator than extrinsic motivation, and that providing extrinsic motivators actually decreases intrinsic motivation. There is no evidence to support the popular conception of left-brained and right-brained thinking. This course, aimed at training teachers to teach, repeatedly advocated unproven, invalid, and even counterproductive methods.

It occurred to me afterwards, however, that I was only able to identify those problems because of my background in educational research. If I lacked that background, I’d be sending out Myers-Briggs tests and learning-styles surveys to my students this semester because I wouldn’t know any better; I would assume that the people putting together a MOOC on teaching actually know about teaching. Clearly, that assumption is false. So what about the other MOOCs I’ve taken, in areas where I have no prior background? They could be similarly delivering falsehoods as facts, and I would never know the difference.

This doubt undermines even the idea that you get out of a MOOC what you put into it. It is possible to put a lot of work into a MOOC and get nothing but false understandings and misconceptions out of it, because the trust we place in the developers is misplaced.

This is troubling to me for a number of reasons, but the major one is that it doesn’t have to be this way. I would argue that many of the criticisms leveled at MOOCs in the past are cases of overgeneralization: they are not inherent flaws of MOOCs as a concept, but rather flaws in the way MOOCs are designed and delivered today. With so many failed MOOCs, it is tempting to jump to the conclusion that MOOCs are a failure, rather than that those MOOCs are failures.

I refuse to go that far, however. It’s worth noting that I don’t work on MOOCs myself: our OMSCS program at Georgia Tech involves large online classes, but they are neither massive nor open, and Udacity famously moved away from MOOCs a couple of years ago. I maintain, however, that the potential remains for MOOCs that are as rigorous, comprehensive, and challenging as traditional college classes while nonetheless improving feedback, collaboration, and community at scale. The problems are not in MOOCs themselves, but simply in the way MOOCs have been designed so far.

There are solutions to these problems. Write better, more rigorous assessments. Vary the questions students receive on retakes. Limit the frequency of retakes. Deliver accurate course material (this one should go without saying). Leverage peer feedback and hybrid grading approaches to break out of overly objective test structures. All the tools are already available, and it’s entirely possible I’ve simply chosen poorly among the MOOCs I’ve taken. (A sketch of what varied, rate-limited retakes might look like follows below.)
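To make the retake fixes concrete, here is a minimal sketch in Python of how a quiz platform might vary questions across retakes and limit retake frequency. Everything here is hypothetical: the class names, the three-attempt limit, and the 24-hour cooldown are assumptions for illustration, not any real platform’s API or policy.

```python
# A hypothetical sketch of two fixes: varying the questions a student
# sees on each retake, and limiting how often retakes are allowed.
import random
import time
from dataclasses import dataclass, field

RETAKE_LIMIT = 3               # assumed policy: at most 3 attempts per quiz
RETAKE_COOLDOWN_S = 24 * 3600  # assumed policy: 24 hours between attempts

@dataclass
class QuestionPool:
    """A pool of interchangeable variants that all test the same concept."""
    concept: str
    variants: list  # each variant is a distinct question on this concept

@dataclass
class QuizSession:
    pools: list                      # one QuestionPool per concept covered
    attempts: int = 0
    last_attempt_at: float = 0.0
    seen: dict = field(default_factory=dict)  # concept -> variants shown so far

    def start_attempt(self, now=None):
        """Return a fresh set of questions, enforcing the retake policy."""
        now = now if now is not None else time.time()
        if self.attempts >= RETAKE_LIMIT:
            raise PermissionError("retake limit reached")
        if self.attempts > 0 and now - self.last_attempt_at < RETAKE_COOLDOWN_S:
            raise PermissionError("cooldown period has not elapsed")
        self.attempts += 1
        self.last_attempt_at = now

        questions = []
        for pool in self.pools:
            # Prefer variants the student has not seen on earlier attempts,
            # so a retake tests the concept rather than recall of the answer.
            seen = self.seen.setdefault(pool.concept, set())
            unseen = [v for v in pool.variants if v not in seen]
            choice = random.choice(unseen or pool.variants)
            seen.add(choice)
            questions.append(choice)
        return questions
```

The key design choice is the per-concept question pool: a retake then re-tests the concept with an unseen variant rather than inviting the student to re-enter a memorized answer, and the cooldown gives real error correction a chance to happen between attempts.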

For that reason, though, I’m shifting away from writing a general, informational review of every MOOC I take. Instead, in the future, I’ll specifically highlight two things: (a) good MOOCs, and (b) specific strong elements of MOOCs. I hope to still call out the good things that certain courses are doing, as well as which courses are particularly worth taking, but I can no longer promise to find something good to say about every MOOC I take. When I can complete a “5-week course” in 37 minutes without opening a single piece of course material, it’s safe to say there’s not going to be much positive to say about it.


4 thoughts on “I can’t say anything good about most MOOCs.”

  1. These are important issues in online education, especially relevant to OMSCS. Do you know of other places where these topics are being discussed?

    By the end of the upcoming semester, we will have had two years of classes in OMSCS. What level of course review are you aware of, either by Udacity or by GT?

    So many questions! I know that many are not appropriate for a blog comment. What is the best way to have these discussions in order to proactively enhance the GT OMSCS program?

    1. To the first: there’s been discussion of hosting a workshop at Georgia Tech to bring together the people working in these areas. I think that would be incredibly valuable; the space, at present, is fragmented and dispersed across numerous units around campus, and I’m not aware of much knowledge transfer at all.

      Regarding reviewing the OMSCS courses: it has been, in my experience, very strong. In addition to the typical CIOS surveys (which I think we’ve examined more closely in the OMS program than in on-campus classes), we’ve also reviewed withdrawal rates and the causes for withdrawal, the effectiveness of the different teaching styles used in the classes, and done some program-wide analysis of what predicts student satisfaction and success. So far, unsurprisingly, the chief predictor appears to be instructor and TA engagement (more rigorous analysis is forthcoming). What is interesting about this is that multiple instructors, TAs, and students have commented on feeling a greater connection to one another in online classes, which is fascinating and encouraging.

      We’ve also been actively looking for sources of external validity for these analyses: some classes use identical assessments in the on-campus and OMS sections, and in other cases, student work in the classes has led to research papers or projects. Both of those, to me, speak to an external confirmation of the courses’ quality, something that goes beyond analyzing the courses in isolation.

      To the third: good question! We typically have once-per-semester luncheons with the current OMS faculty, but that’s a bit of a closed environment; it would be great to have an opportunity to discuss this with a bigger audience of people who aren’t within the team or program. I think I’ll pitch this as a GVU Brown Bag topic for next semester. Our talk last December led to some interesting conversations, and I think that audience would be a good platform for this.

  2. Regarding MOOCs, I’ve always been a fan of Udacity, but I had rarely finished an entire course there until I started the OMSCS program.

    First, I’d like to say that while I didn’t finish the classes, they were often jumping-off points for me to go learn about particular topics in more depth than the lectures covered. (And I’m sure I’m not alone in this, though probably not in the majority.)

    Second, I’ve found that when it comes to finishing a course, it’s really a matter of laser focus and, well, money. Once money became part of the equation, all of a sudden I was hammering out all the lectures as fast as I could so I had the exposure, then often coming back through a second time. (I’m currently enrolled in KBAI; we started mid-January, I had almost all the lectures completed before February, and I’ve gone on to watch almost all of MIT’s AI lectures and a few others on YouTube from Berkeley.)

    Third, the opportunity to be enrolled at Georgia Tech is also a strong motivator for me. I consider it a once-in-a-lifetime opportunity and I am doing everything possible not to waste it. I maintain a very high level of engagement on the forums, and even with other students outside of them.

    While MOOCs may not be ideal overall, there are some benefits (I need to learn about topic X for reason Y), but I hadn’t considered that the material might contain incorrect statements that folks could walk away with and act on.

    Do you have any proposals on how to improve that? Did you contact the course designers/instructors to discuss your concerns about their material? Should courses be peer-reviewed by subject matter experts before being passed off as “factual” to the public? What kind of “regulation” should be in place? (I hate to say regulation, but when there are concerns about blatantly false concepts, it has to be a consideration.)

    1. In many ways, those are challenging questions because they’re part of a much bigger narrative. More generally, how do colleges ensure that what they’re teaching is accurate? Presumably, the people teaching the education class I criticized here also taught it on campus. With such blatant falsehoods or unsubstantiated ideas, why were they able to continue teaching it on campus?

      MOOCs exacerbate the problem by opening these courses up to much larger audiences, most often novice audiences that cannot ascertain the accuracy of the material for themselves. However, the problem is more fundamental than that, and not an easy one to solve.
