I can’t say anything good about most MOOCs.

A few months ago, I started writing course reviews of the various MOOCs I’ve been taking through Coursera and edX. I had a few goals, including helping students find courses to take, helping employers or admissions offices understand the value of these courses, and helping other educators learn and apply lessons to their own classes.

One of my guiding principles, however, was to stay positive: highlight the good things about each course, not the bad. If a course did something particularly poorly, I’d find a different course that addressed that problem well. I wanted these to be positive experiences for all involved, reviews that the individual course developers and instructors would happily share as critical but overall positive descriptions of their course.

I’ve written four of these reviews so far, and I have near-complete drafts of five more, but I can’t bring myself to post them because, simply, I can’t stay positive. The majority of the MOOCs I’ve taken have not been good.

It came to a head with a MOOC I started and finished earlier today. You read that correctly: it was a five-week MOOC that I started and completed in a 45-minute sitting. My usual workflow is to open the assessment, read the questions, make educated guesses about the answers, then watch the lessons to fill in the picture; I use the quizzes to prime myself on what the lessons will be about. But in this recent MOOC, the assessment questions were terrible. There were ten questions, four multiple-choice and six true/false. The answer to every single multiple-choice question was ‘All of the Above’, and the true/false section included questions as obvious as, “True/False: One benefit of online education is lower academic integrity” (note: this isn’t an actual question, but rather a paraphrase of the question’s underlying message).

I went on to complete every assessment in the course — 80 total questions — in 37 minutes. I retried one quiz once to get a 10/10 instead of a 9/10. I received a 38.25/40 on the final exam. Admittedly, I have a background in this course’s subject, so I’m at an advantage. However, during one of the quizzes, I read every question to my wife, who has no background in the course’s material. She got every one right, too. And what’s remarkable is that while this was the clearest example I’ve seen of the lack of rigor behind quizzes in most Coursera courses, it isn’t the exception by any means. With a couple of exceptions (Nanotechnology and Nanosensors, Astronomy: Exploring Time & Space, Poetry in America), every course I’ve taken has been largely populated by trivial assessment questions that neither encourage nor test learning. And even where the answers to the assessments are not obvious, the instant retake function prevents any real learning and error correction from taking place.

All those problems, however, only address one function of MOOCs. Without solid assessments, the most we can say is that you’ll get out of a MOOC what you put into it, and Verified Certificates are valuable only as a forcing function to make you do the work. Even if that were the case, one could still see some value in MOOCs: they make the material available online. Completing a MOOC might carry no weight with an employer or admissions office, but it may still be useful to students.

However, a second disappointing trend has emerged: I have taken a couple of MOOCs with radically, even dangerously, inaccurate information. This was most pronounced in an education-oriented MOOC I took recently. The course talked about how incredibly valuable it is to have your students take a test to discover their learning style, whether visual, auditory, or kinesthetic. It talked about the importance of providing strong extrinsic motivators to students in every class, no matter the subject, no matter the student. It talked about the value of using the Myers-Briggs test. It talked about helping students identify whether they are more left-brained or right-brained.

Any learning scientists reading this can likely share my rage. There is absolutely no literature to support the existence of these learning styles. The Myers-Briggs is not used in any research settings because, similarly, there is no evidence supporting its validity. Significant research has shown that intrinsic motivation is a more effective motivator than extrinsic motivation, and that providing extrinsic motivators decreases intrinsic motivation. There exists no evidence to support the popular conception of left-brained and right-brained thinking. This course, aimed at training teachers to teach, repeatedly advocated unproven, invalid, and even counterproductive methods.

It occurred to me afterwards, however, that I was only able to identify those problems because of my background in educational research. If I lacked that background, I’d be sending out Myers-Briggs tests and learning styles surveys to my students this semester because I wouldn’t know any better; I would assume that the people putting together a MOOC on teaching actually know about teaching. Clearly, that assumption is false. So, what about the other MOOCs I’ve taken, in subjects where I don’t have any prior background? They could be similarly delivering falsehoods as facts and I would never know the difference.

This doubt undermines the idea that you get out of a MOOC what you put into it. It is possible to put a lot of work into a MOOC and get nothing but false understandings and misconceptions out of it if the trust we place in the course developers turns out to be misplaced.

This is troubling to me for a number of reasons, but the major reason is that it doesn’t have to be this way. I would argue that many of the criticisms launched at MOOCs in the past are cases of overgeneralization: they are not inherent flaws with MOOCs as a concept, but rather flaws with the way MOOCs are designed and delivered today. With so many failed MOOCs, it is tempting to jump to the conclusion that MOOCs are a failure, not that those MOOCs are failures.

I refuse to go that far, however. It’s worth reiterating that I don’t work on MOOCs; our OMSCS program at Georgia Tech involves large online classes, but they are neither massive nor open. Udacity famously moved away from MOOCs a couple of years ago. I maintain, however, that the potential remains for creating MOOCs that are as rigorous, comprehensive, and challenging as traditional college classes while nonetheless improving feedback, collaboration, and community at scale. The problems are not in MOOCs themselves, but simply in the way MOOCs have been designed so far.

There are solutions to these problems. Write better, more rigorous assessments. Vary the questions students receive on retakes. Limit the frequency of retakes. Deliver accurate course material (this one should go without saying). Leverage peer feedback and hybrid grading approaches to break out of overly objective test structures. All the tools are already available, and it’s entirely possible I’ve simply chosen poorly in the MOOCs I’ve taken.

For that reason, though, I’m shifting away from writing a general, informational review of every MOOC I take. Instead, in the future, I’ll specifically highlight two things: (a) good MOOCs, and (b) specific strong elements of MOOCs. I still hope to highlight the good things that certain courses are doing, as well as which courses are particularly worth taking, but I can no longer promise to find something good to say about every MOOC I take. When I can complete a “5-week course” in 37 minutes without loading a single piece of course material, it’s safe to say there won’t be much positive to say about it.

Introducing MOOC Reviews

In keeping with my prior question on accreditation in MOOCs, I’m starting my own one-person MOOC accreditation service. Well, not really. But I am going to start publishing reviews of the online courses I finish to try to help develop accurate reputations for MOOCs and other online courses.

First of all, there are two types of information I hope to disseminate through these course reviews:

  • Descriptions: fact-based descriptions of the structure and content of the course or MOOC.
  • Analysis: opinion-based impressions on the value and quality of the course or MOOC.

There are three audiences for these course reviews:

  • Prospective students: What will you learn? How much time will it take? How deeply will you learn it? How strong are the assessments?
  • Employers and admissions administrators: How valuable is a certificate from this course? What content can I assume the student has mastered? How certain can I be that the student completed the MOOC themselves?
  • Educators and course developers: How did the course approach grading and assignments? How was the MOOC structured? How well did the structure work? What could be improved?

Toward these ends, each review will be structured according to several categories, each with a short description of the course’s approach:

  • Structure: How is the course structured? Is the course time-locked or open? Is the course on-demand or traditional? What are the due dates?
  • Content: What topics are covered in the course? What are the units and sub-units?
  • Identity: What steps are taken to ensure that the student receiving the credit for completing the course really is the student claiming credit for the course?
  • Assessments: What assessments are required to complete the course? To what extent do those assessments demonstrate real learning?
  • Prerequisites: What prior knowledge is required to succeed in the course? What general level of education is required to be prepared for the course?

Based on these five categories, as well as other elements, I’ll then give a brief analysis of what the course means to students, prospective employers, and other educators, touching as well on the overall course experience.

Note that although some of these analyses can get into the realm of good practices and bad practices, nothing here is ever meant to negatively reflect on the courses or MOOCs themselves. Every course has strengths, every course has room for improvement, and different courses are better-suited for different kinds of learners. Not every course is intended to give students a credential that will guarantee value to a prospective school or employer: many courses are meant to be surveys or introductions, not rigorous capstones. Thus, when I say that an employer shouldn’t put much stock in a particular course appearing on an applicant’s resume, that is not a criticism of the course. These reviews are meant to describe and inform, not criticize and judge.

You can find all my course reviews under the course reviews tag on this blog.

Taking stock of the MOOCs landscape: past, present, future

I recently found myself thinking out loud about how the MOOC landscape has evolved so far. To lay out my view of its past, present, and future, and hopefully get some feedback, I’ve written those thoughts up here.

In the beginning, MOOCs were focused on openness: make the content available and the people will come. And the people did come, by the thousands and hundreds of thousands, but we quickly found that the experience of these MOOCs did not actually replace the experience of an in-person class. No one would argue that taking a MOOC was equivalent to taking the comparable college course.

So, MOOCs evolved. Some of the directions in which they reinvented themselves were aimed more at differentiation: MOOCs became shorter, less directly analogous to comparable college classes, more varied in the range of topics covered overall, and narrower in the number of topics covered in any one MOOC. As an analogy, instead of a poetry MOOC, we’d have a series of MOOCs each focused on an individual poet or movement in poetry. These changes were driven by market demand, even though they took MOOCs away from their original position as open-access college courses.

However, other MOOCs evolved in the opposite direction: they recognized the differences between traditional college classes and MOOCs and moved to rectify them. I would personally identify three key areas in which the early MOOCs differed from traditional classes: feedback, accreditation, and context. A major criticism of MOOCs has been the lack of interaction: it’s been suggested that MOOCs are fine for autodidacts, but the majority of learners need feedback to improve, and the learning sciences literature supports this as well. Students also do not take college courses simply to learn; the valuable diploma or credential at the end of the journey is one of, if not the, major motivating factors behind participation in traditional college. And students rarely take classes in a vacuum: each class builds on previous classes and in turn becomes the foundation for future classes.

Initially, MOOCs missed these elements of traditional college courses. There was little feedback due to the difficulties in scaling personalized feedback. MOOCs were meaningless on resumes and in portfolios; anyone could claim to have completed any MOOC, and there was little knowledge as to what completing a particular MOOC actually meant. MOOCs were mostly one-off courses, absent the context of a broader unit.

In contrast to the differentiation mentioned previously, MOOCs have also evolved to address these initial weaknesses. This is clearly evident in three of the most prominent MOOC providers: Coursera, edX, and Udacity (note my prior disclaimer).

Coursera’s Verified Certificates and specializations address all three of these issues. A Verified Certificate carries with it an assertion that the work was, in fact, completed by the individual, mimicking a form of accreditation. Specializations are series of courses developed to provide a broader view of a particular topic, injecting the context that is missing from one-off courses. Specializations culminate in a capstone project, where (I presume; I haven’t completed one yet) the student receives the feedback that is lacking in traditional MOOCs.

Udacity’s Nanodegree credentials are similar. A human coach interacts with the student, interviewing them live on the skills they have obtained, allowing the assertion that the individual receiving the credential really mastered the skills. Nanodegrees focus on jobs’ entire skillsets rather than individual skills, providing critical context. Throughout the program, students receive feedback from coaches, peers, and external code reviewers on their work, allowing for a true iterative feedback and refinement cycle that one would expect in a traditional classroom.

Of course, Coursera and Udacity differ in critical ways. Coursera maintains (in my opinion) openness as its guiding principle; Verified Certificates are kept relatively cheap ($50 per course at present, leading to ~$300 for a specialization, depending on the number of courses). Although these changes inch toward the functions of traditional college courses, they remain distant: the identity verification is easy to fool, and the amount of feedback still pales in comparison to traditional courses. Udacity moves more closely toward those traditional college functions: identity is asserted through real-time, face-to-face interaction with a coach rather than an automated system, and feedback is available constantly from coaches, peers, and external reviewers (both of which help explain why a Nanodegree credential costs more than a Coursera specialization). However, the underlying nature of the moves is the same: both have moved to provide a form of accreditation (or at least identity verification), individualized feedback, and broader context.

These changes should not be overly surprising. MOOCs have followed a traditional Gartner hype cycle: the initial hype was overblown, but MOOCs are starting to find their place. I would argue we’re at the beginning of the slope of enlightenment with regard to MOOCs, well on our way to the plateau of productivity. Some might argue we’re further along; students are already getting jobs based on completion of these online programs, which can certainly be argued to be indicative of a productive industry.

All of that raises the question: what’s next for the MOOC landscape? For me, the elephant in the room goes back to accreditation. Academic honesty is a major problem in traditional education, and it is exacerbated in online education. When you can’t see your student, how do you guarantee the student is, indeed, responsible for the work? Coursera addresses this through pictures taken after each assessment and keystroke pattern matching, but these only guarantee that the student was at the computer when the work was submitted; it’s trivial to have a friend send you the answers. Until this is resolved, I do not foresee Coursera’s certificates having much value in the market. Udacity’s measures are effectively impossible to circumvent, but they present a challenge for scale: there is a linear relationship between the number of students and the amount of employee time necessary to verify them.
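To make that scaling contrast concrete, here is a minimal back-of-the-envelope sketch. Every constant in it (infrastructure cost, per-check cost, interview length, hourly rate) is an invented, illustrative assumption, not a figure from Coursera or Udacity:

```python
# Hypothetical comparison of identity-verification costs as enrollment grows.
# All constants below are invented for illustration only.

def automated_cost(num_students, infrastructure=5_000, per_check=0.05):
    """Automated checks (submission photos, keystroke matching):
    mostly a fixed cost, plus a negligible per-submission cost."""
    return infrastructure + per_check * num_students

def human_cost(num_students, minutes_per_interview=30, hourly_rate=40):
    """Live interviews with a coach: employee time scales linearly
    with the number of students."""
    return num_students * (minutes_per_interview / 60) * hourly_rate

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} students: automated ~${automated_cost(n):>10,.0f}, "
          f"human-verified ~${human_cost(n):>12,.0f}")
```

The point is only the shape of the curves: the automated approach stays nearly flat as enrollment grows, while the interview-based approach grows in direct proportion to it.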

Accreditation is not as simple as identity verification, however. Even if I knew that a person I was interviewing had personally completed a particular Coursera course, I still would likely not put much stock in it. The majority of Coursera courses I’ve taken can be passed without learning anything, simply by gaming the quizzes and assessments. Accreditation of these courses and programs needs to focus not only on identity verification, but also on the strength of the material and the accompanying assessments. What can we actually assert that a student completing a given class has learned? Until that element of accreditation comes along, I do not foresee a simple identity-verification approach being sufficient to make these credentials worthwhile. (Sidenote: I do not mean the above as a general critique of Coursera; as far as learning is concerned, I appreciate that iterative improvement on assessments is permitted, and it connects strongly to Coursera’s emphasis on openness and accessibility.)

In the absence of accreditation, validity can be built in other ways. Graduates of Udacity’s Nanodegree program, for example, do not need to rely solely on the value of the program name, because they also leave with a portfolio of projects demonstrating their knowledge. Some Coursera specializations mimic this as well, and it is possible that reputations will build up organically over time based on the strength of past program graduates. However, I feel a more efficient possibility is for MOOCs and other online classes to go through a true accreditation process that verifies the value and reliability of a given course. With such a process, the value of these credentials on a resume or application would rise, and demand for them would rise as well. This would also open up demand for MOOCs among much broader populations: schools and universities could supplement their course catalogs with accredited MOOCs, entire degree programs could be constructed from MOOCs offered by numerous different universities, and current classes could benefit from global audiences working in tandem with traditional students. In my opinion, accreditation is the chasm standing between the present state of MOOCs and the promise of MOOCs.

Are open online education and quality online education mutually exclusive?

In the past, I’ve touched on a distinction I see in the landscape of higher education. It is this distinction that leads me to say that programs like Coursera and edX and programs like Udacity and the Georgia Tech OMS are not competitors, but rather represent two largely different goals of education: openness and quality.

Of course, I hate using the word ‘quality’ because it implies that open education cannot be high-quality, which is not what I mean to suggest. Rather, what I mean to suggest is that openness and quality often get in the way of one another. Developing open courses for a platform like Coursera almost inherently dictates that costs must be kept extremely limited. Offering a course through Coursera does not bring in a tremendous amount of money; even the Signature Track, I would speculate, barely pays for the human effort required to grade assignments and verify identities. Developing open courses can be an act of either marketing or altruism, but in either case, there is a natural impetus to keep costs low. The outcome, of course, is nonetheless fantastic: the world’s knowledge presented by the world’s experts on that knowledge in a venue that everyone can access. Even if the cost pressure demands that this information can only be presented in the traditional lecture model, the outcome is still incredibly desirable.

That openness is largely driven by the internet’s ability to deliver content to massive audiences for low costs. However, that’s not the only thing that the internet can do in service of education. The internet also has features and frameworks that can create educational experiences that go beyond what we can do in traditional classrooms. Many traditional college classes are delivered in the same lecture model as the aforementioned Coursera courses, but pedagogically we know that this model is largely ineffective. It is not chosen because it is effective, however; it is chosen because professors’ time is valuable, professors are very often experts in the subject matter rather than in teaching itself, and the lecture model is arguably the easiest way to present material. There are exceptions, of course, but I don’t think I’m being controversial in suggesting these ideas as generally true.

What the internet gives us, however, is a mechanism by which content can be produced once and consumed by millions. This is part of the reason the openness initiatives work: professors can film the course once and make it available to the masses rather than having to reteach it semester after semester. But while in some places that is an impetus for openness, we may also use it as an impetus for quality. Let’s invent some numbers to make this clearer. Imagine a class of 50 students, each paying $100 to take the class; this means the class must cost no more than $5,000 to deliver each semester. However, if the class could be developed once and reused for ten semesters in a row, the same class could now cost up to $50,000 to develop, allowing for much more investment in its quality.
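Here is that back-of-the-envelope calculation spelled out as a short script; the enrollment, price, and reuse figures are the same invented numbers as above:

```python
# Invented, illustrative numbers from the example above.
students_per_offering = 50   # students enrolled each semester
price_per_student = 100      # dollars each student pays
semesters_of_reuse = 10      # times the same materials are reused

# Budget available if the course must be rebuilt (or fully re-taught) every semester
per_semester_budget = students_per_offering * price_per_student          # $5,000

# Budget available for a one-time development effort reused across offerings
one_time_development_budget = per_semester_budget * semesters_of_reuse   # $50,000

print(f"Per-semester budget:         ${per_semester_budget:,}")
print(f"One-time development budget: ${one_time_development_budget:,}")
```

The same total revenue can either be returned to students as a lower price or concentrated into one larger development budget, which is exactly the trade-off described next.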

This, of course, is a gross simplification, but it is intended to portray an elegant truth: when we use the internet to deliver content to a much larger population with the same amount of work, we can either pass on the savings to the students (the openness route), or we can reinvest the money into the development of the courses themselves (the quality route). We can ask less investment of the students, or we can give the students more for the same price.

Coursera, edX, and the traditional MOOC community take the former, providing content for a fraction of the cost because it can be delivered to so many people. Udacity, the Georgia Tech OMS, and other more expensive programs take the latter approach, reinvesting that money into creating higher-quality programs in the first place. Both these sides are critical. I don’t like living in a world where education is gated by such a massive monetary investment, and MOOC services are doing a world of good to reduce the barriers to education. At the same time, I love education itself, and I recognize that there are phenomenal things that the internet can do to improve education — but they come with a significant development cost.

Of course, this hasn’t actually answered the question: I’ve shown how openness and quality are distinct and often conflicting goals in online education, but can we accomplish both? Is it possible to create high-quality education that is also openly available for little to no monetary cost? It may be. At present, this is in some ways what the Georgia Tech OMS is doing: nine Georgia Tech courses are available for free to the world, and they are infused with a more significant initial investment that pays real dividends in the quality of the instruction. This is accomplished because, in some ways, the free offering is “subsidized” by the students taking the actual Masters. The model is incomplete, however, as there is still valuable education locked within the for-cost program. OMS students are not paying for the videos; they are paying for access to professors and TAs, access to projects and assignments, and the ultimate “verified certificate”: the Masters diploma at the end of the program. Still, this direction at least illustrates that it may be possible to use one offering in service of the other and improve both openness and quality at the same time. For now, however, I regard the two as distinct, largely exclusive, and equally desirable goals.

Should there exist accreditation for independent online courses?

One of the earliest takeaways from the handful of online courses I’m taking at the moment (Emerging Trends & Technologies in the Virtual K-12 Classroom, Learning How to Learn, and Astronomy: Exploring Time & Space, as well as a few days in Planet Earth and You) is that there are radical differences in the scope of different courses. Emerging Trends can be completed in a day if desired. Astronomy: Exploring Time & Space requires a much more significant time investment for the videos, but the interactive elements are relatively light, restricted to short quizzes and writing assignments. Planet Earth and You was much more substantial, and decently approximated the amount of time I recall dedicating to traditional on-campus courses.

On the one hand, this is fantastic. Online learning has long had the strength of not having to arbitrarily fit lessons into pre-determined time slots: if a topic takes more than a class period to teach, it’s not necessary to break it up halfway through, and if it takes less than a class period, it’s not necessary to pad it out or arbitrarily combine it with another topic. These courses show how that same idea can be expanded to an entire course. Not every course needs to be a three-hours-per-week, 16-weeks-per-semester course. If a topic can be learned in 10 to 20 hours total (as Emerging Trends’ “2-4 hours per week, 5 weeks of study” guideline indicates), then let it be learned in that time frame.

On the other hand, however, what does this say about how the world should interpret these courses? A Bachelors is a Bachelors, a Masters is a Masters, and there exists some general understanding of what those degrees mean. The school from which a degree comes has some influence, but universities have spent decades building up reputations to differentiate a Stanford Bachelors from a Samford Bachelors. Moreover, there are around 2500 four-year universities in the United States, and while that’s a large number, it isn’t intractable when it comes to developing an understanding of the rough equivalence classes of universities.

There are currently around 1500 Udacity, Coursera, and edX courses combined, just to take three of the biggest organizations as examples. In the three years since these organizations launched, they’ve developed well over half as many courses as there are four-year institutions in the country. If the scope, depth, and rigor of individual open courses on these platforms is going to vary this much, how will the world learn to interpret what it means to hold a credential or certificate from one of them?

To put myself in the position of an employer: if a prospective employee had a Verified Certificate on their resume from a Coursera course I had not heard of, that certificate would presently be somewhat meaningless to me. This is not because the certificate has no value, but because I have no hope of knowing its value at a glance, nor is it feasible to maintain a comprehensive knowledge of all the open courses I might see. This is a tremendous challenge to the value of these programs. Paying for a verified certificate is, for many, based on the belief that the ability to prove you completed a course is powerful. But if the people to whom you would offer that proof have no idea what kind of knowledge and achievement the certificate represents, it remains somewhat meaningless.

This discussion comes dangerously close to the general discussion of accreditation. How does an employer know that a certain college degree is valuable? Because it has been accredited by an independent organization: the knowledge of the program’s value and rigor has been offloaded to an external group for assessment. Just as a bank checks with a credit agency before deciding whether to give a person a loan, a business implicitly checks with an accreditation group before offering a graduate a job.

I wouldn’t argue that we need accreditation in the traditional sense for online courses; after all, many courses that wouldn’t pass a pass/fail accreditation process are nonetheless very valuable, even if they don’t necessarily demonstrate anything reliable about the students themselves. Based on my first impressions, I can’t say that hearing that a teacher has taken Emerging Trends & Technologies would mean much to my impression of them, but there is still a lot they may have learned in the course. Rather, I feel what might be necessary is just a somewhat standardized classification system. Just as on-campus classes are assigned a number of credit hours based on the amount of work they require, online classes could be assigned a number or classification of virtual credits based on the rigor, reliability, and scope of the course itself. That, in turn, might lead to even greater programs: instead of a single university building up a Coursera specialization, a specialization could be assembled from multiple universities’ courses on the basis of their virtual credits.
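As a purely hypothetical sketch of what such a classification might record, here is one possible shape for a “virtual credit” rating; the fields, scales, and example values are all invented for illustration and are not part of any existing system:

```python
# A hypothetical "virtual credit" record; every field and scale here is an
# invented illustration of the classification idea, not a real standard.
from dataclasses import dataclass

@dataclass
class VirtualCreditRating:
    course_title: str
    provider: str            # e.g., Coursera, edX, Udacity
    estimated_hours: int     # total expected time investment
    rigor: int               # 1 (light survey) through 5 (rigorous, course-equivalent)
    reliability: int         # 1-5: how strongly completion evidences mastery
    virtual_credits: float   # rough analogue of on-campus credit hours

# An entirely made-up example course and rating.
example = VirtualCreditRating(
    course_title="Introduction to Example Topics",
    provider="ExampleMOOC",
    estimated_hours=15,
    rigor=2,
    reliability=2,
    virtual_credits=0.5,
)
print(example)
```

With something like this, an employer or admissions office could compare ratings across providers rather than trying to learn the reputation of each individual course.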

But whatever the solution, I feel the problem nonetheless exists and is waiting to be addressed: accreditation fulfills a function in traditional education; should online education have something to fulfill that same function?
