One of the earliest takeaways from the handful of online courses I’m taking at the moment (Emerging Trends & Technologies in the Virtual K-12 Classroom, Learning How to Learn, and Astronomy: Exploring Space & Time, as well as a few days in Planet Earth and You) is that these courses vary radically in scope. Emerging Trends can be completed in a day if desired. Astronomy: Exploring Space & Time requires a much more significant time investment for the videos, but the interactive elements are relatively light, restricted to short quizzes and writing assignments. Planet Earth and You demanded far more, and decently approximated the amount of time I recall dedicating to traditional on-campus courses.
On the one hand, this is fantastic. Online learning has always had the strength of not having to arbitrarily fit lessons to pre-determined time slots: if a topic takes more than a class period to teach, there’s no need to break it up halfway through, and if it takes less than a class period, there’s no need to pad it out or lump it in with another topic. These courses show how that same idea can be expanded to an entire course. Not every course needs to be a three-hours-per-week, 16-weeks-per-semester course. If a topic can be learned in 10 to 20 hours total (as Emerging Trends’ “2-4 hours per week, 5 weeks of study” guideline indicates), then let it be learned in that time frame.
On the other hand, however, what does that say about how the world interprets these courses? A Bachelor’s is a Bachelor’s, a Master’s is a Master’s, and there exists some general understanding of what those degrees mean. The school from which a degree came has some influence, but universities have spent decades building up reputations to differentiate a Stanford Bachelor’s from a Samford Bachelor’s. Moreover, there are around 2,500 four-year universities in the United States, and while that’s a large number, it isn’t intractable as far as developing an understanding of the different equivalence classes of universities.
There are currently around 1,500 Udacity, Coursera, and edX courses combined, just to take three of the biggest organizations as examples. In the three years since these organizations launched, they’ve developed almost as many courses as there are four-year institutions in the country. If the scope, depth, and rigor of individual open courses on these platforms are going to vary this much, how then will the world learn to interpret what it means to have a credential or certificate from these courses?
To put myself in the position of an employer: if a prospective employee had a verified certificate on their resume from a Coursera course I had not heard of, that certificate would presently be somewhat meaningless to me. This is not because the certificate has no value, but because I have no hope of knowing its value at a glance, nor is it feasible to maintain a comprehensive knowledge of all the open courses I might see. This is a tremendous challenge to the value of these programs. Paying for a verified certificate is, for many, based on the belief that the ability to prove you completed a course is powerful. But if the people to whom you would offer that proof have no idea what kind of learning and achievement the certificate represents, it remains somewhat meaningless.
This discussion comes dangerously close to the general discussion of accreditation. How does an employer know a certain college degree is valuable? Because it has been accredited by an independent organization. That knowledge of the program’s value and rigor has been offloaded to an external group for assessment. Just like a bank checking with a credit agency before deciding whether to give a person a loan, so also a business implicitly checks with an accreditation group before offering a graduate a job.
I wouldn’t argue that we need accreditation in the traditional sense for online courses; after all, many courses that wouldn’t pass a pass/fail accreditation process are nonetheless very valuable, even if they don’t reliably demonstrate anything about the students themselves. Based on my first impressions, I can’t say that hearing a teacher has taken Emerging Trends & Technologies would mean much to my impression of them, but there is still plenty they may have learned in the course. Rather, what might be necessary is a somewhat standardized classification system. Just as on-campus classes are assigned a number of credit hours based on the amount of work they require, online classes could be assigned a number or classification of virtual credits based on the rigor, reliability, and scope of the course itself. That, in turn, might lead to even greater programs: instead of a single university building up a Coursera specialization, one could instead be assembled from multiple universities’ courses on the basis of their virtual credits.
But whatever the solution, the problem nonetheless exists and is waiting to be addressed: accreditation fulfills a function in traditional education; should online education have something to fulfill the same function?