I’ve won the College of Computing Outstanding Graduate Teaching Assistant Award

This was announced a few weeks ago, but since I received the award itself today, I figured I’d wait until now to announce it here: I’ve been selected as the recipient of the 2015 Georgia Tech College of Computing Outstanding Graduate Teaching Assistant Award for my work on the Fall 2014 Knowledge-Based AI class in the OMSCS. This post is half news post, half blog post.

First of all, I’m grateful to Ashok, the professor for the course (and my PhD adviser), and to the students in the course for their unbelievably kind words during the nominations. Thank you as well to the College of Computing for ultimately selecting me.

This experience has also inspired a number of thoughts that I hope to explore in the coming weeks. One major question I have is: is the role of the teaching assistant more significant in online classes than in on-campus classes? So far, the key determining factor I’ve observed in how well a class runs in the OMS is how involved the head TA is. I could see that being true in on-campus classes as well, but in an online class, an enormous responsibility falls on the TA to manually recreate elements of the class experience that happen naturally on campus. Do we need to re-explore how we choose head TAs in light of this potentially increased importance? Do we need to reconsider what the responsibilities of a head TA in an online class even ought to be? Do we need to re-examine the qualifications a head TA for an online class ought to have?

The second question I have is a little arrogant, but I’ll mention it anyway. It’s not surprising that I was a good TA in the fall: I have ten years of teaching experience, ten years of experience interacting on online forums (an underrated skill in this setting), and ten years of experience constructing assessments in other domains. On top of that, I had more time to devote to TAing the class than most TAs have: my dissertation work was done, and my primary responsibility for the semester was TAing the class. Under those circumstances, it’s no surprise that I did a pretty good job.

We can’t hire someone like me to be a TA every semester. The skillset I just described commands a high salary. Even if it didn’t, simply finding people with that skillset is difficult. Finding 30 people with that skillset, each to run a class? That’s absolutely impossible. We can’t just have a bunch of me TAing every course every semester. So the question then becomes: how do we replicate some of what went well in our course in classes that don’t have the same resources?

That second question is what I’ve been exploring in my new role working with all the classes. That second question is also what I hope to explore soon in this space. Hopefully I’ll catch BuzzFeed’s eye with “10 Mind-Blowing Tricks Your TA Should Use In Your Class!”


Taking stock of the MOOCs landscape: past, present, future

I recently found myself thinking out loud about the evolution of the MOOC landscape. To share my view of its past, present, and future, and to get some feedback, I’m writing those thoughts up here.

In the beginning, MOOCs were focused on openness: make the content available and the people will come. And the people did come, in the thousands and hundreds of thousands, but we quickly found that the experience in these MOOCs was not actually replacing the experience of the in-person class. No one would argue that taking a MOOC was truly equivalent to taking the comparable college course.

So, MOOCs evolved. Some of the directions in which they reinvented themselves were aimed more at differentiation: MOOCs became shorter, less directly analogous to comparable college classes, more varied in the range of topics covered overall, and narrower in the scope of any one MOOC. As an analogy, instead of a single poetry MOOC, we’d have a series of MOOCs, each focused on an individual poet or movement in poetry. These changes were driven by market demand, even though they took MOOCs away from their original position as open-access college courses.

However, other MOOCs evolved in the opposite direction: they recognized the differences between traditional college classes and MOOCs and moved to rectify them. I, personally, would identify three key areas in which the early MOOCs differed from traditional classes: feedback, accreditation, and context. A major criticism of MOOCs has been the lack of interaction; it’s been suggested that MOOCs are fine for autodidacts, but the majority of learners need feedback to improve, and the learning sciences support this as well. Students also do not take college courses simply to learn; the valuable diploma or credential at the end of the journey is one of the major motivating factors behind participation in traditional college, if not the major one. Finally, students rarely take classes in a vacuum: each class builds on previous classes and in turn becomes the foundation for future classes.

Initially, MOOCs missed these elements of traditional college courses. There was little feedback due to the difficulties in scaling personalized feedback. MOOCs were meaningless on resumes and in portfolios; anyone could claim to have completed any MOOC, and there was little knowledge as to what completing a particular MOOC actually meant. MOOCs were mostly one-off courses, absent the context of a broader unit.

In contrast to the differentiation mentioned previously, MOOCs have also evolved to address these initial weaknesses. This is clearly evident in three of the most prominent MOOC providers: Coursera, edX, and Udacity (note my prior disclaimer).

Coursera’s Verified Certificates and specializations address all three of these issues. A Verified Certificate carries with it an assertion that the work was, in fact, completed by the individual, meeting the need for a form of accreditation. Specializations are series of courses developed to provide a broader view of a particular topic, injecting the context that is missing from one-off courses. Specializations culminate in a capstone project, where (I presume; I haven’t completed one yet) the student receives the feedback that is lacking in traditional MOOCs.

Udacity’s Nanodegree credentials are similar. A human coach interacts with the student, interviewing them live on the skills they have obtained, supporting the assertion that the individual receiving the credential has really mastered the skills. Nanodegrees focus on the entire skillset of a job rather than on individual skills, providing critical context. Throughout the program, students receive feedback on their work from coaches, peers, and external code reviewers, allowing for the true iterative feedback-and-refinement cycle one would expect in a traditional classroom.

Of course, Coursera and Udacity differ in critical ways. Coursera maintains (in my opinion) openness as its guiding principle; Verified Certificates are kept relatively cheap ($50 per course at present, leading to roughly $300 for a specialization, depending on the number of courses). Although these changes inch toward the functions of traditional college courses, they remain distant: the identity verification is easy to fool, and the amount of feedback still pales in comparison to traditional courses. Udacity moves closer to those traditional college functions: identity is asserted through real-time, face-to-face interaction with a coach rather than by an automatic system, and feedback is available constantly from coaches, peers, and external reviewers (both of which help explain why a Nanodegree credential costs more than a Coursera specialization). However, the underlying nature of the moves is the same: both have moved to provide a form of accreditation (or at least identity verification), individualized feedback, and broader context.

These changes should not be overly surprising. MOOCs have followed a traditional Gartner hype cycle: the initial hype was overblown, but MOOCs are starting to find their place. I would argue we’re at the beginning of the slope of enlightenment with regard to MOOCs, well on our way to the plateau of productivity. Some might argue we’re even further along: students are already getting jobs based on completing these online programs, which certainly suggests a productive industry.

All that raises the question: what’s next for the MOOC landscape? For me, the elephant in the room goes back to accreditation. Academic honesty is a major problem in traditional education, and it is exacerbated in online education. When you can’t see your students, how do you guarantee that a student is, indeed, responsible for the work? Coursera addresses this through pictures after each assessment and keyboard pattern matching, but these only guarantee that the student was at the computer when the work was submitted; it’s trivial to have a friend send you answers. Until this is resolved, I do not foresee Coursera’s certificates having much value in the market. Udacity’s measures are effectively impossible to circumvent, but they present a challenge for scale: there is a linear relationship between the number of students and the amount of employee time necessary to verify them.
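To make that scaling contrast concrete, here is a toy sketch in Python. The numbers and function names are invented purely for illustration; they are not drawn from either company’s actual processes or costs:

    # Toy comparison of identity-verification costs (all figures invented)
    def automated_verification_hours(num_students):
        # Photo and keystroke checks are machine-scored, so staff time
        # stays roughly flat regardless of enrollment.
        return 5.0  # e.g., a fixed few hours of spot-checking per cohort

    def live_interview_hours(num_students, minutes_per_interview=30):
        # One-on-one interviews scale linearly with enrollment.
        return num_students * minutes_per_interview / 60.0

    for n in (100, 1000, 10000):
        print(n, automated_verification_hours(n), live_interview_hours(n))

The automated approach stays cheap at any scale but is easy to fool; the interview approach is robust but its cost grows in lockstep with enrollment.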

Accreditation is not as simple as identity verification, however. Even if I knew that a person I was interviewing had personally completed a particular Coursera course, I still would likely not put much stock in it. The majority of Coursera courses I’ve taken can be passed without learning anything, simply by gaming the quizzes and assessments. Accreditation of these courses and programs needs to focus not only on identity verification but also on the strength of the material and the accompanying assessments. What can we actually assert that a student completing a given class has learned? Until that element of accreditation comes along, I do not foresee simple identity verification as sufficient to make these credentials worthwhile. (Sidenote: I do not mean the above as a general critique of Coursera; as far as learning is concerned, I appreciate that iterative improvement on assessments is permitted, and it connects strongly to Coursera’s emphasis on openness and accessibility.)

In the absence of accreditation, validity can be built in other ways. Graduates of Udacity’s Nanodegree program, for example, do not need to rely solely on the value of the program name, because they also leave with a portfolio of projects demonstrating their knowledge. Some Coursera specializations mimic this as well, and it is possible that reputations will build up organically over time based on the strength of past graduates. However, I see a more efficient possibility: MOOCs and other online classes could go through a true accreditation process that verifies the value and reliability of a given course. With such a process, demand for these credentials would rise along with their value on a resume or application. This would also open up demand for MOOCs among much broader populations: schools and universities could supplement their course catalogs with accredited MOOCs, entire degree programs could be constructed from MOOCs offered by numerous different universities, and current classes could benefit from global audiences working in tandem with traditional students. In my opinion, accreditation is the chasm standing between the present state of MOOCs and the promise of MOOCs.


Are open online education and quality online education mutually exclusive?

In the past, I’ve touched on a distinction I see in the landscape of higher education. It is this distinction that leads me to say that platforms like Coursera and edX and programs like Udacity’s Nanodegrees and the Georgia Tech OMS are not competitors, but rather represent two largely different goals of education: openness and quality.

Of course, I hate using the word ‘quality’ because it implies that open education cannot be high-quality, which is not what I mean to suggest. Rather, what I mean to suggest is that openness and quality often get in the way of one another. Developing open courses for a platform like Coursera almost inherently dictates that costs must be kept extremely low. Offering a course through Coursera does not bring in a tremendous amount of money; even the Verified Certificate track, I would speculate, barely pays for the human effort required to grade assignments and verify identities. Developing open courses can be an act of either marketing or altruism, but in either case, there is a natural impetus to keep costs low. The outcome, of course, is nonetheless fantastic: the world’s knowledge, presented by the world’s experts on that knowledge, in a venue that everyone can access. Even if the cost pressure demands that this information be presented only in the traditional lecture model, the outcome is incredibly desirable.

That openness is largely driven by the internet’s ability to deliver content to massive audiences for low costs. However, that’s not the only thing that the internet can do in service of education. The internet also has features and frameworks that can create educational experiences that go beyond what we can do in traditional classrooms. Many traditional college classes are delivered in the same lecture model as the aforementioned Coursera courses, but pedagogically we know that this model is largely ineffective. It is not chosen because it is effective, however; it is chosen because professors’ time is valuable, professors are very often experts in the subject matter rather than in teaching itself, and the lecture model is arguably the easiest way to present material. There are exceptions, of course, but I don’t think I’m being controversial in suggesting these ideas as generally true.

What the internet gives us, however, is a mechanism by which content can be produced once and consumed by millions. This is part of the reason the openness initiatives work: professors can film a course once and make it available to the masses rather than reteaching it semester after semester. But while in some places that is an impetus for openness, we may also use it as an impetus for quality. Let’s invent some numbers to make this clearer. Imagine a class of 50 students, each paying $100 to take it; that means the class can cost no more than $5,000 to deliver each semester. However, if the class could be developed once and reused ten semesters in a row, the same class could now cost up to $50,000 to develop, allowing for much more investment in its quality.
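To make that back-of-the-envelope arithmetic concrete, here is a minimal sketch in Python using the invented numbers above. None of these figures come from a real budget, and the assumption of zero per-semester delivery costs is deliberately simplistic:

    # Toy course-budget model (invented numbers, not real budgets)
    def max_development_budget(students_per_semester, tuition_per_student,
                               semesters_reused):
        # Upper bound on what a break-even course can cost to develop,
        # assuming (unrealistically) no per-semester delivery costs.
        revenue_per_semester = students_per_semester * tuition_per_student
        return revenue_per_semester * semesters_reused

    print(max_development_budget(50, 100, 1))   # 5000: a single semester
    print(max_development_budget(50, 100, 10))  # 50000: ten semesters of reuse

The point of the sketch is just the multiplier: every additional semester of reuse scales the defensible development budget linearly.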

This, of course, is a gross simplification, but it is intended to portray an elegant truth: when we use the internet to deliver content to a much larger population with the same amount of work, we can either pass on the savings to the students (the openness route), or we can reinvest the money into the development of the courses themselves (the quality route). We can ask less investment of the students, or we can give the students more for the same price.

Coursera, edX, and the traditional MOOC community take the former approach, providing content for a fraction of the cost because it can be delivered to so many people. Udacity, the Georgia Tech OMS, and other more expensive programs take the latter approach, reinvesting that money into creating higher-quality programs in the first place. Both of these approaches are critical. I don’t like living in a world where education is gated by such a massive monetary investment, and MOOC services are doing a world of good by reducing the barriers to education. At the same time, I love education itself, and I recognize that there are phenomenal things the internet can do to improve education, but they come with a significant development cost.

Of course, none of this actually answers the question: I’ve shown how openness and quality are distinct and often conflicting goals in online education, but can we accomplish both? Is it possible to create high-quality education that is also openly available for little to no monetary cost? It may be. At present, this is in some ways what the Georgia Tech OMS is doing: nine Georgia Tech courses are available for free to the world, infused with a more significant initial investment that pays significant dividends in the quality of the instruction. This is possible because, in some ways, the free offering is “subsidized” by the students taking the actual Masters. The model is incomplete, however, as there is still valuable education locked within the for-cost program. OMS students are not paying for the videos; they are paying for access to professors and TAs, access to projects and assignments, and the ultimate “verified certificate”: the Masters diploma at the end of the program. However, this direction at least illustrates that it may be possible to use one offering in service of the other and improve both openness and quality at the same time. For now, though, I regard the two as distinct, exclusive, and desirable goals.


What’s the difference between online learning and distance learning?

In the Georgia Tech OMSCS, we talk a lot about how our program is the first of its kind. The homepage for the OMSCS describes it as “the first accredited Master of Science in Computer Science that students can earn exclusively through the Massive Open Online Course (MOOC) delivery format and for a fraction of the cost of traditional, on-campus programs.”

However, that description might not be entirely accurate. Georgia Tech OMS courses are neither massive (although they’re getting there) nor open (admission is still required). At the same time, the OMSCS is far from the only online Masters offered at Georgia Tech. So is all the hype just hype?

In my opinion, this speaks to the difference between distance learning and online learning, and the difference is critical. Distance learning has been around for ages through correspondence programs and similar structures, and some of them are very good. Most online Masters programs today are simple extensions of distance learning programs: in previous years, one would receive course materials in the mail and mail completed schoolwork back; now, students receive course materials over the internet and upload completed assignments. The internet makes distance learning easier, and at times it can improve the experience through features like forums, but it does not fundamentally change the structure.

The majority of online Masters programs are distance learning programs of this kind. In Georgia Tech’s other online Masters programs, students in the distance learning sections view live or recorded lectures, submit the same assignments, and are graded by the same TAs as the on-campus students. The only major difference is geographic: rather than being physically in the lecture hall, the students are distributed. This is, in my mind, the heart of the distinction between distance learning and online learning: distance learning recreates the in-person process as faithfully as possible. It may use the internet to do so, but the fundamental structure of the class remains the same.

Online learning, on the other hand, aims to use the internet not to duplicate the in-person experience, but to improve on it. Improvement, of course, can come in many forms. Online education can be developed to reduce costs by leveraging MOOC principles; in fact, this is one of the guiding principles of the OMSCS: using the internet to deliver an experience that is just as good as the in-person experience at a fraction of the cost. Online learning does not stop there, though. Automated feedback, communities of practice, and several other pedagogical techniques find unique places in the online medium. I’ve talked about a few of these unique benefits in the past, like the ability to transfer ownership of the course to the students and the natural emphasis on positive activity rather than negative, and I believe we’re only scratching the surface of the ways in which online education can actually improve on the in-person classroom experience.

I can, of course, be accused of bias: as an instructor and developer in the Georgia Tech OMS, I want to see it succeed. However, the causality runs the other way: I work on the Georgia Tech OMS because I believe it will succeed. I’m excited to work on it because while most programs out there are using the internet to improve on distance learning, the Georgia Tech OMS is about using the internet to create new and improved educational experiences altogether. Don’t get me wrong: there’s nothing wrong with distance learning, and it presents some very rich opportunities of its own. Distance learning is all about increasing access to the same quality of education, and that is incredibly important. I’m excited, though, to work on online learning and to find ways to use the internet to make higher education more affordable, more accessible, and more effective.

So, if you’re ever asked why we ballyhoo the Georgia Tech OMS so much when online programs, even from highly reputable universities, are becoming common, the reason is that the OMS is about online learning, not distance learning. It’s very different, and it may lead to great things.


The conflicting functions of universities

I alluded to this in my previous blog post, but I think it deserves some explicit attention. In some of my recent conversations about the direction of the Online Master of Science in Computer Science at Georgia Tech, I’ve come to believe that this question is at the core of many of the issues we wrestle with when developing courses in higher education.

What is the function of college? In other words, what goal does college accomplish? That’s a big nebulous question, so let me narrow it down a bit: what is the function of undergraduate education at research universities?

I see two competing functions. First, there is the function higher education sees in itself. I would argue that many professors and administrators see the function of higher education as the creation, maintenance, and dissemination of knowledge. Research institutions are called ‘research’ institutions for a reason, after all, but research is not merely about uncovering knowledge; it’s also about communicating that knowledge so that a new generation can grow it. Thus, I would argue that teaching is well within the function of universities, as seen by universities themselves (despite data indicating the contrary).

That’s the function I would argue universities see in themselves. However, I would argue that there is a conflict between that function and the function that students derive from college. For the most part, students attend college to get a better job. To be somewhat unscientific, a quick Google search on “why go to college” corroborates this: the majority of the top results focus on the increased earning potential of individuals who attended college. Other benefits come up as well, but given the massive investment in getting an undergraduate degree, it’s certainly reasonable to expect a sizable return on investment.

So, to summarize: on the one hand, we have universities focused on the creation and dissemination of knowledge; on the other, we have students focused on job training and placement[1]. That Venn diagram may sometimes have an intersection, but I would argue that most often it does not. The things we learn in undergraduate classrooms often have very little applicability to the real world. Getting an undergraduate education is less about what you learn and more about proving that you can learn it: a person who can graduate from Georgia Tech with a degree in Computer Science can learn the skills necessary to do a particular job, but simply having that degree does not mean they already have those skills.

In my experience, this is often a point of contention for students. Students often ask when they will use something in the real world. In our Knowledge-Based AI class, we often receive feedback from students that they would like to see examples that are more applicable to their jobs. While this feedback is understandable, it doesn’t fall within the function most of us attribute to the Masters degree. Others will disagree with me, of course, but I would argue that few research universities view their degree programs as job training, and thus, few would be eager to revise their curricula to bring them more in line with the demands of the job market.

So, we have students treating higher education as job training while higher education does not regard itself as job training. Is this a problem? I would argue yes; the massive cost of higher education reflects the cost of the creation and dissemination of knowledge, not the cost of job training. Getting a Bachelors degree just to get a good job is like buying a house just to have a front lawn. If job training is the goal, we can accomplish it more cheaply and efficiently, while also allowing research universities to focus on the kinds of knowledge creation they were built to do.

Ultimately, I would argue that the goals of research universities and the goals of students at research universities are remarkably misaligned. Moreover, I would argue that universities shouldn’t modify themselves to come more into alignment with what students are using them for; alternative, more affordable solutions are necessary to allow job seekers to get job training and knowledge seekers to get knowledge.

Full disclaimer: I work for Udacity, and Udacity’s Nanodegree credentials are a step in the direction of streamlining the function that I claim students are deriving from higher education. You wouldn’t be off-base to believe I’m simply biased in favor of the company I work for. In actuality, however, initiatives like the Nanodegree credentials are why I work for Udacity; I wholeheartedly believe in its mission to make it easier for students to get what they’ve wanted all along.

[1] I don’t mean to claim this distinction is purely black-and-white; there are certainly students who go to college to learn and grow knowledge, and there are certainly departments and individuals at the college level that are concerned with their graduates’ job placement. When thinking about the primary goals these groups have in mind, though, I’d argue my distinction holds: students primarily use college as a step toward a career, and universities generally consider themselves bastions of knowledge.


Graduation rates: Look to your left. Now look to your right. Now look in two other random directions.

I feel I should start this post by saying I’m not advocating a return to the old way of doing things. The purpose of this post is just to note a change and ponder a bit what the change means.

An article last week talked about Georgia Tech’s moves to make student retention and graduation among its top priorities. The goal is to keep and graduate more students. That’s a good thing, right?

Many years ago, it wouldn’t have been considered a good thing. Georgia Tech used to be famous in part for its “look to your left” speech at convocation. In Thomas Friedman’s The World is Flat, Friedman includes an excerpt from former Georgia Tech President G. Wayne Clough. Clough recalls, “When I came to Tech as an awestruck freshman back in the sixties, they had this drill for the incoming students. They would tell us: ‘Look to your left. Look to your right. Only one of you will graduate.'”

Back then, a low graduation rate was a point of pride. And in some ways, doesn’t that make sense? If fewer people are able to accomplish something, doesn’t that mean the task is harder? And if the task is harder, doesn’t that make it more of an accomplishment if you do succeed at it? To take an analogy, isn’t completing a marathon more prestigious than completing a 5K because fewer people can do it, because it’s more challenging?

We’ve gone from bragging about our low graduation rate to treating it as a problem to be repaired. What happened? Perhaps we started caring about our students more. Perhaps we recognized that it was the wrong 33% of students graduating, that the reasons students were failing out weren’t correlated with long-term success. Perhaps we recognized that when students leave, they take their tuition checks with them.

Today, Georgia Tech graduates 82% of its students within six years of matriculation. Today, the speech would read, “Look to your left. Now look to your right. Now look at two other random people in the audience. Four of the five of you will graduate within six years.” That raises the question: is a Georgia Tech diploma as prestigious today as it was fifty years ago? If 82% of the students who attempt a degree can complete it, is it easier to earn than when only 33% could? If the ranks of marathon finishers grew to two and a half times their former size, would we not regard the accomplishment as less prestigious than before?

Grade inflation would lead to higher graduation rates. Easier classes would lead to higher graduation rates. There are many ways to increase graduation rates at the expense of the education; just as we could increase the ranks of marathon runners by shortening the distance we call a ‘marathon’, we can increase our graduation rate simply by decreasing the challenge. There are also plenty of ways to increase graduation rates without negatively impacting the education, though. We can get better at choosing students who are likely to succeed in the first place. We can provide better structures and support environments to maximize the likelihood of student success. We can improve the quality of the instruction to ensure students learn more. In marathon terms: we can admit only the best runners, make sure the route is fair and the weather is good, and provide good hydration along the course to maximize our runners’ chances of success.

But at a certain level, don’t we want to know which students could succeed without that support? College serves many functions: one is to teach a body of knowledge in a certain field, and toward that end, improving the instruction and environment is a highly desirable move because it increases the number of students that attain that body of knowledge. But college is also a test: it examines whether one can succeed at a rigorous course of study. By improving the instruction, do we not inherently lower the rigor? Do we not want to know which students would have succeeded under more rigorous conditions? Aren’t we curious which of those marathon runners could have also completed the race on a hilly course on a rainy day without food or water?

I don’t have an answer here. This comes down, in part, to the function of college: is it to obtain a corpus of knowledge, or is it to prove a level of ability? Who even gets to decide the function? I would speculate universities would favor the former function, but for the vast majority of students, the reason for earning the diploma is to get a job, and those jobs often involve few of the skills learned in the classroom. A Georgia Tech diploma, in my experience, is less an indicator of what you know and more an indicator of what you can do, which would tend to lean toward the latter. If we can design college programs that are more closely tied to the requirements of the job market, we may be able to move toward the former function of higher education, but I don’t believe higher education is interested in reinventing itself as job training — even though that may be one of the major functions it plays in present society.

There are many complicated relationships at play here, but I can’t get over one simple fact: in the span of only a few decades, Georgia Tech went from bragging about its low graduation rate to bragging about its high graduation rate. That’s a fascinating about-face. For my part, I agree with the new focus: we should do everything in our power to equip our students to succeed, as well as to ensure we’re only giving the opportunity to students who have the potential to succeed in the first place. But I can’t deny that as an undergraduate at Tech, I took great pride in succeeding at a school where success wasn’t guaranteed.


Recommended Reading: “America as 100 College Students”

This morning, the Bill & Melinda Gates Foundation posted this article.

The article gives the demographics of college students in the United States and paints a very different picture than the popular conception. Some interesting takeaways:

  • 25% of students now receive part of their college education online.
  • 26% already have children.
  • 83% receive some kind of financial aid, and 72% are employed while pursuing their degree.

To serve these segments of the college population, it’s clear that higher education needs to get cheaper, more flexible, and more accessible. Massive numbers of students can’t afford to take years off from work to pursue a degree, and that inability should not preclude them from a world-class education.

It’s important to remember that college is not just a place for students to learn from professors; it is also a place for students to learn from one another. The unique perspectives, industry experience, and life experience of these non-traditional students should not simply be accommodated; they should be leveraged.
