A few months ago, I tweeted about the AI collaboration policy I added to my course syllabi essentially at the last minute before the summer semester began.
I feel like it’s a little trite nowadays to say something “went viral”, but I do know that I woke up the next morning to a ton of notifications and replies. Over the next few months, a number of media requests came up that could be traced directly back to that thread. In those, I was frequently asked if there was a place where my policies could be found, and… there really wasn’t. I mean, my syllabi are public, but that just shows each policy in isolation, not the rationale behind it. Plus, my classes actually have slightly different policies, so looking at any one of them only gives a partial view of the overall idea.
In most of my classes, there are a number of specific skills and pieces of knowledge I want students to come away with. AI has the potential to be an enormous asset in developing these. I love the observation that a student could be writing some code at 11 PM, get stuck, and instead of posting a question on a forum and waiting for a response the next day, work through it with AI assistance immediately. But at the same time, these agents can often be too powerful: it’s entirely plausible for a student to instead put a problem into ChatGPT or Copilot or any number of other tools and get an answer that fulfills the assignment without developing any understanding of their own. And that’s ultimately the issue: when does the AI assist the student, and when does the AI replace the student?
To try to address this, I came up with a two-fold policy. The first—the formal, binding, enforceable part of the policy—was:
My “we’re living in the future” moment came from the fact that this is exactly the same policy I’ve always had for collaboration with classmates and friends. You can talk about your assignment and your ideas all you want, but the content of the deliverable should always come from you and you alone. That applies to human collaboration, and that applies to AI collaboration as well.
With human collaboration, though, I find that line is pretty implicitly enforced. There are some clear boundaries. It’s weird to give a classmate edit access to your document. It’d be strange to hand a friend your laptop at a coffeeshop and ask them to write a paragraph of your essay. There are gray areas, sure, but the line between collaboration and misconduct is sharper. That’s not to say that students don’t cheat, but rather that when they do, they probably know it.
Collaboration with AI tends to feel a bit different. I think it’s partially because it’s still a tool, meaning that it feels like anything we create with the AI is fundamentally ours—we don’t regard a math problem that we solve or a chair that we build as any less “our” work because we used a calculator or a hammer, though we’d consider it more shared if we instead asked a friend to perform the calculations or pound in a few nails. And this is the argument I hear from people who think we should allow more collaboration with AI: it’s still just a tool, and we should be testing how well people know how to use it.
But what’s key is that in an educational context, the goal is not the product itself, but rather what producing the product says about the learner’s understanding. That’s why it’s okay to buy a cake from a grocery store to bring to a party, but it’s not okay to turn in that same cake for a project in a culinary class. If a student is using AI, we still want to make sure that the product reflects something about the learner’s understanding.
And for that reason, I augmented my policy with two additional heuristics:
Truth be told, I really prefer just the second heuristic, but there are instances—especially in getting feedback from AI on one’s own work—where it’s overly restrictive.
Both heuristics have the same goal: ensure that the AI is contributing to the learner’s understanding, not directly to the assignment. That keeps the assignment as a reflection of the learner’s understanding rather than of their ability to use a tool to generate a product. The learner’s understanding may be enhanced or developed or improved by their collaboration with AI, but the assignment still reflects the understanding, not the collaboration.
There’s a parallel here to something I do in paper-writing. When writing papers with student co-authors, I often review their sections, and I frequently come across simple things I think should be changed: minor issues like grammatical errors, occasional misuse of personal pronouns in formal writing, and so on. If a student is the primary author, it makes sense to give major feedback as comments for the student to incorporate; for minor suggestions, it would often be easier to make the change directly than to explain it, but I usually leave those as comments anyway because that pivots the process into a learning or apprenticeship model. By that same token, sure, there are things generative AI can do that make sense to incorporate directly; that’s part of why there’s been such a rush to build it into word processors and email clients and other tools. But reviewing and implementing a correction oneself helps develop an understanding of the rationale behind it. It’s indicative of a slightly improved understanding, or at least I suspect it is.
So, in some ways, my policy is actually more draconian than others’. I don’t want students to simply click the magic AI grammar-fixing button and have it apply suggested changes to their work directly (not least because I subscribe to the view that grammar should not be as restrictive and prescriptive as it is currently seen to be; see Semicolon by Cecelia Watson for more on what I mean there). I’m fine with them receiving those suggestions from such a tool, but the execution of those suggestions should remain with the student.
Of course, there are a couple of wrinkles. First, one of my classes deliberately doesn’t have this policy: it’s a heavily project-oriented class where students propose their own ~100-hour project to complete. The goal is to make an authentic contribution to the field—or, since that’s a hard task in only 100 hours, to at least practice the process by which one might make an authentic contribution to the field. Toward that end, if a tool can allow students to do more in that 100 hours than they could otherwise, so be it! The goal there is to understand how to justify a project proposal in the literature and connect the resulting project with that prior context: if AI allows that project to be more ambitious, all the better. The key is to understand what students are really expected to learn in a particular class, and the extent to which AI can support or undermine that learning.
And second, and most importantly: we are right at the beginning of this revolution. Generative AI emerged faster than any comparably revolutionary tool before it, but educators ultimately learned to adjust to prior innovations: when students got scientific calculators, we assigned more and tougher problems; when students started writing in word processors rather than with pen and paper, we expected more revisions and more polished results; when students got access to search engines instead of library card catalogs, we expected more sources and better-researched papers. The fundamental question remains: now that students have a new, powerful tool, how will we alter and raise our expectations for what they can achieve?