Working in both education and artificial intelligence, I've watched the last few months bring a flood of tools with some really exciting applications to teaching and learning. But for a lot of those applications, exciting is just one side of the coin, and the other side is scary. There are so many ways in which AI can help teachers, especially on the content generation side, but that can have a massive downside too: it can erode students' trust that the teacher really is the one behind the content they present, and it can replace high-quality content with low-quality AI slop. And there's not a clear dividing line between those two categories of use cases.

Recently, I’ve been experimenting with one form of AI content generation that I think has a lot of upside: I take content that otherwise would have been presented in static text with minimal reinforcing visuals, and I use AI to liven it up with voice narration, video avatars, and improved visuals. And importantly, that AI is generally trained on my own likeness and voice, so it is made to look and sound like me.

But while doing this, I realized how dangerous it can get. It wouldn't be difficult to hand my avatar to someone else and have them write content in my name. If I got into the habit of posting weekly AI-produced video announcements, it'd be easy to ask one of my teaching assistants to write one some week and publish it through my avatar, letting people believe it's really me. Technologically, there's no obstacle. But ethically, there clearly is.

I feel that in order to navigate this, it's important to be proactive and forthcoming about how you intend to use AI. Your name, likeness, and reputation are your biggest assets, and if people come to doubt whether even those are authentic representations of you, you've lost something significant.

So, toward that end, I wrote down for myself three rules for principled AI content generation. These are the rules I follow to ensure (a) that my use of AI in the content generation loop does not undermine anyone's trust in the real me, and (b) that I'm using AI to improve what I put out in the world, rather than spending 10% of the time to make something 50% as good.

The three rules are:

Authenticity: Anything presented in my name must be written by me, whether that's a forum post published under my name, an email sent by me, or a video presented by an AI avatar of me. AI can play the role of a collaborator or editor in the content generation process: it can give minor feedback that I directly incorporate myself, be a brainstorming buddy for ideas, or do basically anything else I'd be comfortable having my spouse, my colleagues, or my teaching assistants do. But just like I wouldn't ask any of them to write an email as me or post to a course forum as me, I'm not going to let AI venture that far either. If I wouldn't be comfortable getting in front of a teleprompter and reading it, nothing bearing my name, face, or voice can present it.

Enrichment: AI content generation is only used when it improves what I would have done otherwise. If I would have been comfortable going into a studio and presenting something with a script and slides, that's still what I do. But a lot of the time I have content that I'm not ready to give the full studio treatment: I'm either not confident enough in my explanation, or the field is moving too fast to commit anything to durable video. In those situations, we usually stick to text, static slides, and pointers to external links and readings. For that content, AI-generated presentations and narration improve on what we would have done otherwise, so they're acceptable. In the same way, I stopped filming weekly video announcements for my course years ago and kept them in text because it became too much work to do every week for relatively meager gains (<10% of students watched the videos rather than reading the text); AI generation of weekly video announcements would be acceptable to me because it's not something I'm going to do myself anyway. AI has to make what we would have done better, not let us make something worse with way less effort.

Transparency: It always has to be clear when AI is responsible for content generation. I'm not going to roll out my AI avatar and pretend it's the real me. I'm even considering doing something stylistic to my AI avatar to give away that it's my avatar on screen, not the real me. It's a slippery slope from there to pretending your AI is the real you, and it's a slope we're all already on in some way. In a lot of ways, when we use spell check or an email client's automated reminders to follow up with someone, we're not being transparent about the role of AI in what we're doing, and at some level I think that's okay. What's even more complicated is that the level shifts over time: nowadays my expectation is that people use spell check, but I remember my English teacher 25 years ago disallowing it. While there's a vast gray area about when transparency is needed, there are some cases I feel are obvious: sending an email written by AI or posting a video of your AI avatar as if it's you is clearly on the other side of the gray area.

Transparency and authenticity are related, but distinct: in theory, I could be very transparent that my avatar is presenting something written entirely by ChatGPT, so it's possible to be transparent without being authentic. In the same way, I could write something that my AI avatar presents and, when asked, pretend I filmed it in the studio and hope no one notices any idiosyncrasies: authentic, but not transparent.