My background is in intelligent tutoring systems. In intelligent tutoring systems, an artificial intelligence agent typically monitors student performance and reacts accordingly, giving feedback or support where necessary.
One of the interesting things about developing an online class is that because students are already engaging with a software system, the infrastructure and context necessary for an intelligent tutoring system are already present. Whereas intelligent tutoring systems are often separate activities that complement a previously delivered lecture, putting the learning online from the get-go allows us to integrate intelligent tutoring directly into the context of the lesson.
This development is in its infancy as far as I’m concerned, but I wanted to explain one way in which we use this in our Georgia Tech OMS class. Throughout the course, we have 125 interactive exercises, each equipped with an AI agent – which I’ve taken to calling a ‘nanotutor’ to reflect the tiny scope of the skills these agents teach – that gives students feedback on their latest responses. Let’s walk through an example of an exercise.
This exercise is an example of the block-stacking problem, a famous problem in AI. Here, we address how an AI agent would select and prioritize actions to accomplish some goal. The agent can move only one block at a time, and cannot move a block that has another block on top of it. To start the lesson, students are asked to do this task themselves: write a series of operators that will lead to the desired result. In the blue box above, students write their proposed series of operators.
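The mechanics of the exercise can be sketched with a tiny block-world model. The state representation and function names below are my own illustration of the rules just described, not the course's actual implementation:

```python
# A minimal block-world model: the state maps each block to what it
# sits on ("Table" or another block). Moving a block updates that map.

def block_on_top_of(state, location):
    """Return the block resting directly on `location`, or None."""
    for block, loc in state.items():
        if loc == location:
            return block
    return None

def apply_move(state, block, destination):
    """Apply Move(block, destination). Only one block moves at a time,
    and a block with another block on top of it cannot be moved."""
    if block_on_top_of(state, block) is not None:
        raise ValueError(f"{block} has a block on top of it")
    new_state = dict(state)
    new_state[block] = destination
    return new_state

# Example: C sits on A; A and B sit on the table.
state = {"A": "Table", "B": "Table", "C": "A"}
state = apply_move(state, "C", "Table")  # clear A first
state = apply_move(state, "A", "B")      # now A can legally move onto B
```

Trying `apply_move` on a block that still has something on top of it raises an error, which mirrors the rule the exercise enforces.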
Each time students submit an answer, they receive direct feedback from the nanotutor for that exercise. For example, imagine the students enter operators that aren’t valid for the problem at all:
On the right, the student receives the feedback from the nanotutor reflecting the problem with their response: here, the input simply wasn’t readable. The agent replies by telling students exactly what kind of input would be readable for this problem: a series of operators, one on each line, with the syntax ‘Move(object, location)’. So, the student fixes their input and writes the following:
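A readability check of this kind can be sketched as a small parser. The regex and function name here are illustrative assumptions, not the nanotutor's actual code:

```python
import re

# One operator per line, with the syntax Move(object, location).
MOVE_PATTERN = re.compile(r"^Move\((\w+),\s*(\w+)\)$")

def parse_response(text):
    """Parse a student response into (block, destination) pairs.
    Return None if any line is unreadable, so the tutor can reply
    with feedback describing the expected syntax."""
    moves = []
    for line in text.strip().splitlines():
        match = MOVE_PATTERN.match(line.strip())
        if match is None:
            return None  # unreadable: trigger syntax feedback
        moves.append(match.groups())
    return moves
```

For instance, `parse_response("Move(C, Table)\nMove(A, B)")` yields two parsed moves, while a free-text answer like `"put C on the table"` comes back as `None` and triggers the syntax feedback.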
Now the input is readable, and in fact, if these moves could be executed, the goal would be accomplished. But there is a problem: to move block A on top of block B, A cannot have anything on top of it. However, block A has block C on top of it. So, the agent replies to the student that while their input is now readable, it isn’t valid. It disobeys one of the rules. So, the student corrects this problem:
Now the student’s moves are all valid and legal. But there is another problem: they don’t lead to the desired result. A is still on the table at the end of this sequence of moves, so the nanotutor replies telling the student that their moves are all readable and legal, but they don’t lead to the right configuration.
So, finally, the student revises this one more time:
The exercises throughout the course range wildly in complexity, but almost all are equipped with an agent that operates under this structure. Here’s a visual that represents the general way these agents approach these problems:
The nanotutor first checks if an answer is readable; if not, it replies with feedback on what responses would be readable, transitioning the student from the blue rectangle to the red circle. Once the response is readable, the agent checks if it is valid; if not, it responds with what is invalid about the answer, such as an illegal move, transitioning students from the red circle to the orange circle. Once the answer is valid, it checks to see if it’s correct; if not, it points out the error, transitioning students from the orange circle to the blue one. Then, finally, if there are multiple possible correct answers but some are better than others, it gives feedback on that as well, potentially transitioning students from the blue circle to the green one. For example, the nanotutor for the block problem above might give students feedback on how the problem could be solved in only three moves if they are using four.
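Putting the stages together, the agent's decision structure amounts to a simple cascade: each check only runs once the previous one passes, and the first failure determines the feedback. This is a sketch of that structure, with the per-stage check functions passed in as parameters; the function names and feedback strings are illustrative, not the course's actual code:

```python
def nanotutor_feedback(response, parse, find_invalid, reaches_goal, is_optimal):
    """Run the four-stage cascade: readable -> valid -> correct -> optimal.
    The first failing stage determines the feedback the student sees."""
    moves = parse(response)
    if moves is None:                 # stage 1: readable?
        return "unreadable: expected one Move(object, location) per line"
    error = find_invalid(moves)
    if error is not None:             # stage 2: valid?
        return f"invalid: {error}"
    if not reaches_goal(moves):       # stage 3: correct?
        return "incorrect: these moves don't reach the goal configuration"
    if not is_optimal(moves):         # stage 4: best answer?
        return "correct, but it can be done in fewer moves"
    return "correct"
```

Passing in the checks keeps the cascade generic: each of the 125 exercises can supply its own notion of readability, validity, correctness, and optimality while reusing the same feedback structure.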
During the problem-solving activity, students receive instant feedback on the correctness of their answers, targeted directly at their individual errors. In a traditional classroom, students typically rely on an instructor to provide this kind of feedback, but it is difficult for one instructor to give individualized feedback to an entire class at once. Some classes use tools like this on assignments to help students, but this still decontextualizes the problem: students solve problems outside the context of the lectures, and thus do not have an opportunity to check their understanding of lecture material while watching the lecture. The online interface provided here, however, lets us embed this individualized feedback directly in the context of the lesson itself, in a way that can serve an unlimited number of students simultaneously.