ABSTRACT

This paper evaluates the student modeling procedure in the ACT Programming Tutor (APT). APT is a practice environment that provides assistance to students as they write short programs. The tutor is constructed around a set of several hundred programming rules, called the ideal student model, which allows the program to solve exercises along with the student. As the student works, the tutor maintains an estimate of the probability that the student has learned each rule in the ideal model, in a process we call knowledge tracing. The cognitive model and the learning and performance assumptions that underlie knowledge tracing are described. These assumptions also yield performance predictions. The predictions provide a good fit to students’ performance in completing tutor exercises, but a more important issue is how well the model predicts students’ performance outside the tutor environment. A previous study showed that the model provides a good fit to average posttest performance across students but is less sensitive to individual differences. This paper describes a method of individualizing learning and performance estimates on-line in the tutor and assesses the validity of the resulting performance predictions.
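The per-rule probability estimate described above can be sketched as a simple Bayesian update applied at each opportunity to use a rule. The function below is an illustrative sketch, not the paper's implementation; the parameter names (slip, guess, learn) and their values are assumptions chosen for the example.

```python
def update_knowledge(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.4):
    """One knowledge-tracing step for a single rule.

    p_known: prior probability the student has learned the rule.
    correct: whether the student applied the rule correctly this time.
    p_slip, p_guess, p_learn: illustrative performance/learning
    parameters, not values taken from the paper.
    """
    if correct:
        # A correct response is evidence the rule is known,
        # discounted by the chance of a lucky guess.
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        # An error may be a slip by a student who knows the rule.
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # The student may also learn the rule at this opportunity.
    return posterior + (1 - posterior) * p_learn
```

Under this sketch, each correct application raises the estimate and each error lowers it, so the tutor's running estimate tracks the student's evolving mastery of the rule.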