ABSTRACT

Advances in technology and analytic methods spur interest in forms of assessment that look very different from familiar tests. They can involve interactivity, digital environments, or simulations and games, and they can capture complex products or rich data about processes. This chapter discusses how core concepts of educational measurement modeling can be extended to reason from performances in interactive simulation environments, drawing on the sociocognitive perspective, the argument framework, and subjectivist-Bayesian modeling. It describes complementary roles for learning-analytic methods, which identify evidence in interactive, evolving performances, and for modular probability-based latent-variable models, which integrate evidence across tasks and quantify its value. Such a framework holds advantages for managing evidence and inference and for addressing the social values inherent in reliability, validity, comparability, and fairness.