ABSTRACT

Identifying the sources of cognitive complexity in ability and achievement test items has been an active research area (for a review, see Bejar, 2010). Developing an empirically plausible cognitive theory for a test offers several advantages. First, the results provide evidence for construct validity, especially the response-process aspect, as required by the Standards for Educational and Psychological Testing (AERA/APA/NCME, 1999). Second, as several researchers have noted (e.g., Henson and Douglas, 2005), results from cognitive processing research are relevant to test design: identifying the levels and sources of cognitive complexity in each item has implications for both selecting items and designing items. Third, the increasing attractiveness of automatic item generation (see Bejar, 2010; Gierl and Haladyna, 2012) depends on the predictability of the psychometric properties of the generated items, and cognitive models of item difficulty and other psychometric properties can supply the necessary predictions. Finally, assessments of individual differences in cognitive skills support more specific interpretations of test scores. Diagnostic assessment, when applied in the context of appropriate psychometric models (Henson et al., 2009; von Davier, 2008), provides important supplementary information beyond overall test scores (for reviews, see Leighton and Gierl, 2007; Rupp et al., 2010).