Module 20. Item Response Theory
In Module 13, we discussed classical test theory item analysis (CTT-IA), where the focus was on how difficult and discriminating each item on a given test was within a particular sample. Under the CTT-IA framework, items are retained or discarded based on how difficult they are, as estimated by the percentage of respondents answering the item correctly (the p value), and how well they discriminate among examinees, as estimated by an item-total correlation (the point-biserial correlation coefficient). In addition, our estimate of a person's underlying true score (or ability level) is simply the number of items answered correctly, regardless of which items the individual answered correctly. CTT-IA has been a workhorse over the years for test developers and users who want to improve the quality of their tests. Given no other information, CTT-IA can be useful for local, small-scale test development and revision. However, newer, more psychometrically sophisticated models of item responding provide much more useful and generalizable information, namely, item response theory (IRT).
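To make the CTT-IA quantities concrete, the following sketch computes item p values, corrected item-total (point-biserial) correlations, and number-correct scores for a small, hypothetical 0/1 response matrix. The data and variable names are illustrative assumptions, not from any real test.

```python
import numpy as np

# Hypothetical response matrix: 6 examinees x 4 items
# (rows = people, columns = items; 1 = correct). Illustrative data only.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [1, 1, 1, 1],
])

# Item difficulty (p value): proportion of examinees answering correctly.
p_values = responses.mean(axis=0)

# Item discrimination: point-biserial correlation between each item and
# the total score on the remaining items (the "corrected" item-total
# correlation, which avoids inflating r by including the item itself).
n_items = responses.shape[1]
discriminations = []
for j in range(n_items):
    rest_score = responses.sum(axis=1) - responses[:, j]
    r = np.corrcoef(responses[:, j], rest_score)[0, 1]
    discriminations.append(r)

# CTT true-score estimate: simple number-correct, regardless of which
# particular items were answered correctly.
total_scores = responses.sum(axis=1)

print("p values:", np.round(p_values, 2))
print("item-rest correlations:", np.round(discriminations, 2))
print("number-correct scores:", total_scores)
```

Note that two examinees with the same number-correct score receive identical true-score estimates here even if they answered different items, which is exactly the limitation IRT addresses.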