ABSTRACT

Selected Response Items (SRIs) are test item types in which the answer is part of the item and the examinee needs to recognise it. Examples include multiple-choice, true/false, matching, binary choice, extended matching, and multiple true/false. SRIs are common choices for formal and informal educational testing because they are economical to use, can routinely contribute to valid measurement, and have a substantial research base. SRIs are criticised for atomising and decontextualising content and cognition, and because they require recognition rather than production from examinees. Some of these criticisms are well-founded; others can be ameliorated through best practices. The overarching best-practice principle is that all item development activities must promote the validity of the interpretation of the examinee's response and must channel examinee thinking toward content and away from test-taking.

Multiple-choice (MC) items consist of a stem followed by a set of options, which includes a key (the correct response) and distractors. Distractors play an active role, attracting responses from examinees who are unsure of the key, and often represent examinee misconceptions. Three options are generally considered sufficient. Multiple-choice items can be scored correct/incorrect, with partial credit, or with negative marking, which addresses guessing. One of the main advantages of MC items is their versatility, especially with technological delivery.
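The scoring rules above can be illustrated with a brief sketch. The function names and the five-item response data are hypothetical; the negative-marking rule follows the standard correction-for-guessing formula S = R − W/(k − 1), where R is the number right, W the number wrong, and k the number of options per item.

```python
def number_right(responses, key):
    """Dichotomous (correct/incorrect) scoring: one point per match."""
    return sum(r == k for r, k in zip(responses, key))

def formula_score(responses, key, options=4):
    """Negative marking: S = R - W / (k - 1), penalising wrong answers
    to offset the expected gain from blind guessing. Omitted responses
    (None) are neither rewarded nor penalised."""
    right = sum(r == k for r, k in zip(responses, key) if r is not None)
    wrong = sum(r != k for r, k in zip(responses, key) if r is not None)
    return right - wrong / (options - 1)

# Hypothetical five-item test: two right, two wrong, one omission.
key = ["B", "C", "A", "D", "B"]
responses = ["B", "C", "D", None, "A"]

print(number_right(responses, key))   # 2
print(formula_score(responses, key))  # 2 - 2/3, about 1.33
```

Under this rule an examinee who guesses blindly on k-option items scores zero in expectation, which is the rationale for negative marking as a guessing deterrent.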

Matching items are MC items in which the same option set serves a series of stems. True/false and binary-choice items are MC items with two options. Each has its own particular best practices and weaknesses.

Technology first accelerated SRI use in the early twentieth century, and in the early twenty-first century it has again facilitated SRI test construction, delivery, and scoring. Given SRIs' efficiency, validity, research base, and technological enhancement, they will remain a robust part of formal and informal educational measurement.