ABSTRACT

In Bangladesh, the high-stakes Secondary School Certificate (SSC) test of English has been reported to exert negative washback on teaching–learning practices at schools, which, in turn, calls the quality of the test into question. Research studies have also reported a gap between the intentions of the curriculum and the current SSC English test format, which calls for further investigation into the validity and reliability of the test. This chapter reports the findings of a research study that aimed to investigate (a) the quality of the test items in terms of how well they discriminate between more-able and less-able candidates and (b) the ways these items are marked. The test items analysed were chosen from the 2017 SSC English examination question set. Data for calculating the marks obtained by candidates were collected from about 5000 randomly chosen exam scripts marked under the supervision of six head examiners from three education boards. The findings reveal that the differences among most test items are too trivial to meaningfully distinguish between average and high-ability candidates, that the texts for the reading tasks are poorly written and edited, and that there is evidence of marker inference, especially in the more subjective test items.