ABSTRACT

In the not too distant past, educational indicators were seen as fairly simplistic in nature, defined as straightforward quantitative measures of different aspects of an education system. Most commonly, indicators were aggregated at the national level, and in some cases at the regional, local district or school level, to provide a basic summary of educational provision, take-up and costs. However, in the last ten years or so a more complex picture has developed, with indicators falling into distinct categories of input, process, context and output data. This work has drawn to a large extent on the OECD indicators project (INES), which aims to develop a comprehensive system of educational indicators covering four different aspects: student learning outcomes, education and labor market destinations, schools and school processes, and the attitudes and expectations of stakeholder groups in education (OECD, 1995). However, Scheerens (1999) has argued that current educational indicator systems are limited in approach, as they have "…no aspiration to 'dig deep', while employing easily measured characteristics and so-called proxy measures." He goes on to suggest that:

Another “danger” is the use of process or throughput data as evaluation criteria, instead of explanatory conditions of educational outputs. This could easily lead to goal displacement, where the “means” in education are treated as “goals” in themselves. A technical limitation which might encourage this improper use of process indicators is the fact that the question of relating process and output indicators by means of formal statistical analysis has hardly been tackled for applied purposes.

(Scheerens, 1999).