ABSTRACT

Yet despite these shortcomings – or perhaps because of them – the UK is now a major player – if not the major player – globally in terms of school and student data. The collection and analysis of these data, not to mention the opportunities they provide for academics to carry out good-quality school effectiveness research, have become the template of choice for policy-makers from countries as far afield as Australia (Downes and Vindurampulle 2007) and Poland (Jakubowski 2008).

Over the past decade, various measures for gauging pupil attainment and progress in schools have been introduced: from simple threshold measures of raw academic attainment (such as the percentage of pupils obtaining a particular set of examination grades) to the latest complex contextual value-added (or ‘intake adjusted’) models that take account of a wide range of factors outside the control of schools. These developments have, by and large, been greeted favourably by teachers, and rightly so, though they have sometimes been made to serve two masters simultaneously: to inform school improvement through target setting and to facilitate the government’s accountability agenda through the publication of performance data. However, there are obstacles to extending the use of data even within a profession that welcomes it, one of which is that the terminology used forms a context-specific lexicon whose terms and cognates, while straightforward to those familiar with their provenance, have different shades of meaning in everyday life.

To a modest extent, this book seeks to prise open that ‘black box’ and explain the concepts, terminology and processes behind the collection, analysis, interpretation and utilisation of data in schools: from measures of attainment and progress, to non-cognitive metrics for social and emotional aspects of learning and staff responsibility. It is an ambition best served (we think) by considering in depth the English system, which leads the field, rather than describing in lesser detail many systems from different countries, though of course all data systems share some common features.
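The distinction between a simple threshold measure and a value-added measure can be illustrated with a toy calculation. The pupil scores, pass mark and one-predictor regression below are our own invented example, not any official English measure:

```python
# Illustrative only: contrasts a raw threshold measure with a simple
# (non-contextual) value-added residual. All pupil data are invented.

# (pupil, prior_attainment_score, exam_points) -- hypothetical cohort
pupils = [
    ("A", 20, 38), ("B", 25, 44), ("C", 30, 46),
    ("D", 35, 55), ("E", 40, 58), ("F", 45, 67),
]

# Threshold measure: percentage of pupils at or above an arbitrary pass mark.
PASS_MARK = 50
threshold_pct = 100 * sum(p[2] >= PASS_MARK for p in pupils) / len(pupils)

# Simple value-added: regress exam points on prior attainment (ordinary
# least squares, computed by hand), then report each pupil's residual --
# progress beyond what prior attainment alone would predict.
n = len(pupils)
mean_x = sum(p[1] for p in pupils) / n
mean_y = sum(p[2] for p in pupils) / n
slope = (sum((p[1] - mean_x) * (p[2] - mean_y) for p in pupils)
         / sum((p[1] - mean_x) ** 2 for p in pupils))
intercept = mean_y - slope * mean_x
residuals = {p[0]: p[2] - (intercept + slope * p[1]) for p in pupils}
```

A contextual value-added model extends this idea by adding further intake predictors (deprivation indicators, mobility and so on); the residual is then progress beyond what all those contextual factors together would predict.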
For one thing, there is an obvious ‘flow’ to data processes, from collection through analysis and interpretation to utilisation (see Figure 1.1), and it is a simple matter to represent in each system the typical learning feedback cycles within that flow: systemic practice learning (1), when the experience of utilisation is fed back to those who decide what data to collect; systemic technical learning (2), when the experience of analysis is fed back; professional learning (3), when the experience of utilisation informs new interpretative approaches; personal learning (4), when the experience of utilisation leads to better utilisation; and institutional learning (5), when the experience of utilisation leads schools to new ways of analysing the data passed to them by outside agencies.

Additionally, most systems share two features: the collection and analysis of data require high levels of technical expertise, while engagement with the interpretation and utilisation of data (where it can most effectively be used for improvement) requires higher levels of professional expertise (see Figure 1.1).
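On one reading, the flow and its five feedback cycles can be sketched as a small lookup table. The stage and cycle names follow the text; the stage each cycle feeds back to is our interpretation, not a definitive mapping:

```python
# Illustrative sketch of the data 'flow' and its five feedback cycles.
# Cycle names come from the text; stage-to-stage mappings are our reading.

STAGES = ["collection", "analysis", "interpretation", "utilisation"]

# cycle number -> (name, stage whose experience feeds back, stage informed)
FEEDBACK_CYCLES = {
    1: ("systemic practice learning",  "utilisation", "collection"),
    2: ("systemic technical learning", "analysis",    "collection"),   # destination assumed
    3: ("professional learning",       "utilisation", "interpretation"),
    4: ("personal learning",           "utilisation", "utilisation"),
    5: ("institutional learning",      "utilisation", "analysis"),
}

# Sanity check: every cycle feeds back to the same stage or an earlier one.
for name, source, destination in FEEDBACK_CYCLES.values():
    assert STAGES.index(destination) <= STAGES.index(source), name
```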