ABSTRACT

The dimensions of quality for applied and practice-based research presented in the article by Furlong and Oancea (2006), and further elaborated in this special volume, have made an important contribution to debate among researchers and policy-makers during the last two years. The Research Assessment Exercise in 2002 revealed an unhelpful fracture in the purposes and funding of research. In preparing their submissions, academics in professionally oriented disciplines such as accountancy, social work and education had difficulty in judging which research products would be given high status in the assessment process; a similar difficulty, albeit for somewhat different reasons, applied to those in disciplines such as chemistry, engineering and materials science with strong traditions of carrying out development work with funding from industry. In both cases the problem arose both from the assumptions made by assessors about the status of research in relation to the credentials of the sponsor (research councils having a higher status than public or private funding bodies) and from the nature of the products (articles in peer-reviewed academic journals having a higher status than reports published by non-academic organizations like Government departments, often because the theoretical and/or methodological characteristics were more fully articulated in the former). The discourse used in these two broad categories of publication compounded the problems, since peer-reviewed journals and reports of contract research were necessarily reported in distinctively different discourses, each of which instantiates claims for worth based on different criteria.
This fracture in the values underpinning the assessment of ‘research worth’ was not merely an internal matter between academics, but also the cause of continuing debate amongst politicians and policy-makers, for whom research which gained the highest academic ratings was often neither relevant to current urgent policy concerns nor expressed in a language which made it accessible to what became known as ‘users’ (Wiles, 2004, p. 33). In response to this obvious need for policy-makers and academic researchers to address each other’s needs and purposes more responsively, the BERA SIG on educational research and educational policy-making was set up by Saunders in 2005. Both authors of this contribution, working respectively in a university educational research institute and the recently established professional body for teaching in England, are active members of this SIG and have also worked together in the relationship of researcher and research sponsor/manager on the Pedagogies with E-Learning Resources (PELRS) project1 which is the subject of this contribution.

The Furlong and Oancea quality framework is, therefore, extremely welcome in providing not only an occasion, but also some important concepts, for addressing, if not definitively resolving, these enduring dilemmas. However, it is also important to be aware of the framework’s provenance in a highly politicized and competitive research environment. In drawing up the framework a selection was made of which literature to review, who were to be selected as ‘key persons’ to provide ‘insights’ on the developing model, and who were to be the ‘key representatives of the academic, policy and practitioner communities’ invited to the consultation day. It must be both a strength and a weakness that the framework emerged from a distillation of current ideas and assumptions, across a wide community, about research. Its strengths lie in its clarity, in the wide degree of acceptance it has received from many stakeholders, and in its practical utility in guiding not only those preparing RAE submissions and those serving on RAE panels, but also research sponsors and ‘users’ in thinking more deeply about what they might mean by ‘quality’. Its weakness, we would argue, lies in the insufficiency of its theorization, and specifically the missed opportunity to question assumptions about status and quality that have been culturally-historically constructed and then re-enacted through the beliefs and values of its sources and informants. For example, the framework seems to the two authors of this contribution to perpetuate a division between one kind of research that can claim ‘scientific robustness’ (which seems to apply only to the left-hand column, according to the labelling) and three other kinds that can claim ‘social and economic robustness’. Implicit in the model is the notion that research which has strong ‘value for use’ or ‘capacity building and value for people’ will not also display ‘methodological and theoretical robustness’.
This is the assumption which we want to challenge in giving an account of our research, and in doing so to give a more explicit and nuanced rendition of the relationship between knowledge and human activity. In this contribution we suggest how the value of our research into learning transformation can best be assessed by using criteria drawn from cultural-historical activity theory (CHAT). This entails a strong and constructively critical engagement with the framework, blurring the boundaries between the left-hand column and the rest, because we argue that in research about change, which necessarily includes all learning, the dimension of ‘methodological and theoretical robustness’ can only be fully realized through the dimensions of ‘impact’ and ‘value for people’. The PELRS project employed a CHAT approach which is characterized as ‘methodologically a form of action research that stresses the integration of basic theoretical work with empirical-practical engagement’ (Langemeyer & Nissen, 2005).