ABSTRACT

Suppose we were to apply the various tests of attributive confidence (laid out in Chapter 1, pp. 14-15) to the research and theory underpinning CBT: how well would it fare in comparison with other methods, or with no intervention at all? ‘What works?’ Although politicians and research funders like the simplicity of this phrase, and though it has performed a useful function in cutting through academic obfuscation (cf. Macdonald & Roberts 1995), it is too simple for its own good. What we should be asking instead is more complicated: (1) What exactly is the ‘what’? (2) How does it produce its effects? (3) How is ‘works’ assessed: is it measured against tangible behavioural change and standardised psychological measures, or more subjectively? (4) Can we be reasonably sure that any apparently useful results are attributable to the method(s) and not to collateral factors? (5) What are the essential ingredients of the intervention, and which are forgo-able add-ons? (6) How long do any beneficial effects last in comparison with other interventions? (7) For what conditions, and against what diagnostic criteria, have methods been used, and to what differential effects? (8) How much does it cost in relation to other approaches? (9) Who gets what is claimed to work, and who doesn’t? (10) Are there any side-effects; how foreseeable are they, and how remediable?