ABSTRACT

From the very beginnings of economic science, economists (and the public) have been convinced that economic theories can offer impartial guidance for public policy. In other words, economic science, it has always been believed, can objectively pronounce some policies to be economically “bad” and other policies to be economically “good.” For the age of Adam Smith there was little ambiguity in the phrase “economically good” (or its opposite). That which increased the “wealth of nations” was clearly economically good. And for the classical economists what constitutes a nation’s wealth seemed reasonably clear.2 As economics advanced through the neoclassical era following the marginalist revolution, the precise nature of the economic criterion came to be developed far more critically and self-consciously. The emergence of the theory of welfare economics during the first half of the now concluding century consisted, to a significant degree, in attempts to grapple ingeniously with the conceptual problems raised by the subjective character of utility, as distinct from objective wealth (and thus by the difficulty of aggregating a society’s economic well-being). Hayek’s mid-century insights concerning the dispersed character of available knowledge in society3 further challenged the possibility of treating the economic problem facing society as that of achieving global efficiency in the allocation of resources. Recent critiques4 of traditional welfare theory have radically questioned the very possibility of devising an economic criterion that might, independently of particular moral-philosophical positions, be deployed to pronounce one economic state of affairs (or one economic policy) “economically better” than another.