Agricultural research in North America, for most of its history, has been devoted to the goal of increasing production. This emphasis led to the neglect of two topics: the environmental consequences of agricultural technology (including chemicals) and the social behavior of people engaged in production. Farmers were viewed to some extent as undependable components of the market system, since they could not be relied upon to use the techniques and strategies that research had shown would produce the best results at the least cost (see Bennett 1986). This narrow view of the nature of agriculture developed in the first quarter of the twentieth century, as “farm economics” gave way to “agricultural economics” and as the old extension stations and services became thoroughly integrated into the land grant universities with their huge agriculture departments (Bennett 1982, 7–10). Farming, which had been seen in the nineteenth century as a worthwhile human endeavor carried on by sturdy yeomen, was turned into a branch of the business world. Since the burgeoning “science” of economics increasingly took over the responsibility of setting standards of performance and profit, the principal aim of agriculture came to be one of increasing production without much concern for the “externalities” of the activity: for example, the costs of resource degradation, or the costs to the farmer of commodity price fluctuation. The latter issue led, by the 1890s, to vigorous agrarian protest movements. However, the environmental costs of the growing rationalization of farming did not begin to be an issue until the ecology movements of the 1970s and 1980s, and their offshoot, the “alternative agriculture” movements, got under way (see, for example, Lockeretz 1986).