ABSTRACT

Vitriolic arguments about the merits of Bayesian versus classical approaches seem to have faded into a quaint past of which current researchers in the social sciences are, for the most part, blissfully unaware. In fact, well into the 21st century it almost seems odd that deep philosophical conflicts over this issue dominated the last century. What happened? Bayesian methods always had a natural underlying advantage because all unknown quantities are treated probabilistically, and this is the way that theoretical and applied statisticians prefer to think. However, without the computational machinery that eventually entered the field, we were stuck with models that couldn't be estimated, prior distributions (distributions that describe what we know before the data analysis) that incorporated uncomfortable assumptions, and an adherence to some bankrupt testing notions. Not surprisingly, what changed all this was a dramatic increase in computational power and major advances in the algorithms used on these machines. We now live in a world with very few model limitations, other than perhaps our imaginations, and one where researchers are for the most part comfortable specifying Bayesian or classical models as it suits their purposes.
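
For readers who want the one-line version of the "natural underlying advantage" described above, the following is simply the textbook statement of Bayes' rule, not a formula taken from this paper: the posterior distribution of an unknown parameter \(\theta\) given data \(y\) is built from exactly the two ingredients the abstract mentions, the prior and the likelihood.

\[
\pi(\theta \mid y) \;=\; \frac{p(y \mid \theta)\,\pi(\theta)}{\int p(y \mid \theta)\,\pi(\theta)\,d\theta} \;\propto\; p(y \mid \theta)\,\pi(\theta)
\]

Here \(\pi(\theta)\) is the prior distribution, \(p(y \mid \theta)\) is the likelihood, and the integral in the denominator is the normalizing constant. It is this integral that was historically intractable for realistic models, and the kind of algorithmic advance the abstract alludes to, Markov chain Monte Carlo being the canonical example, made it practical to sample from the posterior without ever computing it directly.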