ABSTRACT

Although the classical and Bayesian paradigms are quite different, both take the likelihood as their starting point. As we saw in the last chapter, classical inference proceeds by forming likelihoods and regarding them as functions of the model parameters. Estimates of those parameters are then obtained by finding the parameter values that maximise the likelihoods, an optimisation that typically requires numerical methods. The enduring success of maximum-likelihood methods rests on the excellent properties of the resulting estimators, which we shall discuss later. The Bayesian paradigm is quite different, as explained in Chapter 1. As we shall appreciate in more detail in Section 4.1, we form the posterior distribution of the parameters by, in effect, multiplying the likelihood by the prior distribution. In order to make appropriate comparisons between classical and Bayesian methods, we devote this chapter to aspects of likelihood optimisation and to the properties of the resulting estimators.
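The numerical optimisation alluded to above can be illustrated with a minimal sketch, not taken from the text: we maximise the log-likelihood of an exponential model over its rate parameter with a golden-section search, and check the answer against the analytic maximum-likelihood estimate (the reciprocal of the sample mean). The data values and search bounds here are arbitrary illustrative choices.

```python
import math

def log_likelihood(rate, data):
    # Exponential log-likelihood: n*log(rate) - rate * sum(data)
    n = len(data)
    return n * math.log(rate) - rate * sum(data)

def golden_section_max(f, lo, hi, tol=1e-8):
    # Maximise a unimodal function f on [lo, hi] by golden-section search.
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            b, d = d, c          # maximum lies in [a, d]
        else:
            a, c = c, d          # maximum lies in [c, b]
        c = b - phi * (b - a)
        d = a + phi * (b - a)
    return (a + b) / 2

data = [0.8, 1.9, 0.3, 2.5, 1.1, 0.6]   # illustrative sample
mle_numeric = golden_section_max(lambda r: log_likelihood(r, data), 1e-6, 10.0)
mle_analytic = len(data) / sum(data)     # analytic MLE for the exponential rate
print(round(mle_numeric, 6), round(mle_analytic, 6))
```

In practice, and for the multi-parameter likelihoods treated later in the chapter, one would use a general-purpose optimiser rather than a one-dimensional line search, but the principle is the same: the estimate is the argument at which the log-likelihood attains its maximum.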