ABSTRACT

In Section 4.1, Bayes’ procedure is briefly discussed (see, for example, Zellner (1971), Bernardo and Smith (1994), O’Hagan (1994), and Hogg and Craig (1995) for further discussion). In Sections 4.2 and 4.3, the Bayesian approach is applied to regression models. The heteroscedasticity model proposed by Harvey (1976) is discussed in Section 4.2, while the autocorrelation model discussed by Chib (1993) is introduced in Section 4.3.

4.1 Elements of Bayesian Inference

Given the random sample (X1, X2,…, Xn), consider estimating the unknown parameter θ. In Section 1.7.5, the maximum likelihood estimator was introduced for estimating the parameter. Suppose that X1, X2,…, Xn are mutually independently distributed and that Xi has a probability density function f(x; θ), where θ is the unknown parameter to be estimated. As discussed in Section 1.7.5, the joint density of X1, X2,…, Xn is given by:
f(x1, x2,…, xn; θ) = ∏_{i=1}^n f(xi; θ),
which is called the likelihood function, denoted by l(θ) = f(x1, x2,…, xn; θ). In Bayes’ estimation, the parameter is taken as a random variable, say Θ, and prior information on Θ is taken into account for estimation. The joint density function (or the likelihood function) is regarded as the conditional density function of X1, X2,…, Xn given Θ = θ. Therefore, we write the likelihood function as the conditional density f(x1, x2,…, xn|θ). The probability density function of Θ is called the prior probability density function and is denoted by fθ(θ). The conditional probability density function fθ|x(θ|x1, x2,…, xn) has to be obtained, which is represented as:
fθ|x(θ|x1, x2,…, xn) = f(x1, x2,…, xn|θ) fθ(θ) / ∫ f(x1, x2,…, xn|θ) fθ(θ) dθ ∝ f(x1, x2,…, xn|θ) fθ(θ).
The relationship in the first equality is known as Bayes’ formula. The conditional probability density function of Θ given X1=x1, X2=x2,…, Xn=xn, i.e., fθ|x(θ|x1, x2,…, xn), is called the posterior probability density function, which is proportional to the product of the likelihood function and the prior density function.
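For a concrete sense of how the posterior arises from the likelihood and the prior, the following Python sketch evaluates fθ|x(θ|x1,…, xn) ∝ f(x1,…, xn|θ) fθ(θ) on a grid of θ values. The normal sampling model N(θ, 1), the N(0, 10²) prior, and all numerical settings are illustrative assumptions, not taken from the text; since this pair is conjugate, the exact posterior is also computed as a check.

```python
# A minimal numerical sketch of the relationship above: the posterior
# density of Θ is proportional to likelihood × prior.  The sampling
# model (normal data with known variance) and the normal prior are
# illustrative assumptions, not taken from the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Random sample X1,...,Xn from f(x; θ) = N(θ, 1), with true θ = 2.
n = 20
x = rng.normal(loc=2.0, scale=1.0, size=n)

# Prior fθ(θ): N(0, 10^2), a diffuse normal prior on Θ.
prior_mean, prior_sd = 0.0, 10.0

# Evaluate log-likelihood and log-prior on a grid of θ values.
theta = np.linspace(-2.0, 6.0, 2001)
log_lik = stats.norm.logpdf(x[:, None], loc=theta, scale=1.0).sum(axis=0)
log_prior = stats.norm.logpdf(theta, loc=prior_mean, scale=prior_sd)
log_post = log_lik + log_prior

# Normalize so the grid approximation integrates to one; the divisor
# approximates the denominator ∫ f(x1,...,xn|θ) fθ(θ) dθ.
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, theta)

# For this conjugate pair the posterior is known in closed form:
# N(m*, v*) with v* = 1/(n/σ² + 1/τ²) and m* = v*(n x̄/σ² + μ0/τ²).
v_star = 1.0 / (n / 1.0**2 + 1.0 / prior_sd**2)
m_star = v_star * (n * x.mean() / 1.0**2 + prior_mean / prior_sd**2)

print("grid posterior mean :", np.trapz(theta * post, theta))
print("exact posterior mean:", m_star)
```

The posterior mean from the grid agrees with the closed-form conjugate result, illustrating that normalizing the product of the likelihood function and the prior density recovers the posterior density.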