ABSTRACT

No matter how one looks at it, the area of sequential Bayesian estimation is complicated. Unfortunately, its presentation can thus get out of hand rather quickly for an audience at practically any level. It so happens that an in-depth understanding would normally require a high level of mathematical sophistication. In a methodological book like ours, however, it would be a disservice to set the mathematical prerequisites that high. Frankly, we would rather not alienate those who may otherwise embrace the applied flavor of this book. Hence, we include only selected ideas and concepts from Bayes sequential estimation. In Section 15.2, we provide a brief review of selected concepts from fixed-sample-size estimation. We include the notions of priors, conjugate priors, and marginal and posterior distributions. These are followed by discussions of the Bayes risk, Bayes estimators, and highest posterior density (HPD) credible intervals. For brevity, we do not discuss Bayes tests of hypotheses; one may quickly review Bayes tests from Mukhopadhyay (2000, Section 10.6), and fuller treatments are found in Ferguson (1967), Berger (1985), and elsewhere. Section 15.3 provides an elementary exposition of sequential concepts under a Bayesian framework. First, we discuss risk evaluation and the identification of a Bayes estimator with one fixed sequential sampling strategy operating in the background. This is followed by the formulation of a Bayes stopping rule. The identification of a Bayes stopping rule, however, frequently becomes a formidable problem in its own right. Hence, we take the liberty of not going into many details at that point beyond citing some of the leading references. Section 15.4 includes some data analysis.
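
As a quick, self-contained illustration of the conjugacy and Bayes-estimator ideas reviewed in Section 15.2 (a standard textbook example, not drawn from the chapter's own developments), consider binomial sampling with a beta prior under squared-error loss. If $X \mid p \sim \mathrm{Binomial}(n, p)$ and the prior is $p \sim \mathrm{Beta}(\alpha, \beta)$, then the posterior distribution is
\[
  p \mid X = x \;\sim\; \mathrm{Beta}(\alpha + x,\; \beta + n - x),
\]
so the beta family is conjugate for the binomial likelihood. Under squared-error loss, the Bayes estimator is the posterior mean,
\[
  \widehat{p}_{B} \;=\; \frac{\alpha + x}{\alpha + \beta + n},
\]
and an HPD credible interval for $p$ is the highest-density region of this beta posterior.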