ABSTRACT

We begin with independent and identically distributed (iid) observations $X_1, \ldots, X_n$ having a common probability mass function (pmf) or probability density function (pdf) $f(x;\theta)$, $x \in \mathcal{X}$. It is not essential for the $X_i$'s to be real valued or iid. The sample size $n$ is assumed known. We suppose that $\theta$ is fixed but unknown, with $\theta \in \Theta \subseteq \mathbb{R}^k$. This chapter develops point estimation techniques only. Section 7.2 introduces the method of maximum likelihood, pioneered by Fisher (1922, 1925a, 1934). Since one may encounter many estimators of $\theta$, Section 7.3 addresses criteria for comparing their performance. We show ways to find the best estimator among all unbiased estimators. Sections 7.4 and 7.5 present a number of fundamental tools, for example, the Rao-Blackwell Theorem, the Cramér-Rao Inequality, and the Lehmann-Scheffé Theorems. Section 7.6 discusses a large-sample criterion called consistency, due to Fisher (1922).
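
To make the setup concrete, the following is a minimal sketch of maximum likelihood estimation for one simple model. The exponential model, the sample size, and all variable names below are illustrative assumptions rather than material from the text; the likelihood is $L(\theta) = \prod_{i=1}^{n} f(X_i;\theta)$, and the maximum likelihood estimator (MLE) maximizes it over $\Theta$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative model (an assumption, not from the text):
# iid X_1, ..., X_n ~ Exponential(theta), with pdf
# f(x; theta) = theta * exp(-theta * x), x > 0, theta in Theta = (0, inf).
rng = np.random.default_rng(seed=7)
theta_true = 2.0
n = 200  # sample size, assumed known
x = rng.exponential(scale=1.0 / theta_true, size=n)

def neg_log_likelihood(theta):
    # -log L(theta) = -sum_i log f(X_i; theta)
    #               = -n log(theta) + theta * sum_i X_i
    return -n * np.log(theta) + theta * x.sum()

# Numerically maximize the likelihood over (a bounded piece of) Theta.
result = minimize_scalar(neg_log_likelihood, bounds=(1e-8, 100.0), method="bounded")

# For this model, solving d/dtheta log L(theta) = 0 yields the
# closed-form MLE 1 / (sample mean), which the numeric optimum matches.
print(f"numeric MLE:     {result.x:.4f}")
print(f"closed-form MLE: {1.0 / x.mean():.4f}")
```

Estimators such as $1/\bar{X}_n$ here are exactly the objects whose performance the criteria of Section 7.3 are designed to compare.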