ABSTRACT

This chapter introduces kernel smoothing techniques for scenarios in which two or more distributions are involved. It presents local significance testing for the difference between two density estimates, and extends the comparison of multiple densities to the framework of classification (supervised learning). It then examines deconvolution density estimation for smoothing data measured with error: a naive application of estimation techniques designed for error-free data to contaminated data can produce biased results and lead to erroneous conclusions. Two errors-in-variables models have been widely considered in the literature, classical errors and Berkson errors. The chapter also highlights the role that kernel estimation can play in understanding other non-parametric smoothing techniques, in this case nearest neighbour estimation, and fills in the mathematical details of the topics considered. Kernel estimators with a fixed, global bandwidth can be regarded as the most fundamental case of data smoothing.
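
As a minimal illustration of the fixed, global-bandwidth case mentioned above (a sketch, not taken from the chapter itself; the function name and bandwidth value are illustrative assumptions), the following computes a Gaussian kernel density estimate in which every observation receives the same bandwidth h:

```python
import numpy as np

def gaussian_kde(x, data, h):
    """Fixed-bandwidth Gaussian kernel density estimate at points x.

    x    : evaluation points, shape (m,)
    data : observed sample, shape (n,)
    h    : global bandwidth, shared by every data point (illustrative choice)
    """
    u = (x[:, None] - data[None, :]) / h          # (m, n) scaled distances
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian kernel weights
    return k.mean(axis=1) / h                     # average over the sample

# Usage: estimate the density of a standard normal sample
rng = np.random.default_rng(0)
sample = rng.standard_normal(500)
grid = np.linspace(-5, 5, 201)
fhat = gaussian_kde(grid, sample, h=0.4)
```

Because the bandwidth is global, every point of the sample is smoothed by the same amount; adaptive methods such as nearest neighbour estimation instead let the degree of smoothing vary with the local density of the data.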