ABSTRACT

This chapter discusses basic concepts related to statistical modeling, estimation of parameters, and prediction of random variables. It is emphasized that statistical reasoning is always based on assumptions made by the modeler, and that these assumptions can be tested only by making further, more general assumptions. The modeling possibilities depend on the measurement scales of the variables. The basic criteria for comparing estimators are bias and variance, which together form the mean squared error. Linear estimators have attractive properties. Least squares, maximum likelihood, and Bayesian estimation are introduced. With respect to prediction, the concepts of linear predictor, best linear predictor, and best linear unbiased predictor are defined. The two dominant approaches to hypothesis testing are presented: Fisher’s theory and the Neyman-Pearson theory. Multiple testing easily produces spuriously significant effects. Confidence intervals offer an alternative viewpoint on the reliability of parameter estimates. When models are not nested, they can be compared using information-theoretic criteria. A total of 13 short examples are included.