ABSTRACT

In designing an experiment, the researcher’s main interest is in creating controlled conditions under which the characteristics of interest can be measured easily. That is, the experiment is designed to hold uniform those factors that are not part of the treatment. In doing so, the researcher must keep in mind that the design must satisfy the assumptions required for proper interpretation of the data. Failure to meet these assumptions affects not only the significance level but also the sensitivity of the F-test and the t-tests. The assumptions underlying most of the designs in this book are that: (1) the effects of blocks, treatments, and error are additive (this implies no interaction); (2) the observations have a normal distribution (experimental errors are normally distributed); (3) the observations are distributed independently (experimental errors are independent); and (4) the variance of the observations is constant, that is, there is homogeneity of variance. This implies that the treatment effects are constant and that the experimental errors have a common variance. In Appendix K, the reader can use MINITAB to evaluate whether any of these assumptions are violated. We must keep in mind that under certain conditions not all of these assumptions are met. For example, when data are expressed as percentages (such as the percentage of plants infected with a disease or the percentage of germinated seeds in a plot), the observations have a binomial distribution and, hence, the variance is not constant. Similarly, when we are dealing with count data (such as the number of rare insects in a particular field or the number of infested plants in a greenhouse), we have a Poisson distribution, in which the variance is equal to the mean (more is said about the assumptions and their violations in the sections that follow).
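As an informal illustration of how such checks might be carried out, the following is a minimal Python sketch using SciPy rather than the MINITAB workflow referenced in Appendix K; the three treatment groups, their means, and the choice of the Shapiro-Wilk and Levene tests are assumptions made for this example only and are not taken from the book.

```python
# Minimal sketch of assumption checks for a one-way layout, assuming
# hypothetical data; the book's own procedure uses MINITAB (Appendix K).
import numpy as np
from scipy import stats

# Hypothetical yields from three treatment groups (made-up numbers).
rng = np.random.default_rng(1)
groups = [rng.normal(loc=m, scale=2.0, size=10) for m in (20.0, 22.0, 25.0)]

# Assumption (2): normality of experimental errors.
# Shapiro-Wilk test applied to residuals (observation minus group mean).
residuals = np.concatenate([g - g.mean() for g in groups])
w_stat, p_normal = stats.shapiro(residuals)
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_normal:.3f}")

# Assumption (4): homogeneity of variance across treatments (Levene's test).
lev_stat, p_levene = stats.levene(*groups)
print(f"Levene: statistic = {lev_stat:.3f}, p = {p_levene:.3f}")

# If neither check gives cause for concern, the usual F-test (one-way ANOVA)
# can be interpreted in the ordinary way.
f_stat, p_anova = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_anova:.3f}")
```

For percentage or count data of the kind mentioned above, where the variance is tied to the mean (binomial or Poisson), such checks would typically signal heterogeneous variances, which is the situation taken up in the sections that follow.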