ABSTRACT

In this chapter, we will introduce two complex difference statistics: factorial ANOVA and ANCOVA. Both factorial ANOVA and ANCOVA tell you whether considering more than one independent variable at a time gives you additional information over and above what you would get if you did the appropriate basic inferential statistics for each independent variable separately. Both of these inferential statistics have two or more independent variables and one scale (normally distributed) dependent variable. Factorial ANOVA is used when there is a small number of independent variables (usually two or three) and each of these variables has a small number of levels or categories (usually two to four). ANCOVA is typically used to adjust or control for differences between the groups based on another, typically interval-level, variable called the covariate. For example, imagine that we found that boys and girls differ on math achievement. However, this could be because boys take more math courses in high school. ANCOVA allows us to adjust the math achievement scores based on the relationship between the number of math courses taken and math achievement. We can then determine whether boys and girls still differ on math achievement after making the adjustment. ANCOVA can also be used when one wants to use one or more discrete or nominal variables and one or two continuous variables to predict differences in one dependent variable.

Assumptions of Factorial ANOVA and ANCOVA (Analysis of Covariance)

The assumptions for factorial ANOVA and ANCOVA are that the observations are independent, the variances of the groups are equal (homogeneity of variances), and the dependent variable is normally distributed for each group. Assessing whether the observations are independent (i.e., each participant's score is not related systematically to any other participant's score) is a design issue that should be evaluated before the data are entered into SPSS.
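To make the boys-and-girls example concrete, here is a minimal numerical sketch of the covariate adjustment that ANCOVA performs. All names and numbers are invented toy data, and SPSS would additionally report significance tests; the point is only to show how group means are adjusted along the pooled within-group regression slope.

```python
import numpy as np

# Invented toy data: achievement rises with the number of math courses
# taken (the covariate), and boys happen to have taken more courses.
courses_boys,  math_boys  = np.array([2, 3, 4, 5]), np.array([60, 65, 70, 75])
courses_girls, math_girls = np.array([1, 2, 3, 4]), np.array([55, 60, 65, 70])

def cross(x, y):
    """Within-group sum of cross-products of deviations from the means."""
    return ((x - x.mean()) * (y - y.mean())).sum()

# Pooled within-group slope of achievement on courses taken
b_w = (cross(courses_boys, math_boys) + cross(courses_girls, math_girls)) / (
      cross(courses_boys, courses_boys) + cross(courses_girls, courses_girls))

grand_x = np.concatenate([courses_boys, courses_girls]).mean()

# Adjusted means: slide each group's mean along the pooled slope
# to the grand mean of the covariate
adj_boys  = math_boys.mean()  - b_w * (courses_boys.mean()  - grand_x)
adj_girls = math_girls.mean() - b_w * (courses_girls.mean() - grand_x)

print(math_boys.mean() - math_girls.mean())  # raw difference -> 5.0
print(adj_boys - adj_girls)                  # adjusted difference -> 0.0
```

In this contrived example the entire raw difference is explained by the covariate, so the adjusted means are equal; with real data the adjusted difference would simply be smaller (or occasionally larger) than the raw one.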
Using random sampling is the best way to ensure that the observations are independent; however, this is not always possible. The most important thing to avoid is having known relationships among participants in the study (e.g., several family members, or several participants obtained through "snowball" sampling, included as "separate" participants).

Second, to test the assumption of homogeneity of variances, SPSS computes the Levene statistic, which can be requested using the General Linear Model command. It is important to have homogeneity of variances, particularly if sample sizes differ across levels of the independent variable(s).

The third assumption is that the dependent variable is normally distributed. Factorial ANOVA is robust to violations of this assumption. To test it, you can compare boxplots or compute skewness values, using the Explore command, for the dependent variable in each group (cell) defined by each combination of the levels of the independent variables.

Additional Assumptions for Analysis of Covariance (ANCOVA)

For ANCOVA there is a fourth assumption: there is a linear relationship between the covariates and the dependent variable. This can be checked with a scatterplot (or a matrix scatterplot if there is more than one covariate). In addition, the regression slopes for the covariates (in relation to the dependent variable) need to be the same for each group; this is called homogeneity of regression slopes. It is one of the most important assumptions, and it can be checked with an F test on the interaction of the independent variables with the covariate. If the F test is significant, the assumption has been violated.

• Retrieve hsbdataB from your data file.
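Outside SPSS, the homogeneity-of-variances and normality checks described above can be sketched with scipy's Levene test and skewness function. The data below are hypothetical scores for the four cells of a 2 × 2 design, invented only to illustrate the checks.

```python
from scipy import stats

# Hypothetical dependent-variable scores for the four cells of a 2 x 2 design
cells = {
    ("male", "low"):    [52, 55, 58, 60, 63],
    ("male", "high"):   [61, 64, 66, 69, 72],
    ("female", "low"):  [54, 56, 59, 61, 64],
    ("female", "high"): [63, 65, 68, 70, 73],
}

# Homogeneity of variances: Levene's test across all cells
# (SPSS reports a Levene statistic via the General Linear Model command)
stat, p = stats.levene(*cells.values())
print(f"Levene W = {stat:.3f}, p = {p:.3f}")  # a large p supports the assumption

# Approximate normality: skewness within each cell
# (values near 0 suggest symmetry; large |skew| would be a concern)
for cell, scores in cells.items():
    print(cell, round(stats.skew(scores), 2))
```

A nonsignificant Levene test and small per-cell skewness values, as here, are consistent with the homogeneity and normality assumptions; a significant Levene test would be more worrisome when cell sizes are unequal.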