ABSTRACT

In this chapter, we focused our attention on the application of regularization techniques to latent variable models, in particular factor analysis and structural equation modeling (SEM). We saw that data analysts have a large set of tools from which to choose when fitting these models. Other approaches that we did not touch on in this chapter, but which may also prove useful for this purpose, include methods based on penalized likelihood (Huang, Chen, & Weng, 2017) and moderated nonlinear factor analysis (Bauer & Hussong, 2009). The approaches discussed in this chapter allow the researcher to fit standard latent variable models while taking advantage of regularization, which can be particularly useful in situations involving high-dimensional data structures. In many respects, regularized latent variable models, be they factor analysis or SEM, incorporate the same principles that were outlined in Chapter 2 and demonstrated in the preceding chapters. At the time of this writing, it is not possible to identify any one approach as optimal or preferred vis-à-vis the others. Rather, we recommend that the researcher apply several of the methods outlined here and compare the results, in order to develop a full understanding of how regularization affects the estimates relative to the standard approach, and of whether different penalty functions yield different results. Such an analytic strategy can be thought of as a type of sensitivity analysis. The more similar the results from the various penalized estimators are, the more confidence we can have in their generalizability. On the other hand, if different penalty functions yield very different results, we would need to be more careful in interpreting and generalizing the findings.
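
To make the idea of a penalty-based sensitivity analysis concrete, the sketch below is a minimal illustration, not code from this chapter or from any particular package: it fits a one-factor model to simulated data by penalized unweighted least squares under both a lasso and a ridge penalty, and then compares the loading estimates across several values of the tuning parameter. The simulated data, the tuning values, and all function names are assumptions made purely for illustration.

```python
# Illustrative sketch only: a one-factor model fit by penalized unweighted
# least squares, comparing lasso and ridge penalties on the loadings.
# The data, tuning values, and function names are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(seed=1)

# Simulate n observations on p indicators from a one-factor model;
# the last two indicators have zero loadings on the factor.
n, p = 500, 6
true_loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.0, 0.0])
factor_scores = rng.normal(size=n)
X = np.outer(factor_scores, true_loadings) + rng.normal(scale=0.6, size=(n, p))
S = np.cov(X, rowvar=False)  # sample covariance matrix

def objective(theta, penalty, tau):
    """Unweighted least-squares discrepancy plus a penalty on the loadings."""
    loadings = theta[:p]
    uniquenesses = np.exp(theta[p:])  # log-parameterized to stay positive
    implied = np.outer(loadings, loadings) + np.diag(uniquenesses)
    discrepancy = np.sum((S - implied) ** 2)
    if penalty == "lasso":
        return discrepancy + tau * np.sum(np.abs(loadings))
    return discrepancy + tau * np.sum(loadings ** 2)  # ridge

def fit(penalty, tau):
    """Estimate the loadings under the given penalty and tuning parameter."""
    start = np.concatenate([np.full(p, 0.5), np.zeros(p)])
    result = minimize(objective, start, args=(penalty, tau),
                      method="Nelder-Mead",
                      options={"maxiter": 50000, "maxfev": 50000})
    return result.x[:p]

# Sensitivity analysis: how far apart are the lasso and ridge solutions?
for tau in (0.0, 0.1, 0.5):
    lam_lasso = fit("lasso", tau)
    lam_ridge = fit("ridge", tau)
    gap = np.max(np.abs(lam_lasso - lam_ridge))
    print(f"tau = {tau}: largest loading difference (lasso vs. ridge) = {gap:.3f}")
```

In practice, the tuning parameter would be selected by cross-validation or an information criterion, and the comparison would be carried out with whichever of the regularized factor analysis or SEM methods discussed in this chapter the researcher has adopted; the point of the sketch is simply the comparison step itself.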

In Chapter 8, we will conclude the book with a discussion of regularization techniques for multilevel models. We will see that multilevel models can, in general, be viewed as direct extensions of single-level regression models. We will also see that the techniques featured throughout this book, including the lasso and ridge estimators, can be readily extended to models designed for nested data structures. Indeed, most of the principles first outlined in Chapter 2, which have been foundational to the examples provided thus far, will also feature in the multilevel modeling described in the final chapter of the book.