Parallel models are ordered regression models that impose the parallel regression constraint on every independent variable in the model. In other words, the slopes do not vary across the cutpoint equations. As a result, parallel models are “ordinal” models in the strictest sense of the term because the parallel regression assumption ensures a strict stochastic ordering (McCullagh 1980, pp. 115-116), which we discussed in detail in Chapter 1. Parallel models are also the most parsimonious ordered regression models covered in this book: they require the estimation of only one coefficient for each independent variable and M - 1 cutpoints or constants, where M is the number of outcome categories. For example, a parallel model for an outcome with 4 categories and 10 independent variables requires the estimation of 13 parameters: 10 slopes and 3 cutpoints. By contrast, partial and nonparallel models with the same set of variables would require the estimation of as many as 33 parameters. This parsimony is one of the key advantages of parallel models, and it is particularly valuable for analyses based on small samples.
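The parameter counts in the example above can be verified with a short sketch. This is only an illustration of the arithmetic; the function names are ours, and the nonparallel count assumes a separate slope for every independent variable in each of the M - 1 cutpoint equations:

```python
def parallel_param_count(M, K):
    """Parameters in a parallel model: K shared slopes plus M - 1 cutpoints."""
    return K + (M - 1)

def nonparallel_param_count(M, K):
    """Upper bound for a nonparallel model: a distinct slope for each of the
    K variables in each of the M - 1 equations, plus M - 1 cutpoints."""
    return K * (M - 1) + (M - 1)

# Example from the text: 4 outcome categories, 10 independent variables.
print(parallel_param_count(4, 10))     # 10 slopes + 3 cutpoints = 13
print(nonparallel_param_count(4, 10))  # 30 slopes + 3 cutpoints = 33
```

Partial models fall between these two extremes, relaxing the parallel constraint for only a subset of the variables.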