This chapter examines dimension-reduction methods that are not specific to characterizing human performance data sets but may nevertheless be useful for that purpose. It reviews several well-known global parametric methods and describes how similar methods may be applied to modeling human performance data. Three global methods are presented: principal component analysis (PCA), nonlinear principal component analysis (NLPCA), and a variation on NLPCA called sequential NLPCA (SNLPCA). M. A. Kramer's SNLPCA algorithm modifies NLPCA to produce a nonlinear factorization in which the training process ranks each resulting feature by its relative power in explaining the variance of the training set. SNLPCA performs a series of NLPCA operations, each training a neural network with a bottleneck layer consisting of a single unit. The input-training neural network of S. Tan and M. L. Mavrovouniotis generates a considerably better mapping than Kramer's NLPCA method.
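The sequential extraction-and-deflation idea behind SNLPCA can be sketched with its linear analogue. The sketch below is illustrative only: it extracts one feature at a time via SVD and removes that feature's contribution from the data before extracting the next, so successive features explain progressively less variance. SNLPCA replaces each linear extraction step with a neural network whose bottleneck layer has a single unit; the function names here are hypothetical, not from Kramer's implementation.

```python
import numpy as np

def extract_one_component(X):
    """Return the leading principal direction of X and the scores along it.

    In SNLPCA this step would instead train a neural network with a
    single-unit bottleneck layer; here we use the linear PCA analogue.
    """
    # The first right singular vector is the direction of maximum variance.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    direction = Vt[0]          # unit-length loading vector
    scores = X @ direction     # one extracted feature value per sample
    return direction, scores

def sequential_extraction(X, n_components):
    """Greedy one-feature-at-a-time extraction with deflation."""
    residual = X - X.mean(axis=0)   # center the data first
    variances = []
    for _ in range(n_components):
        direction, scores = extract_one_component(residual)
        # Deflate: remove the part of the data this feature explains,
        # so the next pass must account for the remaining variation.
        residual = residual - np.outer(scores, direction)
        variances.append(scores.var())
    return variances

# Toy data with correlated columns, so a few features dominate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))
variances = sequential_extraction(X, 3)
```

Because each feature is fit against the residual of the previous ones, the extracted features come out prioritized by explained variance, which is the property the chapter attributes to SNLPCA training.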