ABSTRACT

Classical dimensionality reduction techniques such as Principal Component Analysis (PCA) and Multidimensional Scaling (MDS) perform well on data that lie on or near a linear subspace, but they fail when the data have non-linear structure. Many non-linear dimensionality reduction techniques have therefore been proposed to overcome the weaknesses of these traditional methods. This chapter provides a comparative analysis of dimensionality reduction techniques, ranging from classical approaches to state-of-the-art non-linear manifold learning methods. The techniques discussed and compared in detail are linear PCA, classical scaling, Kernel PCA, Isomap, Maximum Variance Unfolding, Locally Linear Embedding (LLE), Laplacian Eigenmaps, Hessian LLE, and t-SNE. Other algorithms, such as diffusion maps, Sammon mapping, Local Tangent Space Analysis, multilayer autoencoders, Locally Linear Coordination, and manifold charting, are discussed briefly. The chapter also classifies dimensionality reduction techniques according to their underlying principles and general properties. Finally, a comparison of manifold learning algorithms is presented, together with a worked example on a dataset and a tutorial to aid understanding of the algorithms.