ABSTRACT

In the past several years, many neural network learning algorithms have been proposed to solve the MCA problem [36, 38, 48, 60, 119, 130, 135, 185]. These learning algorithms can extract the minor component direction from input data without computing the covariance matrix in advance, which makes neural network methods more suitable for real-time applications and gives them lower computational complexity than traditional algebraic approaches such as eigenvalue decomposition (EVD) and singular value decomposition (SVD). However, some existing MCA algorithms suffer from a divergence problem [48, 130]. For example, the OJAn algorithm [185], the MCA EXIN algorithm [48], and the LUO algorithm [119] may suffer from divergence of the weight vector norm, as discussed in [48]. To guarantee convergence, several self-stabilizing MCA learning algorithms have been proposed, for instance, the MOLLER algorithm [130], the CHEN algorithm [36], the DOUGLAS algorithm [60], and Ye et al.'s algorithms [188]. In these self-stabilizing algorithms, the weight vector of the neuron is guaranteed to converge to a normalized minor component direction. In this chapter, following the studies in [36, 60, 130, 186], we propose a stable neural network learning algorithm for minor component analysis, which has better numerical stability than some existing MCA algorithms.
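To make the setting concrete, the following sketch illustrates the kind of online MCA learning rule discussed above: a generic anti-Hebbian stochastic update combined with an explicit renormalization step that keeps the weight vector norm bounded, sidestepping the norm-divergence issue noted for some algorithms. The data, step size, and update rule here are illustrative assumptions for exposition only, not the specific algorithm proposed in this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data whose covariance has a clearly separated smallest eigenvalue.
C = np.array([[3.0, 1.0],
              [1.0, 1.0]])
L = np.linalg.cholesky(C)
X = rng.standard_normal((20000, 2)) @ L.T

# Generic anti-Hebbian MCA-style rule (a sketch, not the chapter's algorithm):
#   y = w^T x,   w <- w - eta * (y*x - y^2 * w),
# followed by renormalization so that ||w|| = 1 at every step.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
eta = 0.01
for x in X:
    y = w @ x
    w -= eta * (y * x - y * y * w)
    w /= np.linalg.norm(w)      # explicit stabilization of the weight norm

# Reference answer from eigendecomposition of the sample covariance matrix.
vals, vecs = np.linalg.eigh(np.cov(X.T))
minor = vecs[:, 0]              # eigenvector of the smallest eigenvalue

# |cos angle| between the learned direction and the true minor component;
# a value near 1 means the online rule recovered the minor direction.
alignment = abs(w @ minor)
print(alignment)
```

Note that the online loop never forms the covariance matrix; the eigendecomposition at the end is used only to verify the result, which is exactly the computational advantage over EVD/SVD that the paragraph above describes.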