ABSTRACT

This chapter discusses several machine learning techniques. The k-nearest neighbor (k-NN) algorithm is arguably the simplest machine learning technique; for k-NN, it can be proved that as the number of data points in the training set goes to infinity, the error rate approaches the optimal rate, in a well-defined sense. Neural networks seem to be almost synonymous with the field of artificial intelligence (AI). There are many different types of neural networks, and the chapter discusses one of them, the multilayer perceptron (MLP). The chapter describes the linear discriminant analysis (LDA) training process in detail; LDA training is based on Lagrange multipliers and eigenvector analysis. However, in LDA there is nothing comparable to the kernel trick, which plays a prominent role in support vector machines (SVMs). The chapter argues that a hidden Markov model (HMM) can be viewed as a sequential version of naive Bayes.
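
To make the k-NN idea concrete, here is a minimal sketch of nearest-neighbor classification by majority vote over Euclidean distance. The function name, the toy dataset, and the choice of k are illustrative assumptions, not taken from the chapter:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (feature_vector, label) pairs; distance is Euclidean.
    (Hypothetical helper for illustration only.)
    """
    # Sort training points by distance to the query point.
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))
    # Majority vote among the k closest labels.
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D dataset: two well-separated clusters labeled "a" and "b".
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b"), ((1.1, 0.9), "b")]

print(knn_predict(train, (0.15, 0.1)))  # query near the "a" cluster
```

As the training set grows to cover the input space densely, the nearest neighbors of any query become arbitrarily close to it, which is the intuition behind the asymptotic optimality result mentioned above.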