ABSTRACT

This chapter explores a family of problems from measurable dynamics and their connection to the theory of positive definite functions. It presents an extension of the traditional setting for reproducing kernel Hilbert space (RKHS) theory, and develops a measure-theoretic variant of discrete networks. An extreme learning machine (ELM) is a neural network configuration in which the hidden-layer weights are randomly sampled (C. E. Rasmussen and C. K. I. Williams), and the goal is then to determine the resulting output-layer weights analytically. Hence an ELM may be thought of as an approximation to a network with an infinite number of hidden units. RKHSs have been studied extensively since the pioneering papers of Aronszajn. One application of kernels and the associated RKHS is to optimization. The literature so far has focused on the theory of kernel functions defined on continuous domains, either domains in Euclidean space or complex domains in one or more variables.
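The ELM recipe described above (randomly sampled hidden-layer weights, output-layer weights solved analytically by least squares) can be sketched as follows; the hidden-layer size, Gaussian sampling, and tanh activation here are illustrative choices, not details taken from the chapter:

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=None):
    """Fit an extreme learning machine: the hidden layer is random
    and fixed; only the output weights are solved for, analytically."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random hidden weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic least-squares solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Evaluate the trained ELM on new inputs X."""
    return np.tanh(X @ W + b) @ beta
```

Because only the linear output layer is trained, fitting reduces to a single least-squares problem; as the number of random hidden units grows, the model approximates a network with infinitely many hidden units.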