ABSTRACT

This chapter provides a theoretical introduction to computational optimization, covering minimization and maximization, constrained and unconstrained optimization, and convex and non-convex optimization. Several important computational optimization techniques are discussed, including the Gauss-Newton and quasi-Newton methods, gradient-based methods such as steepest descent and conjugate gradient, and non-gradient methods such as genetic algorithms and swarm intelligence algorithms. In addition, several important optimizers used in machine learning are presented, including the Levenberg–Marquardt algorithm, scaled conjugate gradient, RMSProp, and stochastic gradient descent. Finally, several important and relatively recent optimizers are discussed, such as the Adam optimizer (2014), which is frequently used in machine learning and deep learning, Adagrad (2011), and Adadelta (2012).