ABSTRACT

Root finding and optimization are closely related problems. This chapter seeks points at which a function, linear or nonlinear, meets some requirement, such as crossing zero or reaching an extremum. It begins with the most elementary method, bisection, which resembles a children's guessing game. The chapter then examines the Newton–Raphson method, an iterative approach that uses the derivative to compute a better estimate at each iteration, and the secant method, which is similar to Newton–Raphson but approximates the derivative from the two most recent iterates. The chapter reviews a minimization technique that, like bisection for root finding, finds a minimum once it has been bracketed. It discusses gradient descent, a minimization technique that, like Newton–Raphson, relies on knowing the first derivative of the function in question. The golden section search is a good algorithm for finding an extremum of a function of one variable, provided the extremum can be bracketed. Gradient descent and hill climbing do an excellent job of finding a nearby local minimum or maximum, respectively.
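The methods surveyed above can be sketched in a few lines each. The following is a minimal illustration, not the chapter's own implementations: the function names (`bisect`, `newton`, `golden_section`, `gradient_descent`), tolerances, and test functions are all chosen here for demonstration.

```python
import math

def bisect(f, a, b, tol=1e-10):
    """Bisection: repeatedly halve a bracketing interval [a, b]
    on which f changes sign, keeping the half that still brackets the root."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:      # root lies in [a, m]
            b = m
        else:                   # root lies in [m, b]
            a, fa = m, f(m)
    return (a + b) / 2

def newton(f, df, x, tol=1e-10, max_iter=50):
    """Newton-Raphson: use the derivative df to step toward the root."""
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def golden_section(f, a, b, tol=1e-8):
    """Golden-section search: shrink a bracket [a, b] around a minimum
    of a one-variable function, reusing one interior evaluation per step."""
    invphi = (math.sqrt(5) - 1) / 2   # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

def gradient_descent(df, x, lr=0.1, steps=200):
    """Gradient descent: follow the negative first derivative downhill."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

# Root finding: both methods locate the positive root of x**2 - 2, i.e. sqrt(2).
f = lambda x: x * x - 2
root_b = bisect(f, 1.0, 2.0)
root_n = newton(f, lambda x: 2 * x, 1.5)

# Minimization: both methods locate the minimum of (x - 1)**2 at x = 1.
min_g = golden_section(lambda x: (x - 1) ** 2, 0.0, 3.0)
min_d = gradient_descent(lambda x: 2 * (x - 1), 0.0)
```

Note the parallel the abstract draws: bisection and golden-section search both rely only on a bracket, while Newton-Raphson and gradient descent both require the derivative.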