ABSTRACT

This chapter describes bisection, Newton, quasi-Newton, and gradient algorithms for single- and multi-variable problems. The standard elementary nonlinear unconstrained optimization algorithms search for local optima either by setting the derivative to zero or by iteratively improving the current solution; the former approach depends on a method, such as bisection or Newton's, that numerically solves an equation. Newton's method follows naturally from the central principle of differential calculus: approximating functions by affine functions. To solve f(y) = 0, Newton's method iteratively replaces its current estimate y by the solution x of the affine approximation f(y) + f'(y)(x - y) = 0, that is, x = y - f(y)/f'(y). The method generalizes to computing a solution of f(x) = 0 where f maps R^n to R^n, and each iteration then requires only differentiation and the solution of n simultaneous linear equations in n variables. The bisection idea also underlies searching an ordered list by key, where the name or word being looked up serves as the key; for another example, L could be an ordered list of transaction record codes, each linked to a file documenting that particular transaction.
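
As a rough illustration of the multi-variable case described above, the sketch below applies Newton's iteration to a system f(x) = 0 with f mapping R^n to R^n. It assumes Python with NumPy; the names (newton, jacobian, tol) and the example system are illustrative choices, not taken from the chapter.

import numpy as np

def newton(f, jacobian, x0, tol=1e-10, max_iter=50):
    # Iterate x <- x - J(x)^(-1) f(x) until f(x) is numerically zero.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # Each step only differentiates (the Jacobian J) and solves the
        # n simultaneous linear equations J(x) step = f(x) in n unknowns.
        step = np.linalg.solve(jacobian(x), fx)
        x = x - step
    return x

# Example system: x0^2 + x1^2 = 2 and x0 = x1, with solution (1, 1).
f = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(newton(f, J, [2.0, 0.5]))   # converges to approximately [1. 1.]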