ABSTRACT

Gradient-based optimization techniques have been a mainstay of unconstrained mathematical programming (optimization) for decades. These techniques use gradients of the objective function to construct a sequence of search directions that leads to the optimal solution. For convex, well-behaved problems, they essentially guarantee convergence, making them extremely useful for a select set of problems. Such unconstrained techniques, however, typically have limited use in structural problems, which often impose constraints on responses such as displacements, stresses, and natural frequencies, or on physical properties such as areas, moments of inertia, and mass. Nevertheless, the concepts underlying gradient-based techniques, coupled with their mathematical rigour, are important and useful in certain constrained optimization algorithms. Within constrained optimization problems, these ideas must be adjusted so that the algorithms do not allow a constraint to be crossed or violated, as shown in Chapters 4 and 5. The following sections provide the necessary theory, as well as examples to illustrate the effectiveness of the different gradient-based, unconstrained optimization search techniques.
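The idea of following a sequence of gradient-derived directions can be sketched as follows. This is a minimal illustration of steepest descent on a simple convex quadratic objective chosen for this example (the objective, step size, and tolerance are assumptions, not taken from the text):

```python
# Illustrative objective: f(x) = (x1 - 3)^2 + (x2 + 1)^2, whose
# minimizer is (3, -1). Chosen for illustration only.
def grad_f(x):
    # Analytic gradient of the example objective.
    return [2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)]

def gradient_descent(x0, step=0.1, tol=1e-8, max_iter=1000):
    x = list(x0)
    for _ in range(max_iter):
        g = grad_f(x)
        # Stop when the gradient norm is small (near-stationary point).
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break
        # Step opposite the gradient: the steepest-descent direction.
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

print(gradient_descent([0.0, 0.0]))  # converges near (3, -1)
```

Because the example objective is convex and smooth, the iterates converge to the minimizer; the constrained structural problems discussed later require modifications precisely because such an unchecked descent path could step across a constraint boundary.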