ABSTRACT

The solution techniques for unconstrained optimization problems described in earlier chapters invariably use gradient information to locate the optimum. Such methods, as we have seen, require the objective function to be continuous and differentiable, and the solution obtained depends on the chosen initial point. These methods cannot handle discrete variables efficiently and, for a multimodal objective function, are prone to becoming trapped at a local optimum. Gradient-based methods therefore often have to be restarted from several initial points to check whether the local optimum reached is indeed the global one.
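This dependence on the starting point can be seen in a minimal sketch (the one-dimensional objective and the step size here are illustrative assumptions, not taken from the chapter): plain gradient descent on a function with two minima converges to whichever minimum lies in the basin of the chosen initial point.

```python
def f(x):
    # An illustrative multimodal objective with two minima:
    # a deeper (global) one for x < 0 and a shallower (local) one for x > 0.
    return x**4 - 4 * x**2 + x

def grad_f(x):
    # Analytical derivative of f.
    return 4 * x**3 - 8 * x + 1

def gradient_descent(x0, lr=0.01, steps=2000):
    # Fixed-step gradient descent; the step size lr is an assumed value.
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

# Two different starting points end up at two different stationary points:
# the run started on the left finds the deeper minimum, the one started
# on the right is trapped at the shallower local minimum.
x_left = gradient_descent(-2.0)
x_right = gradient_descent(2.0)
print(x_left, f(x_left))
print(x_right, f(x_right))
```

Restarting from many initial points, as mentioned above, amounts to sampling several basins of attraction and keeping the best result, which is exactly the workaround the stochastic methods of this chapter are designed to avoid.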