ABSTRACT

This chapter explores some of the principal results from optimization theory that have a bearing on swarm intelligence methods. After establishing the terminology used to describe optimization problems in general, the two main classes of these problems are discussed; these classes determine which types of optimization methods, deterministic or stochastic, can be used. Some general results are outlined that set bounds on the performance of stochastic optimization methods. A simple strategy, well suited to parallel computing, is described for extracting better performance from stochastic optimization methods. The optimization of a non-convex function is a non-convex optimization problem, and the only deterministic way to find its global optimum is to lay a grid of points over the search space, evaluate the fitness function at each point, and interpolate these values to locate the minimum fitness value.
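
The grid-based procedure mentioned above can be sketched in a few lines. The following is a minimal illustration, not taken from the chapter, that evaluates an example non-convex fitness function on a coarse grid and interpolates the sampled values on a finer grid to estimate the global minimum; the choice of Himmelblau's function as a stand-in fitness function, the search bounds, and the grid resolutions are all illustrative assumptions.

```python
# Minimal sketch of brute-force grid search over a 2-D search space.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def fitness(x, y):
    # Example non-convex fitness function (Himmelblau's function); a stand-in only.
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

# Coarse grid of points over the assumed search space [-6, 6] x [-6, 6].
xs = np.linspace(-6.0, 6.0, 61)
ys = np.linspace(-6.0, 6.0, 61)
X, Y = np.meshgrid(xs, ys, indexing="ij")
F = fitness(X, Y)                      # evaluate the fitness at every grid point

# Interpolate the sampled values and probe a finer grid to refine the minimum.
interp = RegularGridInterpolator((xs, ys), F)
fx = np.linspace(-6.0, 6.0, 601)
fy = np.linspace(-6.0, 6.0, 601)
FX, FY = np.meshgrid(fx, fy, indexing="ij")
Fi = interp(np.stack([FX.ravel(), FY.ravel()], axis=-1)).reshape(FX.shape)

# Report the location and value of the smallest interpolated fitness.
i, j = np.unravel_index(np.argmin(Fi), Fi.shape)
print(f"approximate global minimum f({fx[i]:.2f}, {fy[j]:.2f}) = {Fi[i, j]:.4f}")
```

Because every grid point is evaluated independently, the fitness evaluations in such a scheme parallelize trivially, but the number of points grows exponentially with the dimension of the search space, which is what motivates the stochastic methods discussed in the chapter.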