Introduction to global optimization techniques
In previous chapters, we addressed a number of optimization techniques for solving unconstrained and constrained optimization problems. All of these techniques obtain a local minimum of the problem, which may not be the best possible solution: the problem may have another minimum with a better value of the objective function. To illustrate this case, consider the objective function

f(x) = -sin(x)/x

This function has an infinite number of local minima, a number of which are shown in Figure 9.1. The problem has only one global minimum, at the point x = 0. The value of the objective function at this optimal point is f* = -1.0, which is the lowest value of f(x) over all x. While in some problems finding a local minimum with a reasonable value of the objective function is acceptable, in other applications it is mandatory to find the global minimum of the problem.

Over the years, many techniques have been developed for finding the global minimum of a nonlinear optimization problem. These techniques include Statistical Optimization [3], Simulated Annealing [4], Genetic Algorithms [5], Particle Swarm Optimization (PSO) [6], Invasive Weed Optimization [7], Wind Optimization [8], and Ant Colony Optimization [9], just to mention a few. All of these techniques introduce an element of randomness into the iterations to escape local minima. Some of these techniques are inspired by nature, which is remarkably effective at finding the global minimum of its own optimization problems.
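To make the idea concrete, the following sketch applies simulated annealing, one of the techniques listed above, to the chapter's example f(x) = -sin(x)/x. The parameter choices (initial temperature, geometric cooling factor, Gaussian step size, iteration count) are illustrative assumptions, not values prescribed by the text:

```python
import math
import random

def f(x):
    # Objective from the chapter: f(x) = -sin(x)/x.
    # We define f(0) = -1.0 (the limit value) so the global
    # minimum at x = 0 is well defined.
    return -1.0 if x == 0.0 else -math.sin(x) / x

def simulated_annealing(obj, x0, temp=1.0, cooling=0.999,
                        step=1.0, iters=20000, seed=0):
    """Minimize a one-dimensional objective by simulated annealing.

    A minimal sketch: all parameter defaults are assumptions chosen
    for this example, not part of the original algorithm statement.
    """
    rng = random.Random(seed)
    x, fx = x0, obj(x0)
    best_x, best_f = x, fx
    t = temp
    for _ in range(iters):
        cand = x + rng.gauss(0.0, step)   # random perturbation of x
        fc = obj(cand)
        # Always accept improvements; accept uphill moves with
        # probability exp(-delta/T). This randomness is what lets
        # the search escape local minima while T is still high.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                      # geometric cooling schedule
    return best_x, best_f
```

Starting from, say, x0 = 9.0, which lies near the shallow local minimum at x ≈ 7.7 where f ≈ -0.13, a purely descent-based method would stall in that basin, whereas repeated annealing runs typically return a point near x = 0 with an objective value close to the global minimum f* = -1.0.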