Nonlinear Optimization in Electrical Engineering with Applications in MATLAB® provides an introductory course on nonlinear optimization in electrical engineering, with a focus on applications such as the design of electric, microwave, and photonic circuits, wireless communications, and digital filter design. Basic concepts are introduced using a step-by-step approach and illustrated with MATLAB® codes that the reader can use and adapt. Topics covered include: classical optimization methods; one-dimensional optimization; unconstrained and constrained optimization; global optimization; space mapping optimization; adjoint variable methods. Nonlinear Optimization in Electrical Engineering with Applications in MATLAB® is essential reading for advanced students in electrical engineering.
Inspec keywords: sensitivity analysis; nonlinear programming; linear programming; search problems; electrical engineering
Other keywords: first-order unconstrained optimization techniques; adjoint sensitivity analysis; constrained optimization techniques; second-order unconstrained optimization techniques; nonlinear optimization; MATLAB; classical optimization; linear programming; electrical engineering; derivative-free unconstrained techniques; one-dimensional optimization-line search; global optimization techniques
Subjects: General electrical engineering topics; Optimisation techniques; General and management topics
This chapter reviews some of the mathematical concepts used throughout the book. It covers properties of vectors and matrices, introduces some of the vocabulary of optimization theory, and discusses the properties of the solution of a system of linear equations.
Many engineering optimization problems can be cast as a linear program (LP): an optimization problem in which both the objective function and the constraints are linear functions of the optimization variables. In addition, several nonlinear optimization problems can be solved by iteratively solving linearized versions of the original problem. This chapter focuses on the solution of linear programs. Different statements of the LP problem are introduced, and the Simplex method is explained in both its tabular and matrix forms. Several variants of the Simplex method are discussed.
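As a minimal illustration of the tabular form, the following Python sketch applies the Simplex method to a small hypothetical LP in standard maximization form (all constraints Ax ≤ b with b ≥ 0, x ≥ 0); the book itself works in MATLAB, so this is an illustrative sketch, not its implementation:

```python
def simplex_max(c, A, b):
    # Tabular Simplex for: maximize c.x subject to A x <= b, x >= 0 (b >= 0).
    m, n = len(A), len(c)
    # Append slack variables; the last tableau row holds -c (reduced costs).
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))  # slacks are the initial basis
    while True:
        # Entering variable: most negative reduced cost.
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-12:
            break  # no negative reduced cost -> optimal
        # Ratio test picks the leaving variable.
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-12]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):  # eliminate the pivot column elsewhere
            if i != piv_row and abs(T[i][piv_col]) > 1e-12:
                f = T[i][piv_col]
                T[i] = [a - f * bb for a, bb in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[-1][-1]

# Hypothetical example: maximize 3x1 + 2x2 s.t. x1 + x2 <= 4, x1 + 3x2 <= 6.
x_opt, val = simplex_max([3.0, 2.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0])
```

The optimum sits at the vertex (4, 0) of the feasible polygon, with objective value 12, as the tableau iterations confirm.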
Before the digital computer age, several optimization approaches and theories were used to solve simple problems with one or a few variables. Powerful tools such as the single-variable Taylor expansion, the multivariable Taylor expansion, and the Karush-Kuhn-Tucker (KKT) conditions were used to solve unconstrained and constrained optimization problems. Substitution methods were also used to convert constrained optimization problems into unconstrained ones, and other methods, such as the method of constrained variations, were applied to simple constrained problems. This chapter focuses on explaining some of these classical approaches. Their theoretical bases are relevant to the numerical optimization techniques addressed in the following chapters.
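The substitution idea can be illustrated on a hypothetical toy problem: minimize f(x, y) = x² + y² subject to x + y = 1. Substituting y = 1 − x reduces it to an unconstrained single-variable problem, and the stationarity condition of the KKT (here, Lagrange) system can be checked at the result. A minimal sketch, not taken from the book:

```python
# Substitution: with y = 1 - x, the constrained problem becomes the
# unconstrained problem g(x) = x^2 + (1 - x)^2.
def g(x):
    return x**2 + (1 - x)**2

# Stationarity of g: g'(x) = 4x - 2 = 0  ->  x* = 0.5, hence y* = 0.5.
x_star = 0.5
y_star = 1 - x_star

# KKT/Lagrange check: grad f = lam * grad h with h(x, y) = x + y - 1.
# grad f = (2x, 2y), grad h = (1, 1), so lam = 2 x* and 2 y* must equal lam too.
lam = 2 * x_star
assert abs(2 * y_star - lam) < 1e-12  # second stationarity component holds
```

Both components of the gradient condition are satisfied with the same multiplier, confirming that the substitution result is the constrained minimum, f* = 0.5.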
Optimization problems with a single variable have special importance in optimization theory. The reason for this is that multidimensional optimization problems are usually solved iteratively through optimization algorithms that utilize single-variable optimization approaches. Typically, at the kth iteration of an algorithm, a promising search direction s(k) ∈ ℝ^n in the n-dimensional parameter space is first determined. Starting from the solution at the kth iteration x(k), a line search is carried out along the direction s(k). A general point along that line is given by x = x(k) + αs(k), where α > 0 is the search parameter. The optimal value of the search parameter is the one that minimizes the objective function along the line.
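One common derivative-free way to find the optimal search parameter α along a fixed direction is golden-section search. The sketch below applies it to a hypothetical quadratic objective with the direction chosen as the negative gradient; this is an illustrative example, not the book's code:

```python
import math

def golden_section(phi, a, b, tol=1e-8):
    # Golden-section search: minimizes a unimodal phi(alpha) on [a, b]
    # by shrinking the bracket by the golden ratio each iteration.
    gr = (math.sqrt(5) - 1) / 2  # ~0.618
    c, d = b - gr * (b - a), a + gr * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2

# Hypothetical line search: f(x1, x2) = x1^2 + 4 x2^2, from x along s = -grad f(x).
f = lambda x: x[0]**2 + 4 * x[1]**2
x, s = [1.0, 2.0], [-2.0, -16.0]  # s = -grad f(x)
phi = lambda alpha: f([x[0] + alpha * s[0], x[1] + alpha * s[1]])
alpha_star = golden_section(phi, 0.0, 1.0)
```

Here phi(α) = (1 − 2α)² + 4(2 − 16α)² is a convex quadratic in α, so the exact minimizer α* = 260/2056 ≈ 0.1265 can be verified analytically against the search result.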
The focus of this chapter is on solving unconstrained optimization problems using only function values, without utilizing any derivative information. Unconstrained optimization is a class of optimization techniques in which the solution of the n-dimensional problem can be any point in the whole parameter space, x* ∈ ℝ^n.
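One of the simplest derivative-free schemes is compass (coordinate) search: probe the objective a step along each coordinate axis in both directions, move if a probe improves the current point, and halve the step when none does. A minimal sketch on a hypothetical quadratic objective, for illustration only:

```python
def compass_search(f, x0, step=1.0, tol=1e-8):
    # Derivative-free compass search: only function values are used.
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):  # probe +/- step along axis i
                y = x[:]
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step /= 2  # no probe helped: refine the pattern
    return x, fx

# Hypothetical test function with minimum at (1, -2).
f = lambda x: (x[0] - 1)**2 + (x[1] + 2)**2
x_min, f_min = compass_search(f, [0.0, 0.0])
```

Starting from the origin with unit steps, the search walks to (1, −2) exactly and then shrinks the step until the tolerance is met; no gradients are ever evaluated.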
This chapter discusses first-order unconstrained optimization approaches. These approaches assume that first-order derivatives are available, either through a finite difference approximation or through an adjoint sensitivity approach. Both the steepest descent approach and conjugate gradient approaches are discussed.
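The steepest descent idea can be sketched in a few lines: at each iteration the parameters move opposite the gradient. This minimal version uses a fixed step size rather than a line search, and the quadratic test function is a hypothetical example rather than one from the book:

```python
def steepest_descent(grad, x0, lr=0.1, iters=200):
    # Fixed-step steepest descent: x <- x - lr * grad f(x) each iteration.
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Hypothetical quadratic f(x1, x2) = x1^2 + 4 x2^2, minimum at the origin.
grad_f = lambda x: [2 * x[0], 8 * x[1]]
x_min = steepest_descent(grad_f, [1.0, 2.0])
```

For this quadratic each coordinate contracts geometrically (factors 0.8 and 0.2 per iteration), so 200 iterations drive both to numerical zero; in practice the fixed step is replaced by a line search along −∇f.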
The Taylor expansion shows that the more derivatives of a function we know at a point, the more accurately we can predict its value over a larger neighbourhood of the expansion point. Indeed, if we know all the higher-order derivatives of a function at the expansion point, then we know how the function behaves over the whole space! This remarkable result motivates using higher-order sensitivities. The main obstacle in estimating these sensitivities is their evaluation cost. First-order derivatives estimated using finite differences require O(n) extra simulations, as discussed in Chapter 1. Second-order sensitivities require O(n^2) extra simulations, and the cost of estimating third- or higher-order sensitivities is even greater. This is the main reason why most techniques utilize only first-order sensitivities. Second-order optimization techniques utilize second-order sensitivities, or an approximation of these derivatives, to predict the steps taken within the optimization algorithm. They enjoy a faster rate of convergence than first-order techniques, especially near the optimal solution, but this fast convergence comes at the high cost of estimating the second-order sensitivities. Because second-order derivatives are the derivatives of first-order derivatives, they can be approximated if first-order derivatives are known at different points in the parameter space. A whole class of techniques was developed that uses approximate formulas to predict the second-order derivatives from different values of the first-order ones. This approximation reduces the convergence rate but makes the cost of these techniques more acceptable. We start this chapter by reviewing Newton's method, which is the main concept behind all these techniques. We then discuss techniques that improve on it, either by combining first- and second-order derivatives or by using approximate second-order derivatives.
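For a single variable, Newton's method jumps to the stationary point of the local quadratic (second-order Taylor) model: x ← x − f′(x)/f″(x). A minimal sketch on a hypothetical quartic test function, purely for illustration:

```python
def newton_min_1d(df, d2f, x0, tol=1e-10, iters=50):
    # 1-D Newton's method for minimization: uses first- and second-order
    # derivatives to step to the minimum of the local quadratic model.
    x = x0
    for _ in range(iters):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Hypothetical test function f(x) = x^4 - 3 x^2, minima at x = +/- sqrt(1.5).
df = lambda x: 4 * x**3 - 6 * x    # f'(x)
d2f = lambda x: 12 * x**2 - 6      # f''(x)
x_star = newton_min_1d(df, d2f, 1.0)
```

Started at x = 1.0, the iterates converge quadratically to sqrt(1.5) ≈ 1.2247; quasi-Newton methods replace d2f with an approximation built from successive first-order derivatives.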
Many of the concepts utilized in unconstrained optimization techniques can be extended to the constrained case, which is why the study of unconstrained optimization is a useful starting point. The optimality conditions for the constrained case, however, are different. In some cases, the constraints are inactive at the solution, and the solution obtained using unconstrained optimization techniques is the same as that obtained using constrained optimization techniques. In other cases, the constraints limit the feasible region, giving rise to a solution completely different from the unconstrained one.
In previous chapters, we addressed a number of optimization techniques for solving unconstrained and constrained optimization problems. All of these techniques obtain a local minimum of the problem. This minimum may not be the best possible solution: the optimization problem may have a better minimum with an improved value of the objective function. To illustrate this case, consider the objective function f(x) = -sin(x)/x. This objective function has an infinite number of local minima, a number of which are shown in Figure 9.1. The problem has only one global minimum, at the point x = 0. The value of the objective function at this optimal point is f* = -1.0, which is the lowest value over all values of the parameter x. While in some problems finding a local minimum with a reasonable value of the objective function is acceptable, in other applications it is mandatory to find the global minimum of the problem. Over the years, many techniques were developed for finding the global minimum of a nonlinear optimization problem. These techniques include Statistical Optimization [3], Simulated Annealing [4], Genetic Algorithms [5], Particle Swarm Optimization (PSO) [6], Invasive Weed Optimization [7], Wind Optimization [8], and Ant Colony Optimization [9], just to mention a few. All of these techniques introduce an element of randomness in the iterations to escape local minima. Some of these techniques are inspired by nature, which is always able to find the global minimum of its optimization problems.
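To sketch how randomness helps escape local minima, the following simulated-annealing loop can be applied to f(x) = -sin(x)/x. Uphill moves are accepted with a Boltzmann probability that shrinks as the temperature cools; the temperature schedule, step size, and starting point are hypothetical choices, not the book's:

```python
import math
import random

def f(x):
    # The chapter's example objective, with its removable singularity filled in.
    return -math.sin(x) / x if x != 0 else -1.0

def simulated_annealing(f, x0, T0=1.0, cooling=0.995, iters=5000, seed=0):
    rng = random.Random(seed)  # seeded for repeatability
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(iters):
        y = x + rng.gauss(0.0, 2.0)  # random candidate step
        fy = f(y)
        # Always accept downhill moves; accept uphill moves with
        # probability exp(-(fy - fx) / T), which vanishes as T -> 0.
        if fy < fx or rng.random() < math.exp((fx - fy) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling  # geometric cooling schedule
    return best, fbest

# Start near a shallow local basin far from the global minimum at x = 0.
x_best, f_best = simulated_annealing(f, 5.0)
```

Early on, the high temperature lets the walk climb out of the shallow side basins; as T decays, the search settles into the deepest basin it has visited. Since |sin(x)/x| ≤ 1, any value found must lie in [-1, f(x0)].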
Adjoint sensitivity analysis offers an efficient approach for sensitivity estimation. Using at most one extra simulation of an adjoint system (or circuit), the first-order sensitivities of the desired response (or objective function) are evaluated with respect to all parameters regardless of their number. This approach applies to both frequency-domain and time-domain responses. It was shown to be applicable to higher-order sensitivities as well. The cost of estimating these higher-order sensitivities using adjoint approaches is also less than that of the corresponding finite difference approach.
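For a linear system A(p)x = b with response R = cᵀx, a single adjoint solve Aᵀλ = c gives dR/dp_k = -λᵀ(∂A/∂p_k)x for every parameter p_k at once (plus λᵀ(∂b/∂p_k) when b depends on p). A minimal sketch on a hypothetical 2×2 system, checked against a finite difference; the matrix parameterization is invented for illustration:

```python
def solve2(A, b):
    # Cramer's rule for a 2x2 linear system.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def A_of(p):
    # Hypothetical parameterized system matrix (e.g. a two-node resistive circuit).
    return [[2.0 + p, -1.0], [-1.0, 3.0]]

p, b, c = 0.5, [1.0, 0.0], [0.0, 1.0]
A = A_of(p)
x = solve2(A, b)                                  # original system solve
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]     # A transposed
lam = solve2(At, c)                               # one adjoint solve
dA_dp = [[1.0, 0.0], [0.0, 0.0]]                  # dA/dp for this parameterization
# Adjoint sensitivity: dR/dp = -lam^T (dA/dp) x
dR_adj = -sum(lam[i] * sum(dA_dp[i][j] * x[j] for j in range(2))
              for i in range(2))
# Finite-difference check requires a second full system solve per parameter.
h = 1e-6
R = lambda xv: c[0] * xv[0] + c[1] * xv[1]
dR_fd = (R(solve2(A_of(p + h), b)) - R(x)) / h
```

Here R(p) = 1/(5 + 3p) in closed form, so dR/dp = -3/(5 + 3p)² ≈ -0.0710 at p = 0.5; the adjoint value matches this exactly, while the finite difference only approximates it and its cost grows with the number of parameters.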