Nonlinear Optimization in Electrical Engineering with Applications in MATLAB®
Nonlinear Optimization in Electrical Engineering with Applications in MATLAB® provides an introductory course on nonlinear optimization in electrical engineering, with a focus on applications such as the design of electric, microwave, and photonic circuits, wireless communications, and digital filter design. Basic concepts are introduced using a step-by-step approach and illustrated with MATLAB® codes that the reader can use and adapt. Topics covered include: classical optimization methods; one-dimensional optimization; unconstrained and constrained optimization; global optimization; space mapping optimization; adjoint variable methods. Nonlinear Optimization in Electrical Engineering with Applications in MATLAB® is essential reading for advanced students in electrical engineering.
Inspec keywords: sensitivity analysis; nonlinear programming; linear programming; search problems; electrical engineering
Other keywords: first-order unconstrained optimization techniques; adjoint sensitivity analysis; constrained optimization techniques; second-order unconstrained optimization techniques; nonlinear optimization; MATLAB; classical optimization; linear programming; electrical engineering; derivative-free unconstrained techniques; one-dimensional optimization: line search; global optimization techniques
Subjects: General electrical engineering topics; Optimisation techniques; General and management topics
 Book DOI: 10.1049/PBSP008E
 Chapter DOI: 10.1049/PBSP008E
 ISBN: 9781849195430
 eISBN: 9781849195447
 Page count: 323
 Format: PDF

Front Matter

p. (1)

1 Mathematical background

pp. 1–27 (27)
This chapter reviews some of the mathematical concepts used throughout the book. It covers properties of vectors and matrices, introduces some of the vocabulary of optimization theory, and discusses the properties of the solution of a system of linear equations.
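The book's examples are in MATLAB; as a quick plain-Python illustration (not taken from the book) of solving a system of linear equations, here is Gaussian elimination with partial pivoting on a small, hypothetical system:

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Build the augmented matrix [A | b].
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        # Partial pivoting: swap in the row with the largest pivot.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

For example, `solve_linear([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0])` returns the unique solution of the 2x2 system.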

2 An introduction to linear programming

pp. 29–67 (39)
Many engineering optimization problems can be cast as a Linear Program (LP): an optimization problem whose objective function and constraints are linear functions of the optimization variables. In addition, several nonlinear optimization problems can be solved by iteratively solving linearized versions of the original problem. This chapter focuses on the solution of linear programs. Different statements of the LP problem are introduced, and the Simplex method is explained in both its tabular and matrix forms. Several variations of the Simplex method are discussed.
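As a rough Python sketch (not from the book, which uses MATLAB), here is a minimal tableau Simplex method for the standard case "maximize c·x subject to Ax ≤ b, x ≥ 0, b ≥ 0", using slack variables:

```python
def simplex(c, A, b):
    """Tableau Simplex for: maximize c.x s.t. A x <= b, x >= 0 (assumes b >= 0)."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I (slacks) | b]; last row is the objective (-c | 0 | 0).
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))          # slacks start in the basis
    while True:
        # Entering variable: most negative reduced cost.
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-12:
            break                          # optimal
        # Ratio test picks the leaving variable.
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-12]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for r in range(m + 1):
            if r != piv_row and abs(T[r][piv_col]) > 1e-12:
                f = T[r][piv_col]
                T[r] = [T[r][j] - f * T[piv_row][j] for j in range(n + m + 1)]
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, T[-1][-1]
```

For instance, maximizing 3x + 2y subject to x + y ≤ 4 and x + 3y ≤ 6 gives the optimum 12 at the vertex (4, 0).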

3 Classical optimization

pp. 69–100 (32)
Before the digital computer age, several optimization approaches and theories were utilized to solve simple problems with one or a few variables. Powerful tools such as the single-variable Taylor expansion, the multivariable Taylor expansion, and the Karush-Kuhn-Tucker (KKT) conditions were utilized in solving unconstrained and constrained optimization problems. Substitution methods were also used to convert constrained optimization problems to unconstrained ones. Other methods, such as the method of constrained variations, were also used for solving simple constrained optimization problems. In this chapter, I focus on explaining some of these classical approaches. The theoretical bases of these techniques are relevant for the numerical optimization approaches addressed in the following chapters.
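As a small worked example of these classical tools (a Python sketch, not from the book; the problem itself is a standard textbook one): minimize f(x, y) = x² + y² subject to x + y = 1, solved by substitution and then checked against the Lagrange/KKT stationarity condition:

```python
def classical_min():
    """Minimize x^2 + y^2 s.t. x + y = 1 by substitution; verify stationarity."""
    # Substitution: y = 1 - x turns the problem into an unconstrained one.
    # d/dx [x^2 + (1 - x)^2] = 4x - 2 = 0  ->  x = 1/2.
    x = 0.5
    y = 1.0 - x
    # Stationarity: grad f = lam * grad h, with grad f = (2x, 2y), grad h = (1, 1),
    # so lam = 2x must also equal 2y.
    lam = 2.0 * x
    residual = abs(2.0 * y - lam)
    return x, y, lam, residual
```

The zero residual confirms that the substitution solution satisfies the first-order optimality condition with multiplier λ = 1.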

4 One-dimensional optimization: line search

pp. 101–130 (30)
Optimization problems with a single variable have special importance in optimization theory, because multidimensional optimization problems are usually solved iteratively by algorithms that rely on single-variable optimization at every step. Typically, at the k-th iteration of an algorithm, a promising search direction s(k) ∈ R^n in the n-dimensional parameter space is first determined. Starting from the solution at the k-th iteration, x(k), a line search is carried out in the direction s(k). A general point along that line is given by x = x(k) + αs(k), where α > 0 is the search parameter. The optimal value of the search parameter is the one that minimizes the objective function.
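One classical line-search routine is golden-section search; a minimal Python sketch (not the book's MATLAB code) applied to φ(α) = f(x(k) + αs(k)) for a hypothetical quadratic f:

```python
import math

def golden_section(phi, a, b, tol=1e-8):
    """Minimize a unimodal function phi on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2        # 1/golden-ratio, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):                # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                              # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Line search for f(x) = (x1 - 1)^2 + (x2 - 2)^2 from x = (0, 0) along s = (1, 1):
phi = lambda alpha: (alpha - 1.0) ** 2 + (alpha - 2.0) ** 2
alpha_star = golden_section(phi, 0.0, 4.0)   # analytic optimum: alpha = 1.5
```

Each iteration shrinks the bracket by the golden ratio while reusing one interior point, so only one new function evaluation per iteration is needed in an optimized implementation.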

5 Derivative-free unconstrained techniques

pp. 131–152 (22)
This chapter focuses on solving unconstrained optimization problems using only function values, without utilizing any derivative information. Unconstrained optimization is a class of optimization techniques where the solution of the n-dimensional problem can be any point in the whole parameter space, x* ∈ R^n.
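A simple representative of this class is compass (pattern) search, which probes along each coordinate axis and never touches a derivative; a Python sketch, not from the book:

```python
def compass_search(f, x, step=1.0, tol=1e-6):
    """Derivative-free compass search: probe +/- step along each axis,
    move when a probe improves f, otherwise halve the step."""
    n = len(x)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(n):
            for s in (step, -step):
                y = x[:]
                y[i] += s
                fy = f(y)
                if fy < fx:                 # accept the first improving probe
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step /= 2                       # no axis improves: refine the mesh
    return x, fx
```

On a smooth bowl such as f(x) = (x1 - 1)² + (x2 + 2)², the search walks to the minimizer (1, -2) using function values alone.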

6 First-order unconstrained optimization techniques

pp. 153–174 (22)
This chapter discusses first-order unconstrained optimization approaches. These approaches assume that first-order derivatives are available, either through a finite difference approximation or through an adjoint sensitivity approach. The steepest descent and conjugate gradient methods are also discussed.
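A rough Python sketch of these ideas (not from the book): steepest descent with the gradient supplied by central finite differences, standing in for the adjoint sensitivities the chapter also covers:

```python
def fd_grad(f, x, h=1e-6):
    """Central finite-difference approximation of the gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp, xm = x[:], x[:]
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def steepest_descent(f, x, lr=0.1, iters=300):
    """Steepest descent: repeatedly step along the negative gradient."""
    for _ in range(iters):
        g = fd_grad(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Hypothetical test function: f(x) = x1^2 + 5 x2^2, minimum at the origin.
x_star = steepest_descent(lambda v: v[0] ** 2 + 5.0 * v[1] ** 2, [4.0, 3.0])
```

Note the O(n) extra function evaluations per gradient, which is exactly the cost an adjoint approach avoids.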

7 Second-order unconstrained optimization techniques

pp. 175–202 (28)
Taylor expansion shows that the more derivatives of a function we know at a point, the more accurately we can predict its value over a larger neighbourhood of the expansion point. In fact, if we know all the higher-order derivatives of a function at the expansion point, then we know how the function behaves over the whole space! This remarkable result motivates using higher-order sensitivities. The main obstacle in estimating these sensitivities is their evaluation cost. First-order derivatives estimated using finite differences require O(n) extra simulations, as discussed in Chapter 1. Second-order sensitivities require O(n^2) extra simulations, and the cost of estimating third- or higher-order sensitivities is even higher. This is the main reason why most techniques utilize only first-order sensitivities. Second-order optimization techniques utilize second-order sensitivities, or an approximation of these derivatives, to predict the steps taken within the optimization algorithm. They enjoy a faster rate of convergence than that achieved using first-order sensitivities, especially near the optimal solution, but this fast convergence comes at the high cost of estimating the second-order sensitivities. Because second-order derivatives are the derivatives of first-order derivatives, they can be approximated if first-order derivatives are known at different points in the space. A whole class of techniques was developed that uses approximate formulas to predict the second-order derivatives from different values of the first-order ones. This approximation reduces the convergence rate but makes the cost of these techniques more acceptable. We start this chapter by reviewing Newton's method, which is the main concept behind all these techniques. We then discuss techniques that improve on it, either by combining first- and second-order derivatives or by using approximate second-order derivatives.
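The Newton step the chapter opens with can be sketched in one line of Python (an illustration, not the book's code; the test function is hypothetical): minimize f by iterating x ← x - f'(x)/f''(x).

```python
import math

def newton_min(fp, fpp, x, iters=20):
    """Newton's method for 1-D minimization: x <- x - f'(x) / f''(x)."""
    for _ in range(iters):
        x = x - fp(x) / fpp(x)
    return x

# Minimize f(x) = x^2 + exp(x); stationarity requires f'(x) = 2x + exp(x) = 0.
x_star = newton_min(lambda t: 2.0 * t + math.exp(t),
                    lambda t: 2.0 + math.exp(t),
                    0.0)
```

Because each step uses the true second derivative, convergence near the minimizer is quadratic; quasi-Newton methods replace f''(x) with an approximation built from successive first derivatives.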

8 Constrained optimization techniques

pp. 203–231 (29)
Many of the concepts utilized in unconstrained optimization techniques can be extended to the constrained case. This is why the study of unconstrained optimization is useful as a start. The optimality conditions for the constrained case, however, are different. In some cases, the constraints are redundant and the solution obtained using unconstrained optimization techniques is the same one obtained using techniques for constrained optimization. In other cases, the constraints limit the feasibility region, giving rise to a solution completely different from the unconstrained one.
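One standard way to carry unconstrained machinery over to constrained problems is the quadratic penalty method; here is a minimal Python sketch (not from the book; the specific one-variable example is hypothetical) for minimizing (x - 2)² subject to x ≤ 1:

```python
def bisect(g, lo, hi, iters=60):
    """Find a root of an increasing function g on [lo, hi] by bisection."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def penalty_method():
    """Quadratic penalty: minimize (x-2)^2 + mu * max(0, x-1)^2 for growing mu."""
    x = 0.0
    mu = 1.0
    for _ in range(12):                     # grow the penalty weight
        # Inner unconstrained problem solved via its (increasing) derivative:
        # F'(x) = 2(x - 2) + 2 mu max(0, x - 1).
        Fp = lambda t, mu=mu: 2.0 * (t - 2.0) + 2.0 * mu * max(0.0, t - 1.0)
        x = bisect(Fp, 0.0, 2.0)
        mu *= 10.0
    return x
```

As μ grows, the unconstrained minimizer (2 + μ)/(1 + μ) approaches the constrained solution x* = 1 from outside the feasible region, illustrating how the active constraint reshapes the unconstrained answer x = 2.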

9 Introduction to global optimization techniques

pp. 233–259 (27)
In previous chapters, we addressed a number of optimization techniques for solving unconstrained and constrained optimization problems. All these techniques obtain a local minimum of the problem. This minimum may not be the best possible solution; the optimization problem may have a better minimum with an improved value of the objective function. To illustrate this case, consider the objective function f(x) = -sin(x)/x. This objective function has an infinite number of local minima. A number of these local minima are shown in Figure 9.1. This problem has only one global minimum, at the point x = 0. The value of the objective function at this optimal point is f* = -1.0, which is the lowest value over all values of the parameter x. While in some problems finding a local minimum with a reasonable value of the objective function is acceptable, in other applications it is mandatory to find the global minimum of the problem. Over the years, many techniques were developed for finding the global minimum of a nonlinear optimization problem. These techniques include Statistical Optimization [3], Simulated Annealing [4], Genetic Algorithms [5], Particle Swarm Optimization (PSO) [6], Invasive Weed Optimization [7], Wind Optimization [8], and Ant Colony Optimization [9], to mention just a few. All of these techniques introduce an element of randomness in the iterations to escape local minima. Some of these techniques are inspired by nature, which is always able to find the global minimum of its optimization problems.
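A minimal Python sketch of one such technique, simulated annealing, applied to f(x) = -sin(x)/x (the sketch and its tuning parameters are illustrative, not taken from the book):

```python
import math
import random

def f(x):
    """f(x) = -sin(x)/x, extended by its limit -1 at x = 0 (the global minimum)."""
    return -1.0 if x == 0 else -math.sin(x) / x

def simulated_annealing(steps=5000, T0=2.0, cooling=0.999, seed=7):
    """Simulated annealing: random moves, uphill steps accepted with
    probability exp(-df/T), temperature T slowly cooled."""
    random.seed(seed)
    x = 30.0                         # start far from the global minimum at x = 0
    fx = f(x)
    best_x, best_f = x, fx
    T = T0
    for _ in range(steps):
        y = x + random.gauss(0.0, 2.0)      # random perturbation
        fy = f(y)
        # Always accept improvements; sometimes accept uphill moves.
        if fy < fx or random.random() < math.exp((fx - fy) / T):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
        T *= cooling
    return best_x, best_f
```

The randomness in the acceptance rule is exactly the mechanism that lets the search climb out of the shallow local minima of sin(x)/x; since f ≥ -1 everywhere, the best value found always lies between -1 and the starting value.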

10 Adjoint sensitivity analysis

pp. 261–302 (42)
Adjoint sensitivity analysis offers an efficient approach for sensitivity estimation. Using at most one extra simulation of an adjoint system (or circuit), the firstorder sensitivities of the desired response (or objective function) are evaluated with respect to all parameters regardless of their number. This approach applies to both frequencydomain and timedomain responses. It was shown to be applicable to higherorder sensitivities as well. The cost of estimating these higherorder sensitivities using adjoint approaches is also less than that of the corresponding finite difference approach.
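A minimal Python sketch of the idea for a linear system (an illustrative 2x2 example, not from the book): for a response R = cᵀx with A(p)x = b, one adjoint solve Aᵀλ = c yields dR/dp = -λᵀ(∂A/∂p)x for every parameter p, checked here against finite differences:

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def response(p):
    """R(p) = x1 + x2 where A(p) x = b; only the (0,0) entry depends on p."""
    x = solve2([[p, 1.0], [1.0, 3.0]], [1.0, 2.0])
    return x[0] + x[1]

def adjoint_sensitivity(p):
    """dR/dp via one adjoint solve, regardless of the number of parameters."""
    A = [[p, 1.0], [1.0, 3.0]]
    x = solve2(A, [1.0, 2.0])
    # Adjoint system: A^T lam = c, with c = dR/dx = (1, 1).
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    lam = solve2(At, [1.0, 1.0])
    # dR/dp = -lam^T (dA/dp) x; dA/dp has a single 1 in position (0, 0).
    return -lam[0] * x[0]
```

For this example R(p) = 2p/(3p - 1), so dR/dp at p = 2 is -2/25 = -0.08, which both the adjoint formula and a central finite difference reproduce; the adjoint version needs only the one extra solve however many entries of A depend on parameters.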

Back Matter

p. (1)