Second-order unconstrained optimization techniques


Author(s): Mohamed Bakr
Source: Nonlinear Optimization in Electrical Engineering with Applications in MATLAB®, 2013
Publication date: September 2013

The Taylor expansion shows that the more derivatives of a function we know at a point, the more accurately we can predict its value over a larger neighbourhood of the expansion point. Indeed, if all higher-order derivatives are known at the expansion point, then the behaviour of the function is determined over the whole space. This remarkable result motivates the use of higher-order sensitivities. The main obstacle to exploiting these sensitivities is their evaluation cost. First-order derivatives estimated using finite differences require O(n) extra simulations, as discussed in Chapter 1; second-order sensitivities require O(n²) extra simulations, and third- and higher-order sensitivities cost even more. This is the main reason most techniques utilize only first-order sensitivities. Second-order optimization techniques use second-order sensitivities, or an approximation of these derivatives, to predict the steps taken within the optimization algorithm. They all enjoy a faster rate of convergence than that achieved using first-order sensitivities, especially near the optimal solution, but this fast local convergence comes at the high cost of estimating the second-order sensitivities. Because second-order derivatives are the derivatives of the first-order derivatives, they can be approximated when first-order derivatives are known at different points in the space. A whole class of techniques has been developed that uses approximate formulas to estimate the second-order derivatives from different values of the first-order ones. This approximation reduces the convergence rate but makes the cost of these techniques more acceptable. We start this chapter by reviewing Newton's method, which is the main concept behind all these techniques. We then discuss ways of improving it, either by using a combination of first- and second-order derivatives or by using approximate second-order derivatives.
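The ideas above can be sketched in a few lines of code. The following is a minimal illustration (in Python rather than the book's MATLAB, and on a made-up convex test function that is not taken from the chapter) of Newton's method in which the Hessian is approximated by finite differences of the gradient, exactly as the paragraph describes: second derivatives recovered from first derivatives evaluated at perturbed points. Each Hessian costs n extra gradient evaluations, and since each gradient itself costs O(n) simulations, the total is the O(n²) figure quoted above. All function and variable names here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: Newton's method with a finite-difference
# Hessian, on a smooth convex test function with minimum at x = 0.

def f(x):
    return float(np.sum(np.exp(x) - x))

def grad(x):
    # Analytic first-order sensitivities of f.
    return np.exp(x) - 1.0

def fd_hessian(x, h=1e-5):
    # Second-order derivatives approximated as differences of
    # first-order derivatives at perturbed points: n extra gradient
    # evaluations per Hessian (hence O(n^2) simulations in total).
    n = x.size
    g0 = grad(x)
    H = np.empty((n, n))
    for i in range(n):
        xp = x.copy()
        xp[i] += h
        H[:, i] = (grad(xp) - g0) / h
    return 0.5 * (H + H.T)  # symmetrize the approximation

def newton(x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Full Newton step: solve H p = -g; the second-order model
        # supplies both the direction and the step length.
        x = x + np.linalg.solve(fd_hessian(x), -g)
    return x

x_star = newton(np.array([1.0, -0.5, 0.3]))
```

Near the solution the iterates converge quadratically, which is the fast local convergence the chapter attributes to second-order techniques; the quasi-Newton methods of Section 7.4 replace the finite-difference Hessian with cheaper update formulas built from successive gradients.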

Chapter Contents:

  • 7.1 Introduction
  • 7.2 Newton's method
  • 7.3 The Levenberg-Marquardt method
  • 7.4 Quasi-Newton methods
  • 7.4.1 Broyden's rank-1 update
  • 7.4.2 The Davidon-Fletcher-Powell (DFP) formula
  • 7.4.3 The Broyden-Fletcher-Goldfarb-Shanno method
  • 7.4.4 The Gauss-Newton method
  • A7.1 Wireless channel characterization
  • A7.2 The parameter extraction problem
  • A7.3 Artificial neural networks training
  • References
  • Problems

Inspec keywords: Newton method; optimisation

Other keywords: Taylor expansion; space points; optimal solution; expansion point; optimization algorithm; Newton method; second-order unconstrained optimization techniques; approximate formulas; approximate second-order derivatives

Subjects: Numerical approximation and analysis; Optimisation techniques; Interpolation and function approximation (numerical analysis); Optimisation; Numerical analysis
