Second-order unconstrained optimization techniques
Taylor expansion shows that the more derivatives of a function we know at a point, the more accurately we can predict its value over a larger neighbourhood of the expansion point. In fact, if we know all the higher-order derivatives at the expansion point, then, for an analytic function, we know how the function behaves over its whole domain! This remarkable result motivates the use of higher-order sensitivities. The main obstacle in estimating these sensitivities is their evaluation cost. First-order derivatives estimated using finite differences require O(n) extra simulations, as discussed in Chapter 1. Second-order sensitivities require O(n²) extra simulations, and the cost of estimating third- and higher-order sensitivities grows even faster. This is the main reason why most techniques utilize only first-order sensitivities.

Second-order optimization techniques use second-order sensitivities, or an approximation of these derivatives, to determine the steps taken within the optimization algorithm. They all enjoy a faster rate of convergence than that achieved using first-order sensitivities, especially near the optimal solution. This fast local convergence, however, comes at the high cost of estimating the second-order sensitivities. Because second-order derivatives are the derivatives of first-order derivatives, they can be approximated when the first-order derivatives are known at several points in the space. A whole class of techniques was developed that uses approximate formulas to build up the second-order derivatives from successive values of the first-order ones. This approximation reduces the convergence rate but makes the cost of these techniques much more acceptable.

We start this chapter by reviewing Newton's method, which is the core concept behind all these techniques. We then discuss techniques that improve on it, either by combining first- and second-order derivatives or by using approximate second-order derivatives.
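The two ideas above, approximating second-order derivatives by finite differences of first-order ones, and taking a Newton step that minimizes the local second-order Taylor model, can be sketched as follows. This is a minimal illustration, not the book's implementation; the test problem (a convex quadratic with matrix A and vector b) and the helper names `hessian_fd` and `newton_step` are assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical test problem: a convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose unique minimizer x* satisfies A x* = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def grad(x):
    # Analytical first-order sensitivities of the quadratic.
    return A @ x - b

def hessian_fd(g, x, h=1e-6):
    """Approximate the Hessian by central differences of the gradient.

    Each column costs two extra gradient evaluations; if the gradient
    itself were obtained by finite differences of f, the total cost
    would be O(n^2) extra function evaluations, as noted in the text."""
    n = x.size
    H = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        H[:, j] = (g(x + e) - g(x - e)) / (2.0 * h)
    return 0.5 * (H + H.T)  # symmetrize against round-off error

def newton_step(x):
    """One Newton step: jump to the minimizer of the local
    second-order Taylor model of f around x."""
    H = hessian_fd(grad, x)
    return x - np.linalg.solve(H, grad(x))

x0 = np.array([5.0, -3.0])
x1 = newton_step(x0)
# Because f is quadratic, its second-order Taylor model is exact, so a
# single Newton step lands (up to finite-difference round-off) on the
# minimizer x* = A^{-1} b.
```

For a general nonquadratic objective the Taylor model is only locally accurate, so the step is repeated until convergence; this is the iteration examined in the sections that follow.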