The '''Symmetric Rank 1''' ('''SR1''') method is a [[quasi-Newton method]] to update an approximation of the second derivative ([[Hessian matrix|Hessian]])
based on the gradients calculated at two points. It is a generalization of the [[secant method]] to multidimensional problems.
This update maintains the ''symmetry'' of the matrix but does ''not'' guarantee that the updated matrix is [[positive definite matrix|''positive definite'']].
 
In theory, the sequence of Hessian approximations generated by the SR1 method converges to the true Hessian under mild conditions; in preliminary numerical experiments, the approximate Hessians generated by the SR1 method have approached the true Hessian faster than those of popular alternatives ([[BFGS]] or [[Davidon-Fletcher-Powell formula|DFP]]).<ref name="CGT">{{harvtxt|Conn|Gould|Toint|1991}}</ref> The SR1 method also has computational advantages for [[sparsity|sparse]] or [[partial separability|partially separable]] problems.
 
A twice continuously differentiable function <math>x \mapsto f(x)</math> has a [[gradient]] <math>\nabla f</math> and a [[Hessian matrix]] <math>B</math>. The function <math>f</math> admits a [[Taylor series]] expansion at <math>x_0</math>, which can be truncated after the quadratic term:
::<math>f(x_0+\Delta x)=f(x_0)+\nabla f(x_0)^T \Delta x+\frac{1}{2} \Delta x^T {B} \Delta x </math>;
its gradient has a Taylor-series approximation also
::<math>\nabla f(x_0+\Delta x)=\nabla f(x_0)+B \Delta x</math>,
which is used to update <math>B</math>: the new approximation <math>B_{k+1}</math> is chosen to satisfy the secant equation <math>B_{k+1} \Delta x_k = y_k</math>, with <math>y_k</math> as defined below. This secant equation need not have a unique solution <math>B_{k+1}</math>.
The SR1 formula computes (via an update of [[Rank (linear algebra)|rank]] 1) the symmetric solution that is closest to the current approximation <math>B_k</math>:
::<math>B_{k+1}=B_{k}+\frac {(y_k-B_k \Delta x_k) (y_k-B_k \Delta x_k)^T}{(y_k-B_k \Delta x_k)^T \Delta x_k}</math>,
where
::<math>y_k=\nabla f(x_k+\Delta x_k)-\nabla f(x_k)</math>.
The corresponding update to the approximate inverse-Hessian <math>H_k=B_k^{-1}</math> is
::<math>H_{k+1}=H_{k}+\frac {(\Delta x_k-H_k y_k)(\Delta x_k-H_k y_k)^T}{(\Delta x_k-H_k y_k)^T y_k}</math>.
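
For concreteness, the following NumPy sketch applies both rank-1 updates. The function names <code>sr1_update</code> and <code>sr1_inverse_update</code> are illustrative, not from any standard library, and the denominator safeguard discussed below is omitted here for clarity.

<syntaxhighlight lang="python">
import numpy as np

def sr1_update(B, dx, y):
    """SR1 update of the Hessian approximation: B_k -> B_{k+1}.

    dx is the step x_{k+1} - x_k; y is the gradient change
    grad f(x_{k+1}) - grad f(x_k).
    """
    v = y - B @ dx                        # residual of the secant equation
    return B + np.outer(v, v) / (v @ dx)

def sr1_inverse_update(H, dx, y):
    """Corresponding SR1 update of the inverse approximation H_k = B_k^{-1}."""
    w = dx - H @ y
    return H + np.outer(w, w) / (w @ y)

# On a quadratic f(x) = 1/2 x^T A x the gradient change is y = A dx exactly,
# so SR1 recovers A after steps spanning the space (here: two unit steps).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
B = np.eye(2)
for dx in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    B = sr1_update(B, dx, A @ dx)
assert np.allclose(B, A)
</syntaxhighlight>

The final assertion reflects a known property of SR1 on quadratics: provided every update is well defined, the approximation reproduces the true Hessian after <math>n</math> linearly independent steps.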
 
The SR1 formula has been rediscovered a number of times. A drawback is that the denominator <math>(y_k-B_k \Delta x_k)^T \Delta x_k</math> can vanish or become arbitrarily small, making the update unstable. Some authors have suggested that the update be applied only if
::<math>|\Delta x_k^T (y_k-B_k \Delta x_k)|\geq r \|\Delta x_k\|\cdot \|y_k-B_k \Delta x_k\| </math>,
where <math>r\in(0,1)</math> is a small number, e.g. <math>10^{-8}</math>.<ref>{{harvtxt|Nocedal|Wright|1999}}</ref>
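
In code, this safeguard amounts to a skip test before the division. The sketch below reuses the NumPy notation from above (again with an illustrative function name), returning <math>B_k</math> unchanged whenever the test fails:

<syntaxhighlight lang="python">
def sr1_update_safeguarded(B, dx, y, r=1e-8):
    """Apply the SR1 update only when the denominator is safely nonzero."""
    v = y - B @ dx
    # Skip when |dx^T v| <= r * ||dx|| * ||v||.  Using <= also covers v = 0,
    # where B already satisfies the secant equation and no update is needed.
    if abs(dx @ v) <= r * np.linalg.norm(dx) * np.linalg.norm(v):
        return B
    return B + np.outer(v, v) / (v @ dx)
</syntaxhighlight>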
 
==See also==
* [[Quasi-Newton method]]
* [[Newton's method in optimization]]
* [[BFGS method|Broyden-Fletcher-Goldfarb-Shanno (BFGS) method]]
* [[L-BFGS|L-BFGS method]]
 
==Notes==
<references/>
 
==References==
* Byrd, Richard H. (1996). "Analysis of a Symmetric Rank-One Trust Region Method". ''SIAM Journal on Optimization'' 6 (4).
* {{cite journal|last1=Conn|first1=A. R.|last2=Gould|first2=N. I. M.|last3=Toint|first3=Ph. L.|title=Convergence of quasi-Newton matrices generated by the symmetric rank one update|journal=Mathematical Programming|date=March 1991|publisher=Springer Berlin/Heidelberg|issn=0025-5610|volume=50|number=1|pages=177–195|doi=10.1007/BF01594934|ref=harv|id=[ftp://ftp.numerical.rl.ac.uk/pub/nimg/pubs/ConnGoulToin91_mp.pdf PDF file at Nick Gould's website]}}
* Khalfan, H. Fayez (1993). "A Theoretical and Experimental Study of the Symmetric Rank-One Update". ''SIAM Journal on Optimization'' 3 (1).
* Nocedal, Jorge & Wright, Stephen J. (1999). ''Numerical Optimization''. Springer-Verlag. ISBN 0-387-98793-2.
 
{{Optimization algorithms|unconstrained}}
 
[[Category:Optimization algorithms and methods]]
