In [[mathematics]] and computing, the '''Levenberg–Marquardt algorithm (LMA)''',<ref name="Levenberg-Marquardt">
The algorithm was first published by Kenneth Levenberg, while working at the [[Frankford Arsenal|Frankford Army Arsenal]]. It was rediscovered by [[Donald Marquardt]], who worked as a [[statistician]] at [[DuPont]], and independently by Girard, Wynne and Morrison.</ref> also known as the '''damped least-squares (DLS)''' method, is used to solve [[non-linear least squares]] problems. These minimization problems arise especially in [[least squares]] [[curve fitting]].

The LMA interpolates between the [[Gauss–Newton algorithm]] (GNA) and the method of [[gradient descent]]. The LMA is more [[Robustness (computer science)|robust]] than the GNA, which means that in many cases it finds a solution even if it starts very far off the final minimum. For well-behaved functions and reasonable starting parameters, the LMA tends to be slightly slower than the GNA. The LMA can also be viewed as Gauss–Newton using a [[trust region]] approach.

The LMA is a very popular curve-fitting algorithm used in many software applications for solving generic curve-fitting problems. However, as with many fitting algorithms, the LMA finds only a [[local minimum]], which is not necessarily the [[global minimum]].

== The problem ==

The primary application of the Levenberg–Marquardt algorithm is the least-squares curve fitting problem: given a set of ''m'' empirical datum pairs of independent and dependent variables, (''x<sub>i</sub>'', ''y<sub>i</sub>''), optimize the parameters '''''β''''' of the model curve ''f''(''x'', '''''β''''') so that the sum of the squares of the deviations
:<math>S(\boldsymbol \beta) = \sum_{i=1}^m [y_i - f(x_i, \boldsymbol \beta)]^2</math>

becomes minimal.

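For concreteness, the objective can be written out directly in code. The following sketch (Python with NumPy; the model and all names are illustrative and merely anticipate the curve used in the example section below, rather than being prescribed by the algorithm) evaluates ''S''('''''β''''') for a two-parameter model:

<syntaxhighlight lang="python">
import numpy as np

def model(x, beta):
    # Illustrative model curve f(x, beta); the algorithm itself is
    # agnostic to the particular parametric form.
    a, b = beta
    return a * np.cos(b * x) + b * np.sin(a * x)

def sum_of_squares(beta, x, y):
    # S(beta) = sum_i [y_i - f(x_i, beta)]^2
    r = y - model(x, beta)
    return r @ r
</syntaxhighlight>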
== The solution ==

Like other numeric minimization algorithms, the Levenberg–Marquardt algorithm is an [[iteration|iterative]] procedure. To start a minimization, the user has to provide an initial guess for the parameter vector, '''''β'''''. In cases with only one minimum, an uninformed standard guess like '''''β'''''<sup>T</sup> = (1, 1, ..., 1) will work fine; in cases with [[local minimum|multiple minima]], the algorithm converges only if the initial guess is already somewhat close to the final solution.

In each iteration step, the parameter vector, '''''β''''', is replaced by a new estimate, '''''β''''' + '''''δ'''''. To determine '''''δ''''', the functions <math>f(x_i,\boldsymbol \beta+\boldsymbol \delta)</math> are approximated by their linearizations

: <math>f(x_i,\boldsymbol \beta+\boldsymbol \delta) \approx f(x_i,\boldsymbol \beta) + J_i \boldsymbol \delta,</math>

where

: <math>J_i=\frac{\partial f(x_i,\boldsymbol\beta)}{\partial \boldsymbol\beta}</math>

is the [[gradient]] (a row vector in this case) of ''f'' with respect to '''''β'''''.

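If an analytic expression for this derivative is unavailable, each row ''J<sub>i</sub>'' is commonly approximated by finite differences. A minimal sketch, continuing the NumPy snippet above (the step size <code>eps</code> is an illustrative choice, not part of the algorithm):

<syntaxhighlight lang="python">
def jacobian(x, beta, eps=1e-7):
    # Forward-difference approximation of the m-by-n Jacobian:
    # J[i, j] = d f(x_i, beta) / d beta_j
    beta = np.asarray(beta, dtype=float)
    f0 = model(x, beta)
    J = np.empty((np.size(x), beta.size))
    for j in range(beta.size):
        step = np.zeros_like(beta)
        step[j] = eps
        J[:, j] = (model(x, beta + step) - f0) / eps
    return J
</syntaxhighlight>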
At the minimum of the sum of squares, <math>S(\boldsymbol\beta)</math>, the [[gradient]] of <math>S</math> with respect to '''''δ''''' will be zero. The above first-order approximation of <math>f(x_i,\boldsymbol \beta+\boldsymbol \delta)</math> gives

: <math>S(\boldsymbol\beta+\boldsymbol\delta) \approx \sum_{i=1}^m \left( y_i - f(x_i,\boldsymbol\beta) - J_i \boldsymbol\delta\right)^2,</math>

or in vector notation,

: <math> S(\boldsymbol\beta+\boldsymbol\delta) \approx \|\mathbf{y} - \mathbf{f}(\boldsymbol\beta) - \mathbf{J}\boldsymbol\delta\|^2.</math>

Taking the derivative with respect to '''''δ''''' and setting the result to zero gives

:<math>(\mathbf{J}^{\rm T}\mathbf{J})\boldsymbol \delta = \mathbf{J}^{\rm T} [\mathbf{y} - \mathbf{f}(\boldsymbol \beta)]</math>

where <math>\mathbf{J}</math> is the [[Jacobian matrix and determinant|Jacobian matrix]] whose ''i''<sup>th</sup> row equals <math>J_i</math>, and where <math>\mathbf{f}</math> and <math>\mathbf{y}</math> are vectors with ''i''<sup>th</sup> components <math>f(x_i,\boldsymbol \beta)</math> and <math>y_i</math>, respectively. This is a set of linear equations, which can be solved for '''''δ'''''.

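These are the normal equations of the linearized problem, i.e. the Gauss–Newton step. A sketch, again building on the snippets above and assuming <math>\mathbf{J}^{\rm T}\mathbf{J}</math> is nonsingular:

<syntaxhighlight lang="python">
def gauss_newton_step(x, y, beta):
    # Solve (J^T J) delta = J^T [y - f(beta)] for the undamped increment.
    beta = np.asarray(beta, dtype=float)
    J = jacobian(x, beta)
    r = y - model(x, beta)
    return np.linalg.solve(J.T @ J, J.T @ r)
</syntaxhighlight>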
Levenberg's contribution is to replace this equation by a "damped version",

:<math>(\mathbf{J}^{\rm T}\mathbf{J} + \lambda \mathbf{I})\boldsymbol \delta = \mathbf{J}^{\rm T} [\mathbf{y} - \mathbf{f}(\boldsymbol \beta)]</math>

where '''I''' is the identity matrix; solving this system gives the increment, '''''δ''''', to the estimated parameter vector, '''''β'''''.

The (non-negative) damping factor, ''λ'', is adjusted at each iteration. If reduction of ''S'' is rapid, a smaller value can be used, bringing the algorithm closer to the [[Gauss–Newton algorithm]], whereas if an iteration gives insufficient reduction in the residual, ''λ'' can be increased, giving a step closer to the gradient-descent direction. Note that the [[gradient]] of ''S'' with respect to '''''β''''' equals <math>-2(\mathbf{J}^{\rm T} [\mathbf{y} - \mathbf{f}(\boldsymbol \beta)])^{\rm T}</math>. Therefore, for large values of ''λ'', the step will be taken approximately in the direction of the gradient. If either the length of the calculated step, '''''δ''''', or the reduction of the sum of squares from the latest parameter vector, '''''β''''' + '''''δ''''', falls below predefined limits, iteration stops and the last parameter vector, '''''β''''', is considered to be the solution.

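In code, Levenberg's damping changes only the left-hand-side matrix of the linear system. A sketch, with ''λ'' supplied by the caller (one adjustment schedule for ''λ'' is discussed below):

<syntaxhighlight lang="python">
def levenberg_step(x, y, beta, lam):
    # Solve (J^T J + lam * I) delta = J^T [y - f(beta)].
    # Large lam pushes delta toward the gradient direction;
    # lam -> 0 recovers the Gauss-Newton step.
    beta = np.asarray(beta, dtype=float)
    J = jacobian(x, beta)
    r = y - model(x, beta)
    A = J.T @ J + lam * np.eye(beta.size)
    return np.linalg.solve(A, J.T @ r)
</syntaxhighlight>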
Levenberg's algorithm has the disadvantage that if the value of the damping factor, ''λ'', is large, the curvature information in '''J'''<sup>T</sup>'''J''' is not used at all. Marquardt provided the insight that we can scale each component of the gradient according to the curvature, so that there is larger movement along the directions where the gradient is smaller. This avoids slow convergence in the direction of small gradient. Therefore, Marquardt replaced the identity matrix, '''I''', with the diagonal matrix consisting of the diagonal elements of '''J'''<sup>T</sup>'''J''', resulting in the Levenberg–Marquardt algorithm:

:<math>(\mathbf{J}^{\rm T}\mathbf{J} + \lambda\, \operatorname{diag}(\mathbf{J}^{\rm T}\mathbf{J}))\boldsymbol \delta = \mathbf{J}^{\rm T} [\mathbf{y} - \mathbf{f}(\boldsymbol \beta)].</math>

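Marquardt's modification is essentially a one-line change to the sketch above: the identity matrix is replaced by the diagonal of <math>\mathbf{J}^{\rm T}\mathbf{J}</math>:

<syntaxhighlight lang="python">
def marquardt_step(x, y, beta, lam):
    # Solve (J^T J + lam * diag(J^T J)) delta = J^T [y - f(beta)];
    # damping is now scaled by the curvature along each parameter axis.
    beta = np.asarray(beta, dtype=float)
    J = jacobian(x, beta)
    r = y - model(x, beta)
    JtJ = J.T @ J
    A = JtJ + lam * np.diag(np.diag(JtJ))
    return np.linalg.solve(A, J.T @ r)
</syntaxhighlight>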
A similar damping factor appears in [[Tikhonov regularization]], which is used to solve linear [[ill-posed problems]], as well as in [[ridge regression]], an [[estimation theory|estimation]] technique in [[statistics]].

=== Choice of damping parameter ===

Various more-or-less heuristic arguments have been put forward for the best choice of the damping parameter ''λ''. Theoretical arguments exist showing why some of these choices guarantee local convergence of the algorithm; however, these choices can make the global convergence of the algorithm suffer from the undesirable properties of [[gradient descent|steepest descent]], in particular very slow convergence close to the optimum.

The absolute values of any choice depend on how well-scaled the initial problem is. Marquardt recommended starting with a value ''λ''<sub>0</sub> and a factor ''ν'' > 1. Initially set ''λ'' = ''λ''<sub>0</sub> and compute the residual sum of squares ''S''('''''β''''') after one step from the starting point, first with the damping factor ''λ'' = ''λ''<sub>0</sub> and secondly with ''λ''<sub>0</sub>/''ν''. If both of these are worse than the initial point, then the damping is increased by successive multiplication by ''ν'' until a better point is found with a new damping factor of ''λ''<sub>0</sub>''ν''<sup>''k''</sup> for some ''k''.

If use of the damping factor ''λ''/''ν'' results in a reduction in the squared residual, then this is taken as the new value of ''λ'' (and the new optimum location is taken as that obtained with this damping factor) and the process continues; if using ''λ''/''ν'' resulted in a worse residual, but using ''λ'' resulted in a better residual, then ''λ'' is left unchanged and the new optimum is taken as the value obtained with ''λ'' as the damping factor.

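The strategy just described can be sketched as follows, continuing the snippets above (''λ''<sub>0</sub>, ''ν'' and the stopping tolerance are illustrative defaults, not values prescribed by Marquardt):

<syntaxhighlight lang="python">
def lm_fit(x, y, beta0, lam0=1e-2, nu=2.0, max_iter=100, tol=1e-10):
    # Sketch of the damping schedule: try lam/nu first; if that fails,
    # try lam; if both steps are worse, multiply lam by nu and retry.
    beta = np.asarray(beta0, dtype=float)
    lam = lam0
    S = sum_of_squares(beta, x, y)
    for _ in range(max_iter):
        for trial in (lam / nu, lam):
            delta = marquardt_step(x, y, beta, trial)
            S_new = sum_of_squares(beta + delta, x, y)
            if S_new < S:            # accept the step, keep this damping
                beta, S, lam = beta + delta, S_new, trial
                break
        else:                        # both trials worse: increase damping
            lam *= nu
            continue
        if np.linalg.norm(delta) < tol:
            break
    return beta
</syntaxhighlight>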
== Example ==

[[Image:Lev-Mar-poor-fit.png|thumb|Poor fit]]
[[Image:Lev-Mar-better-fit.png|thumb|Better fit]]
[[Image:Lev-Mar-best-fit.png|thumb|Best fit]]

In this example we try to fit the function <math>y = a \cos(bX) + b \sin(aX)</math> using the Levenberg–Marquardt algorithm implemented in [[GNU Octave]] as the ''leasqr'' function. The three graphs show progressively better fitting for the parameters ''a'' = 100, ''b'' = 102 used in the initial curve. Only when the parameters in the last graph are chosen closest to the original do the curves fit exactly. This equation is an example of very sensitive initial conditions for the Levenberg–Marquardt algorithm. One reason for this sensitivity is the existence of multiple minima: the function <math>\cos(\beta x)</math> has minima at parameter value <math>\hat \beta</math> and <math>\hat \beta + 2n \pi.</math>

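A comparable fit can be reproduced with any of the MINPACK-derived implementations listed below; for instance, a sketch in Python using SciPy, where <code>least_squares</code> with <code>method='lm'</code> wraps MINPACK's Levenberg–Marquardt routine (the sample data and starting guess here are illustrative):

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import least_squares

a_true, b_true = 100.0, 102.0
x = np.linspace(0.0, 1.0, 200)
y = a_true * np.cos(b_true * x) + b_true * np.sin(a_true * x)

def residuals(beta):
    a, b = beta
    return y - (a * np.cos(b * x) + b * np.sin(a * x))

# As discussed above, convergence is very sensitive to the starting
# guess: a point near (100, 102) is expected to converge, while a
# distant one may be caught in another minimum.
fit = least_squares(residuals, [100.5, 101.5], method='lm')
print(fit.x)
</syntaxhighlight>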
== Other applications ==

The Levenberg–Marquardt algorithm has been applied to nonlinear inverse problems. A particular application is generating computational models of oil reservoirs given the observed data.<ref>Gharib Shirangi, M., [http://www.sciencedirect.com/science/article/pii/S0920410513003227 History matching production data and uncertainty assessment with an efficient TSVD parameterization algorithm], ''Journal of Petroleum Science and Engineering''</ref>

== Notes ==

{{Reflist}}

== See also ==

* [[Trust region]]
* [[Nelder–Mead method]]

== References ==

<div class='references-normal'>
* {{Cite journal | author = [[Kenneth Levenberg]] | title = A Method for the Solution of Certain Non-Linear Problems in Least Squares | journal = Quarterly of Applied Mathematics | volume = 2 | pages = 164–168 | year = 1944 }}
* {{Cite journal | author = A. Girard | journal = Rev. Opt. | volume = 37 | pages = 225, 397 | year = 1958 }}
* {{Cite journal | author = C. G. Wynne | title = Lens Designing by Electronic Digital Computer: I | journal = Proc. Phys. Soc. London | volume = 73 | issue = 5 | pages = 777 | year = 1959 | doi = 10.1088/0370-1328/73/5/310 }}
* {{Cite journal | author = D. D. Morrison | journal = Jet Propulsion Laboratory Seminar proceedings | year = 1960 }}
* {{Cite journal | author = [[Donald Marquardt]] | title = An Algorithm for Least-Squares Estimation of Nonlinear Parameters | journal = SIAM Journal on Applied Mathematics | volume = 11 | issue = 2 | pages = 431–441 | year = 1963 | doi = 10.1137/0111030 }}
* {{Cite journal | author = Philip E. Gill and [[Walter Murray]] | title = Algorithms for the solution of the nonlinear least-squares problem | journal = [[SIAM Journal on Numerical Analysis]] | volume = 15 | issue = 5 | pages = 977–992 | year = 1978 | doi = 10.1137/0715063 }}
* {{Cite journal | author = Jorge J. Moré and Daniel C. Sorensen | title = Computing a Trust-Region Step | journal = SIAM J. Sci. Stat. Comput. | volume = 4 | pages = 553–572 | year = 1983 }}
* {{Cite journal | author = Jose Pujol | title = The solution of nonlinear inverse problems and the Levenberg-Marquardt method | journal = Geophysics | publisher = SEG | volume = 72 | issue = 4 | pages = W1–W16 | year = 2007 | url = http://link.aip.org/link/?GPY/72/W1/1 | doi = 10.1190/1.2732552 }}
* {{cite book | last = Nocedal | first = Jorge | coauthors = Wright, Stephen J. | title = Numerical Optimization, 2nd Edition | year = 2006 | publisher = Springer | isbn = 0-387-30303-0 }}
</div>

== External links ==

=== Descriptions ===

* A detailed description of the algorithm can be found in [http://www.nrbook.com/a/bookcpdf.php Numerical Recipes in C, Chapter 15.5: Nonlinear models]
* C. T. Kelley, ''Iterative Methods for Optimization'', SIAM Frontiers in Applied Mathematics, no. 18, 1999, ISBN 0-89871-433-8. [http://www.siam.org/books/textbooks/fr18_book.pdf Online copy]
* [http://www3.villanova.edu/maple/misc/mtc1093.html History of the algorithm in SIAM news]
* [http://ananth.in/docs/lmtut.pdf A tutorial by Ananth Ranganathan]
* [http://www2.imm.dtu.dk/pubdb/views/edoc_download.php/3215/pdf/imm3215.pdf Methods for Non-Linear Least Squares Problems] by K. Madsen, H. B. Nielsen and O. Tingleff is a tutorial discussing non-linear least-squares problems in general and the Levenberg–Marquardt method in particular
* T. Strutz: ''Data Fitting and Uncertainty (A practical introduction to weighted least squares and beyond)''. Vieweg+Teubner, ISBN 978-3-8348-1022-9.

=== Implementations ===

* Levenberg–Marquardt is a built-in algorithm in [[Mathematica]], [[Matlab]], [[GNU Octave]], [[Origin (software)|Origin]], and [[IGOR Pro]].
* The oldest implementation still in use is [http://www.netlib.org/minpack/ lmdif], from [[MINPACK]], in [[Fortran]], in the [[public domain]]. See also:
** [http://apps.jcns.fz-juelich.de/lmfit lmfit], a self-contained [[C programming language|C]] implementation of the MINPACK algorithm, with an easy-to-use wrapper for curve fitting, under a liberal (FreeBSD) licence.
** [http://eigen.tuxfamily.org/index.php?title=Main_Page eigen], a C++ linear algebra library, includes an adaptation of the MINPACK algorithm in its "NonLinearOptimization" module.
** The [[GNU Scientific Library]] has a C interface to MINPACK.
** [http://devernay.free.fr/hacks/cminpack.html C/C++ Minpack] includes the Levenberg–Marquardt algorithm.
** Several high-level languages and mathematical packages have wrappers for the [[MINPACK]] routines, among them:
*** the Python library [[scipy]], module <code>scipy.optimize.leastsq</code>,
*** [[IDL (programming language)|IDL]], add-on [http://cow.physics.wisc.edu/~craigm/idl/fitting.html MPFIT],
*** [[R (programming language)|R]], which has the [http://cran.r-project.org/web/packages/minpack.lm/index.html minpack.lm] package.
* [http://www.ics.forth.gr/%7elourakis/levmar/ levmar] is an implementation in [[C (programming language)|C]]/[[C++]] with support for constraints, distributed under the [[GNU General Public License]].
** levmar includes a [[MEX file]] interface for [[MATLAB]].
** [[Perl]] ([[Perl Data Language|PDL]]), [[Python (programming language)|Python]], [[Haskell (programming language)|Haskell]] and [[.NET Framework|.NET]] interfaces to levmar are available: see [http://www.johnlapeyre.com/pdl/index.html PDL::Fit::Levmar] or [https://metacpan.org/module/PDL::Fit::LM PDL::Fit::LM], [http://trac.astrometry.net/wiki/PyLevmar PyLevmar], [http://hackage.haskell.org/package/levmar HackageDB levmar] and [https://github.com/AvengerDr/LevmarSharp LevmarSharp].
* [http://www.ics.forth.gr/%7elourakis/sparseLM/ sparseLM] is a [[C (programming language)|C]] implementation aimed at minimizing functions with large, arbitrarily [[Sparse matrix|sparse]] Jacobians. Includes a MATLAB MEX interface.
* [http://www2.imm.dtu.dk/~hbni/Software/SMarquardt.m SMarquardt.m] is a stand-alone routine for Matlab or Octave.
* The [http://www.bnikolic.co.uk/inmin/inmin-library.html InMin] library contains a C++ implementation of the algorithm based on the [http://eigen.tuxfamily.org/index.php?title=Main_Page eigen] C++ linear algebra library. It has a pure C-language API as well as a Python binding.
* [http://code.google.com/p/ceres-solver/ ceres] is a non-linear minimisation library with an implementation of the Levenberg–Marquardt algorithm. It is written in C++ and uses [http://eigen.tuxfamily.org/index.php?title=Main_Page eigen].
* [http://www.alglib.net/optimization/levenbergmarquardt.php ALGLIB] has implementations of an improved LMA in C# / C++ / Delphi / Visual Basic. The improved algorithm takes less time to converge and can use either the Jacobian or the exact Hessian.
* [[NMath]] has an implementation for the [[.NET Framework]].
* [[gnuplot]] uses its own implementation; see [http://www.gnuplot.info/ gnuplot.info].
* [[Java (programming language)|Java]] implementations: 1) [http://scribblethink.org/Computer/Javanumeric/index.html Javanumerics], 2) [http://virtualrisk.cvs.sourceforge.net/*checkout*/virtualrisk/util/lma/lma_v1.3.zip LMA-package] (a small, user-friendly and well-documented implementation with examples and support), 3) [http://commons.apache.org/math/apidocs/org/apache/commons/math/optimization/general/LevenbergMarquardtOptimizer.html Apache Commons Math].
* [http://oooconv.free.fr/fitoo/fitoo_en.html OOoConv] implements the L-M algorithm as an OpenOffice.org Calc spreadsheet.
* In [[SAS (software)|SAS]], there are multiple ways to access the Levenberg–Marquardt algorithm: via the [http://support.sas.com/documentation/cdl/en/imlug/59656/HTML/default/langref_sect187.htm#imlug_langref_nlplm NLPLM Call] in [http://support.sas.com/documentation/cdl/en/imlug/59656/HTML/default/imlstart_sect1.htm PROC IML], through the [http://support.sas.com/documentation/cdl/en/ormpug/63352/HTML/default/viewer.htm#ormpug_nlp_sect021.htm LSQ] statement in [http://support.sas.com/documentation/cdl/en/ormpug/63352/HTML/default/viewer.htm#ormpug_nlp_sect001.htm PROC NLP], and via the [http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_nlin_sect007.htm METHOD=MARQUARDT] option in [http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_nlin_sect001.htm PROC NLIN].

{{Optimization algorithms}}

{{DEFAULTSORT:Levenberg-Marquardt algorithm}}
[[Category:Statistical algorithms]]
[[Category:Optimization algorithms and methods]]
[[Category:Least squares]]