The '''MM algorithm''' is an iterative [[optimization]] method which exploits the [[convex function|convexity]] of a function in order to find its maxima or minima. The '''MM''' stands for “Majorize-Minimization” or “Minorize-Maximization”, depending on whether the desired optimization is a minimization or a maximization. '''MM''' itself is not an algorithm, but a description of how to construct an [[optimization algorithm]].
 
The [[EM algorithm]] can be treated as a special case of the MM algorithm. However, the EM algorithm usually involves complicated [[conditional expectation]]s and extensive analytical work, whereas the MM algorithm relies mainly on convexity and inequalities, which makes it relatively easy to understand and apply in most cases.
 
==History==
The original idea of the '''MM algorithm''' dates back at least to 1970, when Ortega and Rheinboldt were studying [[line search]] methods.<ref>
{{cite book
|last1=Ortega |first1=J.M.
|last2=Rheinboldt |first2=W.C.
|title=Iterative Solution of Nonlinear Equations in Several Variables
|publisher=Academic Press |location=New York
|year=1970 |pages=253–255
}}
</ref> The same idea kept reappearing in different areas under different guises until 2000, when Hunter and Lange placed it in a general framework and coined the name '''MM'''.<ref>
{{cite journal
|last1=Hunter|first1=D.R.
|last2=Lange|first2=K.
|title=Quantile Regression via an MM Algorithm
|journal=[[Journal of Computational and Graphical Statistics]]
|year=2000 |volume=9 |pages=60–77
}}
</ref> Studies have since shown that the method can be used in a wide range of contexts, such as [[mathematics]], [[statistics]], [[machine learning]], and [[engineering]].
 
==How it works==
The '''MM algorithm''' works by finding a surrogate function that minorizes or majorizes the objective function. Optimizing the surrogate function drives the objective function upward or downward until a local [[optimum]] is reached.
 
Take the '''minorize-maximization''' version as an example.
 
Let <math> f(\theta) </math> be the objective function to be maximized. At step <math> m </math> of the algorithm, <math> m=0,1,\ldots </math>, the constructed function <math> g(\theta|\theta_m) </math> is called the minorized version of the objective function (the surrogate function) at <math> \theta_m </math> if
 
    <math> g(\theta|\theta_m) \leq f(\theta) \text{ for all } \theta </math>
    <math> g(\theta_m|\theta_m) = f(\theta_m) </math>
 
Then we maximize <math> g(\theta|\theta_m) </math> instead of <math> f(\theta) </math>, and let
 
    <math> \theta_{m+1}=\arg\max_{\theta}g(\theta|\theta_m) </math>
 
The above iterative method guarantees that <math> f(\theta_m) </math> converges to a local optimum or a saddle point as <math> m </math> goes to infinity, because by construction
  <math> f(\theta_{m+1}) \geq g(\theta_{m+1}|\theta_m) \geq g(\theta_m|\theta_m) = f(\theta_m). </math>
The first inequality holds because <math> g </math> minorizes <math> f </math>, the second because <math> \theta_{m+1} </math> maximizes <math> g(\theta|\theta_m) </math>, and the final equality is the tangency condition above.
 
The progression of <math>\theta_m </math> and of the surrogate functions relative to the objective function is shown in the figure. [[File:Mmalgorithm.jpg|right|thumb|MM algorithm]]
 
Flipping the picture upside down gives the corresponding methodology for '''majorize-minimization'''.
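As a concrete illustration, here is a minimal majorize-minimization sketch for the sample-median objective <math> f(\theta) = \sum_i |x_i - \theta| </math>, in the spirit of Hunter and Lange's quantile-regression MM. It uses the inequality <math> |r| \leq r^2/(2c) + c/2 </math> for <math> c > 0 </math> (with equality at <math> |r| = c </math>) to majorize each absolute value by a quadratic; the function name, iteration count, and the small <code>eps</code> safeguard are illustrative choices, not part of any standard library.

<syntaxhighlight lang="python">
import numpy as np

def mm_median(x, theta0, n_iter=50, eps=1e-12):
    """Majorize-minimization for f(theta) = sum_i |x_i - theta|."""
    theta = theta0
    for _ in range(n_iter):
        # Weights from the current iterate; eps avoids division by zero
        # when an observation coincides with theta.
        w = np.abs(x - theta) + eps
        # The quadratic majorizer
        #   g(theta | theta_m) = sum_i [(x_i - theta)^2 / (2 w_i) + w_i / 2]
        # touches f at theta_m and is minimized in closed form by a
        # weighted average, which becomes the next iterate.
        theta = np.sum(x / w) / np.sum(1.0 / w)
    return theta

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0])
print(mm_median(x, theta0=x.mean()))  # approaches the sample median, 3.0
</syntaxhighlight>

Each iteration solves a weighted [[least squares]] problem in closed form, so <math> f </math> can never increase; this is the descent property given by the chain of inequalities above, mirrored for minimization.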
 
==Ways to construct surrogate functions==
In principle, any inequality can be used to construct the desired majorized/minorized version of the objective function. Typical choices include the following; a worked minorization using the first of these is sketched after the list.
* [[Jensen's inequality]]
* [[Convexity inequality]]
* [[Cauchy–Schwarz inequality]]
* [[Inequality of arithmetic and geometric means]]
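For instance, when the objective is the logarithm of a sum, say <math> f(\theta) = \log \sum_i c_i(\theta) </math> with <math> c_i(\theta) > 0 </math>, [[Jensen's inequality]] applied with the weights <math> w_i = c_i(\theta_m) / \sum_j c_j(\theta_m) </math> yields the minorizer

    <math> f(\theta) = \log \sum_i w_i \frac{c_i(\theta)}{w_i} \geq \sum_i w_i \log \frac{c_i(\theta)}{w_i} = g(\theta|\theta_m), </math>

with equality at <math> \theta = \theta_m </math>, since there every ratio <math> c_i(\theta_m)/w_i </math> equals the same constant <math> \sum_j c_j(\theta_m) </math>. This is exactly the minorization that underlies the [[EM algorithm]].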
 
==References==
{{reflist}}
 
[[Category:Optimization algorithms and methods]]
