[[Image:Metropolis hastings algorithm.png|thumb|450px|The proposal [[probability distribution|distribution]] ''Q'' proposes the next point that the [[random walk]] might move to.]]
In [[statistics]] and in [[statistical physics]], the '''Metropolis–Hastings algorithm''' is a [[Markov chain Monte Carlo]] (MCMC) method for obtaining a sequence of [[pseudo-random number sampling|random samples]] from a [[probability distribution]] for which direct sampling is difficult. This sequence can be used to approximate the distribution (i.e., to generate a [[histogram]]), or to [[Monte Carlo integration|compute an integral]] (such as an [[expected value]]). Metropolis–Hastings and other MCMC algorithms are generally used for sampling from multi-dimensional distributions, especially when the number of dimensions is high. For single-dimensional distributions, other methods are usually available (e.g. [[adaptive rejection sampling]]) that can directly return independent samples from the distribution, and are free from the problem of auto-correlated samples that is inherent in MCMC methods.
==History==

The algorithm was named after [[Nicholas Metropolis]], who was an author along with [[Arianna W. Rosenbluth]], [[Marshall N. Rosenbluth]], [[Augusta H. Teller]], and [[Edward Teller]] of the 1953 paper ''[[Equation of State Calculations by Fast Computing Machines]]'', which first proposed the algorithm for the specific case of the [[canonical ensemble]],<ref name=Metropolis/><ref name=Rosenthal_site/> and after W. K. Hastings, who extended it to the more general case in 1970.<ref name=Hastings/>

There is controversy over the credit for discovery of the algorithm. Edward Teller states in his memoirs that the five authors of the 1953 paper worked together for "days (and nights)".<ref name=Teller/> M. Rosenbluth, in an oral history recorded shortly before his death,<ref name=Rosenbluth/> credits E. Teller with posing the original problem, himself with solving it, and A. W. Rosenbluth (his wife) with programming the computer. According to M. Rosenbluth, neither Metropolis nor A. H. Teller participated in any way. Rosenbluth's account of events is supported by other contemporary recollections.<ref name=Gubernatis/>
==Intuition==

The Metropolis–Hastings algorithm can draw samples from any [[probability distribution]] ''P(x)'', provided that one can compute the value of a function ''f(x)'' that is ''proportional'' to the density of ''P''. The lax requirement that ''f(x)'' need only be proportional to the density, rather than exactly equal to it, makes the Metropolis–Hastings algorithm particularly useful, because calculating the necessary normalization factor is often extremely difficult [[Bayesian statistics|in practice]].

The Metropolis–Hastings algorithm works by generating a sequence of sample values in such a way that, as more and more sample values are produced, the distribution of values more closely approximates the desired distribution ''P(x)''. These sample values are produced iteratively, with the distribution of the next sample depending only on the current sample value (thus making the sequence of samples into a [[Markov chain]]). Specifically, at each iteration the algorithm picks a candidate for the next sample value based on the current sample value. Then, with some probability, the candidate is either accepted (in which case the candidate value is used in the next iteration) or rejected (in which case the candidate value is discarded and the current value is reused in the next iteration); the probability of acceptance is determined by comparing the probabilities of the current and candidate sample values under the desired distribution ''P(x)''.

For the purpose of illustration, the Metropolis algorithm, a special case of the Metropolis–Hastings algorithm, is described below, followed by a short code sketch.
| <!---The sample values are linked in a [[Markov chain]], which means that the probability of each sample is conditionally independent of any earlier sample, given the sample immediately before it. In other words,
| |
| general idea is to generate a sequence of samples which are linked in a [[Markov chain]]; in other words, where each sample in the sequence is conditionally independent of any earlier sample, given the sample immediately before it. The procedure for choosing successive samples guarantees that the distribution of sample values will match the desired distribution ''P(x)'' after a long time.!-->
| |
| | |
'''Metropolis algorithm'''

Let ''f(x)'' be a function that is proportional to the desired probability distribution ''P(x)''.

# Initialization: Choose an arbitrary point ''x<sub>0</sub>'' to be the first sample, and choose an arbitrary probability density <math>Q(x|y)</math> which suggests a candidate for the next sample value ''x'', given the previous sample value ''y''. For the Metropolis algorithm, ''Q'' must be symmetric; in other words, it must satisfy <math>Q(x|y) = Q(y|x)</math>. A usual choice is to let <math>Q(x|y)</math> be a [[Gaussian distribution]] centered at ''y'', so that points closer to ''y'' are more likely to be visited next—making the sequence of samples into a [[random walk]]. The function ''Q'' is referred to as the ''proposal density'' or ''jumping distribution''.
# For each iteration ''t'':
#* Generate a candidate ''x''' for the next sample by picking from the distribution <math>Q(x'|x_t)</math>.
#* Calculate the ''acceptance ratio'' α = ''f(x')/f(x<sub>t</sub>)'', which will be used to decide whether to accept or reject the candidate. Because ''f'' is proportional to the density of ''P'', we have that ''α = f(x')/f(x<sub>t</sub>) = P(x')/P(x<sub>t</sub>)''.
#* If α ≥ 1, the candidate is at least as probable as ''x<sub>t</sub>''; automatically accept the candidate by setting ''x<sub>t+1</sub> = x'''. Otherwise, accept the candidate with probability α; if the candidate is rejected, set ''x<sub>t+1</sub> = x<sub>t</sub>'' instead.
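The following is a minimal sketch of this procedure in Python, using a Gaussian random-walk proposal. The target function <code>f</code>, the proposal width <code>sigma</code>, and the iteration count are illustrative placeholders rather than part of the algorithm itself.

<syntaxhighlight lang="python">
import numpy as np

def metropolis(f, x0, sigma=1.0, n_iter=10000, rng=None):
    """Sample from a density proportional to f using a symmetric Gaussian proposal."""
    rng = np.random.default_rng() if rng is None else rng
    samples = np.empty(n_iter)
    x = x0
    for t in range(n_iter):
        x_new = x + rng.normal(0.0, sigma)   # symmetric proposal Q(x'|x_t)
        alpha = f(x_new) / f(x)              # acceptance ratio f(x')/f(x_t)
        if rng.random() < alpha:             # accept with probability min(1, alpha)
            x = x_new                        # accepted: move to the candidate
        samples[t] = x                       # rejected: the current value is reused
    return samples

# Example: sample from an unnormalized standard normal density.
chain = metropolis(lambda x: np.exp(-0.5 * x**2), x0=0.0, sigma=2.5)
</syntaxhighlight>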
This algorithm proceeds by randomly attempting to move about the sample space, sometimes accepting the moves and sometimes remaining in place. Note that the acceptance ratio <math>\alpha</math> indicates how probable the new proposed sample is with respect to the current sample, according to the distribution <math>\displaystyle P(x)</math>. If we attempt to move to a point that is more probable than the existing point (i.e. a point in a higher-density region of <math>\displaystyle P(x)</math>), we will always accept the move. However, if we attempt to move to a less probable point, we will sometimes reject the move, and the greater the relative drop in probability, the more likely we are to reject the new point. Thus, we will tend to stay in (and return large numbers of samples from) high-density regions of <math>\displaystyle P(x)</math>, while only occasionally visiting low-density regions. Intuitively, this is why this algorithm works and returns samples that follow the desired distribution <math>\displaystyle P(x)</math>.
Compared with an algorithm like [[adaptive rejection sampling]] that directly generates independent samples from a distribution, Metropolis–Hastings and other MCMC algorithms have a number of disadvantages:
*The samples are correlated. Even though over the long term they do correctly follow <math>\displaystyle P(x)</math>, a set of nearby samples will be correlated with each other and will not correctly reflect the distribution. This means that if we want a set of independent samples, we have to throw away the majority of samples and only take every ''n''th sample, for some value of ''n'' (typically determined by examining the auto-correlation between adjacent samples); a short sketch of this ''thinning'' step is given after this list. Auto-correlation can be reduced by increasing the ''jumping width'' (the average size of a jump, which is related to the variance of the jumping distribution), but this will also increase the likelihood of rejection of the proposed jump. Too large or too small a jumping size will lead to a ''slow-mixing'' Markov chain, i.e. a highly correlated set of samples, so that a very large number of samples will be needed to get a reasonable estimate of any desired property of the distribution.
*Although the Markov chain eventually converges to the desired distribution, the initial samples may follow a very different distribution, especially if the starting point is in a region of low density. As a result, a ''burn-in'' period is typically necessary, where an initial number of samples (e.g. the first 1,000 or so) are thrown away.
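As an illustration of the burn-in and thinning just described, the following is a small post-processing sketch in Python, applied to the <code>chain</code> array from the earlier sketch. The burn-in length, the 0.1 autocorrelation threshold, and the default thinning rule are illustrative choices, not prescribed values.

<syntaxhighlight lang="python">
import numpy as np

def postprocess(chain, burn_in=1000, thin=None):
    """Discard burn-in samples and keep only every thin-th remaining sample."""
    kept = np.asarray(chain)[burn_in:]   # discard the initial transient
    if thin is None:
        # crude heuristic: thin by the first lag at which the autocorrelation drops below 0.1
        x = kept - kept.mean()
        acf = np.correlate(x, x, mode="full")[x.size - 1:]
        acf = acf / acf[0]
        below = np.nonzero(acf < 0.1)[0]
        thin = int(below[0]) if below.size else 1
    return kept[::max(int(thin), 1)]

roughly_independent = postprocess(chain, burn_in=1000)
</syntaxhighlight>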
On the other hand, most simple [[rejection sampling]] methods suffer from the "curse of dimensionality", where the probability of rejection increases exponentially as a function of the number of dimensions. Metropolis–Hastings, along with other MCMC methods, does not have this problem to such a degree, and is thus often the only solution available when the number of dimensions of the distribution to be sampled is high. As a result, MCMC methods are often the methods of choice for producing samples from [[hierarchical Bayesian model]]s and other high-dimensional statistical models used nowadays in many disciplines.

In [[multivariate distribution|multivariate]] distributions, the classic Metropolis–Hastings algorithm as described above involves choosing a new multi-dimensional sample point. When the number of dimensions is high, finding the right jumping distribution to use can be difficult, as the different individual dimensions behave in very different ways, and the jumping width (see above) must be "just right" for all dimensions at once to avoid excessively slow mixing. An alternative approach that often works better in such situations, known as [[Gibbs sampling]], involves choosing a new sample for each dimension separately from the others, rather than choosing a sample for all dimensions at once. This is especially applicable when the multivariate distribution is composed of a set of individual [[random variable]]s in which each variable is conditioned on only a small number of other variables, as is the case in most typical [[hierarchical Bayesian model|hierarchical model]]s. The individual variables are then sampled one at a time, with each variable conditioned on the most recent values of all the others. Various algorithms can be used to choose these individual samples, depending on the exact form of the multivariate distribution: some possibilities are [[adaptive rejection sampling]], a one-dimensional Metropolis–Hastings step, or [[slice sampling]].
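A minimal sketch of this coordinate-wise strategy, often called single-component Metropolis–Hastings or Metropolis-within-Gibbs, is given below. It assumes a log-density <code>log_p</code> over a NumPy vector and per-dimension Gaussian proposal widths <code>sigmas</code>, both of which are placeholders to be chosen for the problem at hand; working with the logarithm of the density is a common way to avoid numerical underflow.

<syntaxhighlight lang="python">
import numpy as np

def metropolis_within_gibbs(log_p, x0, sigmas, n_iter=5000, rng=None):
    """Update one coordinate at a time with a 1-D Metropolis step, conditioning on the rest."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for t in range(n_iter):
        for i in range(x.size):
            proposal = x.copy()
            proposal[i] += rng.normal(0.0, sigmas[i])   # perturb only dimension i
            # accept with probability min(1, p(proposal)/p(x)), computed in log space
            if np.log(rng.random()) < log_p(proposal) - log_p(x):
                x = proposal
        samples[t] = x
    return samples
</syntaxhighlight>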
==Formal derivation==

The purpose of the Metropolis–Hastings algorithm is to generate a collection of states according to a desired distribution ''P(x)''. To accomplish this, the algorithm uses a [[Markov process]] which asymptotically reaches a unique [[Markov_chain#Steady-state_analysis_and_limiting_distributions|stationary distribution]] π(x).<ref name=Roberts_Casella/>

A Markov process is uniquely defined by its transition probabilities, the probabilities <math>P(x\rightarrow x')</math> of transitioning from any state x to any other state x'. It has a unique stationary distribution π(x) when the following two conditions are met:<ref name=Roberts_Casella/>
# '''Existence of a stationary distribution''': there must exist a stationary distribution π(x). This is guaranteed by the condition of [[Markov_chain#Reversible_Markov_chain|detailed balance]], which requires that each transition x→x' is reversible: for every pair of states x, x', the probability of being in state x and transitioning to state x' must be equal to the probability of being in state x' and transitioning to state x, that is, <math>\pi(x)P(x\rightarrow x') = \pi(x')P(x'\rightarrow x)</math>.
# '''Uniqueness of the stationary distribution''': the stationary distribution π(x) must be unique. This is guaranteed by [[Markov_Chain#Ergodicity|ergodicity]] of the Markov process, which requires that every state must (1) be aperiodic—the system does not return to the same state at fixed intervals; and (2) be positive recurrent—the expected number of steps for returning to the same state is finite.
The Metropolis–Hastings algorithm consists in designing a Markov process (by constructing transition probabilities) that fulfils the two above conditions, such that its stationary distribution π(x) is chosen to be ''P(x)''. The derivation of the algorithm starts from the condition of detailed balance:

<math>P(x)P(x\rightarrow x') = P(x')P(x'\rightarrow x)</math>

which is re-written as

<math>\frac{P(x\rightarrow x')}{P(x'\rightarrow x)} = \frac{P(x')}{P(x)}</math>.
The approach is to separate the transition into two sub-steps: the proposal and the acceptance-rejection. The '''proposal distribution''' <math>\displaystyle g(x\rightarrow x')</math> is the conditional probability of proposing a state x' given x, and the '''acceptance distribution''' <math>\displaystyle A(x\rightarrow x')</math> is the conditional probability of accepting the proposed state x'. The transition probability can be written as their product:

<math>P(x\rightarrow x') = g(x\rightarrow x') A(x\rightarrow x')</math>.

Inserting this relation into the previous equation, we have

<math>\frac{A(x\rightarrow x')}{A(x'\rightarrow x)} = \frac{P(x')}{P(x)}\frac{g(x'\rightarrow x)}{g(x\rightarrow x')}</math>.
The next step in the derivation is to choose an acceptance distribution that fulfils detailed balance. One common choice is the Metropolis choice:

<math>A(x\rightarrow x') = \min\left(1,\frac{P(x')}{P(x)}\frac{g(x'\rightarrow x)}{g(x\rightarrow x')}\right)</math>

i.e., the proposed move is always accepted when this ratio is at least 1, and otherwise it is accepted with probability equal to the ratio. This constitutes the required quantity for implementing the algorithm.
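To see that this choice indeed restores detailed balance, suppose (without loss of generality) that <math>P(x')g(x'\rightarrow x) \ge P(x)g(x\rightarrow x')</math>. Then

:<math>A(x\rightarrow x') = 1 \quad \mbox{and} \quad A(x'\rightarrow x) = \frac{P(x)\,g(x\rightarrow x')}{P(x')\,g(x'\rightarrow x)},</math>

so that

:<math>P(x)\,g(x\rightarrow x')\,A(x\rightarrow x') = P(x)\,g(x\rightarrow x') = P(x')\,g(x'\rightarrow x)\,A(x'\rightarrow x),</math>

which is the detailed balance condition for the full transition probability <math>P(x\rightarrow x') = g(x\rightarrow x')A(x\rightarrow x')</math> with stationary distribution ''P(x)''. The opposite case follows by exchanging the roles of x and x'.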
The Metropolis–Hastings algorithm thus consists of the following:
# Initialisation: pick an initial state x at random;
# randomly pick a candidate state x' according to <math>\displaystyle g(x\rightarrow x')</math>;
# accept the candidate with probability <math>\displaystyle A(x\rightarrow x')</math>; if it is not accepted, the system stays in state x (so nothing needs to be updated), otherwise the system transits to x';
# go to 2 until T candidate states have been generated;
# save the current state x, and go to 2.
The saved states are, in principle, drawn from the distribution <math>P(x)</math>, because step 4 ensures that they are de-correlated. The value of T must be chosen according to factors such as the proposal distribution; formally, it has to be of the order of the [[autocorrelation]] time of the Markov process.<ref name=Newman_Barkema/>

It is important to notice that, in a general problem, it is not clear which distribution <math>\displaystyle g(x\rightarrow x')</math> one should use; it is a free parameter of the method which has to be adjusted to the particular problem at hand.
==Step-by-step instructions==

Suppose the most recent value sampled is <math>x_t\,</math>. To follow the Metropolis–Hastings algorithm, we next draw a new proposal state <math>x'\,</math> with probability density <math>Q(x'\mid x_t)\,</math>, and calculate a value

:<math>a = a_1 a_2\,</math>

where

:<math>a_1 = \frac{P(x')}{P(x_t)} \,\!</math>

is the probability (e.g., Bayesian posterior) ratio between the proposed sample <math>x'\,</math> and the previous sample <math>x_t\,</math>, and

:<math>a_2 = \frac{Q(x_t \mid x')}{Q(x'\mid x_t)}</math>

is the ratio of the proposal density in two directions (from <math>x_t\,</math> to <math>x'\,</math> and ''vice versa''). This is equal to 1 if the proposal density is symmetric. Then the new state <math>\displaystyle x_{t+1}</math> is chosen according to the following rules.
| :<math>
| |
| \begin{matrix}
| |
| \mbox{If } a \geq 1: & \\
| |
| & x_{t+1} = x',
| |
| \end{matrix}
| |
| </math>
| |
| :<math>
| |
| \begin{matrix}
| |
| \mbox{else} & \\
| |
| & x_{t+1} = \left\{
| |
| \begin{array}{lr}
| |
| x' & \mbox{ with probability }a \\
| |
| x_t & \mbox{ with probability }1-a.
| |
| \end{array}
| |
| \right.
| |
| \end{matrix}
| |
| </math>
| |
| | |
The Markov chain is started from an arbitrary initial value <math>\displaystyle x_0</math>, and the algorithm is run for many iterations until this initial state is "forgotten". These samples, which are discarded, are known as ''burn-in''. The remaining set of accepted values of <math>x</math> represents a [[Sample (statistics)|sample]] from the distribution <math>P(x)</math>.
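The procedure above can be summarized in a short sketch. The following Python code is a minimal illustration under assumed interfaces: <code>p</code> is the (possibly unnormalized) target density, <code>q_sample(x)</code> draws a proposal from <math>Q(\cdot\mid x)</math>, and <code>q_density(y, x)</code> evaluates <math>Q(y\mid x)</math>; none of these names come from a standard library.

<syntaxhighlight lang="python">
import numpy as np

def metropolis_hastings(p, q_sample, q_density, x0, n_iter=10000, rng=None):
    """General Metropolis–Hastings step with a possibly asymmetric proposal Q."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    samples = []
    for _ in range(n_iter):
        x_new = q_sample(x)                              # draw x' ~ Q(.|x_t)
        a1 = p(x_new) / p(x)                             # target ratio P(x')/P(x_t)
        a2 = q_density(x, x_new) / q_density(x_new, x)   # proposal ratio Q(x_t|x')/Q(x'|x_t)
        a = a1 * a2
        if a >= 1 or rng.random() < a:                   # accept with probability min(1, a)
            x = x_new
        samples.append(x)
    return np.array(samples)
</syntaxhighlight>

For a symmetric proposal the factor <code>a2</code> equals 1, and the sketch reduces to the Metropolis algorithm described in the Intuition section.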
The algorithm works best if the proposal density matches the shape of the target distribution <math>\displaystyle P(x)</math>, from which direct sampling is difficult, that is <math>Q(x'\mid x_t) \approx P(x') \,\!</math>. If a Gaussian proposal density <math>\displaystyle Q</math> is used, the variance parameter <math>\displaystyle \sigma^2</math> has to be tuned during the burn-in period. This is usually done by calculating the ''acceptance rate'', which is the fraction of proposed samples that is accepted in a window of the last <math>\displaystyle N</math> samples. The desired acceptance rate depends on the target distribution; however, it has been shown theoretically that the ideal acceptance rate for a one-dimensional Gaussian distribution is approximately 50%, decreasing to approximately 23% for an <math>\displaystyle N</math>-dimensional Gaussian target distribution.<ref name=Roberts/>

If <math>\displaystyle \sigma^2</math> is too small, the chain will ''mix slowly'' (i.e., the acceptance rate will be high but successive samples will move around the space slowly, and the chain will converge only slowly to <math>\displaystyle P(x)</math>). On the other hand, if <math>\displaystyle \sigma^2</math> is too large, the acceptance rate will be very low because the proposals are likely to land in regions of much lower probability density, so <math>\displaystyle a_1</math> will be very small and again the chain will converge very slowly.
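One simple way of carrying out this tuning during burn-in is to monitor the acceptance rate over a window of recent proposals and scale the proposal width up or down towards a chosen target rate. The sketch below is a hypothetical illustration; the window size, the scaling factor, and the target rate are tuning choices, not part of the algorithm.

<syntaxhighlight lang="python">
def tune_sigma(sigma, accepted, window, target_rate=0.3, factor=1.1):
    """Adjust the proposal width from the acceptance rate over the last `window` proposals."""
    rate = accepted / window
    if rate > target_rate:    # accepting too often: proposals are too timid, widen them
        return sigma * factor
    if rate < target_rate:    # accepting too rarely: proposals overshoot, narrow them
        return sigma / factor
    return sigma
</syntaxhighlight>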
[[Image:3dRosenbrock.png|thumb|350px|The result of three [[Markov chain]]s running on the 3D [[Rosenbrock function]] using the Metropolis–Hastings algorithm. The algorithm samples from regions where the [[posterior probability]] is high, and the chains begin to mix in these regions. The approximate position of the maximum has been illuminated. Note that the red points are the ones that remain after the burn-in process; the earlier ones have been discarded.]]
==See also==
* [[Simulated annealing]]
* [[Detailed balance]]
* [[Multiple-try Metropolis]]
* [[Metropolis light transport]]
* [[Gibbs sampling]]
==References==
{{Reflist|
refs=
<ref name=Metropolis>{{cite journal
 |first1=N. |last1=Metropolis |authorlink1=Nicholas Metropolis
 |first2=A.W. |last2=Rosenbluth
 |first3=M.N. |last3=Rosenbluth |authorlink3=Marshall N. Rosenbluth
 |first4=A.H. |last4=Teller
 |first5=E. |last5=Teller |authorlink5=Edward Teller
 |title=[[Equation of State Calculations by Fast Computing Machines]]
 |journal=[[Journal of Chemical Physics]]
 |volume=21 |issue=6 |pages=1087–1092 |year=1953
 |doi=10.1063/1.1699114 |bibcode=1953JChPh..21.1087M
}}</ref>
<ref name=Rosenthal_site>{{cite web
 |title=W.K. Hastings, Statistician and Developer of the Metropolis-Hastings Algorithm
 |url=http://probability.ca/hastings/
 |first=Jeffrey |last=Rosenthal
 |date=March 2004
 |accessdate=2009-06-02
}}</ref>
<ref name=Hastings>{{cite journal
 |first=W.K. |last=Hastings
 |title=Monte Carlo Sampling Methods Using Markov Chains and Their Applications
 |journal=[[Biometrika]]
 |volume=57 |issue=1 |pages=97–109 |year=1970
 |jstor=2334940 |zbl=0219.65008 |doi=10.1093/biomet/57.1.97
}}</ref>
<ref name=Teller>Teller, Edward. ''Memoirs: A Twentieth-Century Journey in Science and Politics''. [[Perseus Publishing]], 2001, p. 328</ref>
<ref name=Rosenbluth>Rosenbluth, Marshall. [http://www.aip.org/history/ohilist/28636_1.html "Oral History Transcript"]. American Institute of Physics</ref>
<ref name=Gubernatis>{{cite journal |title=Marshall Rosenbluth and the Metropolis Algorithm |author=J.E. Gubernatis |journal=[[Physics of Plasmas]] |volume=12 |issue=5 |pages=057303 |year=2005 |doi=10.1063/1.1887186 |bibcode=2005PhPl...12e7303G}}</ref>
<ref name=Roberts>{{cite journal
 |first1=G.O. |last1=Roberts
 |first2=A. |last2=Gelman
 |first3=W.R. |last3=Gilks
 |title=Weak convergence and optimal scaling of random walk Metropolis algorithms
 |journal=[[Ann. Appl. Probab.]]
 |volume=7 |issue=1 |pages=110–120 |year=1997
 |doi=10.1214/aoap/1034625254
}}</ref>
<ref name=Roberts_Casella>{{Cite isbn | 0387212396}}</ref>
<ref name=Newman_Barkema>{{cite isbn | 0198517971}}</ref>
}}
== Further reading ==
* [[Bernd A. Berg]]. ''Markov Chain Monte Carlo Simulations and Their Statistical Analysis''. Singapore, [[World Scientific]], 2004.
* Siddhartha Chib and Edward Greenberg: "Understanding the Metropolis–Hastings Algorithm". ''[[American Statistician]]'', 49(4), 327–335, 1995.
* Bolstad, William M. (2010) ''Understanding Computational Bayesian Statistics'', [[John Wiley & Sons]] ISBN 0-470-04609-0
== External links ==
* [http://xbeta.org/wiki/show/Metropolis-Hastings+algorithm Metropolis-Hastings algorithm on xβ]
* [http://www.quantiphile.com/2010/11/01/metropolis-hastings/ Matlab implementation of Metropolis-Hastings]
{{DEFAULTSORT:Metropolis-Hastings Algorithm}}
[[Category:Monte Carlo methods]]
[[Category:Markov chain Monte Carlo]]
[[Category:Statistical algorithms]]