Scoring algorithm


In statistics, Fisher's scoring algorithm is a form of Newton's method used to solve maximum likelihood equations numerically.

Sketch of Derivation

Let $Y_1, \ldots, Y_n$ be independent and identically distributed random variables with twice-differentiable p.d.f. $f(y; \theta)$, and suppose we wish to calculate the maximum likelihood estimator (M.L.E.) $\theta^*$ of $\theta$. First, suppose we have a starting point for our algorithm $\theta_0$, and consider a Taylor expansion of the score function, $V(\theta)$, about $\theta_0$:

\[ V(\theta) \approx V(\theta_0) - \mathcal{J}(\theta_0)\,(\theta - \theta_0), \]

where

\[ \mathcal{J}(\theta_0) = -\sum_{i=1}^{n} \left. \nabla\nabla^{\top} \log f(Y_i; \theta) \right|_{\theta = \theta_0} \]

is the observed information matrix at $\theta_0$. Now, setting $\theta = \theta^*$, using that $V(\theta^*) = 0$, and rearranging gives us:

\[ \theta^* \approx \theta_0 + \mathcal{J}^{-1}(\theta_0)\, V(\theta_0). \]

We therefore use the algorithm

\[ \theta_{m+1} = \theta_m + \mathcal{J}^{-1}(\theta_m)\, V(\theta_m), \]

and under certain regularity conditions it can be shown that $\theta_m \to \theta^*$.
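As an illustration of the Newton iteration above (not part of the original article), here is a minimal Python sketch for a Poisson model, where the score $V(\theta) = \sum_i y_i/\theta - n$ and the observed information $\mathcal{J}(\theta) = \sum_i y_i/\theta^2$ have closed forms; the function name and the stopping rule are my own choices:

```python
def newton_mle_poisson(y, theta0, tol=1e-10, max_iter=100):
    """Newton iteration theta_{m+1} = theta_m + J^{-1}(theta_m) V(theta_m)
    for the rate of a Poisson sample y (assumed example, not from the article).

    V is the score and J the observed information; for the Poisson model
    both are scalar and available in closed form."""
    theta = float(theta0)
    n, s = len(y), sum(y)
    for _ in range(max_iter):
        score = s / theta - n        # V(theta) = sum(y_i)/theta - n
        info = s / theta ** 2        # J(theta) = sum(y_i)/theta^2
        step = score / info          # J^{-1}(theta) V(theta)
        theta += step
        if abs(step) < tol:          # stop when the update is negligible
            break
    return theta
```

For this model the M.L.E. is the sample mean, so the iteration should converge to it, e.g. `newton_mle_poisson([2, 4, 3, 5], 1.0)` approaches 3.5.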

Fisher scoring

In practice, $\mathcal{J}(\theta)$ is usually replaced by $\mathcal{I}(\theta) = \mathrm{E}[\mathcal{J}(\theta)]$, the Fisher information, thus giving us the Fisher scoring algorithm:

\[ \theta_{m+1} = \theta_m + \mathcal{I}^{-1}(\theta_m)\, V(\theta_m). \]
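Continuing the Poisson illustration (again an assumed example, not from the article): the expected information is $\mathcal{I}(\theta) = \mathrm{E}[\mathcal{J}(\theta)] = n/\theta$, and swapping it in for the observed information gives a Fisher scoring sketch:

```python
def fisher_scoring_poisson(y, theta0, tol=1e-10, max_iter=100):
    """Fisher scoring for the Poisson rate: same update as Newton's method,
    but with the observed information replaced by its expectation
    I(theta) = E[J(theta)] = n/theta (assumed example)."""
    theta = float(theta0)
    n, s = len(y), sum(y)
    for _ in range(max_iter):
        score = s / theta - n        # V(theta) = sum(y_i)/theta - n
        info = n / theta             # I(theta) = E[sum(Y_i)]/theta^2 = n/theta
        step = score / info          # I^{-1}(theta) V(theta)
        theta += step
        if abs(step) < tol:
            break
    return theta
```

In this particular model the Fisher scoring update simplifies to $\theta + (s/\theta - n)(\theta/n) = s/n$, so the iteration lands on the sample mean in a single step; in general the two algorithms differ only in which information matrix they invert.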

