In [[mathematical statistics]], the '''Fisher information''' (sometimes simply called '''information'''<ref>Lehmann and Casella, p. 115</ref>) is a way of measuring the amount of information that an observable random variable ''X'' carries about an unknown parameter ''θ'' upon which the probability of ''X'' depends.

Formally, it is the [[variance]] of the [[score (statistics)|score]], or the [[expected value]] of the [[observed information]]. In [[Bayesian statistics]], the asymptotic distribution of the [[posterior distribution|posterior]] [[mode (statistics)|mode]] depends on the Fisher information and not on the [[prior distribution|prior]] (according to the [[Bernstein–von Mises theorem]], which was anticipated by [[Laplace]] for [[exponential families]]).<ref>[[Lucien Le Cam]] (1986) ''Asymptotic Methods in Statistical Decision Theory'': Pages 336 and 618–621 (von Mises and Bernstein).</ref> The role of the Fisher information in the asymptotic theory of [[maximum-likelihood estimation]] was emphasized by the statistician [[Ronald Fisher|R. A. Fisher]] (following some initial results by [[Francis Ysidro Edgeworth|F. Y. Edgeworth]]). The Fisher information is also used in the calculation of the [[Jeffreys prior]], which is used in Bayesian statistics.

The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates. It can also be used in the formulation of test statistics, such as the [[Wald test]].

==History==

The Fisher information was discussed by several early statisticians, notably [[Francis Ysidro Edgeworth|F. Y. Edgeworth]].<ref>Savage (1976)</ref> For example, Savage<ref>Savage (1976), page 156</ref> says: "In it [Fisher information], he [Fisher] was to some extent anticipated (Edgeworth 1908–9 esp. 502, 507–8, 662, 677–8, 82–5 and references he [Edgeworth] cites including Pearson and Filon 1898 [. . .])." There are a number of early historical sources<ref>Edgeworth (Sept. 1908, Dec. 1908)</ref> and a number of reviews of this early work.<ref>Pratt (1976)</ref><ref>Stigler (1978, 1986, 1999)</ref><ref>Hald (1998, 1999)</ref>

==Definition==

The Fisher information is a way of measuring the amount of information that an observable [[random variable]] ''X'' carries about an unknown [[parameter]] ''θ'' upon which the probability of ''X'' depends. The probability function for ''X'', which is also the [[likelihood function]] for ''θ'', is a function ''f''(''X''; ''θ''); it is the [[probability mass function|probability mass]] (or [[probability density function|probability density]]) of the random variable ''X'' conditional on the value of ''θ''. The partial derivative with respect to ''θ'' of the [[natural logarithm]] of the likelihood function is called the [[score (statistics)|score]].

Under certain regularity conditions,<ref name=SubaRao>{{cite web|last=Suba Rao|title=Lectures on statistical inference|url=http://www.stat.tamu.edu/~suhasini/teaching613/inference.pdf}}</ref> it can be shown that the first [[moment (mathematics)|moment]] of the score (that is, its [[expected value]]) is 0:

:<math>
\operatorname{E} \left[\left. \frac{\partial}{\partial\theta} \log f(X;\theta)\right|\theta \right]
=
\operatorname{E} \left[\left. \frac{\frac{\partial}{\partial\theta} f(X;\theta)}{f(X; \theta)}\right|\theta \right]
=
\int \frac{\frac{\partial}{\partial\theta} f(x;\theta)}{f(x; \theta)} f(x;\theta)\; dx
=
\int \frac{\partial}{\partial\theta} f(x;\theta)\; dx
=
\frac{\partial}{\partial\theta} \int f(x; \theta)\; dx
=
\frac{\partial}{\partial\theta} \; 1 = 0.
</math>

The second moment is called the Fisher information:

:<math>
\mathcal{I}(\theta)=\operatorname{E} \left[\left. \left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2\right|\theta \right] = \int \left(\frac{\partial}{\partial\theta} \log f(x;\theta)\right)^2 f(x; \theta)\; dx\,,
</math>

where, for any given value of ''θ'', the expression E[...|''θ''] denotes the conditional [[expected value|expectation]] over values for ''X'' with respect to the probability function ''f''(''x''; ''θ'') given ''θ''. Note that <math>0 \leq \mathcal{I}(\theta) < \infty</math>. A random variable carrying high Fisher information implies that the absolute value of the score is often high. The Fisher information is not a function of a particular observation, as the random variable ''X'' has been averaged out.

Since the [[expected value|expectation]] of the [[score (statistics)|score]] is zero, the Fisher information is also the [[variance]] of the score.

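This definition can be illustrated numerically. The sketch below is an illustration only (it assumes the [[NumPy]] library; the model ''N''(''θ'', ''σ''²) with known ''σ'', the seed, and the sample size are arbitrary choices): it estimates the Fisher information as the variance of the score, which for this model equals 1/''σ''².

<syntaxhighlight lang="python">
import numpy as np

# Monte Carlo sketch: for X ~ N(theta, sigma^2) with known sigma, the score is
# d/dtheta log f(X; theta) = (X - theta) / sigma^2, and I(theta) = 1 / sigma^2.
rng = np.random.default_rng(0)
theta, sigma, n = 2.0, 1.5, 1_000_000

x = rng.normal(theta, sigma, size=n)       # draws from f(x; theta)
score = (x - theta) / sigma**2             # score of each observation

print(score.mean())                        # close to 0: the score has zero mean
print(score.var(), 1 / sigma**2)           # both close to 0.444: the Fisher information
</syntaxhighlight>
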
If {{nowrap|log ''f''(''x''; ''θ'')}} is twice differentiable with respect to ''θ'', and under certain regularity conditions, then the Fisher information may also be written as<ref>Lehmann and Casella, eq. (2.5.16).</ref>

:<math>
\mathcal{I}(\theta) = - \operatorname{E} \left[\left. \frac{\partial^2}{\partial\theta^2} \log f(X;\theta)\right|\theta \right]\,,
</math>

since

:<math>
\frac{\partial^2}{\partial\theta^2} \log f(X;\theta)
=
\frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)}
\;-\;
\left( \frac{\frac{\partial}{\partial\theta} f(X;\theta)}{f(X; \theta)} \right)^2
=
\frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)}
\;-\;
\left( \frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2
</math>

and

:<math>
\operatorname{E} \left[\left. \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)}\right|\theta \right]
=
\int \frac{\partial^2}{\partial\theta^2} f(x;\theta)\; dx
=
\frac{\partial^2}{\partial\theta^2} \int f(x; \theta)\; dx
=
\frac{\partial^2}{\partial\theta^2} \; 1 = 0.
</math>

Thus, the Fisher information is the negative of the expectation of the second [[derivative]] with respect to ''θ'' of the [[natural logarithm]] of ''f''. Information may be seen to be a measure of the "curvature" of the [[support curve]] near the [[maximum likelihood|maximum likelihood estimate]] of ''θ''. A "blunt" support curve (one with a shallow maximum) would have a small negative expected second derivative, and thus low information, while a sharp one would have a large negative expected second derivative and thus high information.

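As a concrete check that the two forms agree, consider an exponential model with rate ''θ'' (a sketch only, assuming NumPy; the rate, seed, and sample size are arbitrary): here log ''f''(''x''; ''θ'') = log ''θ'' − ''θx'', the second derivative is −1/''θ''² identically, and both expressions give ''I''(''θ'') = 1/''θ''².

<syntaxhighlight lang="python">
import numpy as np

# Exponential(rate=theta) model: log f(x; theta) = log(theta) - theta * x.
# The second derivative in theta is -1/theta^2, so -E[d^2 log f / dtheta^2] = 1/theta^2,
# which matches the variance of the score 1/theta - x.
rng = np.random.default_rng(1)
theta, n = 0.7, 500_000
x = rng.exponential(scale=1 / theta, size=n)

neg_expected_second_deriv = 1 / theta**2       # exact, since the second derivative is constant
score = 1 / theta - x                          # d/dtheta log f(x; theta)

print(neg_expected_second_deriv, score.var())  # both close to 2.04
</syntaxhighlight>
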
Information is additive, in that the information yielded by two [[statistical independence|independent]] experiments is the sum of the information from each experiment separately:

:<math> \mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_Y(\theta). </math>

This result follows from the elementary fact that the joint score is the sum of the individual scores and, for independent random variables, the variance of a sum is the sum of the variances. In particular, the information in a random sample of size ''n'' is ''n'' times that in a sample of size 1, when observations are independent and identically distributed.

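For instance (a sketch with an arbitrary model, seed, and sample size, assuming NumPy), the score of an i.i.d. normal sample is the sum of the per-observation scores, so its variance is ''n'' times the single-observation information:

<syntaxhighlight lang="python">
import numpy as np

# Additivity: for n i.i.d. draws from N(theta, sigma^2), the joint score is the sum of
# the individual scores, so its variance is n * (1 / sigma^2).
rng = np.random.default_rng(2)
theta, sigma, n, reps = 2.0, 1.5, 10, 200_000

x = rng.normal(theta, sigma, size=(reps, n))
joint_score = ((x - theta) / sigma**2).sum(axis=1)   # score of each simulated sample

print(joint_score.var(), n / sigma**2)               # both close to 4.44
</syntaxhighlight>
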
The information provided by a [[sufficiency (statistics)|sufficient statistic]] is the same as that of the sample ''X''. This may be seen by using [[Sufficient statistic#Fisher-Neyman.27s factorization theorem|Neyman's factorization criterion]] for a sufficient statistic. If ''T''(''X'') is sufficient for ''θ'', then

:<math> f(X;\theta) = g(T(X), \theta) h(X) \!</math>

for some functions ''g'' and ''h''. See [[sufficiency (statistics)|sufficient statistic]] for a more detailed explanation. The equality of information then follows from the fact that

:<math> \frac{\partial}{\partial\theta} \log \left[f(X ;\theta)\right]
= \frac{\partial}{\partial\theta} \log \left[g(T(X);\theta)\right], </math>

which follows from the definition of Fisher information and the independence of ''h''(''X'') from ''θ''. More generally, if {{nowrap|''T {{=}} t''(''X'')}} is a [[statistic]], then

:<math>
\mathcal{I}_T(\theta)
\leq
\mathcal{I}_X(\theta)
</math>

with equality [[if and only if]] ''T'' is a [[sufficient statistic]].

===Informal derivation of the Cramér–Rao bound===
The [[Cramér–Rao bound]] states that the inverse of the Fisher information is a lower bound on the variance of any [[unbiased estimator]] of ''θ''. Van Trees (1968) and Frieden (2004) provide the following informal derivation of the Cramér–Rao bound, which illustrates the role of the Fisher information.

Consider an [[unbiased estimator]] <math>\hat\theta(X)</math>. Mathematically, unbiasedness means that

:<math>
\operatorname{E}\left[ \left. \hat\theta(X) - \theta \right| \theta \right]
= \int \left[ \hat\theta(x) - \theta \right] \cdot f(x ;\theta) \, dx = 0.
</math>

The [[likelihood function]] ''f''(''X''; ''θ'') describes the probability that we observe a given sample ''x'' ''given'' a known value of ''θ''. If ''f'' is sharply peaked with respect to changes in ''θ'', it is easy to intuit the "correct" value of ''θ'' given the data, and hence the data contain a lot of information about the parameter. If the likelihood ''f'' is flat and spread out, then it would take many samples of ''X'' to estimate the actual "true" value of ''θ'', and so the data contain much less information about the parameter.

Now, we differentiate the unbiasedness condition above to get

:<math>
\frac{\partial}{\partial\theta} \int \left[ \hat\theta(x) - \theta \right] \cdot f(x ;\theta) \, dx
= \int \left(\hat\theta-\theta\right) \frac{\partial f}{\partial\theta} \, dx - \int f \, dx = 0.
</math>

We now make use of two facts. The first is that the likelihood ''f'' is just the probability of the data given the parameter. Since it is a probability, it must be normalized, implying that

:<math>\int f \, dx = 1.</math>

Second, we know from basic calculus that

:<math>\frac{\partial f}{\partial\theta} = f \, \frac{\partial \log f}{\partial\theta}.</math>

Using these two facts in the above lets us write

:<math>
\int \left(\hat\theta-\theta\right) f \, \frac{\partial \log f}{\partial\theta} \, dx = 1.
</math>

Factoring the integrand gives

:<math>
\int \left(\left(\hat\theta-\theta\right) \sqrt{f} \right) \left( \sqrt{f} \, \frac{\partial \log f}{\partial\theta} \right) \, dx = 1.
</math>

If we square the equation, the [[Cauchy–Schwarz inequality]] lets us write

:<math>
\left[ \int \left(\hat\theta - \theta\right)^2 f \, dx \right] \cdot \left[ \int \left( \frac{\partial \log f}{\partial\theta} \right)^2 f \, dx \right] \geq 1.
</math>

The right-most factor is defined to be the Fisher information

:<math>
\mathcal{I}\left(\theta\right) = \int \left( \frac{\partial \log f}{\partial\theta} \right)^2 f \, dx.
</math>

The left-most factor is the expected mean-squared error of the estimator <math>\hat\theta</math>, since

:<math>
\operatorname{E}\left[ \left. \left( \hat\theta\left(X\right) - \theta \right)^2 \right| \theta \right] = \int \left(\hat\theta - \theta\right)^2 f \, dx.
</math>

Notice that the inequality tells us that, fundamentally,

:<math>
\operatorname{Var}\left[\hat\theta\right] \, \geq \, \frac{1}{\mathcal{I}\left(\theta\right)}.
</math>

In other words, the precision to which we can estimate ''θ'' is fundamentally limited by the Fisher information of the likelihood function.

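As a numerical illustration (a sketch only, assuming NumPy; the model and sample size are arbitrary), the sample mean of ''n'' draws from ''N''(''θ'', ''σ''²) is unbiased for ''θ'' and its variance equals the bound 1/''I''(''θ'') = ''σ''²/''n'', so the Cramér–Rao bound is attained in this case:

<syntaxhighlight lang="python">
import numpy as np

# Compare the variance of the sample mean (an unbiased estimator of theta) with the
# Cramer-Rao bound sigma^2 / n for the model N(theta, sigma^2) with known sigma.
rng = np.random.default_rng(3)
theta, sigma, n, reps = 2.0, 1.5, 25, 100_000

x = rng.normal(theta, sigma, size=(reps, n))
theta_hat = x.mean(axis=1)      # one estimate per simulated sample

print(theta_hat.var())          # close to 0.09
print(sigma**2 / n)             # Cramer-Rao bound: 0.09
</syntaxhighlight>
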
===Single-parameter Bernoulli experiment===
A [[Bernoulli trial]] is a random variable with two possible outcomes, "success" and "failure", with success having a probability of ''θ''. The outcome can be thought of as determined by a coin toss, with the probability of heads being ''θ'' and the probability of tails being {{nowrap|1 − ''θ''}}.

The Fisher information contained in ''n'' independent [[Bernoulli trial]]s may be calculated as follows. In the following, ''A'' represents the number of successes, ''B'' the number of failures, and {{nowrap|''n {{=}} A + B''}} is the total number of trials.

:<math>
\begin{align}
\mathcal{I}(\theta)
& = -\operatorname{E} \left[ \left. \frac{\partial^2}{\partial\theta^2} \log(f(A;\theta)) \right| \theta \right] \qquad (1) \\
& = -\operatorname{E} \left[ \left. \frac{\partial^2}{\partial\theta^2} \log \left( \theta^A(1-\theta)^B\frac{(A+B)!}{A!B!} \right) \right| \theta \right] \qquad (2) \\
& = -\operatorname{E} \left[ \left. \frac{\partial^2}{\partial\theta^2} \left( A \log (\theta) + B \log(1-\theta) \right) \right| \theta \right] \qquad (3) \\
& = -\operatorname{E} \left[ \left. \frac{\partial}{\partial\theta} \left( \frac{A}{\theta} - \frac{B}{1-\theta} \right) \right| \theta \right] \qquad (4) \\
& = +\operatorname{E} \left[ \left. \frac{A}{\theta^2} + \frac{B}{(1-\theta)^2} \right| \theta \right] \qquad (5) \\
& = \frac{n\theta}{\theta^2} + \frac{n(1-\theta)}{(1-\theta)^2} \qquad (6) \\
& \text{since the expected value of }A\text{ given }\theta\text{ is }n\theta,\text{ etc.} \\
& = \frac{n}{\theta(1-\theta)} \qquad (7)
\end{align}
</math>

(1) defines Fisher information.
(2) invokes the fact that the information in a [[sufficient statistic]] is the same as that of the sample itself.
(3) expands the [[natural logarithm]] term and drops a constant.
(4) and (5) differentiate with respect to ''θ''.
(6) replaces ''A'' and ''B'' with their expectations. (7) is algebra.

The end result, namely,
:<math>\mathcal{I}(\theta) = \frac{n}{\theta(1-\theta)},</math>

is the reciprocal of the [[variance]] of the mean number of successes in ''n'' [[Bernoulli trial]]s, as expected (see the last sentence of the preceding section).

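The closed form can be checked by simulation; the sketch below (assuming NumPy; ''θ'', ''n'', and the seed are arbitrary choices) estimates the variance of the score ''A''/''θ'' − ''B''/(1 − ''θ'') and compares it with ''n''/(''θ''(1 − ''θ'')):

<syntaxhighlight lang="python">
import numpy as np

# Bernoulli check: with A successes and B = n - A failures, the score is
# A/theta - B/(1-theta); its variance should equal n / (theta * (1 - theta)).
rng = np.random.default_rng(4)
theta, n, reps = 0.3, 20, 500_000

A = rng.binomial(n, theta, size=reps)
B = n - A
score = A / theta - B / (1 - theta)

print(score.mean())                             # close to 0
print(score.var(), n / (theta * (1 - theta)))   # both close to 95.24
</syntaxhighlight>
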
== Matrix form ==
When there are ''N'' parameters, so that θ is an {{nowrap|''N'' × 1}} [[column vector|vector]] <math>\theta = \begin{bmatrix} \theta_{1}, \theta_{2}, \dots , \theta_{N} \end{bmatrix}^{\mathrm T},</math> then the Fisher information takes the form of an {{nowrap|''N'' × ''N''}} [[matrix (mathematics)|matrix]], the Fisher information matrix (FIM), with typical element

:<math>
{\left(\mathcal{I} \left(\theta \right) \right)}_{i, j}
=
\operatorname{E}
\left[\left.
\left(\frac{\partial}{\partial\theta_i} \log f(X;\theta)\right)
\left(\frac{\partial}{\partial\theta_j} \log f(X;\theta)\right)
\right|\theta\right].
</math>

The FIM is an {{nowrap|''N'' × ''N''}} [[positive semidefinite matrix|positive semidefinite]] [[symmetric matrix]], defining a [[Riemannian metric]] on the ''N''-[[dimension]]al [[parameter space]], thus connecting Fisher information to [[differential geometry]]. In that context, this metric is known as the [[Fisher information metric]], and the topic is called [[information geometry]].

Under certain regularity conditions, the Fisher information matrix may also be written as

:<math>
{\left(\mathcal{I} \left(\theta \right) \right)}_{i, j}
=
- \operatorname{E}
\left[\left.
\frac{\partial^2}{\partial\theta_i \, \partial\theta_j} \log f(X;\theta)
\right|\theta\right]\,.
</math>

The metric is interesting in several ways: it can be derived as the [[Hessian matrix|Hessian]] of the [[relative entropy]]; it can be understood as a metric induced from the [[Euclidean metric]], after an appropriate change of variable; and in its complex-valued form, it is the [[Fubini–Study metric]].

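As an illustration (a sketch only, assuming NumPy; the model, parameter values, and seed are arbitrary), the FIM of a normal model parametrized by ''θ'' = (''μ'', ''σ'') can be estimated as the expected outer product of the score vector; the exact result is diag(1/''σ''², 2/''σ''²).

<syntaxhighlight lang="python">
import numpy as np

# Estimate the 2 x 2 FIM of N(mu, sigma^2) with theta = (mu, sigma) as the average
# outer product of the score vector; the exact value is diag(1/sigma^2, 2/sigma^2).
rng = np.random.default_rng(5)
mu, sigma, n = 1.0, 2.0, 1_000_000
x = rng.normal(mu, sigma, size=n)

score = np.stack([(x - mu) / sigma**2,                    # d log f / d mu
                  ((x - mu)**2 - sigma**2) / sigma**3])   # d log f / d sigma

fim = score @ score.T / n                                 # 2 x 2 matrix, E[s s^T]
print(fim)                                                # approx [[0.25, 0], [0, 0.5]]
</syntaxhighlight>
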
=== Orthogonal parameters ===
We say that two parameters ''θ<sub>i</sub>'' and ''θ<sub>j</sub>'' are orthogonal if the element of the ''i''th row and ''j''th column of the Fisher information matrix is zero. Orthogonal parameters are easy to deal with in the sense that their [[maximum likelihood|maximum likelihood estimates]] are independent and can be calculated separately. When dealing with research problems, it is very common for the researcher to invest some time searching for an orthogonal parametrization of the densities involved in the problem.{{Citation needed|date=August 2010}}

=== Multivariate normal distribution ===
The FIM for an ''N''-variate [[multivariate normal distribution]] has a special form. Let <math>\mu(\theta) = \begin{bmatrix} \mu_{1}(\theta), \mu_{2}(\theta), \dots , \mu_{N}(\theta) \end{bmatrix}^\mathrm{T},</math> and let Σ(''θ'') be the [[covariance matrix]], where ''θ'' is a vector of ''K'' parameters. Then the typical element <math>\mathcal{I}_{m,n}</math>, 0 ≤ ''m'', ''n'' < ''K'', of the FIM for {{nowrap|''X'' ∼ ''N''(''μ''(''θ''), Σ(''θ''))}} is:

:<math>
\mathcal{I}_{m,n}
=
\frac{\partial \mu^\mathrm{T}}{\partial \theta_m}
\Sigma^{-1}
\frac{\partial \mu}{\partial \theta_n}
+
\frac{1}{2}
\operatorname{tr}
\left(
\Sigma^{-1}
\frac{\partial \Sigma}{\partial \theta_m}
\Sigma^{-1}
\frac{\partial \Sigma}{\partial \theta_n}
\right),
</math>

where <math>(..)^\mathrm{T}</math> denotes the [[transpose]] of a vector, tr(..) denotes the [[trace (matrix)|trace]] of a [[square matrix]], and:

*<math>
\frac{\partial \mu}{\partial \theta_m}
=
\begin{bmatrix}
\frac{\partial \mu_1}{\partial \theta_m} &
\frac{\partial \mu_2}{\partial \theta_m} &
\cdots &
\frac{\partial \mu_N}{\partial \theta_m}
\end{bmatrix}^\mathrm{T};
</math>

*<math>
\frac{\partial \Sigma}{\partial \theta_m}
=
\begin{bmatrix}
\frac{\partial \Sigma_{1,1}}{\partial \theta_m} &
\frac{\partial \Sigma_{1,2}}{\partial \theta_m} &
\cdots &
\frac{\partial \Sigma_{1,N}}{\partial \theta_m} \\ \\
\frac{\partial \Sigma_{2,1}}{\partial \theta_m} &
\frac{\partial \Sigma_{2,2}}{\partial \theta_m} &
\cdots &
\frac{\partial \Sigma_{2,N}}{\partial \theta_m} \\ \\
\vdots & \vdots & \ddots & \vdots \\ \\
\frac{\partial \Sigma_{N,1}}{\partial \theta_m} &
\frac{\partial \Sigma_{N,2}}{\partial \theta_m} &
\cdots &
\frac{\partial \Sigma_{N,N}}{\partial \theta_m}
\end{bmatrix}.
</math>

Note that a special, but very common, case is the one where {{nowrap|Σ(''θ'') {{=}} Σ}}, a constant. Then

:<math>
\mathcal{I}_{m,n}
=
\frac{\partial \mu^\mathrm{T}}{\partial \theta_m}
\Sigma^{-1}
\frac{\partial \mu}{\partial \theta_n}.\
</math>

In this case the Fisher information matrix may be identified with the coefficient matrix of the normal equations of [[least squares]] estimation theory.

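For example (a sketch with a hypothetical design matrix and covariance, assuming NumPy), if the mean is linear in the parameters, μ(''θ'') = ''Aθ'', then ∂μ/∂θ<sub>''m''</sub> is the ''m''th column of ''A'' and the FIM reduces to ''A''<sup>T</sup>Σ<sup>−1</sup>''A'':

<syntaxhighlight lang="python">
import numpy as np

# Constant-covariance case with a linear mean mu(theta) = A @ theta:
# I_{m,n} = (dmu/dtheta_m)^T Sigma^{-1} (dmu/dtheta_n), i.e. the FIM is A^T Sigma^{-1} A.
A = np.array([[1.0, 0.5],
              [0.0, 1.0],
              [2.0, 1.0]])            # hypothetical 3 x 2 "design" matrix
Sigma = np.diag([1.0, 2.0, 0.5])      # hypothetical covariance of X

fim = A.T @ np.linalg.inv(Sigma) @ A
print(fim)                            # [[9.  , 4.5 ], [4.5 , 2.75]]
</syntaxhighlight>
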
Another special case occurs when the mean and covariance depend on two different vector parameters, say, β and θ. This is especially popular in the analysis of spatial data, which uses a linear model with correlated residuals. Here

:<math>\mathcal{I}\left( \beta ,\theta \right)=\operatorname{diag}\left( \mathcal{I}\left( \beta \right),\mathcal{I}\left( \theta \right) \right),</math>

where

:<math>\mathcal{I}\left( \beta \right)_{m,n}=\frac{\partial \mu^\mathrm{T}}{\partial \beta_m}\Sigma^{-1}\frac{\partial \mu}{\partial \beta_n},</math>

:<math>\mathcal{I}\left( \theta \right)_{m,n}=\frac{1}{2}\operatorname{tr}\left( \Sigma^{-1}\frac{\partial \Sigma}{\partial \theta_m}\Sigma^{-1}\frac{\partial \Sigma}{\partial \theta_n} \right).</math>

The proof of this special case is given in the literature.<ref>Maximum likelihood estimation of models for residual covariance in spatial regression, K. V. Mardia and R. J. Marshall, Biometrika (1984), 71, 1, pp. 135–46</ref> Using the same technique as in that paper, it is not difficult to prove the original result.

==Properties==

===Reparametrization===
The Fisher information depends on the parametrization of the problem. If θ and η are two scalar parametrizations of an estimation problem, and θ is a [[continuously differentiable]] function of η, then
:<math>{\mathcal I}_\eta(\eta) = {\mathcal I}_\theta(\theta(\eta)) \left( \frac{{\mathrm d} \theta}{{\mathrm d} \eta} \right)^2,</math>
where <math>{\mathcal I}_\eta</math> and <math>{\mathcal I}_\theta</math> are the Fisher information measures of η and θ, respectively.<ref>Lehmann and Casella, eq. (2.5.11).</ref>

In the vector case, suppose <math>{\boldsymbol \theta}</math> and <math>{\boldsymbol \eta}</math> are ''k''-vectors which parametrize an estimation problem, and suppose that <math>{\boldsymbol \theta}</math> is a continuously differentiable function of <math>{\boldsymbol \eta}</math>; then<ref>Lehmann and Casella, eq. (2.6.16)</ref>
:<math>{\mathcal I}_{\boldsymbol \eta}({\boldsymbol \eta}) = {\boldsymbol J}^{\mathrm T} {\mathcal I}_{\boldsymbol \theta} ({\boldsymbol \theta}({\boldsymbol \eta})) {\boldsymbol J},
</math>
where the (''i'', ''j'')th element of the ''k'' × ''k'' [[Jacobian matrix]] <math>\boldsymbol J</math> is defined by
:<math>J_{ij} = \frac{\partial \theta_i}{\partial \eta_j}\,,</math>
and where <math>{\boldsymbol J}^{\mathrm T}</math> is the matrix transpose of <math>{\boldsymbol J}</math>.

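For example (a sketch; the value of ''η'' is arbitrary), take a Bernoulli parameter ''θ'' reparametrized through the logistic map ''θ'' = 1/(1 + e<sup>−''η''</sup>): since ''I''<sub>''θ''</sub>(''θ'') = 1/(''θ''(1 − ''θ'')) and d''θ''/d''η'' = ''θ''(1 − ''θ''), the formula gives ''I''<sub>''η''</sub>(''η'') = ''θ''(1 − ''θ'').

<syntaxhighlight lang="python">
import math

# Reparametrization check: Bernoulli parameter theta = logistic(eta).
eta = 0.7
theta = 1.0 / (1.0 + math.exp(-eta))
dtheta_deta = theta * (1.0 - theta)

I_theta = 1.0 / (theta * (1.0 - theta))
I_eta = I_theta * dtheta_deta**2

print(I_eta, theta * (1.0 - theta))   # both approx 0.2217
</syntaxhighlight>
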
In [[information geometry]], this is seen as a change of coordinates on a [[Riemannian manifold]], and the intrinsic properties of curvature are unchanged under different parametrizations. In general, the Fisher information matrix provides a Riemannian metric (more precisely, the Fisher–Rao metric) for the manifold of thermodynamic states, and can be used as an information-geometric complexity measure for a classification of [[phase transitions]]; e.g., the scalar curvature of the thermodynamic metric tensor diverges at (and only at) a phase transition point.<ref>W. Janke, D. A. Johnston, and R. Kenna, Physica A 336, 181 (2004).</ref>

In the thermodynamic context, the Fisher information matrix is directly related to the rate of change in the corresponding [[Order parameter#Order parameters|order parameters]].<ref>M. Prokopenko, J. T. Lizier, O. Obst, and X. R. Wang, Relating Fisher information to order parameters, Physical Review E, 84, 041116, 2011.</ref> In particular, such relations identify second-order phase transitions via divergences of individual elements of the Fisher information matrix.

==Applications==

===Optimal design of experiments===
Fisher information is widely used in [[Optimal design|optimal experimental design]]. Because of the reciprocity of estimator variance and Fisher information, '''''minimizing'' the ''variance''''' corresponds to '''''maximizing'' the ''information'''''.

When the [[Linear model|linear]] (or [[nonlinear regression|linearized]]) [[statistical model]] has several [[parameter]]s, the [[Expected value|mean]] of the parameter estimator is a [[column vector|vector]] and its [[covariance matrix|variance]] is a [[Matrix (mathematics)|matrix]]. The [[inverse matrix]] of the variance matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated. Using [[statistical theory]], statisticians compress the information matrix using real-valued [[summary statistics]]; being real-valued functions, these "information criteria" can be maximized.

Traditionally, statisticians have evaluated estimators and designs by considering some [[summary statistics|summary statistic]] of the covariance matrix (of a [[Expected value|mean]]-[[unbiased]] [[estimator]]), usually with positive real values (like the [[determinant]] or [[matrix trace]]). Working with positive real numbers brings several advantages: if the estimator of a single parameter has a positive variance, then the variance and the Fisher information are both positive real numbers, and hence they are members of the convex cone of nonnegative real numbers (whose nonzero members have reciprocals in this same cone). For several parameters, the covariance matrices and information matrices are elements of the convex cone of nonnegative-definite symmetric matrices in a [[partial order|partially]] [[ordered vector space]], under the [[Charles Loewner|Loewner]] (Löwner) order. This cone is closed under matrix addition, under matrix inversion, and under multiplication by positive real numbers. An exposition of matrix theory and the Loewner order appears in Pukelsheim.<ref>Friedrich Pukelsheim, ''Optimal Design of Experiments'', 1993</ref>

The traditional optimality criteria are the [[information]] matrix's [[Invariant theory|invariants]]; algebraically, the traditional optimality criteria are [[Functional (mathematics)|functionals]] of the [[eigenvalue]]s of the (Fisher) information matrix: see [[optimal design]].

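As a small illustration (a sketch with a hypothetical three-run design, assuming NumPy), for a homoscedastic linear model the information matrix is proportional to ''X''<sup>T</sup>''X'', and the classical criteria are real-valued functionals of it, such as the determinant (D-optimality) or the trace of its inverse (A-optimality):

<syntaxhighlight lang="python">
import numpy as np

# Information matrix of a homoscedastic linear model y = X b + noise is proportional
# to X^T X; D- and A-criteria summarize it with a single real number.
X = np.array([[1.0, -1.0],
              [1.0,  0.0],
              [1.0,  1.0]])           # hypothetical design: intercept plus one factor

M = X.T @ X                           # information matrix (up to the noise variance)
print(np.linalg.det(M))               # D-criterion: 6.0
print(np.trace(np.linalg.inv(M)))     # A-criterion: approx 0.833
</syntaxhighlight>
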
===Jeffreys prior in Bayesian statistics===
In [[Bayesian statistics]], the Fisher information is used to calculate the [[Jeffreys prior]], which is a standard, non-informative prior for continuous distribution parameters.<ref>Bayesian theory, Jose M. Bernardo and Adrian F. M. Smith, John Wiley & Sons, 1994</ref>

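For example (a sketch), for a Bernoulli parameter the Jeffreys prior is proportional to √''I''(''θ'') = 1/√(''θ''(1 − ''θ'')), i.e. a Beta(1/2, 1/2) density:

<syntaxhighlight lang="python">
import math

# Jeffreys prior for a Bernoulli parameter: proportional to sqrt(I(theta)), i.e.
# theta^(-1/2) * (1 - theta)^(-1/2), the (unnormalized) Beta(1/2, 1/2) density.
def jeffreys_unnormalized(theta):
    return 1.0 / math.sqrt(theta * (1.0 - theta))

print(jeffreys_unnormalized(0.5))    # 2.0
print(jeffreys_unnormalized(0.01))   # approx 10.05: mass concentrates near 0 and 1
</syntaxhighlight>
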
==Relation to relative entropy==
{{See also|Fisher information metric}}
Fisher information is related to [[relative entropy]].<ref>[http://books.google.com/books?id=gqI-pAP2JZ8C&lpg=PA87&vq=kullback%20information&pg=PA87 Gourieroux (1995), page 87]</ref> Consider a family of probability distributions <math>f(x; \theta)</math>, where <math>\theta</math> is a parameter which lies in a range of values. Then the relative entropy, or [[Kullback–Leibler divergence]], between two distributions in the family can be written as

:<math>
D(\theta'||\theta) = \int f(x; \theta)\log\frac{f(x;\theta)}{f(x; \theta')}\, dx,
</math>

and the Fisher information is the second derivative of this divergence with respect to <math>\theta'</math>, evaluated at <math>\theta'=\theta</math>:

:<math>
\mathcal{I}(\theta) = \left(\frac{\partial^2}{\partial\theta'^2}D(\theta'||\theta)\right)_{\theta'=\theta}.
</math>

If we consider <math>\theta</math> fixed, the relative entropy between two distributions of the same family is minimized at <math>\theta'=\theta</math>. For <math>\theta'</math> close to <math>\theta</math>, one may expand the previous expression in a series up to second order:

:<math>
D(\theta'||\theta) = \frac{1}{2}(\theta'-\theta)^2\underbrace{\left(\frac{\partial^2}{\partial\theta'^2}D(\theta'||\theta)\right)_{\theta'=\theta}}_{\text{Fisher information}}+\cdots
</math>

Thus the Fisher information represents the [[curvature]] of the relative entropy.

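The quadratic approximation can be checked directly; the sketch below (parameter values arbitrary) compares the Kullback–Leibler divergence between two nearby Bernoulli distributions with ½''I''(''θ'')(''θ''′ − ''θ'')², where ''I''(''θ'') = 1/(''θ''(1 − ''θ'')):

<syntaxhighlight lang="python">
import math

# D(theta' || theta) for Bernoulli, as defined above (expectation under theta),
# compared with the second-order approximation 0.5 * I(theta) * (theta' - theta)^2.
theta, delta = 0.3, 1e-3
theta_p = theta + delta

kl = theta * math.log(theta / theta_p) + (1 - theta) * math.log((1 - theta) / (1 - theta_p))
approx = 0.5 * delta**2 / (theta * (1 - theta))

print(kl, approx)   # both approx 2.4e-6
</syntaxhighlight>
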
==Distinction from the Hessian of the entropy==
In certain cases, the Fisher information matrix is the negative of the Hessian of the [[Shannon entropy]].{{citation needed|date=February 2012}}{{clarify|reason=other cases? |date=February 2012}} The cases where this explicitly holds are given below. A distribution's Shannon entropy

:<math>\mathcal{H} = - \int f(X; \theta) \log f(X; \theta)\, dX</math>

has as the negative of the <math>(i,j)</math> entry of its Hessian:

:<math>-\frac{\partial^2}{\partial\theta_i \partial\theta_j} \mathcal{H}
= \int \left[ \frac{\partial^2 f(X; \theta)}{\partial\theta_i \, \partial\theta_j} \left(1 + \log f(X; \theta) \right) + \frac{1}{f(X; \theta)} \frac{\partial f(X; \theta)}{\partial\theta_i} \frac{\partial f(X; \theta)}{\partial\theta_j} \right] dX\,.</math>

In contrast, the <math>(i,j)</math> entry of the Fisher information matrix is

:<math>\mathcal{I}_{ij}(\theta)
= \int f(X; \theta) \frac{\partial \log f(X; \theta)}{\partial\theta_i} \frac{\partial \log f(X; \theta)}{\partial\theta_j} \,dX
= \int \frac{1}{f(X; \theta)} \frac{\partial f(X; \theta)}{\partial\theta_i} \frac{\partial f(X; \theta)}{\partial\theta_j} \,dX\,.</math>

The difference between the negative Hessian and the Fisher information is{{clarify|reason=why is this being said?|date=February 2012}}

:<math>-\frac{\partial^2}{\partial\theta_i \, \partial\theta_j} \mathcal{H} - \mathcal{I}_{ij}(\theta)
= \int \frac{\partial^2 f(X; \theta)}{\partial\theta_i \, \partial\theta_j} \left(1 + \log f(X; \theta) \right) dX\,.</math>

===Equality===
In particular, the Fisher information matrix will be the same as the negative of the Hessian of the entropy in situations where <math>\frac{\partial^2 f(X; \theta)}{\partial\theta_i \, \partial\theta_j}</math> is zero for all ''i'', ''j'', ''X'', and ''θ''. For instance, a two-dimensional example that makes the two equal is

:<math>f(X; \theta) = \theta_1 g_1(X) + \theta_2 g_2(X) + (1-\theta_1-\theta_2) g_3(X)\,,</math>

where ''g''<sub>1</sub>(''X''), ''g''<sub>2</sub>(''X''), and ''g''<sub>3</sub>(''X'') are probability distributions.

===Inequality===
A one-dimensional example where the Fisher information differs from the negative Hessian is <math>f(X; \theta) = \frac{e^{-(X-\theta)^2/2}}{\sqrt{2 \pi}}</math>. In this case, the entropy ''H'' is independent of the distribution mean ''θ''. Thus, the second derivative of the entropy with respect to ''θ'' is zero. However, for the Fisher information, we have <math>\mathcal{I}(\theta) = 1\,.</math>

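A quick numerical rendering of this example (a sketch only, assuming NumPy; the value of ''θ'', the seed, and the sample size are arbitrary): the entropy of ''N''(''θ'', 1) is the constant ½ log(2πe) for every ''θ'', while the variance of the score ''X'' − ''θ'' is 1.

<syntaxhighlight lang="python">
import numpy as np

# For f(x; theta) = N(theta, 1): the entropy 0.5*log(2*pi*e) does not depend on theta
# (so its second derivative in theta is zero), but the Fisher information equals 1.
rng = np.random.default_rng(6)
theta, n = 0.4, 1_000_000
x = rng.normal(theta, 1.0, size=n)

entropy = 0.5 * np.log(2 * np.pi * np.e)   # independent of theta
score = x - theta                          # d/dtheta log f(x; theta)

print(entropy, score.var())                # approx 1.419 and approx 1.0
</syntaxhighlight>
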
==See also==
*[[Observed information]]
*[[Fisher information metric]]
*[[Formation matrix]]
*[[Information geometry]]
*[[Jeffreys prior]]
*[[Cramér–Rao bound]]

Other measures employed in [[information theory]]:
*[[Entropy (information theory)]]
*[[Kullback–Leibler divergence]]
*[[Self-information]]

==Notes==
{{Reflist}}

==References==

* {{cite journal|doi=10.2307/2339293|authorlink=Francis Ysidro Edgeworth|first1=F. Y.|last1=Edgeworth|title=On the Probable Errors of Frequency-Constants|jstor=2339293|journal=[[Journal of the Royal Statistical Society]]|volume=71|issue=3|date=Sep 1908|pages=499–512}}
* {{cite journal|doi=10.2307/2339378|authorlink=Francis Ysidro Edgeworth|first1=F. Y.|last1=Edgeworth|title=On the Probable Errors of Frequency-Constants|jstor=2339378|journal=Journal of the Royal Statistical Society|volume=71|issue=4|date=Dec 1908|pages=651–678}}
* Frieden, B. Roy (2004) ''Science from Fisher Information: A Unification''. Cambridge Univ. Press. ISBN 0-521-00911-1.
* {{cite journal|title=On the History of Maximum Likelihood in Relation to Inverse Probability and Least Squares|author=Hald, A.|journal=Statistical Science|volume=14|issue=2|date=May 1999|pages=214–222|jstor=2676741}}
* {{Cite book|author=Hald, A.|year=1998|title=A History of Mathematical Statistics from 1750 to 1930|publisher=Wiley|location=New York|isbn=0-471-17912-4}}
* {{Cite book|last=Lehmann|first=E. L.|authorlink=Erich Leo Lehmann|coauthors=Casella, G.|title=Theory of Point Estimation|year=1998|publisher=Springer|edition=2nd|isbn=0-387-98502-6}}
* {{Cite book|first=Lucien|last=Le Cam|authorlink=Lucien Le Cam|title=Asymptotic Methods in Statistical Decision Theory|year=1986|publisher=Springer-Verlag|isbn=0-387-96307-3}}
* {{cite journal|doi=10.1214/aos/1176343457|title=F. Y. Edgeworth and R. A. Fisher on the Efficiency of Maximum Likelihood Estimation|author=Pratt, John W.|journal=The Annals of Statistics|volume=4|issue=3|date=May 1976|pages=501–514|jstor=2958222}}
* {{cite journal|doi=10.1214/aos/1176343456|author=[[Leonard J. Savage]]|title=On Rereading R. A. Fisher|journal=The Annals of Statistics|volume=4|issue=3|date=May 1976|pages=441–500|jstor=2958221}}
* {{Cite book|last=Schervish|first=Mark J.|title=Theory of Statistics|publisher=Springer|year=1995|location=New York|chapter=Section 2.3.1|isbn=0-387-94546-6|nopp=true}}
* {{Cite book|author=[[Stephen Stigler]]|title=The History of Statistics: The Measurement of Uncertainty before 1900|year=1986|publisher=Harvard University Press|isbn=0-674-40340-1}}{{page needed|date=February 2012}}
* {{cite journal|doi=10.2307/2344804|title=[[Francis Ysidro Edgeworth]], Statistician|author=[[Stephen M. Stigler]]|jstor=2344804|journal=Journal of the Royal Statistical Society, Series A|volume=141|issue=3|year=1978|pages=287–322}}
* {{Cite book|author=[[Stephen Stigler]]|title=Statistics on the Table: The History of Statistical Concepts and Methods|year=1999|publisher=Harvard University Press|isbn=0-674-83601-4}}{{page needed|date=February 2012}}
* {{Cite book|last=Van Trees|first=H. L.|title=Detection, Estimation, and Modulation Theory, Part I|publisher=Wiley|year=1968|location=New York|isbn=0-471-09517-6}}

== External links ==
* [http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=20008&objectType=File Fisher4Cast: a Matlab, GUI-based Fisher information tool] for research and teaching, primarily aimed at cosmological forecasting applications.
* [http://www4.utsouthwestern.edu/wardlab/fandplimittool.asp FandPLimitTool], a GUI-based software tool to calculate the Fisher information and [[Cramér–Rao bound|CRLB]] with application to single-molecule microscopy.
* [http://www.stat.tamu.edu/~suhasini/teaching613/inference.pdf Lectures on statistical inference]

{{DEFAULTSORT:Fisher Information}}
[[Category:Estimation theory]]
[[Category:Information theory]]
[[Category:Design of experiments]]