'''[[C. R. Rao|Rao's]] score test''', or the '''score test''' (often known as the '''Lagrange multiplier test''' in [[econometrics]]<ref name="Bera">{{cite doi|10.1016/S0378-3758(00)00343-8}}</ref><ref>Engle, Robert F. (1984). "Wald, Likelihood Ratio and Lagrange Multiplier Tests in Econometrics". In Z. Griliches and M. D. Intriligator (eds.), ''Handbook of Econometrics'', Volume II. Elsevier Science Publishers BV.</ref>) is a [[statistical test]] of a [[Statistical_hypothesis_testing#Definition_of_terms|simple]] [[null hypothesis]] that a parameter of interest <math>\theta</math> is equal to some particular value <math>\theta_0</math>. It is the most [[statistical power|powerful]] test when the true value of <math>\theta</math> is close to <math>\theta_0</math>. The main advantage of the score test is that it does not require an estimate of the information under the alternative hypothesis or of the unconstrained maximum likelihood. This makes testing feasible when the unconstrained maximum likelihood estimate lies on the boundary of the parameter space.

==Single parameter test==

===The statistic===

Let <math>L</math> be the [[likelihood function]] which depends on a univariate parameter <math>\theta</math>, and let <math>x</math> be the data. The [[score (statistics)|score]] is <math>U(\theta)</math>, where

:<math>
U(\theta)=\frac{\partial \log L(\theta | x)}{\partial \theta}.
</math>

The [[Fisher information]] is<ref>Lehmann and Casella, eq. (2.5.16).</ref>

:<math>
\mathcal{I}(\theta) = - \operatorname{E} \left[\left. \frac{\partial^2}{\partial\theta^2} \log L(X;\theta)\right|\theta \right].
</math>

The statistic to test <math>\mathcal{H}_0:\theta=\theta_0</math> is

:<math>
S(\theta_0) = \frac{U(\theta_0)^2}{\mathcal{I}(\theta_0)},
</math>

which has an [[asymptotic distribution]] of <math>\chi^2_1</math> when <math>\mathcal{H}_0</math> is true.
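
For example, for a binomial proportion based on <math>x</math> successes in <math>n</math> trials, both the score and the Fisher information have closed forms, so the statistic can be computed directly. The following is a minimal sketch; the helper name and example values are illustrative only:

<syntaxhighlight lang="python">
# Score test for H0: p = p0 with x successes in n Bernoulli trials.
# For the binomial likelihood, U(p) = (x - n*p) / (p*(1-p)) and the
# Fisher information is I(p) = n / (p*(1-p)), so
# S(p0) = (x - n*p0)**2 / (n * p0 * (1-p0)).
from scipy.stats import chi2

def binomial_score_test(x, n, p0):
    score = (x - n * p0) / (p0 * (1.0 - p0))   # U(p0)
    fisher_info = n / (p0 * (1.0 - p0))        # I(p0)
    s = score ** 2 / fisher_info               # asymptotically chi-squared(1) under H0
    return s, chi2.sf(s, df=1)

s, p_value = binomial_score_test(62, 100, 0.5)
print(f"S = {s:.3f}, p = {p_value:.4f}")       # S = 5.760, p ~ 0.0164
</syntaxhighlight>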

====Note on notation====

Note that some texts use an alternative notation, in which the statistic

:<math>
S^*(\theta)=\sqrt{ S(\theta) }
</math>

is tested against a normal distribution. This approach is equivalent and gives identical results.
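
The equivalence can be checked numerically; a small sketch using the statistic from the binomial example above (values illustrative):

<syntaxhighlight lang="python">
# Testing S* = sqrt(S) against a standard normal (two-sided) gives the
# same p-value as testing S against chi-squared(1).
from scipy.stats import chi2, norm

s = 5.76                      # S(p0) from the binomial example above
s_star = s ** 0.5             # S* = 2.4
print(2 * norm.sf(s_star))    # two-sided normal p-value, ~ 0.0164
print(chi2.sf(s, df=1))       # chi-squared(1) p-value, identical
</syntaxhighlight>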

===Justification===

{{Expand section|date=June 2008}}

Heuristically, the score has mean zero and variance <math>\mathcal{I}(\theta)</math> when evaluated at the true parameter value, because the Fisher information is the variance of the score. For independent and identically distributed observations the score is a sum of independent terms, so by the [[central limit theorem]] <math>U(\theta_0)/\sqrt{\mathcal{I}(\theta_0)}</math> is asymptotically standard normal under <math>\mathcal{H}_0</math>, and its square <math>S(\theta_0)</math> is asymptotically <math>\chi^2_1</math>.
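
A quick simulation illustrates the heuristic (a sketch; the parameter values are arbitrary):

<syntaxhighlight lang="python">
# Under H0, the standardized score (x - n*p0) / sqrt(n*p0*(1-p0)) for
# binomial data should be approximately standard normal.
import numpy as np

rng = np.random.default_rng(1)
n, p0, reps = 500, 0.3, 10_000
x = rng.binomial(n, p0, size=reps)             # data generated under H0
z = (x - n * p0) / np.sqrt(n * p0 * (1 - p0))  # standardized score
print(z.mean(), z.var())                       # close to 0 and 1
</syntaxhighlight>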

===The case of a likelihood with nuisance parameters===

{{Empty section|date=June 2008}}

===As most powerful test for small deviations===

The score test rejects the null hypothesis when

:<math>
\left(\frac{\partial \log L(\theta | x)}{\partial \theta}\right)_{\theta=\theta_0} \geq C,
</math>
where <math>L</math> is the [[likelihood function]], <math>\theta_0</math> is the value of the parameter of interest under the null hypothesis, and <math>C</math> is a constant chosen according to the desired size of the test (i.e. the probability of rejecting <math>H_0</math> when <math>H_0</math> is true; see [[Type I error]]).

The score test is the most powerful test for small deviations from <math>H_0</math>. To see this, consider testing <math>\theta=\theta_0</math> versus <math>\theta=\theta_0+h</math>. By the [[Neyman-Pearson lemma]], the most powerful test has the form

:<math>
\frac{L(\theta_0+h|x)}{L(\theta_0|x)} \geq K.
</math>

Taking the logarithm of both sides yields

:<math>
\log L(\theta_0 + h | x ) - \log L(\theta_0|x) \geq \log K.
</math>

The score test follows on making the first-order Taylor expansion

:<math>
\log L(\theta_0+h|x) \approx \log L(\theta_0|x) + h\times
\left(\frac{\partial \log L(\theta | x)}{\partial \theta}\right)_{\theta=\theta_0}
</math>

and identifying the <math>C</math> above with <math>\log(K)/h</math> (for <math>h>0</math>).

===Relationship with other hypothesis tests===

The [[Likelihood-ratio test|likelihood ratio test]], the [[Wald test]], and the score test are asymptotically equivalent tests of hypotheses. When testing nested models, the statistics for each test converge to a chi-squared distribution with degrees of freedom equal to the difference in degrees of freedom between the two models.
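
The three statistics can be compared directly in the binomial example used earlier (a sketch; the helper is illustrative). The Wald test evaluates the information at the unrestricted estimate, the score test at the null value, and all three statistics are asymptotically <math>\chi^2_1</math>:

<syntaxhighlight lang="python">
import numpy as np

def three_tests(x, n, p0):
    p_hat = x / n
    loglik = lambda p: x * np.log(p) + (n - x) * np.log(1 - p)
    lr = 2 * (loglik(p_hat) - loglik(p0))                 # likelihood ratio
    wald = (p_hat - p0) ** 2 / (p_hat * (1 - p_hat) / n)  # information at p_hat
    score = (p_hat - p0) ** 2 / (p0 * (1 - p0) / n)       # information at p0
    return lr, wald, score

print(three_tests(62, 100, 0.5))   # approximately (5.817, 6.112, 5.760)
</syntaxhighlight>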

==Multiple parameters==

A more general score test can be derived when there is more than one parameter. Suppose that <math>\hat{\theta}_0</math> is the [[maximum likelihood]] estimate of <math>\theta</math> under the null hypothesis <math>H_0</math>. Then

:<math>
U^T(\hat{\theta}_0) I^{-1}(\hat{\theta}_0) U(\hat{\theta}_0) \sim \chi^2_k
</math>

asymptotically under <math>H_0</math>, where <math>k</math> is the number of constraints imposed by the null hypothesis,

:<math>
U(\hat{\theta}_0) = \frac{\partial \log L(\hat{\theta}_0 | x)}{\partial \theta},
</math>

and

:<math>
I(\hat{\theta}_0) = -\operatorname{E}\left(\frac{\partial^2 \log L(\hat{\theta}_0 | x)}{\partial \theta \, \partial \theta'} \right).
</math>

This can be used to test <math>H_0</math>.
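
In econometrics, when the null hypothesis sets a block of regression coefficients to zero in a linear model, this multi-parameter statistic takes the familiar Lagrange multiplier form <math>n R^2</math> from an auxiliary regression. A minimal sketch, assuming a linear model with homoskedastic normal errors and a design matrix that includes a constant column (function and variable names are illustrative):

<syntaxhighlight lang="python">
# LM test of H0: gamma = 0 in y = X*beta + Z*gamma + e.
# Fit the restricted model (y on X), then regress its residuals on [X, Z];
# LM = n * R^2 of the auxiliary regression, asymptotically chi-squared(k)
# with k = number of columns of Z.
import numpy as np
from scipy.stats import chi2

def lm_test(y, X, Z):
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # restricted OLS fit
    resid = y - X @ beta
    W = np.hstack([X, Z])
    coef, *_ = np.linalg.lstsq(W, resid, rcond=None)  # auxiliary regression
    fitted = W @ coef
    r2 = (fitted @ fitted) / (resid @ resid)  # residuals have mean 0 since X has a constant
    lm = n * r2
    return lm, chi2.sf(lm, df=Z.shape[1])

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = rng.normal(size=(n, 1))
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)  # true gamma = 0
print(lm_test(y, X, Z))                            # LM small, large p-value (typically)
</syntaxhighlight>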

==Special cases==

In many situations, the score statistic reduces to another commonly used statistic.<ref>{{cite book |editor-last=Cook |editor-first=T. D. |editor2-last=DeMets |editor2-first=D. L. |year=2007 |title=Introduction to Statistical Methods for Clinical Trials |publisher=Chapman and Hall |isbn=1-58488-027-9 |pages=296–297 }}</ref>

When the data follow a normal distribution, the score statistic is the same as the [[t statistic]].{{clarify|reason=this can't always be true, e.g. when the null hypothesis is on the variance|date=March 2011}}

When the data consist of binary observations, the score statistic is the same as the chi-squared statistic in [[Pearson's chi-squared test]].
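
The binomial example above illustrates this equivalence (values illustrative):

<syntaxhighlight lang="python">
# Pearson's chi-squared statistic for observed counts (62, 38) against
# expected counts (50, 50) equals the score statistic S(0.5) = 5.76.
from scipy.stats import chisquare

stat, p = chisquare([62, 38], f_exp=[50, 50])
print(stat, p)   # 5.76, matching the score statistic computed earlier
</syntaxhighlight>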

When the data consist of failure-time data in two groups, the score statistic for the [[Proportional hazards models|Cox partial likelihood]] is the same as the log-rank statistic in the [[log-rank test]]. Hence the log-rank test for a difference in survival between two groups is most powerful when the proportional hazards assumption holds.

==See also==
*[[Fisher information]]
*[[Uniformly most powerful test]]
*[[Score (statistics)]]

{{Refimprove|date=March 2011}}
{{More footnotes|date=March 2011}}

==References==
{{Reflist}}

{{DEFAULTSORT:Score Test}}
[[Category:Statistical tests]]