{{Cleanup|date=April 2008}}
A '''chi-squared test''', also referred to as a '''chi-square test''' or '''<math alt="χ²">\chi^2</math> test''', is any [[statistical]] [[hypothesis test]] in which the [[sampling distribution]] of the test statistic is a [[chi-squared distribution]] when the [[null hypothesis]] is true. Also considered a chi-squared test is a test in which this is ''asymptotically'' true, meaning that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-squared distribution as closely as desired by making the sample size large enough.

Some examples of chi-squared tests where the [[chi-squared distribution]] is only approximately valid:

*[[Pearson's chi-squared test]], also known as the chi-squared goodness-of-fit test or chi-squared test for independence. When the chi-squared test is mentioned without any modifiers or without other precluding context, this test is usually meant (for an exact test used in place of <math alt="χ²">\chi^2</math>, see [[Fisher's exact test]]); an illustrative computation is sketched after this list.
*[[Yates's correction for continuity]], also known as Yates's chi-squared test.
*[[Cochran–Mantel–Haenszel statistics|Cochran–Mantel–Haenszel chi-squared test]].
*[[McNemar's test]], used in certain 2 × 2 tables with pairing.
*[[Tukey's test of additivity]].
*The [[portmanteau test]] in [[time-series analysis]], testing for the presence of [[autocorrelation]].
*[[Likelihood-ratio test]]s in general statistical modelling, for testing whether there is evidence of the need to move from a simple model to a more complicated one (where the simple model is nested within the complicated one).
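
The following sketch is illustrative only: it uses hypothetical counts (44 and 56 observations in 100 trials of a two-category experiment) and assumes the Python SciPy library is available, in order to show how the Pearson statistic and its approximate p-value are obtained.

<syntaxhighlight lang="python">
# Illustrative sketch of Pearson's chi-squared goodness-of-fit test.
# The observed counts are hypothetical example data, not taken from this article.
from scipy.stats import chi2, chisquare

observed = [44, 56]   # hypothetical counts for the two categories
expected = [50, 50]   # counts expected under the null hypothesis

# Pearson's statistic: the sum over categories of (O - E)^2 / E
statistic = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1            # degrees of freedom
p_value = chi2.sf(statistic, df)  # upper tail of the chi-squared distribution

print(statistic, p_value)                     # 1.44 and about 0.23
print(chisquare(observed, f_exp=expected))    # SciPy's built-in form of the same test
</syntaxhighlight>

Because the counts here are fairly large, the chi-squared distribution with one degree of freedom is an adequate approximation to the sampling distribution of the statistic; for small counts an exact test such as [[Fisher's exact test]] is preferred.
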

One case where the distribution of the [[test statistic]] is an exact [[chi-squared distribution]] is the test that the variance of a normally distributed population has a given value based on a [[sample variance]]. Such a test is uncommon in practice because values of variances to test against are seldom known exactly.

==Chi-squared test for variance in a normal population==

If a sample of size ''n'' is taken from a population having a [[normal distribution]], then there is a result (see [[Variance#Distribution of the sample variance|distribution of the sample variance]]) which allows a test to be made of whether the variance of the population has a pre-determined value. For example, a manufacturing process might have been in stable condition for a long period, allowing a value for the variance to be determined essentially without error. Suppose that a variant of the process is being tested, giving rise to a small sample of ''n'' product items whose variation is to be tested. The test statistic ''T'' in this instance could be set to be the sum of squares about the sample mean, divided by the nominal value for the variance (i.e. the value to be tested as holding). Then ''T'' has a chi-squared distribution with ''n'' − 1 [[Degrees of freedom (statistics)|degrees of freedom]]. For example, if the sample size is 21, the acceptance region for ''T'' at a significance level of 5% is the interval from 9.59 to 34.17.
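
The acceptance region quoted above can be reproduced from the quantiles of the chi-squared distribution with 20 degrees of freedom. The sketch below is illustrative only: it assumes the Python SciPy library and uses an invented sample of 21 measurements together with a hypothetical nominal variance of 0.04.

<syntaxhighlight lang="python">
# Sketch of the chi-squared test for the variance of a normal population.
# The sample and the nominal variance sigma0_sq are hypothetical illustrations.
import numpy as np
from scipy.stats import chi2

sample = np.array([2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1, 2.0, 1.9,
                   2.2, 2.1, 2.3, 1.7, 2.0, 2.2, 1.9, 2.1, 2.4, 2.0, 1.8])  # n = 21
sigma0_sq = 0.04                  # nominal variance under the null hypothesis

n = len(sample)
T = np.sum((sample - sample.mean()) ** 2) / sigma0_sq   # test statistic

# Under the null hypothesis, T has a chi-squared distribution with n - 1 = 20 degrees of freedom.
lower, upper = chi2.ppf([0.025, 0.975], df=n - 1)
print(lower, upper)               # approximately 9.59 and 34.17 (the 5% acceptance region)
print(T, lower <= T <= upper)     # the observed statistic and whether it falls in the region
</syntaxhighlight>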
<!--
==Chi-squared test for contingency table example==

[[Dispute: This example is actually for a goodness-of-fit test, and NOT a test of independence in a contingency table]] [[Dispute claim is valid]]

A chi-squared test may be applied on a [[contingency table]] for testing a null hypothesis of independence of rows and columns.

As an example of the use of the chi-squared test, a fair coin is one where heads and tails are equally likely to turn up after it is flipped. Suppose one is given a coin and asked to test whether it is fair. After 100 trials, heads turn up 53 times and tails turn up 47 times. The following is a chi-squared analysis, where the null hypothesis is that the coin is fair:
| {|class="wikitable" align="center"
| |
| |+ Chi-squared calculation of coin tosses
| |
| | |
| |
| | | Heads
| |
| | Tails
| |
| | Total
| |
| |-
| |
| | | Observed
| |
| | | 53
| |
| | | 47
| |
| | | 100
| |
| |-
| |
| | | Expected
| |
| | | 50
| |
| | | 50
| |
| | | 100
| |
| |-
| |
| | | (''O'' − ''E'')<sup>2</sup>
| |
| | | 9
| |
| | | 9
| |
| | |
| |
| |-
| |
| | | {{lang|el|χ}}<sup>2</sup> = (''O'' − ''E'')<sup>2</sup>/''E''
| |
| | | 0.18
| |
| | | 0.18
| |
| | | 0.36
| |
| |}
In this case, the test has one [[Degrees of freedom (statistics)|degree of freedom]] and the chi-squared value is 0.36. In order to see whether this result is [[statistically significant]], the [[p-value]] (the probability that a result at least as extreme is observed when the null hypothesis is true) must be calculated or looked up in a chart. The p-value, Prob(χ<sup>2</sup> ≥ 0.36), is found to be 0.5485. There is thus a probability of about 55% of seeing data that deviates at least this much from the expected results if the coin is indeed fair. This probability is not considered statistically significant evidence of an unfair coin. -->
==See also==
{{Portal|Statistics}}
* [[Nomogram#Chi-squared_test_computation_nomogram|Chi-squared test nomogram]]
* [[G-test|''G''-test]]
* [[Minimum chi-square estimation]]
* The [[Wald test]] can be evaluated against a chi-squared distribution.
==References==
* {{MathWorld | urlname=Chi-SquaredTest | title=Chi-Squared Test}}
* Corder, G.W., Foreman, D.I. (2009). ''Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach''. Wiley. ISBN 978-0-470-45461-9
* Greenwood, P.E., Nikulin, M.S. (1996). ''A Guide to Chi-Squared Testing''. Wiley, New York. ISBN 0-471-55779-X
* Nikulin, M.S. (1973). "Chi-squared test for normality". In: ''Proceedings of the International Vilnius Conference on Probability Theory and Mathematical Statistics'', v. 2, pp. 119–122.
* Bagdonavicius, V., Nikulin, M.S. (2011). "Chi-square goodness-of-fit test for right censored data". ''The International Journal of Applied Mathematics and Statistics'', pp. 30–50.{{full|date=January 2013}}
{{Statistics}}

{{DEFAULTSORT:Chi-Squared Test}}
[[Category:Statistical tests]]
[[Category:Non-parametric statistics]]
[[Category:Categorical data]]