One- and two-tailed tests

Figure: A two-tailed test, here on the normal distribution.
Figure: A one-tailed test, showing the p-value as the size of one tail.

In statistical significance testing, one-tailed and two-tailed tests are alternative ways of computing the statistical significance of a data set in terms of a test statistic, depending on whether only one direction is considered extreme (and unlikely) or both directions are. Alternative names are one-sided and two-sided tests; the terminology "tail" is used because the extremes of distributions are often small, as in the normal distribution or "bell curve", pictured above right.

If the test statistic is always positive (or zero), only the one-tailed test is generally applicable, while if the test statistic can assume positive and negative values, both one-tailed and two-tailed tests are of use.

Applications

One-tailed tests are used for asymmetric distributions that have a single tail, such as the chi-squared distribution, which are common in measuring goodness-of-fit, or for one side of a distribution that has two tails, such as the normal distribution, which is common in estimating location; this corresponds to specifying a direction. Two-tailed tests are only applicable when there are two tails, such as in the normal distribution, and correspond to considering either direction significant.

In the approach of Ronald Fisher, the null hypothesis H0 will be rejected when the p-value of the test statistic is sufficiently extreme (vis-à-vis the test statistic's sampling distribution) and thus judged unlikely to be the result of chance. In a one-tailed test, "extreme" is decided beforehand as either meaning "sufficiently small" or meaning "sufficiently large" – values in the other direction are considered not significant. In a two-tailed test, "extreme" means "either sufficiently small or sufficiently large", and values in either direction are considered significant.[1] For a given test statistic there is a single two-tailed test, and two one-tailed tests, one for each direction. Data significant at a given level in the two-tailed test are, in the corresponding one-tailed test for the same test statistic, either twice as significant (half the p-value), if the data lie in the direction specified by the test, or not significant at all (p-value above 0.5), if the data lie in the direction opposite that specified by the test.
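
To make the halving relationship concrete, here is a minimal Python sketch (the observed z statistic of 1.8 is an arbitrary illustrative value, not taken from the text; a standard normal sampling distribution is assumed):

    import math

    def normal_cdf(z):
        """Standard normal CDF, via the complementary error function."""
        return 0.5 * math.erfc(-z / math.sqrt(2))

    z = 1.8  # hypothetical observed test statistic (illustrative value only)

    # Two-tailed test: values far from zero in either direction count as extreme.
    p_two = 2 * (1 - normal_cdf(abs(z)))      # ≈ 0.072

    # One-tailed test in the direction the data actually fell: half the p-value.
    p_one_same = 1 - normal_cdf(z)            # ≈ 0.036

    # One-tailed test in the opposite direction: not significant at all.
    p_one_opposite = normal_cdf(z)            # ≈ 0.964, well above 0.5

    print(p_two, p_one_same, p_one_opposite)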

For example, if flipping a coin, testing whether it is biased towards heads is a one-tailed test, and getting data of "all heads" would be seen as highly significant, while getting data of "all tails" would be not significant at all (p = 1). By contrast, testing whether it is biased in either direction is a two-tailed test, and either "all heads" or "all tails" would both be seen as highly significant data. In medical testing, one is generally interested in whether a treatment results in outcomes that are better than chance, and thus uses a one-tailed test; using a two-tailed test corresponds instead to testing whether the treatment results in outcomes that are different from chance, either better or worse. In the archetypal lady tasting tea experiment, Fisher tested whether the lady in question was better than chance at distinguishing two types of tea preparation, not whether her ability was different from chance, and thus he used a one-tailed test.

Coin flipping example

In coin flipping, the null hypothesis is a sequence of Bernoulli trials with probability 0.5, yielding a random variable X which is 1 for heads and 0 for tails, and a common test statistic is the sample mean (of the number of heads). If testing for whether the coin is biased towards heads, a one-tailed test would be used – only large numbers of heads would be significant. In that case a data set of five heads (HHHHH), with sample mean of 1, has a 1/2^5 = 0.03125 chance of occurring, and thus would have p ≈ 0.03 and would be significant (rejecting the null hypothesis) if using 0.05 as the cutoff. However, if testing for whether the coin is biased towards heads or tails, a two-tailed test would be used, and a data set of five heads (sample mean 1) is as extreme as a data set of five tails (sample mean 0), so the p-value would be 2/2^5 = 0.0625, which is not significant at the 0.05 level.
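
The arithmetic of this example can be checked with a short standard-library Python sketch (the helper name binom_tail is illustrative, not a library routine):

    from math import comb

    def binom_tail(k, n, p=0.5):
        """P(X >= k) for X ~ Binomial(n, p): the upper-tail probability."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n = 5
    p_one_tailed = binom_tail(5, n)     # P(5 heads) = 1/2**5 = 0.03125
    p_two_tailed = 2 * p_one_tailed     # 5 heads or 5 tails: 2/2**5 = 0.0625

    print(p_one_tailed, p_two_tailed)
    # One-tailed: significant at the 0.05 cutoff; two-tailed: not significant.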

History

Figure: p-value of the chi-squared distribution for different numbers of degrees of freedom.

The p-value was introduced by Karl Pearson (1900) in his Pearson's chi-squared test, where he defined P (original notation) as the probability that the statistic would be at or above a given level. This is a one-tailed definition, and the chi-squared distribution is asymmetric, only assuming positive or zero values, and has only one tail, the upper one. It measures goodness of fit of data with a theoretical distribution, with zero corresponding to exact agreement with the theoretical distribution; the p-value thus measures how likely the fit would be this bad or worse.
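
As an illustration of this one-tailed definition, the upper-tail probability of a chi-squared statistic can be computed directly (a sketch assuming SciPy is available; the statistic 7.8 and 3 degrees of freedom are arbitrary example values):

    from scipy.stats import chi2

    # Hypothetical example: a chi-squared statistic of 7.8 with 3 degrees of freedom.
    stat, df = 7.8, 3

    # Pearson's P: the probability that the statistic is at or above the observed
    # level, i.e. the upper (and only) tail of the chi-squared distribution.
    p = chi2.sf(stat, df)   # survival function = 1 - CDF
    print(p)                # ≈ 0.050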

Figure: Normal distribution, showing two tails.

The distinction between one-tailed and two-tailed tests was popularized by Ronald Fisher in the influential Statistical Methods for Research Workers (1925), where he applied it especially to the normal distribution, which is a symmetric distribution with two equal tails. The normal distribution is a common measure of location, rather than goodness-of-fit, and has two tails, corresponding to the estimate of location being above or below the theoretical location (e.g., sample mean compared with theoretical mean). In the case of a symmetric distribution such as the normal distribution, the one-tailed p-value is exactly half the two-tailed p-value.

Fisher emphasized the importance of measuring the tail – the observed value of the test statistic and all more extreme values – rather than simply the probability of the specific outcome itself, in his The Design of Experiments (1935). He explains this by noting that a specific set of data may be unlikely under the null hypothesis, yet more extreme outcomes may still be likely; seen in this light, data that are unlikely in this specific sense but not extreme should not be considered significant.

Relation to hypothesis testing

p-values and one-tailed/two-tailed tests are a concept in the significance testing of Fisher, which only uses a null hypothesis, and either rejects it or not. p-values are not used in the hypothesis testing of Jerzy Neyman and Egon Pearson, which instead compares the null hypothesis to an alternative hypothesis, and chooses between them. However, these approaches are frequently confused and conflated – see statistical hypothesis testing – and thus p-values and one-tailed or two-tailed tests of significance may be incorrectly used in Neyman–Pearson-style hypothesis testing.

This is a mistaken interpretation, but it is a common mistake; this results in a confusing mixture of terminology, as follows – note that "significance level" is used in different senses in Fisher and in Neyman–Pearson, while "alternative hypothesis" is used only in Neyman–Pearson. In this context a one-tailed test is interpreted as using an "alternative hypothesis" that some parameter is greater than it is in the null hypothesis (or less), while a two-tailed test is interpreted as using an "alternative hypothesis" that the parameter is different from what it is in the null hypothesis. For example, if the null hypothesis is that the mean μ equals some value μ₀, then the one-tailed test "corresponds to" the alternative hypothesis μ > μ₀ (or μ < μ₀), while the two-tailed test "corresponds to" the alternative hypothesis μ ≠ μ₀. While Fisher rejected the notion of an alternative hypothesis, Neyman accused him of subconsciously harboring an alternative hypothesis when choosing how to evaluate the null hypothesis, of which this one-tailed/two-tailed choice is one example.

Further, in the Neyman–Pearson approach "significance levels" (in the sense of the false positive/type I error rate, rather than in the Fisher sense of the p-value of the test statistic) are denoted by α, share the "significance level" name, and are also conventionally 0.05, so these two concepts may be confused. In this case the cut-offs in the tails are denoted by α, and then compared with the p-value of the data, using α/2 at each end in the two-tailed test. This is incorrect – p-values are not simply related to false positives and cannot be compared with α, as discussed at p-value – but this notation is very common.

Specific tests

If the test statistic follows a Student's t distribution under the null hypothesis – which is common where the underlying variable follows a normal distribution with an unknown scaling factor – then the test is referred to as a one-tailed or two-tailed t-test. If the test is performed using the actual population mean and variance, rather than an estimate from a sample, it is called a one-tailed or two-tailed Z-test.

The statistical tables for t and for Z provide critical values for both one- and two-tailed tests. That is, they provide the critical values that cut off an entire region at one or the other end of the sampling distribution as well as the critical values that cut off the regions (of half the size) at both ends of the sampling distribution.
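
For example, such tabulated critical values can be reproduced numerically (a sketch assuming SciPy is available; the 0.05 level and 10 degrees of freedom are arbitrary example choices):

    from scipy.stats import norm, t

    alpha, df = 0.05, 10   # example significance level and degrees of freedom

    # One-tailed: the entire rejection region of size alpha lies in one tail.
    z_one = norm.ppf(1 - alpha)        # ≈ 1.645
    t_one = t.ppf(1 - alpha, df)       # ≈ 1.812

    # Two-tailed: a region of size alpha/2 is cut off at each end.
    z_two = norm.ppf(1 - alpha / 2)    # ≈ 1.960
    t_two = t.ppf(1 - alpha / 2, df)   # ≈ 2.228

    print(z_one, t_one, z_two, t_two)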

See also

References


  1. John E. Freund (1984). Modern Elementary Statistics, sixth edition. Prentice Hall. ISBN 0-13-593525-3. (Section "Inferences about Means", chapter "Significance Tests", p. 289.)