The '''power''' of a [[Statistical hypothesis testing|statistical test]] is the probability that the test will reject the [[null hypothesis]] when the alternative hypothesis is true (i.e. the probability of not committing a [[Type I and type II errors|Type II error]]). That is,
 
:<math> \mbox{statistical power} = \mathbb P\big( \mbox{we reject the null hypothesis} \big| \mbox{the null hypothesis is false} \big) </math>
 
The power is in general a function of the possible distributions, often determined by a parameter, under the alternative hypothesis. As the power increases, the chances of a Type II error occurring decrease. The probability of a Type II error occurring is referred to as the [[Type_I_and_type_II_errors#False_negative_rate|false negative rate]] (β) and the power is equal to 1−β. The power is also known as the [[sensitivity and specificity|sensitivity]].
 
Power analysis can be used to calculate the minimum [[sample size]] required so that one can be reasonably likely to detect an effect of a given [[effect size|size]].  Power analysis can also be used to calculate the minimum effect size that is likely to be detected in a study using a given sample size. In addition, the concept of power is used to make comparisons between different statistical testing procedures: for example, between a parametric and a [[nonparametric test]] of the same hypothesis.
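As a rough sketch of how such a sample-size calculation works (this example is not part of the original article; the function name and the numbers are illustrative), the minimum sample size for a one-sided one-sample ''z''-test with known standard deviation can be computed from normal quantiles:

```python
from math import ceil
from statistics import NormalDist

def required_n(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size for a one-sided one-sample z-test
    to detect a true mean shift of `delta` with the given power."""
    z = NormalDist()                 # standard normal distribution
    z_alpha = z.inv_cdf(1 - alpha)   # quantile for the rejection threshold
    z_beta = z.inv_cdf(power)        # quantile for the desired power
    return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

print(required_n(delta=0.5, sigma=1.0))  # medium standardized effect
```

Larger effects, a more lenient significance criterion, or a lower power target all reduce the required ''n''.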
 
There is also the concept of the power function of a test: the probability that the test rejects the null hypothesis, regarded as a function of the true parameter value. At parameter values satisfying the null hypothesis it gives the probability of a Type I error, and at values under the alternative it gives the power as defined above.
<ref>http://www.encyclopediaofmath.org/index.php/Power_function_of_a_test</ref>
 
==Background==
[[Statistical test]]s use data from [[Sampling (statistics)|sample]]s to assess, or make [[statistical inference|inferences]] about, a [[statistical population]].  In the concrete setting of a two-sample comparison, the goal is to assess whether the mean values of some attribute obtained for individuals in two sub-populations differ. For example, to test the null hypothesis that the [[mean]] [[Score (statistics)|score]]s of men and women on a test do not differ, samples of men and women are drawn, the test is administered to them, and the mean score of one group is compared to that of the other group using a statistical test such as the two-sample ''z''-test. The power of the test is the probability that the test will find a statistically significant difference between men and women, as a function of the size of the true difference between those two populations. Note that power is the probability of finding a difference that does exist, as opposed to the likelihood of declaring a difference that does not exist (which is known as a [[Type I error]], or "false positive").
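The power of such a two-sample comparison can also be estimated by simulation. The following sketch (illustrative, not from the original article; a known common standard deviation is assumed for simplicity) repeatedly draws samples under the alternative and counts how often a two-sided two-sample ''z''-test rejects:

```python
import random
from statistics import NormalDist, mean

def simulated_power(true_diff, sigma=1.0, n=50, alpha=0.05, reps=4000, seed=1):
    """Monte Carlo estimate of the power of a two-sided two-sample z-test
    (known sigma) to detect a true difference in means of `true_diff`."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    se = sigma * (2 / n) ** 0.5          # standard error of the mean difference
    rejections = 0
    for _ in range(reps):
        men = [rng.gauss(0.0, sigma) for _ in range(n)]
        women = [rng.gauss(true_diff, sigma) for _ in range(n)]
        z = (mean(women) - mean(men)) / se
        rejections += abs(z) > z_crit
    return rejections / reps

print(simulated_power(0.5))   # roughly 0.70 for n = 50 per group
```

The estimate rises toward 1 as the true difference or the per-group sample size grows, matching the discussion above.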
 
==Factors influencing power==
 
Statistical power may depend on a number of factors.  Some of these factors may be particular to a specific testing situation, but at a minimum, power nearly always depends on the following three factors:
 
* the [[statistical significance]] criterion used in the test
* the magnitude of the effect of interest in the population
* the [[sample size]] used to detect the effect
 
A '''significance criterion''' is a statement of how unlikely a positive result must be, if the null hypothesis of no effect is true, for the null hypothesis to be rejected. The most commonly used criteria are probabilities of 0.05 (5%, 1 in 20), 0.01 (1%, 1 in 100), and 0.001 (0.1%, 1 in 1000). If the criterion is 0.05, the probability of the data implying an effect at least as large as the observed effect when the null hypothesis is true must be less than 0.05, for the null hypothesis of no effect to be rejected. One easy way to increase the power of a test is to carry out a less conservative test by using a larger significance criterion, for example 0.10 instead of 0.05. This increases the chance of rejecting the null hypothesis (i.e. obtaining a statistically significant result) when the null hypothesis is false; that is, it reduces the risk of a [[Type I and type II errors|Type II error]] (false negative regarding whether an effect exists). But it also increases the risk of obtaining a statistically significant result (i.e. rejecting the null hypothesis) when the null hypothesis is in fact true; that is, it increases the risk of a [[Type I and type II errors|Type I error]] (false positive).
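The effect of the significance criterion on power can be made concrete with a small calculation for a one-sided one-sample ''z''-test (an illustrative sketch, not from the original article; the effect size and sample size are arbitrary):

```python
from statistics import NormalDist

def z_test_power(effect, sigma, n, alpha):
    """Power of a one-sided one-sample z-test at significance level alpha."""
    z = NormalDist()
    threshold = z.inv_cdf(1 - alpha)       # rejection threshold in z units
    shift = effect * n ** 0.5 / sigma      # standardized mean shift
    return 1 - z.cdf(threshold - shift)

# Power grows as the criterion is relaxed from 0.01 to 0.10.
for alpha in (0.01, 0.05, 0.10):
    print(alpha, round(z_test_power(effect=0.3, sigma=1.0, n=50, alpha=alpha), 3))
```

The same trade-off runs in reverse: the extra power at α = 0.10 is bought with a doubled Type I error risk.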
 
The '''magnitude of the effect''' of interest in the population can be quantified in terms of an [[effect size]], where there is greater power to detect larger effects.  An effect size can be a direct estimate of the quantity of interest, or it can be a standardized measure that also accounts for the variability in the population.  For example, in an analysis comparing outcomes in a treated and control population, the difference of outcome means <span style="text-decoration: overline">Y</span>&nbsp;&minus;&nbsp;<span style="text-decoration: overline">X</span> would be a direct measure of the effect size, whereas (<span style="text-decoration: overline">Y</span>&nbsp;&minus;&nbsp;<span style="text-decoration: overline">X</span>)/σ where σ is the common standard deviation of the outcomes in the treated and control groups, would be a standardized effect size.  If constructed appropriately, a standardized effect size, along with the sample size, will completely determine the power.  An unstandardized (direct) effect size will rarely be sufficient to determine the power, as it does not contain information about the variability in the measurements.
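A standardized effect size of the (<span style="text-decoration: overline">Y</span>&nbsp;&minus;&nbsp;<span style="text-decoration: overline">X</span>)/σ form can be estimated from data as, for example, Cohen's ''d'' with a pooled standard deviation. A minimal sketch (the data values are invented for illustration):

```python
from statistics import mean, stdev

def cohens_d(treated, control):
    """Standardized effect size: difference in means divided by the
    pooled standard deviation (the usual pooled-variance estimate)."""
    nx, ny = len(control), len(treated)
    pooled_var = ((nx - 1) * stdev(control) ** 2 +
                  (ny - 1) * stdev(treated) ** 2) / (nx + ny - 2)
    return (mean(treated) - mean(control)) / pooled_var ** 0.5

treated = [5.1, 4.8, 6.0, 5.5, 5.9, 5.2]   # hypothetical outcomes
control = [4.3, 4.9, 4.1, 4.6, 4.4, 4.8]
print(round(cohens_d(treated, control), 2))
```

Because it folds in the variability σ, this standardized measure (with the sample size) determines the power, as the paragraph above notes.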
 
The '''sample size''' determines the amount of sampling error inherent in a test result. Other things being equal, effects are harder to detect in smaller samples. Increasing sample size is often the easiest way to boost the statistical power of a test.
 
The precision with which the data are measured also influences statistical power. Consequently, power can often be improved by reducing the measurement error in the data.  A related concept is to improve the "reliability" of the measure being assessed (as in [[Reliability (psychometric)|psychometric reliability]]).
 
The [[design of experiments|design]] of an experiment or observational study often influences the power.  For example, in a two-sample testing situation with a given total sample size ''n'', it is optimal to have equal numbers of observations from the two populations being compared (as long as the variances in the two populations are the same).  In regression analysis and [[Analysis of Variance]], there is an extensive theory, and practical strategies, for improving the power based on optimally setting the values of the independent variables in the model.
 
==Interpretation==
 
Although there are no formal standards for power (sometimes referred to as π), most researchers assess the power of their tests using π=0.80 as a standard for adequacy. This convention implies a four-to-one trade-off between β-risk and α-risk (β is the probability of a Type II error and α the probability of a Type I error; 0.2 and 0.05 are the conventional values for β and α). However, there are times when this 4-to-1 weighting is inappropriate. In medicine, for example, tests are often designed to minimize the number of false negatives (Type II errors) produced, even though this inevitably raises the risk of obtaining a false positive (a Type I error). The rationale is that it is better to tell a healthy patient "we may have found something; let's test further" than to tell a diseased patient "all is well".<ref>{{Cite book
| author = Ellis, Paul D.
| title = The Essential Guide to Effect Sizes: An Introduction to Statistical Power, Meta-Analysis and the Interpretation of Research Results
| publisher = Cambridge University Press
| location = United Kingdom
| year = 2010
}}</ref>
 
Power analysis is appropriate when the concern is with the correct rejection, or not, of a null hypothesis. In many contexts, the issue is less about determining whether a difference exists and more about obtaining a refined [[estimation theory|estimate]] of the population effect size. For example, if we were expecting a population [[Pearson product-moment correlation coefficient|correlation]] between intelligence and job performance of around .50, a sample size of 20 gives approximately 80% power (alpha = .05, two-tailed) to reject the null hypothesis of zero correlation. However, in doing this study we are probably more interested in knowing whether the correlation is .30, .50 or .60. In that case we would need a much larger sample size in order to reduce the confidence interval of our estimate to a range that is acceptable for our purposes. Techniques similar to those employed in a traditional power analysis can be used to determine the sample size required for the width of a confidence interval to be less than a given value.
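A sketch of such a precision-driven calculation, for the simpler case of a confidence interval for a mean with known σ (illustrative and not from the original article; a correlation would instead be handled through the Fisher ''z'' transformation):

```python
from math import ceil
from statistics import NormalDist

def n_for_ci_halfwidth(sigma, halfwidth, confidence=0.95):
    """Smallest n so that the normal-theory confidence interval for a mean,
    mean ± z·σ/√n, has half-width at most `halfwidth` (σ assumed known)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil((z * sigma / halfwidth) ** 2)

print(n_for_ci_halfwidth(sigma=1.0, halfwidth=0.1))
```

Halving the target half-width roughly quadruples the required sample size, which is why precision goals often demand far more data than a bare reject/retain decision.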
 
Many statistical analyses involve the estimation of several unknown quantities. In simple cases, all but one of these quantities is a [[nuisance parameter]]. In this setting, the only relevant power pertains to the single quantity that will undergo formal statistical inference. In some settings, particularly if the goals are more "exploratory", there may be a number of quantities of interest in the analysis. For example, in a multiple [[regression analysis]] we may include several covariates of potential interest. In situations such as this where several hypotheses are under consideration, it is common that the powers associated with the different hypotheses differ. For instance, in multiple regression analysis, the power for detecting an effect of a given size is related to the variance of the covariate. Since different covariates will have different variances, their powers will differ as well.
 
Any statistical analysis involving [[multiple comparisons|multiple hypotheses]] is subject to inflation of the type I error rate if appropriate measures are not taken. Such measures typically involve applying a higher threshold of stringency to reject a hypothesis in order to compensate for the multiple comparisons being made (e.g. as in the [[Bonferroni method]]). In this situation, the power analysis should reflect the multiple testing approach to be used. Thus, for example, a given study may be well powered to detect a certain effect size when only one test is to be made, but the same effect size may have much lower power if several tests are to be performed.
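The cost of a Bonferroni-style correction can be quantified directly: testing each hypothesis at α/''m'' instead of α raises the rejection threshold and lowers the power for the same effect size. An illustrative sketch (the noncentrality value and the number of tests are arbitrary):

```python
from statistics import NormalDist

def one_sided_power(shift, alpha):
    """Power of a one-sided z-test with standardized noncentrality `shift`."""
    z = NormalDist()
    return 1 - z.cdf(z.inv_cdf(1 - alpha) - shift)

shift = 2.8   # effect·sqrt(n)/sigma for some planned study (hypothetical)
m = 10        # number of hypotheses tested
print(round(one_sided_power(shift, 0.05), 3))       # single test
print(round(one_sided_power(shift, 0.05 / m), 3))   # Bonferroni-adjusted test
```

A study that is adequately powered for one test can thus fall well short of the conventional 0.80 once the correction for ten tests is applied.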
 
It is also important to consider the [[statistical power]] of a hypothesis test when interpreting its results. A test's power is the probability of correctly rejecting the null hypothesis when it is false; a test's power is influenced by the choice of significance level for the test, the size of the effect being measured, and the amount of data available. A hypothesis test may fail to reject the null, for example, if a true difference exists between two populations being compared by a [[Student's t-test|t-test]] but the effect is small and the sample size is too small to distinguish the effect from random chance.<ref>{{Cite book|title=The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results | last=Ellis | first=Paul | year=2010 | isbn=978-0521142465 | publisher=Cambridge University Press | page=52}}</ref> Many [[clinical trial]]s, for instance, have low statistical power to detect differences in [[adverse effect]]s of treatments, since such effects are rare and the number of affected patients is very small.<ref>{{Cite doi|10.1016/j.jclinepi.2008.08.005}}</ref>
 
==''A priori'' vs. ''post hoc'' analysis==
Power analysis can either be done before (''a priori'' or prospective power analysis) or after (''post hoc'' or retrospective power analysis) data are collected. ''A priori'' power analysis is conducted prior to the research study, and is typically used in [[estimating sample sizes|estimating sufficient sample sizes]] to achieve adequate power. ''Post-hoc'' power analysis is conducted after a study has been completed, and uses the obtained sample size and effect size to determine what the power was in the study, assuming the effect size in the sample is equal to the effect size in the population. Whereas the utility of prospective power analysis in experimental design is universally accepted, the usefulness of retrospective techniques is controversial.<ref>Thomas, L. (1997) [http://eprints.st-andrews.ac.uk/archive/00000417/01/ThomasCB1997.pdf Retrospective power analysis]. ''[[Conservation Biology (journal)|Conservation Biology]]'' 11(1):276–280</ref> Falling for the temptation to use the statistical analysis of the collected data to estimate the power will result in uninformative and misleading values. In particular, it has been shown<ref name=HH1>Hoenig, J. M. and Heisey, D. M. (2001) "The Abuse of Power". ''[[The American Statistician (journal)|The American Statistician]]'' 55(1):19–24 [http://dx.doi.org/10.1198/000313001300339897]</ref> that post-hoc power in its simplest form is a one-to-one function of the p-value attained. This has been extended<ref name=HH1/> to show that all post-hoc power analyses suffer from what is called the "power approach paradox" (PAP), in which a study with a null result is thought to show ''more'' evidence that the null hypothesis is actually true when the p-value is smaller, since the apparent power to detect an actual effect would be higher. In fact, a smaller p-value is properly understood to make the null hypothesis ''less'' likely to be true.
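The one-to-one relationship between post-hoc power and the attained p-value can be verified directly for a one-sided ''z''-test (an illustrative sketch of the relationship Hoenig and Heisey describe, not their code):

```python
from statistics import NormalDist

def posthoc_power(p_value, alpha=0.05):
    """'Observed power' of a one-sided z-test computed purely from the
    attained p-value: a one-to-one function of p, hence uninformative
    beyond the p-value itself."""
    z = NormalDist()
    z_obs = z.inv_cdf(1 - p_value)   # observed test statistic implied by p
    return 1 - z.cdf(z.inv_cdf(1 - alpha) - z_obs)

print(round(posthoc_power(0.05), 2))   # p exactly at alpha gives power 0.5
for p in (0.20, 0.10, 0.05, 0.01):
    print(p, round(posthoc_power(p), 3))
```

Note how smaller p-values always yield higher "observed power", which is the mechanism behind the power approach paradox described above.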
 
==Application==
Funding agencies, ethics boards and research review panels frequently request that a researcher perform a power analysis, for example to determine the minimum number of animal test subjects needed for an experiment to be informative. In [[frequentist statistics]], an underpowered study is unlikely to allow one to choose between hypotheses at the desired significance level. In [[Bayesian statistics]], hypothesis testing of the type used in classical power analysis is not done. In the Bayesian framework, one updates his or her prior beliefs using the data obtained in a given study.  In principle, a study that would be deemed underpowered from the perspective of hypothesis testing could still be used in such an updating process. However, power remains a useful measure of how much a given experiment size can be expected to refine one's beliefs. A study with low power is unlikely to lead to a large change in beliefs.
 
==Example==
We study the effect of a treatment on some quantity, and compare research subjects by measuring the quantity before and after the treatment, analyzing the data using a paired [[t-test]].  Let <math>A_i</math> and <math>B_i</math> denote the pre-treatment and post-treatment measures on subject ''i'' respectively.  The possible effect of the treatment should be visible in the differences <math>D_i=B_i-A_i</math>, which  we assume to be independently distributed, all with the same expected value and variance.
 
We proceed by analyzing ''D'' as in a one-sided t-test.  The null hypothesis will be: <math>\mathrm{E}[D]=0</math> (no effect), where <math> \mathrm{E}[ ] </math> denotes the [[expected value]] of a quantity.  In this case, the alternative is <math>\mathrm{E}[D]>0</math> (positive effect). The [[test statistic]] is:
 
:<math>T=\sqrt{n}\frac{\bar{D}}{\hat{\sigma}_D},</math>

where ''n'' is the sample size, <math>\bar{D}</math> is the sample mean of the differences <math>D_i</math> and <math>\hat{\sigma}_D^2</math> is their sample variance. The null hypothesis is rejected at level 0.05 when
 
:<math>T > 1.64,</math>
 
with 1.64 the approximate decision threshold for a level 0.05 test based on a normal approximation to the test statistic: 1.64 is the value of the standard normal [[quantile function]] (the inverse of the [[cumulative distribution function]]) evaluated at 1−0.05=0.95.
 
Now suppose that the alternative hypothesis is true and <math>\mathrm{E}[D]=\tau</math>.  Then the power is
 
:<math>
\begin{align}
\pi(\tau) &= P\big(\sqrt{n}\,\bar{D}/\hat{\sigma}_D > 1.64 \,\big|\, \tau\big) \\
&= P\big(\sqrt{n}(\bar{D}-\tau+\tau)/\hat{\sigma}_D > 1.64 \,\big|\, \tau\big) \\
&= P\big(\sqrt{n}(\bar{D}-\tau)/\hat{\sigma}_D > 1.64-\sqrt{n}\,\tau/\hat{\sigma}_D \,\big|\, \tau\big)
\end{align}
</math>
 
Since <math>\sqrt{n}(\bar{D}-\tau)/\hat{\sigma}_D</math> approximately follows a standard [[normal distribution]] when the alternative hypothesis is true, the approximate power can be calculated as
 
:<math>\pi(\tau)\approx 1-\Phi(1.64-\tau\sqrt{n}/\hat{\sigma}_D).</math>
 
Note that according to this formula the power increases with the values of the parameter <math>\tau</math>. For a specific value of <math>\tau</math> a higher power may be obtained by increasing the sample size ''n''.
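The approximate power formula above can be evaluated numerically. A sketch (assuming <math>\hat{\sigma}_D = 1</math> and ''n'' = 9 purely for illustration):

```python
from statistics import NormalDist

def power(tau, n, sigma_hat=1.0, threshold=1.64):
    """Approximate power 1 - Phi(1.64 - tau*sqrt(n)/sigma_hat)
    from the paired t-test example, using the normal approximation."""
    return 1 - NormalDist().cdf(threshold - tau * n ** 0.5 / sigma_hat)

for tau in (0.2, 0.5, 1.0):
    print(tau, round(power(tau, n=9), 3))
```

As the formula predicts, the computed power increases with <math>\tau</math>, and for fixed <math>\tau</math> it increases with ''n''.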
 
It is, of course, not possible to guarantee sufficiently large power for all values of <math>\tau</math>, as <math>\tau</math> may be arbitrarily close to 0. In fact the minimum ([[infimum]]) value of the power is equal to the size of the test, in this example 0.05. However, there is little practical need to distinguish between <math>\tau=0</math> and small positive values. If it is desirable to have enough power, say at least 0.90, to detect values of <math>\tau >1</math>, the required sample size can be calculated approximately:
:<math>
\pi(1)\approx 1-\Phi(1.64-\sqrt{n}/\hat{\sigma}_D) >0{.}90\ ,
</math>
from which it follows that
:<math>\Phi(1.64-\sqrt{n}/\hat{\sigma}_D) <0.10.</math>
 
Hence
:<math>\sqrt{n}/\hat{\sigma}_D > 1.64- z_{0.10}=1.64+1.28\approx 3</math> (where <math>z_{0.10}=\Phi^{-1}(0.10)\approx -1.28</math> is the 0.10 quantile of the standard normal distribution)
or
 
:<math>n>9\,\hat{\sigma}_D^2.</math>
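The derived rule can be checked numerically (an illustrative sketch with an assumed value of <math>\hat{\sigma}_D</math>):

```python
from math import ceil
from statistics import NormalDist

sigma_hat = 2.0                   # assumed standard deviation of the D_i
n = ceil(9 * sigma_hat ** 2)      # the n > 9·sigma_hat² rule derived above

# Power at tau = 1 with this n should meet the 0.90 target.
achieved = 1 - NormalDist().cdf(1.64 - n ** 0.5 / sigma_hat)
print(n, round(achieved, 3))
```

With <math>\hat{\sigma}_D = 2</math> this gives ''n'' = 36 and a power at <math>\tau = 1</math> slightly above 0.90, consistent with the approximation.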
 
==Software for power and sample size calculations==
 
Numerous programs are available for performing [http://www.epibiostat.ucsf.edu/biostat/sampsize.html power and sample size calculations.] These include [http://www.statistical-solutions-software.com/nquery-advisor-nterim/ nQuery Advisor], [[PASS Sample Size Software|PASS]], [[PS Power and Sample Size|PS]], [[R (programming language)|R]], [http://homepage.stat.uiowa.edu/~rlenth/Power/ Russ Lenth's power and sample-size page], [http://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#statug_clientpss_sect002.htm SAS Power and sample size], and [[Stata]]. R and Stata are [[List of statistical packages|comprehensive statistical packages]] that include a large set of power and sample-size routines. The other programs listed above are specialized for these calculations and are easier to use for people who are not familiar with the more general packages. nQuery, PASS, SAS and Stata are commercial products; the other programs listed above are freely available.
 
==See also==
{{Portal|Statistics}}
{{Wikiversity}}
*[[Effect size]]
*[[Sample size]]
*[[Neyman–Pearson lemma]]
*[[Uniformly most powerful test]]
 
{{More footnotes|date=January 2010}}
 
==Notes==
{{Reflist}}
 
==References==
*{{cite book |authorlink=Jacob Cohen (statistician) |last=Cohen |first=J. |title=Statistical Power Analysis for the Behavioral Sciences |edition=2nd |year=1988 |isbn=0-8058-0283-5 }}
*{{cite book |last=Aberson |first=C. L. |title=Applied Power Analysis for the Behavioral Science |year=2010 |isbn=1-84872-835-2 }}
 
==External links==
* [http://www.ncss.com/software/pass/ PASS – Commercial sample size software from NCSS, LLC]
* [http://www.indiana.edu/~statmath/stat/all/power/power.pdf Hypothesis Testing and Statistical Power of a Test]
* [http://www.psycho.uni-duesseldorf.de/aap/projects/gpower/ G*Power – A free program for Statistical Power Analysis for Mac OS and MS-DOS]
* [http://myweb.polyu.edu.hk/~mspaul/calculator/calculator.html Effect Size Calculators] Calculate ''d'' and ''r'' from a variety of statistics.
* [http://cran.r-project.org/web/packages/pwr/index.html R/Splus package of power analysis functions along the lines of Cohen (1988)]
* [http://www.southampton.ac.uk/~cpd/anovas/datasets/index.htm  Examples of all ANOVA and ANCOVA models with up to three treatment factors, including tools to estimate design power]
* [http://www.danielsoper.com/statcalc/calc01.aspx Free A-priori Sample Size Calculator for Multiple Regression] from Daniel Soper's ''Free Statistics Calculators'' website. Computes the minimum required sample size for a study, given the alpha level, the number of predictors, the anticipated effect size, and the desired statistical power level.
* [http://www.stat.uiowa.edu/~rlenth/Power/index.html Power calculator from Russ Lenth, University of Iowa]
 
'''Further Explanations'''
*[http://effectsizefaq.com/ EffectSizeFAQ.com]
 
{{Statistics|collection}}
 
{{DEFAULTSORT:Statistical Power}}
[[Category:Hypothesis testing]]
[[Category:Statistical terminology]]

Revision as of 05:56, 24 January 2014

The power of a statistical test is the probability that the test will reject the null hypothesis when the alternative hypothesis is true (i.e. the probability of not committing a Type II error). That is,

statistical power=(we reject the null hypothesis|the null hypothesis is false)

The power is in general a function of the possible distributions, often determined by a parameter, under the alternative hypothesis. As the power increases, the chances of a Type II error occurring decrease. The probability of a Type II error occurring is referred to as the false negative rate (β) and the power is equal to 1−β. The power is also known as the sensitivity.

Power analysis can be used to calculate the minimum sample size required so that one can be reasonably likely to detect an effect of a given size. Power analysis can also be used to calculate the minimum effect size that is likely to be detected in a study using a given sample size. In addition, the concept of power is used to make comparisons between different statistical testing procedures: for example, between a parametric and a nonparametric test of the same hypothesis.

There is also the concept of a power function of a test, which is the probability of rejecting the null when the null is true. [1]

Background

Statistical tests use data from samples to assess, or make inferences about, a statistical population. In the concrete setting of a two-sample comparison, the goal is to assess whether the mean values of some attribute obtained for individuals in two sub-populations differ. For example, to test the null hypothesis that the mean scores of men and women on a test do not differ, samples of men and women are drawn, the test is administered to them, and the mean score of one group is compared to that of the other group using a statistical test such as the two-sample z-test. The power of the test is the probability that the test will find a statistically significant difference between men and women, as a function of the size of the true difference between those two populations. Note that power is the probability of finding a difference that does exist, as opposed to the likelihood of declaring a difference that does not exist (which is known as a Type I error, or "false positive").

Factors influencing power

Statistical power may depend on a number of factors. Some of these factors may be particular to a specific testing situation, but at a minimum, power nearly always depends on the following three factors:

A significance criterion is a statement of how unlikely a positive result must be, if the null hypothesis of no effect is true, for the null hypothesis to be rejected. The most commonly used criteria are probabilities of 0.05 (5%, 1 in 20), 0.01 (1%, 1 in 100), and 0.001 (0.1%, 1 in 1000). If the criterion is 0.05, the probability of the data implying an effect at least as large as the observed effect when the null hypothesis is true must be less than 0.05, for the null hypothesis of no effect to be rejected. One easy way to increase the power of a test is to carry out a less conservative test by using a larger significance criterion, for example 0.10 instead of 0.05. This increases the chance of rejecting the null hypothesis (i.e. obtaining a statistically significant result) when the null hypothesis is false, that is, reduces the risk of a Type II error (false negative regarding whether an effect exists). But it also increases the risk of obtaining a statistically significant result (i.e. rejecting the null hypothesis) when the null hypothesis is not false; that is, it increases the risk of a Type I error (false positive).

The magnitude of the effect of interest in the population can be quantified in terms of an effect size, where there is greater power to detect larger effects. An effect size can be a direct estimate of the quantity of interest, or it can be a standardized measure that also accounts for the variability in the population. For example, in an analysis comparing outcomes in a treated and control population, the difference of outcome means Y − X would be a direct measure of the effect size, whereas (Y − X)/σ where σ is the common standard deviation of the outcomes in the treated and control groups, would be a standardized effect size. If constructed appropriately, a standardized effect size, along with the sample size, will completely determine the power. An unstandardized (direct) effect size will rarely be sufficient to determine the power, as it does not contain information about the variability in the measurements.

The sample size determines the amount of sampling error inherent in a test result. Other things being equal, effects are harder to detect in smaller samples. Increasing sample size is often the easiest way to boost the statistical power of a test.

The precision with which the data are measured also influences statistical power. Consequently, power can often be improved by reducing the measurement error in the data. A related concept is to improve the "reliability" of the measure being assessed (as in psychometric reliability).

The design of an experiment or observational study often influences the power. For example, in a two-sample testing situation with a given total sample size n, it is optimal to have equal numbers of observations from the two populations being compared (as long as the variances in the two populations are the same). In regression analysis and Analysis of Variance, there is an extensive theory, and practical strategies, for improving the power based on optimally setting the values of the independent variables in the model.

Interpretation

Although there are no formal standards for power (sometimes referred to as π), most researchers assess the power of their tests using π=0.80 as a standard for adequacy. This convention implies a four-to-one trade off between β-risk and α-risk. (β is the probability of a Type II error; α is the probability of a Type I error, 0.2 and 0.05 are conventional values for β and α). However, there will be times when this 4-to-1 weighting is inappropriate. In medicine, for example, tests are often designed in such a way that no false negatives (Type II errors) will be produced. But this inevitably raises the risk of obtaining a false positive (a Type I error). The rationale is that it is better to tell a healthy patient "we may have found something - let's test further", than to tell a diseased patient "all is well".[2]

Power analysis is appropriate when the concern is with the correct rejection, or not, of a null hypothesis. In many contexts, the issue is less about determining if there is or is not a difference but rather with getting a more refined estimate of the population effect size. For example, if we were expecting a population correlation between intelligence and job performance of around .50, a sample size of 20 will give us approximately 80% power (alpha = .05, two-tail) to reject the null hypothesis of zero correlation. However, in doing this study we are probably more interested in knowing whether the correlation is .30 or .60 or .50. In this context we would need a much larger sample size in order to reduce the confidence interval of our estimate to a range that is acceptable for our purposes. Techniques similar to those employed in a traditional power analysis can be used to determine the sample size required for the width of a confidence interval to be less than a given value.

Many statistical analyses involve the estimation of several unknown quantities. In simple cases, all but one of these quantities is a nuisance parameter. In this setting, the only relevant power pertains to the single quantity that will undergo formal statistical inference. In some settings, particularly if the goals are more "exploratory", there may be a number of quantities of interest in the analysis. For example, in a multiple regression analysis we may include several covariates of potential interest. In situations such as this where several hypotheses are under consideration, it is common that the powers associated with the different hypotheses differ. For instance, in multiple regression analysis, the power for detecting an effect of a given size is related to the variance of the covariate. Since different covariates will have different variances, their powers will differ as well.

Any statistical analysis involving multiple hypotheses is subject to inflation of the type I error rate if appropriate measures are not taken. Such measures typically involve applying a higher threshold of stringency to reject a hypothesis in order to compensate for the multiple comparisons being made (e.g. as in the Bonferroni method). In this situation, the power analysis should reflect the multiple testing approach to be used. Thus, for example, a given study may be well powered to detect a certain effect size when only one test is to be made, but the same effect size may have much lower power if several tests are to be performed.

It is also important to consider the statistical power of a hypothesis test when interpreting its results. A test's power is the probability of correctly rejecting the null hypothesis when it is false; a test's power is influenced by the choice of significance level for the test, the size of the effect being measured, and the amount of data available. A hypothesis test may fail to reject the null, for example, if a true difference exists between two populations being compared by a t-test but the effect is small and the sample size is too small to distinguish the effect from random chance.[3] Many clinical trials, for instance, have low statistical power to detect differences in adverse effects of treatments, since such effects are rare and the number of affected patients is very small.[4]

A priori vs. post hoc analysis

Power analysis can either be done before (a priori or prospective power analysis) or after (post hoc or retrospective power analysis) data are collected. A priori power analysis is conducted prior to the research study, and is typically used in estimating sufficient sample sizes to achieve adequate power. Post-hoc power analysis is conducted after a study has been completed, and uses the obtained sample size and effect size to determine what the power was in the study, assuming the effect size in the sample is equal to the effect size in the population. Whereas the utility of prospective power analysis in experimental design is universally accepted, the usefulness of retrospective techniques is controversial.[5] Falling for the temptation to use the statistical analysis of the collected data to estimate the power will result in uninformative and misleading values. In particular, it has been shown [6] that post-hoc power in its simplest form is a one-to-one function of the p-value attained. This has been extended[6] to show that all post-hoc power analyses suffer from what is called the "power approach paradox" (PAP), in which a study with a null result is thought to show MORE evidence that the null hypothesis is actually true when the p-value is smaller, since the apparent power to detect an actual effect would be higher. In fact, a smaller p-value is properly understood to make the null hypothesis LESS likely to be true.

Application

Funding agencies, ethics boards and research review panels frequently request that a researcher perform a power analysis, for example to determine the minimum number of animal test subjects needed for an experiment to be informative. In frequentist statistics, an underpowered study is unlikely to allow one to choose between hypotheses at the desired significance level. In Bayesian statistics, hypothesis testing of the type used in classical power analysis is not done. In the Bayesian framework, one updates his or her prior beliefs using the data obtained in a given study. In principle, a study that would be deemed underpowered from the perspective of hypothesis testing could still be used in such an updating process. However, power remains a useful measure of how much a given experiment size can be expected to refine one's beliefs. A study with low power is unlikely to lead to a large change in beliefs.

Example

We study the effect of a treatment on some quantity, and compare research subjects by measuring the quantity before and after the treatment, analyzing the data using a paired t-test. Let A_i and B_i denote the pre-treatment and post-treatment measures on subject i respectively. The possible effect of the treatment should be visible in the differences D_i = B_i − A_i, which we assume to be independently distributed, all with the same expected value and variance.

We proceed by analyzing D as in a one-sided t-test. The null hypothesis will be: E[D] = 0 (no effect), where E[·] denotes the expected value of a quantity. In this case, the alternative is E[D] > 0 (positive effect). The test statistic is:

T = √n · D̄ / σ̂_D,

where n is the sample size, D̄ is the average of the D_i and σ̂_D² is the sample variance. The null hypothesis is rejected at level 0.05 when

T > 1.64,

with 1.64 the approximate decision threshold for a level-0.05 test based on a normal approximation to the test statistic: 1.64 is obtained from the standard normal quantile function (the inverse of the cumulative distribution function) evaluated at 1 − 0.05 = 0.95.
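For illustration, the statistic T and the decision rule can be computed directly (a Python sketch using only the standard library; the difference values below are hypothetical data invented for the example):

```python
from statistics import mean, stdev, NormalDist

def paired_t_statistic(diffs):
    # T = sqrt(n) * mean(D) / sample standard deviation of D
    n = len(diffs)
    return n ** 0.5 * mean(diffs) / stdev(diffs)

# Hypothetical differences D_i = B_i - A_i for eight subjects:
D = [0.5, 1.1, -0.2, 0.8, 1.4, 0.3, 0.9, 0.6]

T = paired_t_statistic(D)
threshold = NormalDist().inv_cdf(0.95)   # ~1.64, the level-0.05 cutoff
print(round(T, 2), "reject H0" if T > threshold else "do not reject H0")
```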

Now suppose that the alternative hypothesis is true and E[D]=τ. Then the power is

π(τ) = P(√n D̄/σ̂_D > 1.64 | τ)
     = P(√n (D̄ − τ + τ)/σ̂_D > 1.64 | τ)
     = P(√n (D̄ − τ)/σ̂_D > 1.64 − √n τ/σ̂_D | τ)

Since √n (D̄ − τ)/σ̂_D approximately follows a standard normal distribution when the alternative hypothesis is true, the approximate power can be calculated as

π(τ) ≈ 1 − Φ(1.64 − τ√n/σ̂_D).

Note that according to this formula the power increases with the values of the parameter τ. For a specific value of τ a higher power may be obtained by increasing the sample size n.
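The approximate power formula above is easy to evaluate numerically. The sketch below (Python, standard library only) tabulates it for a few values of τ and n, with σ̂_D set to 1 purely for illustration:

```python
from statistics import NormalDist

N = NormalDist()

def power(tau, n, sigma=1.0, alpha=0.05):
    # pi(tau) ~= 1 - Phi(z_{1-alpha} - tau * sqrt(n) / sigma),
    # the normal-approximation power of the one-sided level-alpha test.
    z_crit = N.inv_cdf(1 - alpha)   # ~1.64 for alpha = 0.05
    return 1 - N.cdf(z_crit - tau * n ** 0.5 / sigma)

# Power increases with the effect tau (n fixed at 25) ...
print([round(power(t, n=25), 3) for t in (0.0, 0.2, 0.4, 0.6)])
# ... and with the sample size n (tau fixed at 0.4):
print([round(power(0.4, n=k), 3) for k in (10, 25, 50, 100)])
```

Note that power(0.0, n) recovers the size of the test, 0.05, for any n.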

It is, of course, not possible to guarantee a sufficiently large power for all values of τ, as τ may be arbitrarily close to 0. In fact the minimum (infimum) value of the power is equal to the size of the test, in this example 0.05. However, there is no need to distinguish between τ = 0 and small positive values. If it is desirable to have enough power, say at least 0.90, to detect values of τ > 1, the required sample size can be calculated approximately:

π(1) ≈ 1 − Φ(1.64 − √n/σ̂_D) > 0.90,

from which it follows that

Φ(1.64 − √n/σ̂_D) < 0.10.

Hence

√n/σ̂_D > 1.64 − z₀.₁₀ = 1.64 + 1.28 ≈ 2.92,

where z₀.₁₀ ≈ −1.28 is the 0.10 quantile of the standard normal distribution,

or

n > 9 σ̂_D².
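This sample-size calculation can be written as a small function (Python, standard library only; a sketch that uses exact normal quantiles rather than the rounded values 1.64 and 1.28):

```python
from math import ceil
from statistics import NormalDist

def required_n(tau, sigma, alpha=0.05, target_power=0.90):
    # Smallest n with approximate power >= target_power, solved from
    # sqrt(n) * tau / sigma >= z_{1-alpha} + z_{target_power}.
    z = NormalDist().inv_cdf
    return ceil(((z(1 - alpha) + z(target_power)) * sigma / tau) ** 2)

# For tau = 1 and sigma_hat = 1: (1.64 + 1.28)^2 ~ 8.6, so 9 subjects.
print(required_n(tau=1.0, sigma=1.0))  # -> 9
```

Halving the detectable effect roughly quadruples the required sample size, since n scales with 1/τ².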

Software for power and sample size calculations

Numerous programs are available for performing power and sample size calculations. These include nQuery Advisor, PASS, PS, R, Russ Lenth's power and sample-size page, SAS Power and Sample Size, and Stata. A large set of power and sample size routines is included in R and Stata, which are comprehensive statistical packages. The other programs listed above are specialized for these calculations and are easier to use for people who are not familiar with the more general packages. nQuery, PASS, SAS and Stata are commercial products; the other programs listed above are freely available.

See also



Notes


References


External links

Further Explanations


  1. http://www.encyclopediaofmath.org/index.php/Power_function_of_a_test
  4. Template:Cite doi
  5. Thomas, L. (1997). Retrospective power analysis. Conservation Biology 11(1):276–280.
  6. Hoenig, J. M. and Heisey, D. M. (2001). The Abuse of Power. The American Statistician 55(1):19–24.