{{No footnotes|date=July 2010}}
In [[statistics]], the '''variance inflation factor (VIF)''' quantifies the severity of [[multicollinearity]] in an [[ordinary least squares]] [[Linear regression|regression]] analysis. It provides an index that measures how much the [[variance]] (the square of the estimate's [[standard deviation]]) of an estimated regression coefficient is increased because of collinearity.

==Definition==

Consider the following [[linear model]] with ''k'' independent variables:

: ''Y'' = ''β''<sub>0</sub> + ''β''<sub>1</sub> ''X''<sub>1</sub> + ''β''<sub>2</sub> ''X''<sub>2</sub> + ... + ''β''<sub>''k''</sub> ''X''<sub>''k''</sub> + ''ε''.
The [[Standard error (statistics)|standard error]] of the estimate of ''β''<sub>''j''</sub> is the square root of the (''j''+1, ''j''+1) element of ''s''<sup>2</sup>(''X''<sup>′</sup>''X'')<sup>−1</sup>, where ''s'' is the root mean squared error (RMSE); note that RMSE<sup>2</sup> is an unbiased estimator of <math>\sigma^2</math>, the true variance of the error term. Here ''X'' is the regression [[design matrix]]: a matrix such that ''X''<sub>''i'', ''j''+1</sub> is the value of the ''j''<sup>th</sup> independent variable for the ''i''<sup>th</sup> case or observation, and such that ''X''<sub>''i'', 1</sub> equals 1 for all ''i''. It turns out that the square of this standard error, the estimated variance of the estimate of ''β''<sub>''j''</sub>, can be equivalently expressed as{{Citation needed|date=July 2010}}

:<math>
\widehat{\operatorname{var}}(\hat{\beta}_j) = \frac{s^2}{(n-1)\widehat{\operatorname{var}}(X_j)}\cdot \frac{1}{1-R_j^2},
</math>

where ''R''<sub>''j''</sub><sup>2</sup> is the [[coefficient of determination|multiple ''R''<sup>2</sup>]] for the regression of ''X''<sub>''j''</sub> on the other covariates (a regression that does not involve the response variable ''Y''). This identity separates the influences of several distinct factors on the variance of the coefficient estimate:

* ''s''<sup>2</sup>: greater scatter in the data around the regression surface leads to proportionately more variance in the coefficient estimates
* ''n'': greater sample size results in proportionately less variance in the coefficient estimates
* <math>\widehat{\operatorname{var}}(X_j)</math>: greater variability in a particular covariate leads to proportionately less variance in the corresponding coefficient estimate

The remaining term, 1 / (1 − ''R''<sub>''j''</sub><sup>2</sup>), is the VIF. It reflects all other factors that influence the uncertainty in the coefficient estimates. The VIF equals 1 when the vector ''X''<sub>''j''</sub> is [[orthogonal]] to each column of the design matrix for the regression of ''X''<sub>''j''</sub> on the other covariates, and is greater than 1 otherwise. Finally, note that the VIF is invariant to the scaling of the variables: we could scale each ''X''<sub>''j''</sub> by a constant ''c''<sub>''j''</sub> without changing the VIF.
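
The identity above can be checked numerically. The following is a minimal sketch in Python; the data, seed, and variable names are invented for illustration and do not come from any source.

<syntaxhighlight lang="python">
import numpy as np

# Minimal check of the identity
#   var_hat(beta_j) = s^2 / ((n - 1) * var_hat(X_j)) * 1 / (1 - R_j^2)
# on synthetic data (all names and values here are illustrative).
rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, k))
X[:, 1] += 0.8 * X[:, 0]                  # induce some collinearity
y = 1.0 + X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=n)

D = np.column_stack([np.ones(n), X])      # design matrix with intercept column
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
resid = y - D @ beta
s2 = resid @ resid / (n - k - 1)          # RMSE^2, unbiased estimate of sigma^2

j = 0                                     # check the coefficient of X_1
var_direct = s2 * np.linalg.inv(D.T @ D)[j + 1, j + 1]

# R_j^2: regress X_j on the other covariates (with an intercept)
others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
gamma, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
e = X[:, j] - others @ gamma
R2_j = 1.0 - (e @ e) / np.sum((X[:, j] - X[:, j].mean()) ** 2)

var_identity = s2 / ((n - 1) * X[:, j].var(ddof=1)) / (1.0 - R2_j)
print(np.isclose(var_direct, var_identity))   # True: the two expressions agree
</syntaxhighlight>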

==Calculation and analysis==

The VIF can be calculated and analyzed in three steps:

=== Step one ===
Calculate ''k'' different VIFs, one for each ''X''<sub>''i''</sub>, by first running an ordinary least squares regression that has ''X''<sub>''i''</sub> as a function of all the other explanatory variables in the first equation.<br /> If ''i'' = 1, for example, the equation would be

:<math>X_1 = c_0 + \alpha_2 X_2 + \alpha_3 X_3 + \cdots + \alpha_k X_k + e,</math>

where ''c''<sub>0</sub> is a constant and ''e'' is the [[errors and residuals in statistics|error term]].
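
As an illustration, step one for ''i'' = 1 can be carried out directly. This is a sketch on synthetic data; the variable names and coefficients are invented for the example.

<syntaxhighlight lang="python">
import numpy as np

# Synthetic predictors; X1 is deliberately correlated with X2 and X3.
rng = np.random.default_rng(1)
n = 500
X2, X3 = rng.normal(size=n), rng.normal(size=n)
X1 = 0.7 * X2 + 0.3 * X3 + rng.normal(size=n)

# Step one: regress X1 on a constant and the other explanatory variables.
A = np.column_stack([np.ones(n), X2, X3])
alpha, *_ = np.linalg.lstsq(A, X1, rcond=None)
residuals = X1 - A @ alpha
</syntaxhighlight>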

=== Step two ===
Then, calculate the VIF for <math>\hat\beta_i</math> with the following formula:

: <math>\mathrm{VIF}_i = \frac{1}{1-R^2_i},</math>

where ''R''<sup>2</sup><sub>''i''</sub> is the [[coefficient of determination]] of the regression equation in step one, with <math>X_i</math> on the left hand side and all other predictor variables (all the other ''X'' variables) on the right hand side.
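
Continuing the sketch from step one, the ''R''<sup>2</sup> of that auxiliary regression yields the VIF:

<syntaxhighlight lang="python">
# Step two: coefficient of determination of the auxiliary regression,
# then VIF = 1 / (1 - R^2).
ss_res = residuals @ residuals
ss_tot = np.sum((X1 - X1.mean()) ** 2)
R2_1 = 1.0 - ss_res / ss_tot
VIF_1 = 1.0 / (1.0 - R2_1)
print(R2_1, VIF_1)
</syntaxhighlight>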

=== Step three ===
Analyze the magnitude of [[multicollinearity]] by considering the size of the <math>\operatorname{VIF}(\hat \beta_i)</math>. A common rule of thumb is that if <math>\operatorname{VIF}(\hat \beta_i) > 5</math> then multicollinearity is high. A cutoff of 10 has also been proposed (see the Kutner et al. book referenced below).

Some software instead reports the tolerance, which is simply the reciprocal of the VIF. The choice of which measure to use is a matter of personal preference.
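
In practice, the loop over all predictors is usually delegated to software. If the statsmodels package is available, its <code>variance_inflation_factor</code> helper computes the same quantity from a design matrix. The sketch below reuses the synthetic data from step one; the cutoff of 5 follows the rule of thumb above.

<syntaxhighlight lang="python">
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Same synthetic data as in the step-one sketch (illustrative only).
rng = np.random.default_rng(1)
n = 500
X2, X3 = rng.normal(size=n), rng.normal(size=n)
X1 = 0.7 * X2 + 0.3 * X3 + rng.normal(size=n)

exog = np.column_stack([np.ones(n), X1, X2, X3])   # constant column at index 0
for idx, name in enumerate(["X1", "X2", "X3"], start=1):
    vif = variance_inflation_factor(exog, idx)     # regresses column idx on the rest
    tolerance = 1.0 / vif                          # tolerance is the reciprocal
    flag = "high" if vif > 5 else "ok"             # 5 (or 10) as a common cutoff
    print(f"{name}: VIF={vif:.2f}  tolerance={tolerance:.3f}  {flag}")
</syntaxhighlight>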

== Interpretation ==
The square root of the variance inflation factor indicates how much larger the standard error is, compared with what it would be if that variable were uncorrelated with the other predictor variables in the model.

'''Example'''<br />
If the variance inflation factor of a predictor variable is 5.27 (√5.27 ≈ 2.3), this means that the standard error for the coefficient of that predictor variable is 2.3 times as large as it would be if that predictor variable were uncorrelated with the other predictor variables.
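
The arithmetic in the example is straightforward to reproduce:

<syntaxhighlight lang="python">
import math

vif = 5.27
print(math.sqrt(vif))  # about 2.296: the standard error is roughly 2.3 times larger
</syntaxhighlight>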

==References==
*Longnecker, M.T. & Ott, R.L.: ''A First Course in Statistical Methods'', page 615. Thomson Brooks/Cole, 2004.
*Studenmund, A.H.: ''Using Econometrics: A Practical Guide'', 5th edition, pages 258–259. Pearson International Edition, 2006.
*Hair, J.F., Anderson, R., Tatham, R.L. & Black, W.C.: ''Multivariate Data Analysis''. Prentice Hall: Upper Saddle River, NJ, 2006.
*Marquardt, D.W.: "Generalized Inverses, Ridge Regression, Biased Linear Estimation, and Nonlinear Estimation", ''Technometrics'' 12(3), 1970, pages 591–612.
*Allison, P.D.: ''Multiple Regression: A Primer'', page 142. Pine Forge Press: Thousand Oaks, CA, 1999.
*Kutner, M.H., Nachtsheim, C.J. & Neter, J.: ''Applied Linear Regression Models'', 4th edition. McGraw-Hill Irwin, 2004.

[[Category:Regression diagnostics]]
[[Category:Statistical ratios]]

[[de:Multikollinearität#Varianzinflationsfaktor]]