'''Multicollinearity''' is a statistical phenomenon in which two or more predictor [[Variable (mathematics)|variables]] in a [[multiple regression]] model are highly [[Correlation and dependence|correlated]], meaning that one can be linearly predicted from the others with a non-trivial degree of accuracy. In this situation the [[Regression coefficient|coefficient estimates]] of the multiple regression may change erratically in response to small changes in the model or the data. Multicollinearity does not reduce the predictive power or reliability of the model as a whole, at least within the sample data themselves; it only affects calculations regarding [[Dependent and independent variables#Use in statistics|individual predictors]]. That is, a multiple regression model with correlated predictors can indicate how well the entire bundle of predictors predicts the [[Dependent variable#Use in statistics|outcome variable]], but it may not give valid results about any individual predictor, or about which predictors are redundant with respect to others.
 
A high degree of multicollinearity can also prevent computer software packages from performing the [[matrix inversion]] required for computing the regression coefficients, or it may make the results of that inversion inaccurate.
 
Note that in statements of the assumptions underlying regression analyses such as [[ordinary least squares]], the phrase "no multicollinearity" is sometimes used to mean the absence of '''perfect multicollinearity''', which is an exact (non-stochastic) linear relation among the [[Explanatory variable#Independent variable|regressors]].
 
==Definition==
 
'''Collinearity''' is a linear association between ''two'' [[explanatory variable]]s. Two variables are perfectly collinear if there is an exact linear relationship between the two. For example, <math> X_{1} </math> and <math> X_{2} </math> are perfectly collinear if there exist parameters <math>\lambda_0</math> and <math>\lambda_1</math> such that, for all observations ''i'', we have
 
: <math> X_{2i} = \lambda_0 + \lambda_1 X_{1i}. </math>
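As a minimal numerical sketch (the values of <math>\lambda_0</math> and <math>\lambda_1</math> and the data below are arbitrary, chosen only for illustration), the following Python code constructs two perfectly collinear variables and confirms that their sample correlation equals 1:

<syntaxhighlight lang="python">
import numpy as np

# Arbitrary illustrative values: X2 = lambda0 + lambda1 * X1 for every observation
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
lambda0, lambda1 = 3.0, 2.0
x2 = lambda0 + lambda1 * x1          # exact linear relationship

# Perfectly collinear variables have sample correlation +1 (or -1 if lambda1 < 0)
print(np.corrcoef(x1, x2)[0, 1])     # 1.0, up to floating-point rounding
</syntaxhighlight>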
 
'''Multicollinearity''' refers to a situation in which two or more explanatory variables in a [[multiple regression]] model are highly linearly related. We have perfect multicollinearity if, as in the equation above, the correlation between two independent variables is equal to 1 or −1. In practice, we rarely face perfect multicollinearity in a data set. More commonly, the issue of multicollinearity arises when there is an approximate linear relationship among two or more independent variables.
 
Mathematically, a set of variables is perfectly multicollinear if there exist one or more exact linear relationships among some of the variables. For example, we may have
 
: <math>
\lambda_0 + \lambda_1 X_{1i} + \lambda_2 X_{2i} + \cdots + \lambda_k X_{ki} = 0
</math>
 
holding for all observations ''i'',  where <math> \lambda_j </math> are constants and <math> X_{ji} </math> is the ''i''<sup>th</sup> observation on the ''j''<sup>th</sup> explanatory variable. We can explore one issue caused by multicollinearity by examining the process of attempting to obtain estimates for the parameters of the multiple regression equation
 
: <math> Y_{i} = \beta _0 + \beta _1 X_{1i} + \cdots + \beta _k X_{ki} + \varepsilon _{i}. </math>
 
The [[ordinary least squares]] estimates involve inverting the matrix
 
: <math> X^{T} X </math>
 
where
 
: <math> X = \begin{bmatrix}
      1 & X_{11} & \cdots & X_{k1} \\
      \vdots & \vdots & & \vdots \\
      1 & X_{1N} & \cdots & X_{kN}
\end{bmatrix}. </math>
 
If there is an exact linear relationship (perfect multicollinearity) among the independent variables, the rank of X (and therefore of X<sup>T</sup>X) is less than k+1, and the matrix X<sup>T</sup>X will not be invertible.
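This rank deficiency can be seen in a small Python sketch with made-up data in which one regressor is an exact linear function of another; the rank of X, and hence of X<sup>T</sup>X, falls below k+1:

<syntaxhighlight lang="python">
import numpy as np

# Made-up data in which x2 is an exact linear function of x1
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = 3.0 + 2.0 * x1
X = np.column_stack([np.ones_like(x1), x1, x2])   # columns [1, X1, X2], so k + 1 = 3

print(np.linalg.matrix_rank(X))          # 2, less than k + 1 = 3
print(np.linalg.matrix_rank(X.T @ X))    # also 2: X'X is singular and has no inverse
</syntaxhighlight>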
 
In most applications, perfect multicollinearity is unlikely. An analyst is more likely to face a high degree of multicollinearity. For example, suppose that instead of the above equation holding, we have that equation in modified form with an error term <math>v_i</math>:
 
:<math>
\lambda_0 + \lambda_1 X_{1i} + \lambda_2 X_{2i} + \cdots + \lambda_k X_{ki} + v_i = 0.
</math>
 
In this case, there is no exact linear relationship among the variables, but the <math> X_j </math> variables are nearly perfectly multicollinear if the variance of <math>v_i</math> is small for some set of values for the <math>\lambda</math>'s. In this case, the matrix X<sup>T</sup>X has an inverse, but is ill-conditioned so that a given computer algorithm may or may not be able to compute an approximate inverse, and if it does so the resulting computed inverse may be highly sensitive to slight variations in the data (due to magnified effects of rounding error) and so may be very inaccurate.
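The following Python sketch (with simulated data; the coefficients and noise levels are arbitrary choices) illustrates how the condition number of X<sup>T</sup>X grows as the variance of <math>v_i</math> shrinks, i.e. as the regressors approach perfect multicollinearity:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)

def condition_of_XtX(noise_sd):
    # x2 is a linear function of x1 plus noise; a smaller noise_sd means
    # a nearer-perfect linear relationship among the regressors
    x2 = 3.0 + 2.0 * x1 + rng.normal(scale=noise_sd, size=n)
    X = np.column_stack([np.ones(n), x1, x2])
    return np.linalg.cond(X.T @ X)

for sd in (1.0, 0.1, 0.001):
    print(sd, condition_of_XtX(sd))   # the condition number explodes as the noise shrinks
</syntaxhighlight>

A large condition number means that small perturbations of the data (or rounding errors) can produce large changes in the computed inverse and hence in the coefficient estimates.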
 
==Detection of multicollinearity==
 
Indicators that multicollinearity may be present in a model:
 
# Large changes in the estimated regression coefficients when a predictor variable is added or deleted
# Insignificant regression coefficients for the affected variables in the multiple regression, but a rejection of the joint hypothesis that those coefficients are all zero (using an F-test)
# If a multivariable regression finds an insignificant coefficient of a particular explanator, yet a [[simple linear regression]] of the explained variable on this explanatory variable shows its coefficient to be significantly different from zero, this situation indicates multicollinearity in the multivariable regression.
# Some authors have suggested formal detection rules based on the tolerance or the [[variance inflation factor]] (VIF) for multicollinearity:<br><math>\mathrm{tolerance} = 1-R_{j}^2,\quad \mathrm{VIF} = \frac{1}{\mathrm{tolerance}},</math><br>where <math>R_{j}^2</math> is the coefficient of determination of a regression of explanator ''j'' on all the other explanators.  A tolerance of less than 0.20 or 0.10 and/or a VIF of 5 or 10 and above indicates a multicollinearity problem.<ref>{{cite doi|10.1007/s11135-006-9018-6}}</ref>
# '''Condition Number Test''': The standard measure of [[Condition number|ill-conditioning]] in a matrix is the condition index. It indicates that the inversion of the matrix is numerically unstable with finite-precision numbers (standard computer floats and doubles), and hence signals the potential sensitivity of the computed inverse to small changes in the original matrix. The condition number is computed by taking the square root of the ratio of the maximum to the minimum eigenvalue. If the condition number is above 30, the regression is said to have significant multicollinearity. (A computational sketch of this indicator and of the VIF above follows this list.)
# '''Farrar–Glauber Test''':<ref>{{cite jstor|1937887}}</ref> If the variables are found to be orthogonal, there is no multicollinearity; if the variables are not orthogonal, then multicollinearity is present. C. Robert Wichers<ref>{{cite jstor|1923926}}</ref> has argued that the Farrar–Glauber partial correlation test is ineffective in that a given partial correlation may be compatible with different multicollinearity patterns. The Farrar–Glauber test has also been criticized by other researchers.<ref>{{cite jstor|1923925}}</ref><ref>O'Hagan, John W. & McCabe, Brendan (1975). "Tests for the Severity of Multicollinearity in Regression Analysis: A Comment". ''The Review of Economics and Statistics'', 57(3), pp. 368–370.</ref>
# Construction of a correlation matrix among the explanatory variables will yield indications as to the likelihood that any given pair of right-hand-side variables is creating multicollinearity problems.  Correlation values (off-diagonal elements) of at least 0.4 are sometimes interpreted as indicating a multicollinearity problem.
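As a rough illustration of indicators 4 and 5, the following Python sketch computes the VIF of each explanator and the condition number described above for simulated data in which one regressor is nearly a linear function of another. The data-generating values are arbitrary, and the helper functions are written out for illustration rather than taken from any particular statistical package:

<syntaxhighlight lang="python">
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j: regress X[:, j] on the other
    columns (plus a constant) and return 1 / (1 - R^2)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

def condition_number(X):
    """Square root of the ratio of the largest to the smallest eigenvalue of X'X."""
    eigvals = np.linalg.eigvalsh(X.T @ X)
    return np.sqrt(eigvals.max() / eigvals.min())

# Simulated data: x2 is nearly a linear function of x1
rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 2.0 * x1 + rng.normal(scale=0.05, size=200)
X = np.column_stack([x1, x2])

print([vif(X, j) for j in range(X.shape[1])])                  # both VIFs far above 10
print(condition_number(np.column_stack([np.ones(200), X])))    # typically well above 30
</syntaxhighlight>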
 
==Consequences of multicollinearity==
 
As mentioned above, one consequence of a high degree of multicollinearity is that, even if the matrix X<sup>T</sup>X is invertible, a computer algorithm may be unsuccessful in obtaining an approximate inverse, and if it does obtain one it may be numerically inaccurate.  But even when X<sup>T</sup>X can be inverted accurately, the following consequences arise:
 
In the presence of multicollinearity, the estimate of one variable's impact on the dependent variable <math>Y</math> while controlling for the others tends to be less precise than if predictors were uncorrelated with one another. The usual interpretation of a regression coefficient is that it provides an estimate of the effect of a one unit change in an independent variable, <math>X_{1}</math>, holding the other variables constant. If <math>X_{1}</math> is highly correlated with another independent variable, <math>X_{2}</math>, in the given data set, then we have a set of observations for which <math>X_{1}</math> and <math>X_{2}</math> have a particular linear stochastic relationship. We do not have a set of observations for which all changes in <math>X_{1}</math> are independent of changes in <math>X_{2}</math>, so we have an imprecise estimate of the effect of independent changes in <math>X_{1}</math>.
 
In some sense, the collinear variables contain the same information about the dependent variable. If nominally "different" measures actually quantify the same phenomenon then they are redundant. Alternatively, if the variables are accorded different names and perhaps employ different numeric measurement scales but are highly correlated with each other, then they suffer from redundancy.
 
One of the features of multicollinearity is that the standard errors of the affected coefficients tend to be large. In that case, the test of the hypothesis that the coefficient is equal to zero may lead to a failure to reject a false null hypothesis of no effect of the explanator.
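A small Monte Carlo sketch can make this imprecision concrete. In the simulation below the true coefficients, sample size, and correlation levels are arbitrary choices; the OLS estimate of <math>\beta_1</math> remains unbiased, but its sampling spread (and hence its standard error) grows sharply as the correlation between the two predictors rises:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n, reps = 100, 2000

def spread_of_beta1(rho):
    """Empirical standard deviation of the OLS estimate of beta1 when
    corr(X1, X2) = rho.  Arbitrary true model: y = 1 + 2*x1 + 2*x2 + e."""
    estimates = []
    for _ in range(reps):
        x1 = rng.normal(size=n)
        x2 = rho * x1 + np.sqrt(1 - rho ** 2) * rng.normal(size=n)
        y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x1, x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        estimates.append(beta[1])
    return np.std(estimates)

for rho in (0.0, 0.9, 0.99):
    print(rho, spread_of_beta1(rho))   # the spread of the beta1 estimates grows with rho
</syntaxhighlight>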
 
A principal danger of such data redundancy is that of [[overfitting]] in [[regression analysis]] models. The best regression models are those in which the predictor variables each correlate highly with the dependent (outcome) variable but correlate at most only minimally with each other. Such a model is often called "low noise" and will be statistically robust (that is, it will predict reliably across numerous samples of variable sets drawn from the same statistical population).
 
So long as the underlying specification is correct, multicollinearity does not actually bias results; it just produces large [[Standard error (statistics)|standard errors]] in the related independent variables.  If, however, there are other problems (such as omitted variables) which introduce bias, multicollinearity can multiply (by orders of magnitude) the effects of that bias.{{citation needed|date=February 2013}} More importantly, the usual use of regression is to take coefficients from the model and then apply them to other data.  If the pattern of multicollinearity in the new data differs from that in the data that was fitted, such extrapolation may introduce large errors in the predictions.<ref>{{cite book |last=Chatterjee |first=S. |last2=Hadi |first2=A. S. |last3=Price |first3=B. |year=2000 |title=Regression Analysis by Example |edition=Third |publisher=John Wiley and Sons |isbn=0-471-31946-5 }}</ref>
 
==Remedies for multicollinearity==
 
# Make sure you have not fallen into the [[Dummy variable (statistics)|dummy variable trap]]; including a dummy variable for every category (e.g., summer, autumn, winter, and spring) and including a constant term in the regression together guarantee perfect multicollinearity.
# Try seeing what happens if you use independent subsets of your data for estimation and apply those estimates to the whole data set.  Theoretically you should obtain somewhat higher variance from the smaller datasets used for estimation, but the expectation of the coefficient values should be the same.  Naturally, the observed coefficient values will vary, but look at how much they vary.
# Leave the model as is, despite multicollinearity. The presence of multicollinearity doesn't affect the efficacy of extrapolating the fitted model to new data provided that the predictor variables follow the same pattern of multicollinearity in the new data as in the data on which the regression model is based.<ref>{{cite book| last = Gujarati| first = Damodar| authorlink = Damodar N. Gujarati| title = Basic Econometrics| edition = 4th| publisher = McGraw−Hill| pages = 363–363| chapter = Multicollinearity: what happens if the regressors are correlated?}}</ref>
# Drop one of the variables. An explanatory variable may be dropped to produce a model with significant coefficients. However, you lose information (because you've dropped a variable). Omission of a relevant variable results in biased coefficient estimates for the remaining explanatory variables.
# Obtain more data, if possible. This is the preferred solution. More data can produce more precise parameter estimates (with lower standard errors), as seen from the formula in [[variance inflation factor]] for the variance of the estimate of a regression coefficient in terms of the sample size and the degree of multicollinearity.
# Mean-center the predictor variables. Generating polynomial terms (i.e., for <math>x_1</math>, <math>x_1^2</math>, <math>x_1^3</math>, etc.) can cause some multicollinearity if the variable in question has a limited range (e.g., [2,4]).  Mean-centering eliminates this special kind of multicollinearity.  However, mean-centering does not change the correlations among distinct predictors, so in general it has no effect on multicollinearity. It can be useful in overcoming problems arising from rounding and other computational steps if a carefully designed computer program is not used.
# Standardize your independent variables. This may help reduce a false flagging of a condition index above 30.
# It has also been suggested that the [[Shapley value]], a game-theory tool, can be used to account for the effects of multicollinearity: it assigns a value to each predictor by assessing all possible combinations of importance.<ref>Lipovetsky and Conklin (2001). "Analysis of Regression in Game Theory Approach". ''Applied Stochastic Models and Data Analysis'', 17, 319–330.</ref>
# [[Ridge regression]] or [[principal component regression]] can be used; a numerical sketch of the ridge approach appears after this list.
# If the correlated explanators are different lagged values of the same underlying explanator, then a [[distributed lag]] technique can be used, imposing a general structure on the relative values of the coefficients to be estimated.
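As an illustration of the ridge approach mentioned in item 9, the sketch below fits both ordinary least squares and a simple closed-form ridge regression to simulated, nearly collinear data. The penalty <code>alpha</code>, the true coefficients, and the noise level are arbitrary illustrative choices rather than recommended settings:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)               # nearly collinear with x1
y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.normal(size=n)     # arbitrary true model
X = np.column_stack([np.ones(n), x1, x2])

# Ordinary least squares: the individual coefficients on x1 and x2 are
# highly sensitive to the particular sample
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge regression: solve (X'X + alpha*P) b = X'y, where P penalises every
# coefficient except the intercept; the penalty stabilises the estimates
alpha = 1.0
P = np.eye(X.shape[1])
P[0, 0] = 0.0                                          # leave the intercept unpenalised
beta_ridge = np.linalg.solve(X.T @ X + alpha * P, X.T @ y)

print("OLS:  ", beta_ols)
print("Ridge:", beta_ridge)
</syntaxhighlight>

The ridge estimates are biased but are typically far less sensitive to small perturbations of nearly collinear data than the OLS estimates.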
 
Note that one technique that does not work in offsetting the effects of multicollinearity is [[orthogonality|orthogonalizing]] the explanatory variables (linearly transforming them so that the transformed variables are uncorrelated with each other): By the [[Frisch–Waugh–Lovell theorem]], using projection matrices to make the explanatory variables orthogonal to each other will lead to the same results as running the regression with all non-orthogonal explanators included.
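This Frisch–Waugh–Lovell result can be checked numerically. In the sketch below (simulated data with arbitrary coefficients), the coefficient on <math>X_2</math> from the full regression equals the coefficient obtained after first projecting <math>X_2</math> off the other regressors:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)                      # correlated regressors
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)      # arbitrary true model

X1 = np.column_stack([np.ones(n), x1])                  # constant and x1
X = np.column_stack([X1, x2])                           # full design matrix

# Full regression: the coefficient on x2 is the last element
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Orthogonalised" regression: project x2 off the other regressors first
M1 = np.eye(n) - X1 @ np.linalg.inv(X1.T @ X1) @ X1.T   # annihilator of X1
x2_orth = M1 @ x2
beta_fwl = (x2_orth @ y) / (x2_orth @ x2_orth)

print(beta_full[-1], beta_fwl)                          # equal up to floating-point rounding
</syntaxhighlight>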
 
==Examples of contexts in which multicollinearity arises==
 
===Survival analysis===
 
Multicollinearity may represent a serious issue in [[survival analysis]]. The problem is that time-varying covariates may change their value over the timeline of the study. A special procedure is recommended to assess the impact of multicollinearity on the results. See Van den Poel & Larivière (2004) for a detailed discussion.<ref>{{cite doi|10.1016/S0377-2217(03)00069-9}}</ref>
 
===Interest rates for different terms to maturity===
 
In various situations it might be hypothesized that multiple interest rates of various terms to maturity all influence some economic decision, such as the amount of money or some other financial asset to hold, or the amount of fixed investment spending to engage in. In this case, including these various interest rates will in general create a substantial multicollinearity problem because interest rates tend to move together. If in fact each of the interest rates has its own separate effect on the dependent variable, it can be extremely difficult to separate out their effects.
 
==Extension==
 
The concept of ''lateral collinearity'' expands the traditional view of multicollinearity to encompass collinearity between explanatory and criterion (i.e., explained) variables, in the sense that they may be measuring almost the same thing.<ref>Kock, N., & Lynn, G.S. (2012). [http://www.scriptwarp.com/warppls/pubs/Kock_Lynn_2012.pdf Lateral collinearity and misleading results in variance-based SEM: An illustration and recommendations.] Journal of the Association for Information Systems, 13(7), 546–580.</ref>
 
==References==
{{Reflist}}
 
==Further reading==
* {{cite book |last=Asteriou |first=Dimitros |last2=Hall |first2=Stephen G. |title=Applied Econometrics |location= |publisher=Palgrave MacMillan |year=2011 |edition=Second |isbn=978-0-230-27182-1 |chapter=Multicollinearity |pages=95–108 }}
* {{cite book |last=Pedace |first=Roberto |title=Econometrics for Dummies |location=Hoboken, NJ |publisher=Wiley |year=2013 |isbn=978-1-118-53384-0 |chapter=Multicollinearity |pages=175–190 }}
 
==External links==
* [http://jeff560.tripod.com/m.html Earliest Uses: The entry on Multicollinearity has some historical information.]
 
{{Use dmy dates|date=November 2010}}
 
[[Category:Regression analysis]]
