{{Regression bar}}
[[Image:Linear regression.svg|thumb|right|400px|Example of [[simple linear regression]], which has one independent variable]]

In [[statistics]], '''linear regression''' is an approach to modeling the relationship between a scalar [[dependent variable]] ''y'' and one or more [[explanatory variable]]s denoted ''X''. The case of one explanatory variable is called ''[[simple linear regression]]''. For more than one explanatory variable, it is called ''multiple linear regression''. (This term should be distinguished from ''[[multivariate linear regression]]'', where multiple correlated dependent variables are predicted,{{Citation needed|date=April 2012}}<!-- this appears to be a conflation of PLS or related projective methods--> rather than a single scalar variable.)

In linear regression, [[data]] are modeled using [[linear predictor function]]s, and unknown model [[parameters]] are [[estimation theory|estimated]] from the data. Such models are called ''[[linear model]]s''. Most commonly, linear regression refers to a model in which the [[conditional expectation|conditional mean]] of ''y'' given the value of ''X'' is an [[affine transformation|affine function]] of ''X''. Less commonly, linear regression could refer to a model in which the [[median]], or some other [[quantile]], of the conditional distribution of ''y'' given ''X'' is expressed as a linear function of ''X''. Like all forms of [[regression analysis]], ''linear regression'' focuses on the [[conditional probability distribution]] of ''y'' given ''X'', rather than on the [[joint probability distribution]] of ''y'' and ''X'', which is the domain of [[multivariate analysis]].

Linear regression was the first type of [[regression analysis]] to be studied rigorously, and to be used extensively in practical applications. This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters, and because the statistical properties of the resulting estimators are easier to determine.

Linear regression has many practical uses. Most applications fall into one of the following two broad categories:

* If the goal is prediction, forecasting, or error reduction, linear regression can be used to fit a predictive model to an observed data set of ''y'' and ''X'' values. After developing such a model, if an additional value of ''X'' is then given without its accompanying value of ''y'', the fitted model can be used to make a prediction of the value of ''y''.

* Given a variable ''y'' and a number of variables ''X''<sub>1</sub>, ..., ''X''<sub>''p''</sub> that may be related to ''y'', linear regression analysis can be applied to quantify the strength of the relationship between ''y'' and the ''X''<sub>''j''</sub>, to assess which ''X''<sub>''j''</sub> may have no relationship with ''y'' at all, and to identify which subsets of the ''X''<sub>''j''</sub> contain redundant information about ''y''.

Linear regression models are often fitted using the [[least squares]] approach, but they may also be fitted in other ways, such as by minimizing the "lack of fit" in some other [[norm (mathematics)|norm]] (as with [[least absolute deviations]] regression), or by minimizing a penalized version of the least squares [[loss function]] as in [[ridge regression]]. Conversely, the least squares approach can be used to fit models that are not linear models. Thus, although the terms "least squares" and "linear model" are closely linked, they are not synonymous.
| |
| | |
| ==Introduction to linear regression==
| |
| Given a [[data]] set <math>\{y_i,\, x_{i1}, \ldots, x_{ip}\}_{i=1}^n</math> of ''n'' [[statistical unit]]s, a linear regression model assumes that the relationship between the dependent variable ''y<sub>i</sub>'' and the ''p''-vector of regressors ''x<sub>i</sub>'' is [[linear function|linear]]. This relationship is modelled through a ''disturbance term'' or ''error variable'' ''ε<sub>i</sub>'' — an unobserved [[random variable]] that adds noise to the linear relationship between the dependent variable and regressors. Thus the model takes the form
| |
| : <math>
| |
| y_i = \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i
| |
| = \mathbf{x}^{\rm T}_i\boldsymbol\beta + \varepsilon_i,
| |
| \qquad i = 1, \ldots, n,
| |
| </math>
| |
| where <sup>T</sup> denotes the [[transpose]], so that '''''x'''<sub>i</sub>''<sup>T</sup>'''''β''''' is the [[inner product]] between [[coordinate vector|vectors]] '''''x'''<sub>i</sub>'' and '''''β'''''.
| |
| | |
| Often these ''n'' equations are stacked together and written in vector form as
| |
| : <math>
| |
| \mathbf{y} = \mathbf{X}\boldsymbol\beta + \boldsymbol\varepsilon, \,
| |
| </math>
| |
where
| : <math>
| |
| \mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \quad
| |
| \mathbf{X} = \begin{pmatrix} \mathbf{x}^{\rm T}_1 \\ \mathbf{x}^{\rm T}_2 \\ \vdots \\ \mathbf{x}^{\rm T}_n \end{pmatrix}
| |
| = \begin{pmatrix} x_{11} & \cdots & x_{1p} \\
| |
| x_{21} & \cdots & x_{2p} \\
| |
| \vdots & \ddots & \vdots \\
| |
| x_{n1} & \cdots & x_{np}
| |
| \end{pmatrix}, \quad
| |
| \boldsymbol\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{pmatrix}, \quad
| |
| \boldsymbol\varepsilon = \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix}.
| |
| </math>
| |
| Some remarks on terminology and general use:
| |
| * <math>y_i\,</math> is called the ''regressand'', ''endogenous variable'', ''response variable'', ''measured variable'', or ''dependent variable'' <!-- "predicted variable" was also included in this list in previous edit; however i think that more common use for "predicted variable" is for <math>\hat{y}=\mathbf{x}'\hat{\boldsymbol\beta}</math> --> (see [[dependent and independent variables]].) The decision as to which variable in a data set is modeled as the dependent variable and which are modeled as the independent variables may be based on a presumption that the value of one of the variables is caused by, or directly influenced by the other variables. Alternatively, there may be an operational reason to model one of the variables in terms of the others, in which case there need be no presumption of causality.
| |
| * <math>x_{i1},\, x_{i2},\, \ldots,\, x_{ip}\,</math> are called ''regressors'', ''exogenous variables'', ''explanatory variables'', ''covariates'', ''input variables'', ''predictor variables'', or ''independent variables'' (see [[dependent and independent variables]], but not to be confused with [[independent random variables]]). The matrix <math>\mathbf{X}</math> is sometimes called the [[design matrix]].
| |
| ** Usually a constant is included as one of the regressors. For example we can take ''x''<sub>''i''1</sub> = 1 for ''i'' = 1, ..., ''n''. The corresponding element of '''''β''''' is called the ''intercept''. Many statistical inference procedures for linear models require an intercept to be present, so it is often included even if theoretical considerations suggest that its value should be zero.
| |
| ** Sometimes one of the regressors can be a non-linear function of another regressor or of the data, as in [[polynomial regression]] and [[segmented regression]]. The model remains linear as long as it is linear in the parameter vector '''''β'''''.
| |
| ** The regressors ''x''<sub>''ij''</sub> may be viewed either as [[random variables]], which we simply observe, or they can be considered as predetermined fixed values which we can choose. Both interpretations may be appropriate in different cases, and they generally lead to the same estimation procedures; however different approaches to asymptotic analysis are used in these two situations.
| |
| * <math>\boldsymbol\beta\,</math> is a ''p''-dimensional ''parameter vector''. Its elements are also called ''effects'', or ''regression coefficients''. Statistical [[estimation theory|estimation]] and [[statistical inference|inference]] in linear regression focuses on '''''β'''''.
| |
| * <math>\varepsilon_i\,</math> is called the ''error term'', ''disturbance term'', or ''noise''. This variable captures all other factors which influence the dependent variable ''y''<sub>''i''</sub> other than the regressors '''''x'''''<sub>''i''</sub>. The relationship between the error term and the regressors, for example whether they are [[correlation|correlated]], is a crucial step in formulating a linear regression model, as it will determine the method to use for estimation.
| |
| | |
'''Example'''. Consider a situation where a small ball is being tossed up in the air and then we measure its heights of ascent ''h<sub>i</sub>'' at various moments in time ''t<sub>i</sub>''. Physics tells us that, ignoring drag, the relationship can be modelled as
| |
| : <math>
| |
h_i = \beta_1 t_i + \beta_2 t_i^2 + \varepsilon_i,
| </math>
| |
| where ''β''<sub>1</sub> determines the initial velocity of the ball, ''β''<sub>2</sub> is proportional to the [[standard gravity]], and ''ε''<sub>''i''</sub> is due to measurement errors. Linear regression can be used to estimate the values of ''β''<sub>1</sub> and ''β''<sub>2</sub> from the measured data. This model is non-linear in the time variable, but it is linear in the parameters ''β''<sub>1</sub> and ''β''<sub>2</sub>; if we take regressors '''''x'''''<sub>''i''</sub> = (''x''<sub>''i''1</sub>, ''x''<sub>''i''2</sub>) = (''t''<sub>''i''</sub>, ''t''<sub>''i''</sub><sup>2</sup>), the model takes on the standard form
| |
| : <math>
| |
| h_i = \mathbf{x}^{\rm T}_i\boldsymbol\beta + \varepsilon_i.
| |
| </math>
| |
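As a rough illustration of how such a model can be fitted in practice, the following sketch (using the NumPy library, with simulated measurement times, arbitrary assumed parameter values and an arbitrary noise level) builds the regressors (''t''<sub>''i''</sub>, ''t''<sub>''i''</sub><sup>2</sup>) and estimates ''β''<sub>1</sub> and ''β''<sub>2</sub> by least squares; it is only a sketch under those assumptions, not a prescribed implementation.

<syntaxhighlight lang="python">
import numpy as np

# Simulated data for illustration only: the measurement times, the "true"
# parameter values and the noise level are all arbitrary assumptions.
rng = np.random.default_rng(0)
t = np.linspace(0.1, 2.0, 20)                  # measurement times t_i
beta_true = np.array([10.0, -4.9])             # beta_1 (initial velocity), beta_2 (about -g/2)
h = beta_true[0] * t + beta_true[1] * t**2 + rng.normal(0.0, 0.1, t.size)

# Design matrix with regressors x_i = (t_i, t_i^2); the model is linear in beta.
X = np.column_stack([t, t**2])

# Least squares estimate of (beta_1, beta_2).
beta_hat, *_ = np.linalg.lstsq(X, h, rcond=None)
print(beta_hat)                                # should be close to (10.0, -4.9)
</syntaxhighlight>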
| | |
| ===Assumptions===
| |
| Standard linear regression models with standard estimation techniques make a number of assumptions about the predictor variables, the response variables and their relationship. Numerous extensions have been developed that allow each of these assumptions to be relaxed (i.e. reduced to a weaker form), and in some cases eliminated entirely. Some methods are general enough that they can relax multiple assumptions at once, and in other cases this can be achieved by combining different extensions. Generally these extensions make the estimation procedure more complex and time-consuming, and may also require more data in order to get an accurate model.
| |
| | |
| The following are the major assumptions made by standard linear regression models with standard estimation techniques (e.g. [[ordinary least squares]]):
| |
*'''Weak exogeneity'''. This essentially means that the predictor variables ''x'' can be treated as fixed values, rather than [[random variable]]s. This means, for example, that the predictor variables are assumed to be error-free, that is, not contaminated with measurement errors. Although this assumption is not realistic in many settings, dropping it leads to significantly more difficult [[errors-in-variables model]]s.
| |
*'''Linearity'''. This means that the mean of the response variable is a [[linear combination]] of the parameters (regression coefficients) and the predictor variables. Note that this assumption is much less restrictive than it may at first seem. Because the predictor variables are treated as fixed values (see above), linearity is really only a restriction on the parameters. The predictor variables themselves can be arbitrarily transformed, and in fact multiple copies of the same underlying predictor variable can be added, each one transformed differently. This trick is used, for example, in [[polynomial regression]], which uses linear regression to fit the response variable as an arbitrary [[polynomial]] function (up to a given degree) of a predictor variable. This makes linear regression an extremely powerful inference method. In fact, models such as polynomial regression are often "too powerful", in that they tend to [[overfit]] the data. As a result, some kind of [[regularization (mathematics)|regularization]] must typically be used to prevent unreasonable solutions coming out of the estimation process. Common examples are [[ridge regression]] and [[lasso regression]]. [[Bayesian linear regression]] can also be used, which by its nature is more or less immune to the problem of overfitting. (In fact, [[ridge regression]] and [[lasso regression]] can both be viewed as special cases of Bayesian linear regression, with particular types of [[prior distribution]]s placed on the regression coefficients.)
| |
| | |
*'''Constant variance''' (aka '''[[homoscedasticity]]'''). This means that different response variables have the same [[variance]] in their errors, regardless of the values of the predictor variables. In practice this assumption is invalid (i.e. the errors are [[heteroscedasticity|heteroscedastic]]) if the response variables can vary over a wide scale. In order to check for heterogeneous error variance, or for a pattern of residuals that violates the model assumption of homoscedasticity (error equally variable around the 'best-fitting line' for all points of ''x''), it is prudent to look for a "fanning effect" between residual error and predicted values. That is, there will be a systematic change in the absolute or squared residuals when plotted against the predicted values, and the error will not be evenly distributed across the regression line. Heteroscedasticity results in the averaging of distinguishable variances around the points into a single variance that inaccurately represents all the variances of the line. In effect, residuals appear clustered and spread apart on plots against the predicted values for larger and smaller values of points along the linear regression line, and the mean squared error for the model will be wrong. Typically, for example, a response variable whose mean is large will have a greater variance than one whose mean is small. For example, a given person whose income is predicted to be $100,000 may easily have an actual income of $80,000 or $120,000 (a [[standard deviation]] of around $20,000), while another person with a predicted income of $10,000 is unlikely to have the same $20,000 standard deviation, since that would imply their actual income could vary anywhere between -$10,000 and $30,000. (In fact, as this shows, in many cases – often the same cases where the assumption of normally distributed errors fails – the variance or standard deviation should be predicted to be proportional to the mean, rather than constant.) Simple linear regression estimation methods give less precise parameter estimates and misleading inferential quantities such as standard errors when substantial heteroscedasticity is present. However, various estimation techniques (e.g. [[weighted least squares]] and [[heteroscedasticity-consistent standard errors]]) can handle heteroscedasticity in a quite general way. [[Bayesian linear regression]] techniques can also be used when the variance is assumed to be a function of the mean. It is also possible in some cases to fix the problem by applying a transformation to the response variable (e.g. fit the [[logarithm]] of the response variable using a linear regression model, which implies that the response variable has a [[log-normal distribution]] rather than a [[normal distribution]]).
| |
| | |
| *'''Independence''' of errors. This assumes that the errors of the response variables are uncorrelated with each other. (Actual [[Independence (probability theory)|statistical independence]] is a stronger condition than mere lack of correlation and is often not needed, although it can be exploited if it is known to hold.) Some methods (e.g. [[generalized least squares]]) are capable of handling correlated errors, although they typically require significantly more data unless some sort of [[regularization (mathematics)|regularization]] is used to bias the model towards assuming uncorrelated errors. [[Bayesian linear regression]] is a general way of handling this issue.
| |
*'''Lack of multicollinearity''' in the predictors. For standard [[least squares]] estimation methods, the design matrix ''X'' must have full [[column rank]] ''p''; otherwise, we have a condition known as [[multicollinearity]] in the predictor variables. This can be triggered by having two or more perfectly correlated predictor variables (e.g. if the same predictor variable is mistakenly given twice, either without transforming one of the copies or by transforming one of the copies linearly). It can also happen if there is too little data available compared to the number of parameters to be estimated (e.g. fewer data points than regression coefficients). In the case of multicollinearity, the parameter vector ''β'' will be [[non-identifiable]] — it has no unique solution. At most we will be able to identify some of the parameters, i.e. narrow down their values to some linear subspace of '''R'''<sup>''p''</sup>. See [[partial least squares regression]]. Methods for fitting linear models with multicollinearity have been developed;<ref name="tibs_lasso" /><ref name="efron_lars" /><ref name="hawkins_pcr" /><ref name="joliffe_pcr" /> some require additional assumptions such as "effect sparsity" — that a large fraction of the effects are exactly zero. Note that the more computationally expensive iterated algorithms for parameter estimation, such as those used in [[generalized linear model]]s, do not suffer from this problem — and in fact it is quite common, when handling [[categorical data|categorically-valued]] predictors, to introduce a separate [[indicator variable]] predictor for each possible category, which inevitably introduces multicollinearity. A simple numerical check for rank deficiency of the design matrix is sketched below this list.
| |
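As noted in the last item above, rank deficiency of the design matrix can be checked numerically; the following minimal sketch uses NumPy on an arbitrary simulated matrix whose third column is an exact linear combination of the first two (all values are illustrative assumptions).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=20)
x2 = rng.normal(size=20)
# The third column is exactly 2*x1 - 3*x2, so X does not have full column rank p = 3.
X = np.column_stack([x1, x2, 2.0 * x1 - 3.0 * x2])

print(np.linalg.matrix_rank(X), X.shape[1])    # rank 2 < p = 3: perfect multicollinearity
print(np.linalg.cond(X))                       # a very large condition number also signals near-collinearity
</syntaxhighlight>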
| | |
| Beyond these assumptions, several other statistical properties of the data strongly influence the performance of different estimation methods:
| |
| * The statistical relationship between the error terms and the regressors plays an important role in determining whether an estimation procedure has desirable sampling properties such as being unbiased and consistent.
| |
| * The arrangement, or [[probability distribution]] of the predictor variables ''x'' has a major influence on the precision of estimates of ''β''. [[Sampling (statistics)|Sampling]] and [[design of experiments]] are highly-developed subfields of statistics that provide guidance for collecting data in such a way to achieve a precise estimate of ''β''.
| |
| | |
| ===Interpretation===
| |
| [[Image:Anscombe's quartet 3.svg|right|425px|thumb|The sets in the [[Anscombe's quartet]] have the same linear regression line but are themselves very different.]]
| |
| A fitted linear regression model can be used to identify the relationship between a single predictor variable ''x''<sub>''j''</sub> and the response variable ''y'' when all the other predictor variables in the model are "held fixed". Specifically, the interpretation of ''β''<sub>''j''</sub> is the [[expected value|expected]] change in ''y'' for a one-unit change in ''x''<sub>''j''</sub> when the other covariates are held fixed—that is, the expected value of the [[partial derivative]] of ''y'' with respect to ''x''<sub>''j''</sub>. This is sometimes called the ''unique effect'' of ''x''<sub>''j''</sub> on ''y''. In contrast, the ''marginal effect'' of ''x''<sub>''j''</sub> on ''y'' can be assessed using a [[Pearson correlation|correlation coefficient]] or [[simple linear regression]] model relating ''x''<sub>''j''</sub> to ''y''; this effect is the [[total derivative]] of ''y'' with respect to ''x''<sub>''j''</sub>.
| |
| | |
| Care must be taken when interpreting regression results, as some of the regressors may not allow for marginal changes (such as dummy variables, or the intercept term), while others cannot be held fixed (recall the example from the introduction: it would be impossible to "hold ''t<sub>i</sub>'' fixed" and at the same time change the value of ''t<sub>i</sub>''<sup>2</sup>).
| |
| | |
| It is possible that the unique effect can be nearly zero even when the marginal effect is large. This may imply that some other covariate captures all the information in ''x''<sub>''j''</sub>, so that once that variable is in the model, there is no contribution of ''x''<sub>''j''</sub> to the variation in ''y''. Conversely, the unique effect of ''x''<sub>''j''</sub> can be large while its marginal effect is nearly zero. This would happen if the other covariates explained a great deal of the variation of ''y'', but they mainly explain variation in a way that is complementary to what is captured by ''x''<sub>''j''</sub>. In this case, including the other variables in the model reduces the part of the variability of ''y'' that is unrelated to ''x''<sub>''j''</sub>, thereby strengthening the apparent relationship with ''x''<sub>''j''</sub>.
| |
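The distinction between unique and marginal effects can be seen in a small simulation; in the sketch below (NumPy, with an arbitrary data-generating process chosen purely for illustration) the response depends directly only on ''x''<sub>1</sub>, yet a simple regression of ''y'' on the correlated variable ''x''<sub>2</sub> shows a large marginal effect while the multiple regression shows a unique effect of ''x''<sub>2</sub> near zero.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)       # x2 strongly correlated with x1
y = 2.0 * x1 + rng.normal(size=n)              # y depends directly only on x1

ones = np.ones(n)

# Marginal effect of x2: simple regression of y on x2 (with intercept).
b_marginal, *_ = np.linalg.lstsq(np.column_stack([ones, x2]), y, rcond=None)

# Unique effect of x2: multiple regression of y on both x1 and x2.
b_unique, *_ = np.linalg.lstsq(np.column_stack([ones, x1, x2]), y, rcond=None)

print(b_marginal[1])   # large: x2 inherits the effect of x1 through their correlation
print(b_unique[2])     # near zero: holding x1 fixed, x2 adds almost nothing
</syntaxhighlight>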
| | |
| The meaning of the expression "held fixed" may depend on how the values of the predictor variables arise. If the experimenter directly sets the values of the predictor variables according to a study design, the comparisons of interest may literally correspond to comparisons among units whose predictor variables have been "held fixed" by the experimenter. Alternatively, the expression "held fixed" can refer to a selection that takes place in the context of data analysis. In this case, we "hold a variable fixed" by restricting our attention to the subsets of the data that happen to have a common value for the given predictor variable. This is the only interpretation of "held fixed" that can be used in an observational study.
| |
| | |
| The notion of a "unique effect" is appealing when studying a complex system where multiple interrelated components influence the response variable. In some cases, it can literally be interpreted as the causal effect of an intervention that is linked to the value of a predictor variable. However, it has been argued that in many cases multiple regression analysis fails to clarify the relationships between the predictor variables and the response variable when the predictors are correlated with each other and are not assigned following a study design.<ref>{{cite book | title=Regression Analysis: A Constructive Critique | author=Berk, Richard A. | publisher=Sage | doi=10.1177/0734016807304871}}</ref>
| |
| | |
| ==Extensions==
| |
| Numerous extensions of linear regression have been developed, which allow some or all of the assumptions underlying the basic model to be relaxed.
| |
| | |
| ===Simple and multiple regression===
| |
| The very simplest case of a single [[scalar (mathematics)|scalar]] predictor variable ''x'' and a single scalar response variable ''y'' is known as ''simple linear regression''. The extension to multiple and/or [[Euclidean vector|vector]]-valued predictor variables (denoted with a capital ''X'') is known as ''multiple linear regression''. Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable ''y'' is still a scalar.
| |
| | |
| ===General linear models===
| |
| The [[general linear model]] considers the situation when the response variable ''Y'' is not a scalar but a vector. Conditional linearity of ''E''(''y''|''x'') = ''Bx'' is still assumed, with a matrix ''B'' replacing the vector ''β'' of the classical linear regression model. Multivariate analogues of OLS and GLS have been developed.
| |
| | |
| ===Heteroscedastic models===
| |
| Various models have been created that allow for [[heteroscedasticity]], i.e. the errors for different response variables may have different [[variance]]s. For example, [[weighted least squares]] is a method for estimating linear regression models when the response variables may have different error variances, possibly with correlated errors. (See also [[Linear least squares (mathematics)#Weighted linear least squares]], and [[generalized least squares]].) [[Heteroscedasticity-consistent standard errors]] is an improved method for use with uncorrelated but potentially heteroscedastic errors.
| |
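A minimal sketch of weighted least squares for the uncorrelated, heteroscedastic case follows (NumPy; the simulated data and the assumed variance pattern are purely illustrative). The weights are taken inversely proportional to the error variances, which are treated as known here.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(1.0, 10.0, n)
sigma = 0.5 * x                                # error standard deviation grows with x (assumed known)
y = 1.0 + 2.0 * x + rng.normal(0.0, sigma)

X = np.column_stack([np.ones(n), x])
w = 1.0 / sigma**2                             # weights inversely proportional to var(eps_i)

# Solve the weighted normal equations (X^T W X) beta = X^T W y.
XtW = X.T * w                                  # broadcasting applies weight w_i to observation i
beta_wls = np.linalg.solve(XtW @ X, XtW @ y)
print(beta_wls)                                # close to (1.0, 2.0)
</syntaxhighlight>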
| | |
| ===Generalized linear models===
| |
| [[Generalized linear model]]s (GLMs) are a framework for modeling a response variable ''y'' that is bounded or discrete. This is used, for example:
| |
*when modeling positive quantities (e.g. prices or populations) that vary over a large scale — which are better described using a [[skewed distribution]] such as the [[log-normal distribution]] or [[Poisson distribution]] (although GLMs are not used for log-normal data; instead the response variable is simply transformed using the logarithm function);
| |
| *when modeling [[categorical data]], such as the choice of a given candidate in an election (which is better described using a [[Bernoulli distribution]]/[[binomial distribution]] for binary choices, or a [[categorical distribution]]/[[multinomial distribution]] for multi-way choices), where there are a fixed number of choices that cannot be meaningfully ordered;
| |
| *when modeling [[ordinal data]], e.g. ratings on a scale from 0 to 5, where the different outcomes can be ordered but where the quantity itself may not have any absolute meaning (e.g. a rating of 4 may not be "twice as good" in any objective sense as a rating of 2, but simply indicates that it is better than 2 or 3 but not as good as 5).
| |
Generalized linear models allow for an arbitrary ''link function'' ''g'' that relates the [[mean]] of the response variable to the linear predictor, i.e. ''g''(''E''(''y'')) = ''β''′''x'', or equivalently ''E''(''y'') = ''g''<sup>−1</sup>(''β''′''x''). The link function is often related to the distribution of the response, and in particular it typically has the effect of transforming between the <math>(-\infty,\infty)</math> range of the linear predictor and the range of the response variable.
| |
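As a small illustration of the role of the link function, the sketch below uses the logit link of logistic regression with arbitrary assumed coefficients: the linear predictor ''β''′''x'' ranges over the whole real line, while the inverse link maps it into the (0, 1) range appropriate for the mean of a binary response.

<syntaxhighlight lang="python">
import numpy as np

beta = np.array([-1.0, 0.8])                   # assumed coefficients (intercept, slope), for illustration
x = np.column_stack([np.ones(5), np.linspace(-5.0, 5.0, 5)])

eta = x @ beta                                 # linear predictor, unbounded
mu = 1.0 / (1.0 + np.exp(-eta))                # inverse logit link: mean of a Bernoulli response

print(eta)                                     # values anywhere in (-inf, inf)
print(mu)                                      # values confined to (0, 1)
</syntaxhighlight>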
| | |
| Some common examples of GLMs are:
| |
| *[[Poisson regression]] for count data.
| |
| *[[Logistic regression]] and [[probit regression]] for binary data.
| |
| *[[Multinomial logistic regression]] and [[multinomial probit]] regression for categorical data.
| |
| *[[Ordered probit]] regression for ordinal data.
| |
| | |
| Single index models{{Clarify|date=March 2012}} allow some degree of nonlinearity in the relationship between ''x'' and ''y'', while preserving the central role of the linear predictor ''β''′''x'' as in the classical linear regression model. Under certain conditions, simply applying OLS to data from a single-index model will consistently estimate ''β'' up to a proportionality constant.<ref>{{cite journal | title=The Identification of a Particular Nonlinear Time Series System | last=Brillinger | first=David R. | journal=Biometrika | volume=64 | year=1977 | pages=509–515 | doi=10.1093/biomet/64.3.509 | issue=3 | jstor=2345326}}</ref>
| |
| | |
| ===Hierarchical linear models===
| |
[[Hierarchical linear models]] (or ''multilevel regression'') organize the data into a hierarchy of regressions, for example where ''A'' is regressed on ''B'', and ''B'' is regressed on ''C''. This approach is often used where the data have a natural hierarchical structure such as in educational statistics, where students are nested in classrooms, classrooms are nested in schools, and schools are nested in some administrative grouping, such as a school district. The response variable might be a measure of student achievement such as a test score, and different covariates would be collected at the classroom, school, and school district levels.
| |
| | |
| ===Errors-in-variables===
| |
| [[Errors-in-variables model]]s (or "measurement error models") extend the traditional linear regression model to allow the predictor variables ''X'' to be observed with error. This error causes standard estimators of ''β'' to become biased. Generally, the form of bias is an attenuation, meaning that the effects are biased toward zero.
| |
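The attenuation can be demonstrated with a short simulation (NumPy; all numbers are illustrative assumptions): when the predictor is observed with independent measurement error, the OLS slope shrinks toward zero by roughly the factor var(''x''*) / (var(''x''*) + var(measurement error)).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x_true = rng.normal(0.0, 1.0, n)               # latent, error-free predictor
y = 2.0 * x_true + rng.normal(0.0, 1.0, n)
x_obs = x_true + rng.normal(0.0, 1.0, n)       # predictor observed with measurement error

X = np.column_stack([np.ones(n), x_obs])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat[1])                             # about 2 * 1/(1+1) = 1.0: attenuated toward zero
</syntaxhighlight>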
| | |
| ===Others===
| |
| * In [[Dempster–Shafer theory]], or a [[linear belief function]] in particular, a linear regression model may be represented as a partially swept matrix, which can be combined with similar matrices representing observations and other assumed normal distributions and state equations. The combination of swept or unswept matrices provides an alternative method for estimating linear regression models.
| |
| | |
| ==Estimation methods==
| |
| [[File:Thiel-Sen estimator.svg|thumb|Comparison of the [[Theil–Sen estimator]] (black) and [[simple linear regression]] (blue) for a set of points with outliers.]]
| |
| A large number of procedures have been developed for [[parameter]] estimation and inference in linear regression. These methods differ in computational simplicity of algorithms, presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and theoretical assumptions needed to validate desirable statistical properties such as [[consistent estimator|consistency]] and asymptotic [[efficiency (statistics)|efficiency]].
| |
| | |
| Some of the more common estimation techniques for linear regression are summarized below.
| |
| | |
| ===Least-squares estimation and related techniques===
| |
| {{unordered list<!-- the reason for {{unordered list}} instead of wiki-formatting is that
| |
| some items on this list are spanning several paragraphs.
| |
| -->
| |
| |1= '''[[Ordinary least squares]]''' (OLS) is the simplest and thus most common estimator. It is conceptually simple and computationally straightforward. OLS estimates are commonly used to analyze both [[experiment]]al and [[observational study|observational]] data.
| |
| | |
| The OLS method minimizes the sum of squared [[Errors and residuals in statistics|residuals]], and leads to a closed-form expression for the estimated value of the unknown parameter ''β'':
| |
| : <math>
| |
| \hat{\boldsymbol\beta} = (\mathbf{X}^{\rm T}\mathbf{X})^{-1} \mathbf{X}^{\rm T}\mathbf{y}
| |
| = \big(\, \tfrac{1}{n}{\textstyle\sum} \mathbf{x}_i \mathbf{x}^{\rm T}_i \,\big)^{-1}
| |
| \big(\, \tfrac{1}{n}{\textstyle\sum} \mathbf{x}_i y_i \,\big).
| |
| </math>
| |
| | |
| The estimator is [[bias of an estimator|unbiased]] and [[consistent estimator|consistent]] if the errors have finite variance and are uncorrelated with the regressors<ref>{{cite journal | last=Lai | first=T.L. | coauthors=Robbins,H; Wei, C.Z. | journal=[[Proceedings of the National Academy of Sciences|PNAS]] | year=1978 | volume=75 | title=Strong consistency of least squares estimates in multiple regression | issue=7 |pages=3034–3036 |doi= 10.1073/pnas.75.7.3034|jstor=68164 | bibcode=1978PNAS...75.3034L | last2=Robbins | last3=Wei }}</ref>
| |
| : <math>
| |
| \operatorname{E}[\,\mathbf{x}_i\varepsilon_i\,] = 0.
| |
| </math>
| |
| It is also [[efficiency (statistics)|efficient]] under the assumption that the errors have finite variance and are [[Homoscedasticity|homoscedastic]], meaning that E[''ε<sub>i</sub><sup>2</sup>''{{!}}'''x'''<sub>''i''</sub>] does not depend on ''i''. The condition that the errors are uncorrelated with the regressors will generally be satisfied in an experiment, but in the case of observational data, it is difficult to exclude the possibility of an omitted covariate ''z'' that is related to both the observed covariates and the response variable. The existence of such a covariate will generally lead to a correlation between the regressors and the response variable, and hence to an inconsistent estimator of '''β'''. The condition of homoscedasticity can fail with either experimental or observational data. If the goal is either inference or predictive modeling, the performance of OLS estimates can be poor if [[multicollinearity]] is present, unless the sample size is large.
| |
| | |
| In [[simple linear regression]], where there is only one regressor (with a constant), the OLS coefficient estimates have a simple form that is closely related to the [[Pearson correlation coefficient|correlation coefficient]] between the covariate and the response.
| |
| | |
|2= '''[[Generalized least squares]]''' (GLS) is an extension of the OLS method that allows efficient estimation of ''β'' when either [[heteroscedasticity]], or correlations, or both are present among the error terms of the model, as long as the form of heteroscedasticity and correlation is known independently of the data. To handle heteroscedasticity when the error terms are uncorrelated with each other, GLS minimizes a weighted analogue to the sum of squared residuals from OLS regression, where the weight for the ''i''<sup>th</sup> case is inversely proportional to var(''ε<sub>i</sub>''). This special case of GLS is called "weighted least squares". The GLS solution to the estimation problem is
| |
| : <math>
| |
| \hat{\boldsymbol\beta} = (\mathbf{X}^{\rm T}\boldsymbol\Omega^{-1}\mathbf{X})^{-1}\mathbf{X}^{\rm T}\boldsymbol\Omega^{-1}\mathbf{y},
| |
| </math>
| |
| where '''Ω''' is the covariance matrix of the errors. GLS can be viewed as applying a linear transformation to the data so that the assumptions of OLS are met for the transformed data. For GLS to be applied, the covariance structure of the errors must be known up to a multiplicative constant.
| |
| | |
| |3= '''[[Percentage least squares]]''' focuses on reducing percentage errors, which is useful in the field of forecasting or time series analysis. It is also useful in situations where the dependent variable has a wide range without constant variance, as here the larger residuals at the upper end of the range would dominate if OLS were used. When the percentage or relative error is normally distributed, least squares percentage regression provides maximum likelihood estimates. Percentage regression is linked to a multiplicative error model, whereas OLS is linked to models containing an additive error term.<ref>{{cite journal | url = http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1406472 | title=Least Squares Percentage Regression | author = Tofallis, C | journal = Journal of Modern Applied Statistical Methods | volume=7 | year = 2009 | pages=526–534 | doi = 10.2139/ssrn.1406472 }}</ref>
| |
| | |
| |4= '''[[Iteratively reweighted least squares]]''' (IRLS) is used when [[heteroscedasticity]], or correlations, or both are present among the error terms of the model, but where little is known about the covariance structure of the errors independently of the data.<ref>{{cite journal | title=The Unifying Role of Iterative Generalized Least Squares in Statistical Algorithms | last=del Pino | first=Guido | journal=Statistical Science | volume=4 | year=1989 | pages=394–403 | doi=10.1214/ss/1177012408 | issue=4 | jstor=2245853}}</ref> In the first iteration, OLS, or GLS with a provisional covariance structure is carried out, and the residuals are obtained from the fit. Based on the residuals, an improved estimate of the covariance structure of the errors can usually be obtained. A subsequent GLS iteration is then performed using this estimate of the error structure to define the weights. The process can be iterated to convergence, but in many cases, only one iteration is sufficient to achieve an efficient estimate of ''β''.<ref>{{cite journal | title=Adapting for Heteroscedasticity in Linear Models | last=Carroll | first=Raymond J. | journal=The Annals of Statistics | volume=10 | year=1982 | pages=1224–1233 | doi=10.1214/aos/1176345987 | issue=4 | jstor=2240725}}</ref><ref>{{cite journal | title=Robust, Smoothly Heterogeneous Variance Regression | last=Cohen | first=Michael | coauthors=Dalal, Siddhartha R.; Tukey,John W. | journal=Journal of the Royal Statistical Society, Series C | volume=42 | year=1993 | pages=339–353 | issue=2 | jstor=2986237}}</ref>
| |
| | |
| |5= '''[[Instrumental variables]]''' regression (IV) can be performed when the regressors are correlated with the errors. In this case, we need the existence of some auxiliary ''instrumental variables'' '''z'''<sub>''i''</sub> such that E['''z'''<sub>''i''</sub>''ε''<sub>''i''</sub>] = 0. If '''Z''' is the matrix of instruments, then the estimator can be given in closed form as
| |
| : <math>
| |
| \hat{\boldsymbol\beta} = (\mathbf{X}^{\rm T}\mathbf{Z}(\mathbf{Z}^{\rm T}\mathbf{Z})^{-1}\mathbf{Z}^{\rm T}\mathbf{X})^{-1}\mathbf{X}^{\rm T}\mathbf{Z}(\mathbf{Z}^{\rm T}\mathbf{Z})^{-1}\mathbf{Z}^{\rm T}\mathbf{y}.
| |
| </math>
| |
| | |
| |6= '''Optimal instruments''' regression is an extension of classical IV regression to the situation where E[''ε<sub>i</sub>''{{!}}'''z'''<sub>''i''</sub>] = 0.
| |
| | |
|7= '''[[Total least squares]]''' (TLS)<ref>{{cite journal | title=Total Least Squares: State-of-the-Art Regression in Numerical Analysis | last=Nievergelt | first=Yves | journal=SIAM Review | volume=36 | year=1994 |pages=258–264 | doi=10.1137/1036055 | issue=2 | jstor=2132463}}</ref> is an approach to least squares estimation of the linear regression model that treats the covariates and response variable in a more geometrically symmetric manner than OLS. It is one approach to handling the "errors in variables" problem, and is also sometimes used even when the covariates are assumed to be error-free.
| |
| }}
| |
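The closed-form OLS expression given at the start of this list can be illustrated directly; the sketch below (NumPy, simulated data, arbitrary assumed coefficients) solves the normal equations rather than explicitly inverting '''X'''<sup>T</sup>'''X''', which is the numerically preferable route.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])   # first regressor is the constant
beta_true = np.array([1.0, 2.0, -0.5])                            # assumed for illustration
y = X @ beta_true + rng.normal(0.0, 1.0, n)

# OLS: beta_hat = (X^T X)^{-1} X^T y, computed by solving the normal equations.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)                                # close to (1.0, 2.0, -0.5)

residuals = y - X @ beta_hat                   # estimates of the errors epsilon_i
print(residuals.std())                         # close to the assumed error standard deviation 1.0
</syntaxhighlight>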
| | |
| ===Maximum-likelihood estimation and related techniques===
| |
| | |
| * '''[[Maximum likelihood estimation]]''' can be performed when the distribution of the error terms is known to belong to a certain parametric family ''ƒ<sub>θ</sub>'' of [[probability distribution]]s.<ref>{{cite journal | title=Robust Statistical Modeling Using the t Distribution | last=Lange | first=Kenneth L. | coauthors=Little, Roderick J. A.; Taylor,Jeremy M. G. | journal=Journal of the American Statistical Association | volume=84 | year=1989 | pages=881–896 | doi=10.2307/2290063 | issue=408 | jstor=2290063}}</ref> When ''f''<sub>θ</sub> is a normal distribution with zero [[expected value|mean]] and variance θ, the resulting estimate is identical to the OLS estimate. GLS estimates are maximum likelihood estimates when ε follows a multivariate normal distribution with a known covariance matrix.
| |
| * '''[[Ridge regression]]''',<ref>{{cite journal | title=Geometry of Ridge Regression Illustrated | last=Swindel | first=Benee F. | journal=The American Statistician | volume=35 | year=1981 | pages=12–15 | doi=10.2307/2683577 | issue=1 | jstor=2683577}}</ref><ref>{{cite journal | title=Ridge Regression and James-Stein Estimation: Review and Comments | last=Draper | first=Norman R. | coauthors=van Nostrand,R. Craig | journal=Technometrics | volume=21 | year=1979 | pages=451–466 | doi=10.2307/1268284 | issue=4 | jstor=1268284}}</ref><ref>{{cite journal | title=Practical Use of Ridge Regression: A Challenge Met | last=Hoerl | first=Arthur E. | coauthors=Kennard,Robert W.; Hoerl,Roger W. | journal=Journal of the Royal Statistical Society, Series C | volume=34 | year=1985 | pages=114–120 | issue=2 | jstor=2347363}}</ref> and other forms of penalized estimation such as '''[[Least squares#Lasso method|Lasso regression]]''',<ref name="tibs_lasso">{{cite journal | title=Regression Shrinkage and Selection via the Lasso | last=Tibshirani | first=Robert | journal=Journal of the Royal Statistical Society, Series B | volume=58 | year=1996 | pages=267–288 | issue=1 | jstor=2346178}}</ref> deliberately introduce [[bias of an estimator|bias]] into the estimation of ''β'' in order to reduce the [[variance|variability]] of the estimate. The resulting estimators generally have lower [[mean squared error]] than the OLS estimates, particularly when [[multicollinearity]] is present. They are generally used when the goal is to predict the value of the response variable ''y'' for values of the predictors ''x'' that have not yet been observed. These methods are not as commonly used when the goal is inference, since it is difficult to account for the bias.
| |
| * '''[[Least absolute deviation]]''' (LAD) regression is a [[robust regression|robust estimation]] technique in that it is less sensitive to the presence of outliers than OLS (but is less [[efficiency (statistics)|efficient]] than OLS when no outliers are present). It is equivalent to maximum likelihood estimation under a [[Laplace distribution]] model for ''ε''.<ref>{{cite journal | title=The Minimum Sum of Absolute Errors Regression: A State of the Art Survey | last=Narula | first=Subhash C. | coauthors=Wellington, John F. | journal=International Statistical Review | volume=50 | year=1982 | pages=317–326 | doi=10.2307/1402501 | issue=3 | jstor=1402501}}</ref>
| |
* '''Adaptive estimation'''. If we assume that the error terms are [[Independence (probability theory)|independent]] of the regressors, <math>\varepsilon_i \perp \mathbf{x}_i</math>, the optimal estimator is the 2-step MLE, where the first step is used to non-parametrically estimate the distribution of the error term.<ref>{{cite journal|title=Adaptive maximum likelihood estimators of a location parameter|author=Stone, C. J.|journal=The Annals of Statistics|volume=3|issue=2|year=1975|pages=267–284|doi=10.1214/aos/1176343056|jstor=2958945}}</ref>
| |
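The ridge estimator mentioned above has a simple closed form, ('''X'''<sup>T</sup>'''X''' + ''λ'''''I''')<sup>−1</sup>'''X'''<sup>T</sup>'''y'''; the sketch below (NumPy, simulated nearly collinear data, an arbitrary penalty ''λ'') shows the stabilising effect of the penalty relative to OLS. In practice the columns are usually standardised and any intercept is left unpenalised.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 5
X = rng.normal(size=(n, p))
X[:, 4] = X[:, 0] + 0.01 * rng.normal(size=n)  # two nearly collinear columns
beta_true = np.array([1.0, 0.0, 0.0, 0.0, 1.0])
y = X @ beta_true + rng.normal(0.0, 1.0, n)

lam = 1.0                                      # penalty strength, chosen arbitrarily here
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print(beta_ols)                                # the two collinear coefficients can be wildly unstable
print(beta_ridge)                              # shrunk toward zero and much more stable
</syntaxhighlight>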
| | |
| ===Other estimation techniques===
| |
| * '''[[Bayesian linear regression]]''' applies the framework of [[Bayesian statistics]] to linear regression. (See also [[Bayesian multivariate linear regression]].) In particular, the regression coefficients β are assumed to be [[random variable]]s with a specified [[prior distribution]]. The prior distribution can bias the solutions for the regression coefficients, in a way similar to (but more general than) [[ridge regression]] or [[lasso regression]]. In addition, the Bayesian estimation process produces not a single point estimate for the "best" values of the regression coefficients but an entire [[posterior distribution]], completely describing the uncertainty surrounding the quantity. This can be used to estimate the "best" coefficients using the mean, mode, median, any quantile (see [[quantile regression]]), or any other function of the posterior distribution.
| |
| * '''[[Quantile regression]]''' focuses on the conditional quantiles of ''y'' given ''X'' rather than the conditional mean of ''y'' given ''X''. Linear quantile regression models a particular conditional quantile, often the conditional median, as a linear function β<sup>T</sup>''x'' of the predictors.
| |
| * '''[[Mixed model]]s''' are widely used to analyze linear regression relationships involving dependent data when the dependencies have a known structure. Common applications of mixed models include analysis of data involving repeated measurements, such as longitudinal data, or data obtained from cluster sampling. They are generally fit as [[parametric statistics|parametric]] models, using maximum likelihood or Bayesian estimation. In the case where the errors are modeled as [[normal distribution|normal]] random variables, there is a close connection between mixed models and generalized least squares.<ref>{{cite journal | title=Multilevel Mixed Linear Model Analysis Using Iterative Generalized Least Squares | last=Goldstein | first=H. | journal=Biometrika | volume=73 | year=1986 | pages=43–56 | doi=10.1093/biomet/73.1.43 | issue=1 | jstor=2336270}}</ref> [[Fixed effects estimation]] is an alternative approach to analyzing this type of data.
| |
* '''[[Principal component regression]]''' (PCR)<ref name="hawkins_pcr">{{cite journal | title=On the Investigation of Alternative Regressions by Principal Component Analysis | last=Hawkins | first=Douglas M. | journal=Journal of the Royal Statistical Society, Series C | volume=22 | year=1973 | pages=275–286 | issue=3 | jstor=2346776}}</ref><ref name="joliffe_pcr">{{cite journal | title=A Note on the Use of Principal Components in Regression | last=Jolliffe | first= Ian T. | journal=Journal of the Royal Statistical Society, Series C | volume=31 | year=1982 | pages=300–303 | issue=3 | jstor=2348005}}</ref> is used when the number of predictor variables is large, or when strong correlations exist among the predictor variables. This two-stage procedure first reduces the predictor variables using [[principal component analysis]] and then uses the reduced variables in an OLS regression fit. While it often works well in practice, there is no general theoretical reason that the most informative linear function of the predictor variables should lie among the dominant principal components of the multivariate distribution of the predictor variables. [[Partial least squares regression]] is an extension of the PCR method that does not suffer from this deficiency.
| |
| * '''[[Least-angle regression]]'''<ref name="efron_lars">{{cite journal | title=Least Angle Regression | last=Efron | first= Bradley | coauthors=Hastie, Trevor; Johnstone,Iain; Tibshirani,Robert | journal=The Annals of Statistics | volume=32 | year=2004 | pages=407–451 | doi=10.1214/009053604000000067 | issue=2 | jstor=3448465}}</ref> is an estimation procedure for linear regression models that was developed to handle high-dimensional covariate vectors, potentially with more covariates than observations.
| |
| *The '''[[Theil–Sen estimator]]''' is a simple [[robust regression|robust estimation]] technique that chooses the slope of the fit line to be the median of the slopes of the lines through pairs of sample points. It has similar statistical efficiency properties to simple linear regression but is much less sensitive to [[outlier]]s.<ref>{{Cite journal
| |
| | last = Theil | first = H. | authorlink = Henri Theil
| |
| | journal = Nederl. Akad. Wetensch., Proc.
| |
| | mr = 0036489
| |
| | pages = 386–392, 521–525, 1397–1412
| |
| | title = A rank-invariant method of linear and polynomial regression analysis. I, II, III
| |
| | volume = 53
| |
| | year = 1950
| |
| | postscript = <!-- Bot inserted parameter. Either remove it; or change its value to "." for the cite to end in a ".", as necessary. -->{{inconsistent citations}}}}; {{Cite journal
| |
| | last = Sen | first = Pranab Kumar | authorlink = Pranab K. Sen
| |
| | journal = [[Journal of the American Statistical Association]]
| |
| | jstor = 2285891
| |
| | mr = 0258201
| |
| | pages = 1379–1389
| |
| | title = Estimates of the regression coefficient based on Kendall's tau
| |
| | volume = 63
| |
| | year = 1968
| |
| | issue = 324
| |
| | doi = 10.2307/2285891
| |
| | postscript = <!-- Bot inserted parameter. Either remove it; or change its value to "." for the cite to end in a ".", as necessary. -->{{inconsistent citations}}}}.</ref>
| |
* Other robust estimation techniques, including the '''α-trimmed mean''' approach and '''L-, M-, S-, and R-estimators''', have been introduced.
| |
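The Theil–Sen estimator described above admits a very short implementation; the sketch below (Python, with a small artificial data set containing one gross outlier) takes the median of the pairwise slopes and a median-based intercept, and contrasts the result with an ordinary least squares fit.

<syntaxhighlight lang="python">
import numpy as np
from itertools import combinations

def theil_sen(x, y):
    """Theil-Sen fit: median of pairwise slopes, with a median-based intercept."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)
    return slope, intercept

# Artificial data lying on y = 3x + 1, with one gross outlier.
x = np.arange(10.0)
y = 3.0 * x + 1.0
y[9] = 100.0

print(theil_sen(x, y))        # close to (3.0, 1.0) despite the outlier
print(np.polyfit(x, y, 1))    # the least squares line is pulled toward the outlier
</syntaxhighlight>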
| | |
| ===Further discussion===
| |
| In [[statistics]] and [[numerical analysis]], the problem of '''numerical methods for linear least squares''' is an important one because linear regression models are one of the most important types of model, both as formal [[statistical model]]s and for exploration of data sets. The majority of [[Comparison of statistical packages|statistical computer packages]] contain facilities for regression analysis that make use of linear least squares computations. Hence it is appropriate that considerable effort has been devoted to the task of ensuring that these computations are undertaken efficiently and with due regard to [[Precision (computer science)|numerical precision]].
| |
| | |
Individual statistical analyses are seldom undertaken in isolation, but rather are part of a sequence of investigatory steps. Some of the topics involved in considering numerical methods for linear least squares relate to this point. Thus, important topics include:
| |
| *Computations where a number of similar, and often nested, models are considered for the same data set. That is, where models with the same [[dependent variable]] but different sets of [[independent variables]] are to be considered, for essentially the same set of data points.
| |
| *Computations for analyses that occur in a sequence, as the number of data points increases.
| |
| *Special considerations for very extensive data sets.
| |
| | |
| Fitting of linear models by least squares often, but not always, arises in the context of [[statistical analysis]]. It can therefore be important that considerations of computational efficiency for such problems extend to all of the auxiliary quantities required for such analyses, and are not restricted to the formal solution of the [[linear least squares (mathematics)|linear least squares]] problem.
| |
| | |
| Matrix calculations, like any others, are affected by [[rounding error]]s. An early summary of these effects, regarding the choice of computational methods for matrix inversion, was provided by Wilkinson.<ref>Wilkinson, J.H. (1963) "Chapter 3: Matrix Computations", ''Rounding Errors in Algebraic Processes'', London: Her Majesty's Stationery Office (National Physical Laboratory, Notes in Applied Science, No.32)</ref>
| |
| | |
| ==Applications of linear regression==
| |
| Linear regression is widely used in biological, behavioral and social sciences to describe possible relationships between variables. It ranks as one of the most important tools used in these disciplines.
| |
| | |
| ===Trend line===
| |
| {{Main|Trend estimation}}
| |
A '''trend line''' represents a trend, the long-term movement in [[time series]] data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over a period of time. A trend line could simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques like linear regression. Trend lines typically are straight lines, although some variations use higher degree polynomials depending on the degree of curvature desired in the line.
| |
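A straight trend line is just a degree-one least squares fit against time; a minimal sketch (NumPy, on a simulated series rather than real data) is given below.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(2000, 2020)                                         # e.g. yearly observations
series = 50.0 + 1.5 * (t - 2000) + rng.normal(0.0, 3.0, t.size)   # simulated upward trend

slope, intercept = np.polyfit(t, series, 1)    # degree-1 polynomial: the straight trend line
trend = intercept + slope * t                  # fitted values along the line

print(slope)                                   # estimated change per year, close to the assumed 1.5
</syntaxhighlight>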
| | |
| Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.
| |
| | |
| ===Epidemiology===
| |
| Early evidence relating [[tobacco smoking]] to mortality and [[morbidity]] came from [[observational studies]] employing regression analysis. In order to reduce [[spurious correlation]]s when analyzing observational data, researchers usually include several variables in their regression models in addition to the variable of primary interest. For example, suppose we have a regression model in which cigarette smoking is the independent variable of interest, and the dependent variable is lifespan measured in years. Researchers might include socio-economic status as an additional independent variable, to ensure that any observed effect of smoking on lifespan is not due to some effect of education or income. However, it is never possible to include all possible confounding variables in an empirical analysis. For example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, [[randomized controlled trial]]s are often able to generate more compelling evidence of causal relationships than can be obtained using regression analyses of observational data. When controlled experiments are not feasible, variants of regression analysis such as [[instrumental variables]] regression may be used to attempt to estimate causal relationships from observational data.
| |
| | |
| ===Finance===
| |
| The [[capital asset pricing model]] uses linear regression as well as the concept of [[Beta (finance)|beta]] for analyzing and quantifying the systematic risk of an investment. This comes directly from the beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets.
| |
| | |
| ===Economics===
| |
| {{Main|Econometrics}}
| |
| | |
| Linear regression is the predominant empirical tool in [[economics]]. For example, it is used to predict [[consumption (economics)|consumption spending]],<ref>{{cite book |last=Deaton |first=Angus |year=1992 |title=Understanding Consumption |publisher=Oxford University Press |isbn=0-19-828824-7 }}</ref> [[fixed investment]] spending, [[inventory investment]], purchases of a country's [[exports]],<ref name=Krugman/> spending on [[imports]],<ref name=Krugman>{{cite book |last=Krugman |first=Paul R. |authorlink=Paul Krugman |first2=M. |last2=Obstfeld |authorlink2=Maurice Obstfeld |first3=Marc J. |last3=Melitz |authorlink3=Marc Melitz |year=2012 |title=International Economics: Theory and Policy |edition=9th global |location=Harlow |publisher=Pearson |isbn=9780273754091 }}</ref> the [[money demand|demand to hold liquid assets]],<ref>{{cite journal |last=Laidler |first=David E. W. |year=1993 |title=The Demand for Money: Theories, Evidence, and Problems |edition=4th |location=New York |publisher=Harper Collins |isbn=0065010981 }}</ref> [[Labour economics|labor demand]],<ref name=Ehrenberg/> and [[labor supply]].<ref name=Ehrenberg>{{cite book |last=Ehrenberg |last2=Smith |title=Modern Labor Economics |publisher=Addison-Wesley |location=London |edition=10th international |year=2008 |isbn=9780321538963 }}</ref>
| |
| | |
| ===Environmental science===
| |
| {{Expand section|date=January 2010}}
| |
| Linear regression finds application in a wide range of environmental science applications. In Canada, the Environmental Effects Monitoring Program uses statistical analyses on fish and [[Benthic zone|benthic]] surveys to measure the effects of pulp mill or metal mine effluent on the aquatic ecosystem.<ref>[http://www.ec.gc.ca/esee-eem/default.asp?lang=En&n=453D78FC-1 EEMP webpage]</ref>
| |
| | |
| ==See also==
| |
| {{Portal|Statistics}}
| |
| * [[Normal equations]]
| |
| * [[Analysis of variance]]
| |
| * [[Anscombe's quartet]]
| |
| * [[Cross-sectional regression]]
| |
| * [[Curve fitting]]
| |
| * [[Empirical Bayes methods]]
| |
| * [[Logistic regression]]
| |
| * [[M-estimator]]
| |
| * [[Nonlinear regression]]
| |
| * [[Nonparametric regression]]
| |
| * [[Multivariate adaptive regression splines]]
| |
| * [[Lack-of-fit sum of squares]]
| |
| * [[Truncated regression model]]
| |
| * [[Censored regression model]]
| |
| * [[Simple linear regression]]
| |
| * [[Segmented regression|Segmented linear regression]]
| |
| * [[Projection pursuit regression]]
| |
| * [[MLPACK (C++ library)|MLPACK]] contains a [[C++]] implementation of linear regression
| |
| | |
| ==Notes==
| |
| {{Reflist|30em}}
| |
| | |
| ==References==
| |
| * Cohen, J., Cohen P., West, S.G., & Aiken, L.S. (2003). ''Applied multiple regression/correlation analysis for the behavioral sciences.'' (2nd ed.) Hillsdale, NJ: Lawrence Erlbaum Associates
| |
| * [[Charles Darwin]]. ''The Variation of Animals and Plants under Domestication''. (1868) ''(Chapter XIII describes what was known about reversion in Galton's time. Darwin uses the term "reversion".)''
| |
| * {{cite book
| |
| |title = Applied Regression Analysis
| |
| |edition = 3rd
| |
| |last1= Draper |first1=N.R. |last2=Smith |first2=H.
| |
| |publisher = John Wiley
| |
| |year = 1998
| |
| |isbn = 0-471-17082-8}}
| |
| * Francis Galton. "Regression Towards Mediocrity in Hereditary Stature," ''Journal of the Anthropological Institute'', 15:246-263 (1886). ''(Facsimile at: [http://www.mugu.com/galton/essays/1880-1889/galton-1886-jaigi-regression-stature.pdf])''
| |
* Robert S. Pindyck and Daniel L. Rubinfeld (1998, 4th ed.). ''Econometric Models and Economic Forecasts'', ch. 1 (Intro, incl. appendices on Σ operators & derivation of parameter est.) & Appendix 4.3 (mult. regression in matrix form).
| |
| | |
| ==Further reading==
| |
| * {{Cite journal |year=1982 |author=Pedhazur, Elazar J |title=Multiple regression in behavioral research: Explanation and prediction |edition=2nd|place=New York |publisher=Holt, Rinehart and Winston |isbn=0-03-041760-0 |postscript=<!-- Bot inserted parameter. Either remove it; or change its value to "." for the cite to end in a ".", as necessary. -->}}
| |
| *{{Cite book
| |
| | last=Barlow
| |
| | first=Jesse L.
| |
| | author-link=
| |
| | chapter=Chapter 9: Numerical aspects of Solving Linear Least Squares Problems
| |
| | editor-last=Rao | editor-first=C.R.
| |
| | title=Computational Statistics | series=Handbook of Statistics | volume=9
| |
| | publisher=North-Holland
| |
| | publication-date=1993
| |
| | isbn=0-444-88096-8
| |
| | postscript=<!-- Bot inserted parameter. Either remove it; or change its value to "." for the cite to end in a ".", as necessary. -->{{inconsistent citations}}
| |
| }}
| |
| *{{Cite book | last1=Björck |first1= Åke | authorlink= | coauthors= | title=Numerical methods for least squares problems | year=1996 | publisher=SIAM | location=Philadelphia | isbn=0-89871-360-9 | pages=}}
| |
| *{{Cite book
| |
| | last=Goodall
| |
| | first=Colin R.
| |
| | author-link=
| |
| | chapter=Chapter 13: Computation using the QR decomposition
| |
| | editor-last=Rao | editor-first=C.R.
| |
| | title=Computational Statistics | series=Handbook of Statistics | volume=9
| |
| | publisher=North-Holland
| |
| | publication-date=1993
| |
| | isbn=0-444-88096-8
| |
| | postscript=<!-- Bot inserted parameter. Either remove it; or change its value to "." for the cite to end in a ".", as necessary. -->{{inconsistent citations}}
| |
| }}
| |
| *{{Cite book
| |
| | last=National Physical Laboratory
| |
| | first=
| |
| | chapter=Chapter 1: Linear Equations and Matrices: Direct Methods
| |
| | title=Modern Computing Methods
| |
| |edition =2nd
| |
| |series= Notes on Applied Science
| |
| | volume=16
| |
| | publisher=Her Majesty's Stationery Office
| |
| | publication-date=1961
| |
| | postscript=<!-- Bot inserted parameter. Either remove it; or change its value to "." for the cite to end in a ".", as necessary. -->{{inconsistent citations}}
| |
| }}
| |
| *{{Cite book
| |
| | last=National Physical Laboratory
| |
| | first=
| |
| | chapter=Chapter 2: Linear Equations and Matrices: Direct Methods on Automatic Computers
| |
| | title=Modern Computing Methods
| |
| |edition =2nd
| |
| |series= Notes on Applied Science
| |
| | volume=16
| |
| | publisher=Her Majesty's Stationery Office
| |
| | publication-date=1961
| |
| | postscript=<!-- Bot inserted parameter. Either remove it; or change its value to "." for the cite to end in a ".", as necessary. -->{{inconsistent citations}}
| |
| }}
| |
| | |
| ==External links==
| |
| {{wikibooks
| |
| |1= R Programming
| |
| |2= Linear Models
| |
| }}
| |
| {{Wikiversity|Linear regression}}
| |
| *[http://knowpapa.com/trend-line/ Online Linear Regression Calculator & Trend Line Graphing Tool]
| |
| * [http://codingplayground.blogspot.it/2013/05/learning-linear-regression-with.html Using gradient descent in C++, Boost, Ublas for linear regression]
| |
| | |
| {{Least Squares and Regression Analysis}}
| |
| {{Statistics|correlation|state=collapsed}}
| |
| | |
| {{DEFAULTSORT:Linear Regression}}
| |
| [[Category:Articles with inconsistent citation formats]]
| |
| [[Category:Regression analysis]]
| |
| [[Category:Estimation theory]]
| |
| [[Category:Parametric statistics]]
| |
| [[Category:Econometrics]]
| |
| | |
| {{Link GA|es}}
| |
| [[tr:Regresyon analizi]]
| |