In [[statistics]], a '''sum of squares due to lack of fit''', or more tersely a '''lack-of-fit sum of squares''', is one of the components of a partition of the [[Sum of squares (statistics)|sum of squares]] in an [[analysis of variance]], used in the [[numerator]] in an [[F-test]] of the [[null hypothesis]] that a proposed model fits well.
== Sketch of the idea ==
In order for the lack-of-fit sum of squares to differ from the [[Residual sum of squares|sum of squares of residuals]], there must be [[replication (statistics)|more than one]] value of the [[response variable]] for at least one of the values of the set of predictor variables. For example, consider fitting a line
: <math> y = \alpha x + \beta \, </math>
by the method of [[least squares]]. One takes as estimates of ''α'' and ''β'' the values that minimize the sum of squares of residuals, i.e., the sum of squares of the differences between the observed ''y''-value and the fitted ''y''-value. To have a lack-of-fit sum of squares that differs from the residual sum of squares, one must observe more than one ''y''-value for each of one or more of the ''x''-values. One then partitions the "sum of squares due to error", i.e., the sum of squares of residuals, into two components:
: sum of squares due to error = (sum of squares due to "pure" error) + (sum of squares due to lack of fit).
The sum of squares due to "pure" error is the sum of squares of the differences between each observed ''y''-value and the average of all ''y''-values corresponding to the same ''x''-value.
The sum of squares due to lack of fit is the ''weighted'' sum of squares of differences between each average of ''y''-values corresponding to the same ''x''-value and the corresponding fitted ''y''-value, the weight in each case being simply the number of observed ''y''-values for that ''x''-value.<ref>{{cite book |first=Richard J. |last=Brook |first2=Gregory C. |last2=Arnold |title=Applied Regression Analysis and Experimental Design |publisher=[[CRC Press]] |year=1985 |pages=48–49 |isbn=0824772520 }}</ref><ref>{{cite book |first=John |last=Neter |first2=Michael H. |last2=Kutner |first3=Christopher J. |last3=Nachstheim |first4=William |last4=Wasserman |title=Applied Linear Statistical Models |edition=Fourth |publisher=Irwin |location=Chicago |year=1996 |pages=121–122 |isbn=0256117365 }}</ref> Because it is a property of least squares regression that the vector whose components are "pure errors" and the vector of lack-of-fit components are orthogonal to each other, the following equality holds:
: <math>
\begin{align}
&\sum (\text{observed value} - \text{fitted value})^2 & & \text{(error)} \\
&\qquad = \sum (\text{observed value} - \text{local average})^2 & & \text{(pure error)} \\
& {} \qquad\qquad {} + \sum \text{weight}\times (\text{local average} - \text{fitted value})^2. & & \text{(lack of fit)}
\end{align}
</math>
Hence the residual sum of squares has been completely decomposed into two components.
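The decomposition can be checked numerically. The following is a minimal sketch in Python (the data values and variable names are purely illustrative, not taken from any source) that fits a straight line to data with replicated ''y''-values and verifies that the residual sum of squares equals the pure-error sum of squares plus the lack-of-fit sum of squares:
<syntaxhighlight lang="python">
import numpy as np

# Illustrative toy data: replicated y-values at each distinct x-value
x = np.array([1.0, 1.0, 2.0, 2.0, 2.0, 3.0, 3.0])
y = np.array([1.1, 1.3, 2.0, 2.2, 1.8, 3.5, 3.1])

# Least-squares straight-line fit y = alpha*x + beta
alpha, beta = np.polyfit(x, y, 1)
fitted = alpha * x + beta

# Sum of squares due to error (residual sum of squares)
sse = np.sum((y - fitted) ** 2)

# Sum of squares due to "pure" error: deviations from the local average at each x
local_mean = np.array([y[x == xi].mean() for xi in x])
sspe = np.sum((y - local_mean) ** 2)

# Sum of squares due to lack of fit: squared gaps between the local averages
# and the fitted line, each weighted by the number of replicates at that x
levels, counts = np.unique(x, return_counts=True)
group_mean = np.array([y[x == xi].mean() for xi in levels])
sslf = np.sum(counts * (group_mean - (alpha * levels + beta)) ** 2)

print(sse, sspe + sslf)  # the two values agree up to floating-point rounding
</syntaxhighlight>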
== Mathematical details ==
Consider fitting a line with one predictor variable. Define ''i'' as an index of each of the ''n'' distinct ''x'' values, ''j'' as an index of the response variable observations for a given ''x'' value, and ''n''<sub>''i''</sub> as the number of ''y'' values associated with the ''i'' <sup>th</sup> ''x'' value. The value of each response variable observation can be represented by
: <math> Y_{ij} = \alpha x_i + \beta + \varepsilon_{ij},\qquad i = 1,\dots, n,\quad j = 1,\dots,n_i.</math>
Let
: <math> \widehat\alpha, \widehat\beta \,</math>
be the [[least squares]] estimates of the unobservable parameters ''α'' and ''β'' based on the observed values of ''x''<sub> ''i''</sub> and ''Y''<sub> ''i j''</sub>.
Let
: <math> \widehat Y_i = \widehat\alpha x_i + \widehat\beta \,</math>
be the fitted values of the response variable. Then
: <math> \widehat\varepsilon_{ij} = Y_{ij} - \widehat Y_i \,</math>
are the [[errors and residuals in statistics|residuals]], which are observable estimates of the unobservable values of the error term ''ε''<sub> ''ij''</sub>. Because of the nature of the method of least squares, the whole vector of residuals, with
:<math> N = \sum_{i=1}^n n_i </math>
scalar components, necessarily satisfies the two constraints
: <math> \sum_{i=1}^n \sum_{j=1}^{n_i} \widehat\varepsilon_{ij} = 0 \,</math>
: <math> \sum_{i=1}^n \left(x_i \sum_{j=1}^{n_i} \widehat\varepsilon_{ij} \right) = 0. \,</math>
It is thus constrained to lie in an (''N'' − 2)-dimensional subspace of '''R'''<sup> ''N''</sup>, i.e. there are ''N'' − 2 "[[degrees of freedom (statistics)|degrees of freedom]] for error".
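These two constraints can likewise be verified numerically. The sketch below (again with purely illustrative data, not from any source) fits a line by least squares and checks that the residual vector is orthogonal to the constant vector and to the vector of ''x''-values, which is what removes two degrees of freedom:
<syntaxhighlight lang="python">
import numpy as np

# Illustrative data: N = 7 observations at n = 3 distinct x-values
x = np.array([1.0, 1.0, 2.0, 2.0, 2.0, 3.0, 3.0])
y = np.array([1.1, 1.3, 2.0, 2.2, 1.8, 3.5, 3.1])

alpha_hat, beta_hat = np.polyfit(x, y, 1)       # least-squares estimates
residuals = y - (alpha_hat * x + beta_hat)

# The two linear constraints satisfied by least-squares residuals
print(np.isclose(residuals.sum(), 0.0))         # sum of all residuals is 0
print(np.isclose((x * residuals).sum(), 0.0))   # sum of x_i times residuals is 0

# Hence the residual vector lies in an (N - 2)-dimensional subspace:
# here N = 7, so there are 5 degrees of freedom for error.
</syntaxhighlight>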
Now let
: <math> \overline{Y}_{i\bullet} = \frac{1}{n_i} \sum_{j=1}^{n_i} Y_{ij} </math>
be the average of all ''Y''-values associated with the ''i'' <sup>th</sup> ''x''-value.
We partition the sum of squares due to error into two components:
:<math>
\begin{align}
& \sum_{i=1}^n \sum_{j=1}^{n_i} \widehat\varepsilon_{ij}^{\,2}
= \sum_{i=1}^n \sum_{j=1}^{n_i} \left( Y_{ij} - \widehat Y_i \right)^2 \\
& = \underbrace{ \sum_{i=1}^n \sum_{j=1}^{n_i} \left(Y_{ij} - \overline Y_{i\bullet}\right)^2 }_\text{(sum of squares due to pure error)}
+ \underbrace{ \sum_{i=1}^n n_i \left( \overline Y_{i\bullet} - \widehat Y_i \right)^2. }_\text{(sum of squares due to lack of fit)}
\end{align}
</math>
== Probability distributions ==
=== Sums of squares ===
Suppose the [[errors and residuals in statistics|error terms]] ''ε''<sub> ''i j''</sub> are [[statistical independence|independent]] and [[normal distribution|normally distributed]] with [[expected value]] 0 and [[variance]] ''σ''<sup>2</sup>. We treat ''x''<sub> ''i''</sub> as constant rather than random. Then the response variables ''Y''<sub> ''i j''</sub> are random only because the errors ''ε''<sub> ''i j''</sub> are random.
It can be shown that if the straight-line model is correct, then the '''sum of squares due to error''' divided by the error variance,
: <math> \frac{1}{\sigma^2}\sum_{i=1}^n \sum_{j=1}^{n_i} \widehat\varepsilon_{ij}^{\,2} </math>
has a [[chi-squared distribution]] with ''N'' − 2 degrees of freedom.
Moreover, given the total number of observations ''N'', the number of levels of the independent variable ''n'', and the number of parameters in the model ''p'':
* The sum of squares due to pure error, divided by the error variance ''σ''<sup>2</sup>, has a chi-squared distribution with ''N'' − ''n'' degrees of freedom;
* The sum of squares due to lack of fit, divided by the error variance ''σ''<sup>2</sup>, has a chi-squared distribution with ''n'' − ''p'' degrees of freedom (here ''p'' = 2, as there are two parameters in the straight-line model);
* The two sums of squares are probabilistically independent. (A simulation check of the first two statements is sketched below.)
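The sketch below simulates data from a correct straight-line model and compares the average of each scaled sum of squares with the corresponding number of degrees of freedom, using the fact that the mean of a chi-squared distribution equals its degrees of freedom. The design, coefficients and random seed are arbitrary illustrative choices, not taken from any source.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Illustrative design: n = 4 distinct x-values with 3 replicates each, so N = 12;
# true model y = 2x + 1 with error standard deviation sigma = 1
x = np.repeat([1.0, 2.0, 3.0, 4.0], 3)
sigma = 1.0

def scaled_sums_of_squares():
    y = 2.0 * x + 1.0 + rng.normal(0.0, sigma, size=x.size)
    a, b = np.polyfit(x, y, 1)
    local_mean = np.array([y[x == xi].mean() for xi in x])
    sspe = np.sum((y - local_mean) ** 2)                     # pure-error SS
    levels, counts = np.unique(x, return_counts=True)
    group_mean = np.array([y[x == xi].mean() for xi in levels])
    sslf = np.sum(counts * (group_mean - (a * levels + b)) ** 2)  # lack-of-fit SS
    return sspe / sigma ** 2, sslf / sigma ** 2

draws = np.array([scaled_sums_of_squares() for _ in range(20000)])
# Sample means should be close to N - n = 8 and n - p = 2 respectively
print(draws.mean(axis=0))
</syntaxhighlight>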
=== The test statistic ===
It then follows that the statistic
: <math>
\begin{align}
F & = \frac{ \text{lack-of-fit sum of squares} /\text{degrees of freedom} }{\text{pure-error sum of squares} / \text{degrees of freedom} } \\[8pt]
& = \frac{\left.\sum_{i=1}^n n_i \left( \overline Y_{i\bullet} - \widehat Y_i \right)^2\right/ (n-p)}{\left.\sum_{i=1}^n \sum_{j=1}^{n_i} \left(Y_{ij} - \overline Y_{i\bullet}\right)^2 \right/ (N - n)}
\end{align}
</math>
has an [[F-distribution]] with the corresponding number of degrees of freedom in the numerator and the denominator, provided that the model is correct. If the model is wrong, then the probability distribution of the denominator is still as stated above, and the numerator and denominator are still independent. But the numerator then has a [[noncentral chi-squared distribution]], and consequently the quotient as a whole has a [[non-central F-distribution]].
One uses this F-statistic to test the [[null hypothesis]] that there is no lack of linear fit. Since the non-central F-distribution is [[stochastic order|stochastically larger]] than the (central) F-distribution, one rejects the null hypothesis if the F-statistic is larger than the critical F value. The critical value corresponds to the [[cumulative distribution function]] of the [[F distribution]] with ''x'' equal to the desired [[confidence level]], and degrees of freedom ''d''<sub>1</sub> = (''n'' − ''p'') and ''d''<sub>2</sub> = (''N'' − ''n''). This critical value can be calculated using online tools<ref name=soperds>{{cite web|last=Soper|first=D.S.|title=Critical F-value Calculator (Online Software)|url=http://www.danielsoper.com/statcalc3|work=Statistics Calculators|accessdate=19 April 2012}}</ref> or found in tables of statistical values.<ref name=lowryr>{{cite web|last=Lowry|first=Richard|title=VassarStats|url=http://vassarstats.net/textbook/apx_d.html|work=Concepts and Applications of Inferential Statistics|accessdate=19 April 2012}}</ref>
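The calculation is also easy to carry out directly. The following sketch (with purely illustrative data, not from any source) computes the F-statistic above and compares it with the critical value of the F-distribution, here obtained from <code>scipy.stats</code> rather than from tables or an online calculator:
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Illustrative data: N = 9 observations at n = 3 distinct x-values; p = 2 parameters
x = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 3.0, 3.0, 3.0])
y = np.array([1.0, 1.4, 1.2, 2.3, 2.1, 2.6, 2.8, 3.3, 3.0])

a, b = np.polyfit(x, y, 1)                       # least-squares fit
levels, counts = np.unique(x, return_counts=True)
group_mean = np.array([y[x == xi].mean() for xi in levels])
local_mean = np.array([y[x == xi].mean() for xi in x])

sslf = np.sum(counts * (group_mean - (a * levels + b)) ** 2)   # lack-of-fit SS
sspe = np.sum((y - local_mean) ** 2)                           # pure-error SS

n, p, N = levels.size, 2, x.size
F = (sslf / (n - p)) / (sspe / (N - n))

critical = stats.f.ppf(0.95, n - p, N - n)   # critical value at the 0.95 level
p_value = stats.f.sf(F, n - p, N - n)        # upper-tail probability of the observed F
print(F, critical, p_value)                  # reject "no lack of fit" if F > critical
</syntaxhighlight>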
The assumptions of [[normal distribution]] of errors and [[independence (probability theory)|independence]] can be shown to entail that this [[lack-of-fit test]] is the [[likelihood-ratio test]] of this null hypothesis.
== See also ==
* [[Linear regression]]
== Notes ==
{{reflist}}
[[Category:Analysis of variance]]
[[Category:Regression analysis]]
[[Category:Design of experiments]]
[[Category:Least squares]]