{{More footnotes|date=March 2011}}
:''This article deals with the propagation of uncertainty via algebraic manipulations. For the propagation of uncertainty through time, see [[Chaos theory#Sensitivity to initial conditions]].''

In [[statistics]], '''propagation of uncertainty''' (or '''propagation of error''') is the effect of [[Variable (mathematics)|variables]]' [[uncertainty|uncertainties]] (or [[Errors and residuals in statistics|errors]]) on the uncertainty of a [[function (mathematics)|function]] based on them. When the variables are the values of experimental measurements, they have [[Observational error|uncertainties due to measurement limitations]] (e.g., instrument [[Accuracy and precision|precision]]) which propagate to the combination of variables in the function.

The uncertainty is usually defined by the [[absolute error]] Δ''x''. Uncertainties can also be defined by the [[relative error]] (Δ''x'')/''x'', which is usually written as a percentage.

Most commonly, the error on a quantity, Δ''x'', is given as the [[standard deviation]], ''σ''. Standard deviation is the positive square root of [[variance]], ''σ''<sup>2</sup>. The value of a quantity and its error are often expressed as an interval {{nowrap|''x'' ± Δ''x''}}. If the statistical [[probability distribution]] of the variable is known or can be assumed, it is possible to derive [[confidence limits]] to describe the region within which the true value of the variable may be found. For example, the 68% confidence limits for a one-dimensional variable belonging to a [[normal distribution]] are ± one standard deviation from the value, that is, there is approximately a 68% probability that the true value lies in the region {{nowrap|''x'' ± ''σ''}}.

If the variables are [[correlated]], then [[covariance]] must be taken into account.

==Linear combinations==
Let <math>f_k(x_1,x_2,\dots,x_n)</math> be a set of ''m'' functions which are linear combinations of <math>n</math> variables <math>x_1,x_2,\dots,x_n</math> with combination coefficients <math>A_{k1},A_{k2},\dots,A_{kn}\ (k=1,\dots,m)</math>:

:<math>f_k=\sum_i^n A_{ki} x_i</math> or <math>\mathbf{f}=\mathbf{Ax}\,</math>

and let the [[variance-covariance matrix]] on ''x'' be denoted by <math>\Sigma^x\,</math>:

:<math>\Sigma^x =
\begin{pmatrix}
\sigma^2_1 & \text{cov}_{12} & \text{cov}_{13} & \cdots \\
\text{cov}_{12} & \sigma^2_2 & \text{cov}_{23} & \cdots \\
\text{cov}_{13} & \text{cov}_{23} & \sigma^2_3 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}
</math>

Then, the variance-covariance matrix <math>\Sigma^f\,</math> of ''f'' is given by

:<math>\Sigma^f_{ij}= \sum_k^n \sum_\ell^n A_{ik} \Sigma^x_{k\ell} A_{j\ell}: \Sigma^f=\mathbf{A} \Sigma^x \mathbf{A}^\top</math>.

This is the most general expression for the propagation of error from one set of variables onto another. When the errors on ''x'' are uncorrelated, the general expression simplifies to

:<math>\Sigma^f_{ij}= \sum_k^n A_{ik} \left(\sigma^2_k \right)^x A_{jk},</math>

where <math>\left(\sigma^2_k\right)^x = \Sigma^x_{kk}</math> is the variance of the ''k''-th variable (the ''x'' superscript is merely a label, not exponentiation).

Note that even though the errors on ''x'' may be uncorrelated, the errors on ''f'' are in general correlated; in other words, even if <math>\Sigma^x</math> is a diagonal matrix, <math>\Sigma^f</math> is in general a full matrix.

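This matrix rule is straightforward to evaluate numerically. The following is a minimal sketch; the coefficients and covariances are invented for the example:
<syntaxhighlight lang="python">
import numpy as np

# Combination coefficients: m = 2 functions of n = 3 variables, f = A x.
A = np.array([[1.0, 2.0, 0.5],
              [0.0, 1.0, -1.0]])

# Variance-covariance matrix of x (symmetric, positive semi-definite).
Sigma_x = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.16]])

# Propagate: Sigma_f = A Sigma_x A^T.
Sigma_f = A @ Sigma_x @ A.T
print(Sigma_f)  # off-diagonal entries give cov(f_1, f_2): correlated outputs
</syntaxhighlight>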
The general expressions for a single function, ''f'', are a little simpler:

:<math>f=\sum_i^n a_i x_i: f=\mathbf {a x}\,</math>

:<math>\sigma^2_f= \sum_i^n \sum_j^n a_i \Sigma^x_{ij} a_j= \mathbf{a} \Sigma^x \mathbf{a}^\top.</math>

Each covariance term <math>\Sigma^x_{ij}</math> can be expressed in terms of the [[Pearson product-moment correlation coefficient|correlation coefficient]] <math>\rho_{ij}\,</math> by <math>\Sigma^x_{ij}=\rho_{ij}\sigma_i\sigma_j\,</math>, so that an alternative expression for the variance of ''f'' is

:<math>\sigma^2_f= \sum_i^n a_i^2\sigma^2_i+\sum_i^n \sum_{j (j \ne i)}^n a_i a_j\rho_{ij} \sigma_i\sigma_j. </math>

In the case that the variables ''x'' are uncorrelated this simplifies further to

:<math>\sigma^{2}_{f}= \sum_i^n a_{i}^{2}\sigma^{2}_{i}.</math>
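For instance, a short sketch of the uncorrelated case (with illustrative numbers), taking ''f'' to be the average of three measurements so that every coefficient is 1/3:
<syntaxhighlight lang="python">
import numpy as np

# f = (x1 + x2 + x3) / 3, so every coefficient a_i = 1/3.
a = np.array([1/3, 1/3, 1/3])
sigma = np.array([0.2, 0.3, 0.1])  # standard deviations of x1, x2, x3

# sigma_f^2 = sum_i a_i^2 sigma_i^2 for uncorrelated variables.
sigma_f = np.sqrt(np.sum(a**2 * sigma**2))
print(sigma_f)  # about 0.125, smaller than any individual sigma_i
</syntaxhighlight>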
== Non-linear combinations ==
{{See also|Taylor expansions for the moments of functions of random variables}}
When ''f'' is a set of non-linear combinations of the variables ''x'', [[interval propagation]] could be performed in order to compute intervals which contain all consistent values for the variables. In a probabilistic approach, the function ''f'' must usually be linearized by approximation to a first-order [[Taylor series]] expansion, though in some cases exact formulas can be derived that do not depend on the expansion, as is the case for the exact variance of products.<ref name="Goodman1960">{{Cite journal |last=Goodman |first=Leo |authorlink=Leo Goodman |title=On the Exact Variance of Products |journal=Journal of the American Statistical Association |year=1960 |volume=55 |issue=292 |pages=708–713 |doi=10.2307/2281592 |jstor=2281592}}</ref> The Taylor expansion would be:

:<math>f_k \approx f^0_k+ \sum_i^n \frac{\partial f_k}{\partial {x_i}} x_i </math>

where <math>\partial f_k/\partial x_i</math> denotes the [[partial derivative]] of ''f<sub>k</sub>'' with respect to the ''i''-th variable. Or in [[matrix notation]],

:<math>\mathrm{f} \approx \mathrm{f}^0 + J \mathrm{x}\,</math>

where ''J'' is the [[Jacobian matrix]]. Since ''f''<sup>0</sup> is a constant, it does not contribute to the error on ''f''. Therefore, the propagation of error follows the linear case, above, but replacing the linear coefficients, ''A<sub>ik</sub>'' and ''A<sub>jk</sub>'', by the partial derivatives, <math>\frac{\partial f_k}{\partial x_i}</math> and <math>\frac{\partial f_k}{\partial x_j}</math>. In matrix notation,<ref>Ochoa, Benjamin; Belongie, Serge. [http://vision.ucsd.edu/sites/default/files/ochoa06.pdf "Covariance Propagation for Guided Matching"].</ref>

:<math>\operatorname{cov}(\mathrm{f}) = J \operatorname{cov}(\mathrm{x}) J^\top</math>.

That is, the Jacobian of the function is used to transform the rows and columns of the covariance matrix of the argument.

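A minimal numerical sketch of this Jacobian rule, assuming an invented two-variable function and finite-difference derivatives:
<syntaxhighlight lang="python">
import numpy as np

def f(x):
    """Example non-linear map from R^2 to R^2 (purely illustrative)."""
    return np.array([x[0] * x[1], x[0] / x[1]])

def jacobian(func, x, h=1e-6):
    """Estimate the Jacobian matrix by central finite differences."""
    fx = func(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros(len(x))
        dx[i] = h
        J[:, i] = (func(x + dx) - func(x - dx)) / (2 * h)
    return J

x0 = np.array([3.0, 2.0])              # nominal values of the inputs
cov_x = np.diag([0.1**2, 0.05**2])     # uncorrelated input uncertainties
J = jacobian(f, x0)
cov_f = J @ cov_x @ J.T                # cov(f) = J cov(x) J^T
print(cov_f)                           # off-diagonals: outputs are correlated
</syntaxhighlight>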
=== Simplification ===
Neglecting correlations, or assuming independent variables, yields a common formula among engineers and experimental scientists for calculating error propagation, the variance formula:<ref>{{cite journal |last=Ku |first=H. H. |title=Notes on the use of propagation of error formulas |journal=Journal of Research of the National Bureau of Standards |date=October 1966 |volume=70C |issue=4 |url=http://nistdigitalarchives.contentdm.oclc.org/cdm/compoundobject/collection/p13011coll6/id/78003/rec/5 |accessdate=3 October 2012 |page=262 |publisher=National Bureau of Standards |issn=0022-4316}}</ref>

:<math>s_f = \sqrt{ \left(\frac{\partial f}{\partial x}\right)^2 s_x^2 + \left(\frac{\partial f}{\partial y}\right)^2 s_y^2 + \left(\frac{\partial f}{\partial z}\right)^2 s_z^2 + \cdots}</math>

where <math>s_f</math> represents the standard deviation of the function <math>f</math>, <math>s_x</math> represents the standard deviation of <math>x</math>, <math>s_y</math> represents the standard deviation of <math>y</math>, and so forth. One practical application of this formula in an engineering context is the evaluation of the relative uncertainty of the insertion loss for power measurements of random fields.<ref>{{cite journal |last=Arnaut |first=L. R. |title=Measurement uncertainty in reverberation chambers – I. Sample statistics |journal=NPL Technical Report TQE 2, 2nd ed., sec. 4.1.2.2 |date=December 2008 |volume=TQE |issue=2 |page=52 |url=http://publications.npl.co.uk/npl_web/pdf/tqe2.pdf |publisher=National Physical Laboratory |issn=1754-2995}}</ref>

It is important to note that this formula is based on the linear characteristics of the gradient of <math>f</math>; it is therefore a good estimate of the standard deviation of <math>f</math> only as long as the uncertainties <math>s_x, s_y, s_z,\ldots</math> are small enough that <math>f</math> is well approximated by its linearization over the measured range.<ref>{{Cite book |last=Clifford |first=A. A. |title=Multivariate error analysis: a handbook of error propagation and calculation in many-parameter systems |publisher=John Wiley & Sons |year=1973 |isbn=0470160551}}{{page needed|date=October 2012}}</ref>

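As a sketch, the variance formula can be applied generically with numerically estimated partial derivatives; the function and values here are purely illustrative:
<syntaxhighlight lang="python">
import math

def propagate_std(f, values, stds, h=1e-6):
    """First-order propagation for uncorrelated inputs (variance formula)."""
    var = 0.0
    for i, (v, s) in enumerate(zip(values, stds)):
        up = list(values)
        up[i] = v + h
        dn = list(values)
        dn[i] = v - h
        dfdx = (f(*up) - f(*dn)) / (2 * h)  # central-difference partial
        var += (dfdx * s) ** 2
    return math.sqrt(var)

# Illustrative use: f(x, y) = x * sin(y), with s_x = 0.1 and s_y = 0.02.
s_f = propagate_std(lambda x, y: x * math.sin(y), [2.0, 0.5], [0.1, 0.02])
print(s_f)
</syntaxhighlight>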
=== Example ===
Any non-linear differentiable function, ''f''(''a'',''b''), of two variables, ''a'' and ''b'', can be expanded to first order as

:<math>f\approx f^0+\frac{\partial f}{\partial a}a+\frac{\partial f}{\partial b}b,</math>

hence

:<math>\sigma^2_f\approx\left| \frac{\partial f}{\partial a}\right| ^2\sigma^2_a+\left| \frac{\partial f}{\partial b}\right|^2\sigma^2_b+2\frac{\partial f}{\partial a}\frac{\partial f}{\partial b}\text{cov}_{ab}.</math>

In the particular case that <math>f=ab\!</math>, <math>\frac{\partial f}{\partial a}=b</math> and <math>\frac{\partial f}{\partial b}=a</math>. Then

:<math>\sigma^2_f \approx b^2\sigma^2_a+a^2 \sigma_b^2+2ab\,\text{cov}_{ab},</math>

or, dividing through by <math>f^2=a^2b^2</math>,

:<math>\left(\frac{\sigma_f}{f}\right)^2 \approx \left(\frac{\sigma_a}{a}\right)^2+\left(\frac{\sigma_b}{b}\right)^2+2\left(\frac{\sigma_a}{a}\right)\left(\frac{\sigma_b}{b}\right)\rho_{ab}.</math>
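A quick Monte Carlo check of the product formula for uncorrelated ''a'' and ''b'' (values are illustrative; the close agreement holds because the relative errors are small):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Illustrative nominal values and uncertainties for f = a*b, uncorrelated.
a, sigma_a = 10.0, 0.5
b, sigma_b = 4.0, 0.1

# First-order (linearized) prediction of sigma_f.
sigma_f_linear = abs(a * b) * np.sqrt((sigma_a / a)**2 + (sigma_b / b)**2)

# Monte Carlo check by direct sampling.
samples = rng.normal(a, sigma_a, 1_000_000) * rng.normal(b, sigma_b, 1_000_000)
print(sigma_f_linear, samples.std())  # the two agree closely here
</syntaxhighlight>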
===Caveats and warnings===
Error estimates for non-linear functions are [[Bias of an estimator|biased]] on account of using a truncated series expansion. The extent of this bias depends on the nature of the function. For example, the bias on the error calculated for log(1+''x'') increases as ''x'' increases, since the expansion log(1+''x'') ≈ ''x'' is a good approximation only when ''x'' is small.

In the special case of the inverse <math>1/B</math> where <math>B=N(0,1)</math>, the distribution is a [[Inverse distribution#Reciprocal normal distribution|reciprocal normal distribution]], and there is no definable variance. For such [[inverse distribution]]s and for [[ratio distribution]]s, probabilities for intervals can nevertheless be defined; these can be computed either by [[Monte Carlo simulation]] or, in some cases, by using the Geary–Hinkley transformation.<ref name="HayyaJ1975On">{{Cite journal |last1=Hayya |first1=Jack |authorlink1=Jack Hayya |last2=Armstrong |first2=Donald |last3=Gressis |first3=Nicolas |title=A Note on the Ratio of Two Normally Distributed Variables |journal=[[Management Science (journal)|Management Science]] |date=July 1975 |volume=21 |issue=11 |pages=1338–1341 |doi=10.1287/mnsc.21.11.1338 |jstor=2629897}}</ref>

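For example, a Monte Carlo sketch of such an interval probability for the ratio of two normal variables (all parameters illustrative):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Ratio of two normal variables: no finite variance in general, but
# interval probabilities remain well defined (parameters illustrative).
x = rng.normal(5.0, 1.0, 1_000_000)
y = rng.normal(2.0, 0.5, 1_000_000)
r = x / y

low, high = 1.5, 4.0
p = np.mean((r > low) & (r < high))
print(f"P({low} < X/Y < {high}) ~= {p:.4f}")
</syntaxhighlight>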
The statistics, mean and variance, of the shifted reciprocal function <math> \frac{1}{p-B} </math>, where <math>B=N(\mu,\sigma)</math>, do however exist in a [[principal value]] sense if the difference between the pole <math>p</math> and the mean <math>\mu</math> is real. The mean of this transformed random variable is then indeed the scaled [[Dawson's function]] <math>\frac{\sqrt{2}}{\sigma} F \left(\frac{p-\mu}{\sqrt{2}\sigma}\right)</math>.<ref name=lecomte2013exact>{{Cite journal |last1=Lecomte |first1=Christophe |title=Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems |journal=Journal of Sound and Vibrations |volume=332 |issue=11 |date=May 2013 |pages=2750–2776 |doi=10.1016/j.jsv.2012.12.009}}</ref> In contrast, if the shift <math>p-\mu</math> has a non-zero imaginary part, the mean exists and is a scaled [[Faddeeva function]], whose exact expression depends on the sign of the imaginary part, <math>\operatorname{Im}(p-\mu)</math>. In both cases, the variance is a simple function of the mean.<ref>{{Cite journal |last1=Lecomte |first1=Christophe |title=Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems |journal=Journal of Sound and Vibrations |volume=332 |issue=11 |date=May 2013 |at=Section (4.1.1) |doi=10.1016/j.jsv.2012.12.009}}</ref> Therefore, the variance has to be considered in a principal value sense if <math>p-\mu</math> is real, while it exists in the ordinary sense if the imaginary part of <math>p-\mu</math> is non-zero. Note that these means and variances are exact, as they do not rely on a linearisation of the ratio. The exact covariance of two ratios with a pair of different poles <math>p_1</math> and <math>p_2</math> is similarly available.<ref>{{Cite journal |last1=Lecomte |first1=Christophe |title=Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems |journal=Journal of Sound and Vibrations |volume=332 |issue=11 |date=May 2013 |at=Eq. (39)–(40) |doi=10.1016/j.jsv.2012.12.009}}</ref>

The case of the inverse of a '''complex''' normal variable <math>B</math>, shifted or not, exhibits different characteristics.<ref name=lecomte2013exact />

For highly non-linear functions, there exist five categories of probabilistic approaches for uncertainty propagation;<ref>S. H. Lee and W. Chen, "A comparative study of uncertainty propagation methods for black-box-type problems", ''Structural and Multidisciplinary Optimization'', Volume 37, Number 3 (2009), 239–253, {{doi|10.1007/s00158-008-0234-7}}</ref> see [[Uncertainty Quantification#Methodologies for forward uncertainty propagation]] for details.

==Example formulas==
This table shows the variances of simple functions of the real variables <math>A,B\!</math> with standard deviations <math>\sigma_A, \sigma_B\,</math>, correlation coefficient <math>\rho_{AB}\,</math> and precisely known real-valued constants <math>a,b\,</math>.

:{| class="wikitable" style="background: white"
! style="background:#ffdead;" | Function !! style="background:#ffdead;" | Variance
|-
| <math>f = aA\,</math> || <math>\sigma_f^2 = a^2\sigma_A^2</math>
|-
| <math>f = a A \pm bB\,</math> || <math>\sigma_f^2 = a^2\sigma_A^2 + b^2\sigma_B^2\pm2ab\,\text{cov}_{AB}</math>
|-
| <math>f = AB\,</math> || <math>\left(\frac{\sigma_f}{f}\right)^2 \approx \left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 + 2\frac{\sigma_A\sigma_B}{AB}\rho_{AB}</math>
|-
| <math>f = \frac{A}{B}\,</math> || <math>\left(\frac{\sigma_f}{f}\right)^2 \approx \left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 - 2\frac{\sigma_A\sigma_B}{AB}\rho_{AB}</math><ref>{{cite web |url=http://www.sagepub.com/upm-data/6427_Chapter_4__Lee_%28Analyzing%29_I_PDF_6.pdf |title=Strategies for Variance Estimation |page=37 |accessdate=2013-01-18}}</ref>
|-
| <math>f = a A^{\pm b}\,</math> || <math>\frac{\sigma_f}{f} \approx b \frac{\sigma_A}{A}</math><ref name=fornasini/>
|-
| <math>f = a \ln(\pm bA)\,</math> || <math>\sigma_f \approx a \frac{\sigma_A}{A}</math><ref name=harris2003/>
|-
| <math>f = a \log(A)\,</math> || <math>\sigma_f \approx a \frac{\sigma_A}{A \ln(10)}</math><ref name=harris2003/>
|-
| <math>f = a e^{\pm bA}\,</math> || <math>\frac{\sigma_f}{f} \approx b\sigma_A</math><ref>{{cite web |url=http://www.foothill.edu/psme/daley/tutorials_files/10.%20Error%20Propagation.pdf |date=October 9, 2009 |title=Error Propagation tutorial |work=Foothill College |accessdate=2012-03-01}}</ref>
|-
| <math>f = a^{\pm bA}\,</math> || <math>\frac{\sigma_f}{f} \approx b\ln(a)\sigma_A</math>
|}
For uncorrelated variables (<math>\rho_{AB}=0</math>) the covariance terms are zero.
Expressions for more complicated functions can be derived by combining simpler functions. For example, repeated multiplication, assuming no correlation, gives

:<math>f = ABC; \qquad \left(\frac{\sigma_f}{f}\right)^2 \approx \left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2+ \left(\frac{\sigma_C}{C}\right)^2.</math>
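These entries can be checked numerically, for example with the [http://packages.python.org/uncertainties/ uncertainties] Python package listed under external links; a minimal sketch with illustrative values:
<syntaxhighlight lang="python">
from uncertainties import ufloat

# Illustrative uncorrelated inputs.
A = ufloat(10.0, 0.5)  # 10.0 +/- 0.5
B = ufloat(4.0, 0.1)
C = ufloat(2.0, 0.2)

f = A * B * C
print(f)  # nominal value with the propagated standard deviation

# Compare with the relative-variance formula above.
rel = ((0.5 / 10.0)**2 + (0.1 / 4.0)**2 + (0.2 / 2.0)**2) ** 0.5
print(rel * 80.0)  # 80.0 is the nominal value of A*B*C
</syntaxhighlight>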
For the case <math>f = AB </math> we also have Goodman's expression<ref name="Goodman1960"/> for the exact variance: for the uncorrelated case it is

:<math>V(XY)= E(X)^2 V(Y) + E(Y)^2 V(X) + E\left((X-E(X))^2 (Y-E(Y))^2\right),</math>

and since the last term equals <math>V(X)V(Y)</math> for independent variables, we have

:<math>\sigma_f^2 = A^2\sigma_B^2 + B^2\sigma_A^2 + \sigma_A^2\sigma_B^2. </math>
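The difference between the exact and linearized variances only matters when the relative uncertainties are large. A Monte Carlo sketch with deliberately large (illustrative) errors:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Large relative errors so the extra sigma_A^2 * sigma_B^2 term
# in Goodman's expression becomes visible.
A, sigma_A = 3.0, 1.0
B, sigma_B = 2.0, 0.8

x = rng.normal(A, sigma_A, 1_000_000)
y = rng.normal(B, sigma_B, 1_000_000)

exact = A**2 * sigma_B**2 + B**2 * sigma_A**2 + sigma_A**2 * sigma_B**2
linear = A**2 * sigma_B**2 + B**2 * sigma_A**2
print((x * y).var(), exact, linear)  # sampled variance matches 'exact'
</syntaxhighlight>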
==Partial derivatives==
Given <math>X=f(A, B, C, \dots)</math>,
:{| class="wikitable" style="text-align:center; background: white"
! style="background:#ffdead;" | Absolute error !! style="background:#ffdead;" | Variance
|-
| <math>\left |\Delta X\right |=\left |\frac{\partial f}{\partial A}\right |\cdot \left |\Delta A\right |+\left |\frac{\partial f}{\partial B}\right |\cdot \left |\Delta B\right |+\left |\frac{\partial f}{\partial C}\right |\cdot \left |\Delta C\right |+\cdots</math> || <math>\sigma_X^2=\left (\frac{\partial f}{\partial A}\sigma_A\right )^2+\left (\frac{\partial f}{\partial B}\sigma_B\right )^2+\left (\frac{\partial f}{\partial C}\sigma_C\right )^2+\cdots</math><ref>{{cite web |url=http://www.rit.edu/cos/uphysics/uncertainties/Uncertaintiespart2.html |title=Uncertainties and Error Propagation |accessdate=2007-04-20 |last=Lindberg |first=Vern |date=2009-10-05 |work=Uncertainties, Graphing, and the Vernier Caliper |publisher=Rochester Institute of Technology |pages=1 |language=eng |archiveurl=http://web.archive.org/web/*/http://www.rit.edu/cos/uphysics/uncertainties/Uncertaintiespart2.html |archivedate=2004-11-12 |quote=The guiding principle in all cases is to consider the most pessimistic situation.}}</ref>
|}
===Example calculation: Inverse tangent function===
We can calculate the uncertainty propagation for the inverse tangent function as an example of using partial derivatives to propagate error.

Define

:<math>f(x) = \arctan(x),</math>

and let <math>\sigma_x</math> be the absolute uncertainty on our measurement of <math>x</math>. The derivative of <math>f(x)</math> with respect to <math>x</math> is

:<math>\frac{\text{d} f}{\text{d} x} = \frac{1}{1+x^2}.</math>

Therefore, our propagated uncertainty is

:<math>\sigma_{f} \approx \frac{\sigma_x}{1+x^2},</math>

where <math>\sigma_f</math> is the absolute propagated uncertainty.
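For instance, for an illustrative measurement <math>x = 1.0 \pm 0.1</math>, this gives <math>\sigma_f \approx 0.1/(1+1.0^2) = 0.05</math>, so the result would be reported as <math>\arctan(1.0) \approx 0.785 \pm 0.05</math> radians.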
===Example application: Resistance measurement===
A practical application is an [[experiment]] in which one measures [[current (electricity)|current]], ''I'', and [[voltage]], ''V'', on a [[resistor]] in order to determine the [[electrical resistance|resistance]], ''R'', using [[Ohm's law]], <math>R = V / I</math>.

Given the measured variables with uncertainties, ''I'' ± σ<sub>''I''</sub> and ''V'' ± σ<sub>''V''</sub>, and assuming they are uncorrelated, the uncertainty in the computed quantity, σ<sub>''R''</sub>, is

:<math>\sigma_R \approx \sqrt{ \sigma_V^2 \left(\frac{1}{I}\right)^2 + \sigma_I^2 \left(\frac{-V}{I^2}\right)^2 }.</math>
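A short sketch of this calculation (measured values invented for illustration):
<syntaxhighlight lang="python">
import math

# Illustrative measured values, not from any particular experiment.
V, sigma_V = 12.0, 0.1   # volts
I, sigma_I = 2.0, 0.05   # amperes

R = V / I
sigma_R = math.sqrt(sigma_V**2 * (1 / I)**2 + sigma_I**2 * (V / I**2)**2)
print(f"R = {R:.2f} +/- {sigma_R:.2f} ohm")  # 6.00 +/- 0.16 ohm
</syntaxhighlight>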
==See also==
* [[Accuracy and precision]]
* [[Automatic differentiation]]
* [[Delta method]]
* [[Errors and residuals in statistics]]
* [[Experimental uncertainty analysis]]
* [[Interval finite element]]
* [[List of uncertainty propagation software]]
* [[Measurement uncertainty]]
* [[Significance arithmetic]]
* [[Uncertainty quantification]]
== Notes ==
{{reflist|30em|refs=
<ref name=fornasini>{{citation |first1=Paolo |last1=Fornasini |title=The uncertainty in physical measurements: an introduction to data analysis in the physics laboratory |publisher=Springer |year=2008 |isbn=0-387-78649-X |page=161 |url=http://books.google.com/books?id=PBJgvPgf2NkC&pg=PA161}}</ref>
<ref name=harris2003>{{citation |first1=Daniel C. |last1=Harris |title=Quantitative chemical analysis |edition=6th |publisher=Macmillan |year=2003 |isbn=0-7167-4464-3 |page=56 |url=http://books.google.com/books?id=csTsQr-v0d0C&pg=PA56}}</ref>
}}
==References==
*{{Citation |last=Bevington |first=Philip R. |last2=Robinson |first2=D. Keith |year=2002 |title=Data Reduction and Error Analysis for the Physical Sciences |edition=3rd |publisher=McGraw-Hill |isbn=0-07-119926-8}}
*{{Citation |last=Meyer |first=Stuart L. |year=1975 |title=Data Analysis for Scientists and Engineers |publisher=Wiley |isbn=0-471-59995-6}}
==External links==
*[http://www.av8n.com/physics/uncertainty.htm A detailed discussion of measurements and the propagation of uncertainty], explaining the benefits of using error propagation formulas and Monte Carlo simulations instead of simple [[significance arithmetic]]
*[http://www.rit.edu/cos/uphysics/uncertainties/Uncertainties.html Uncertainties and Error Propagation], Vern Lindberg's guide to uncertainties and error propagation
*[http://www.bipm.org/en/publications/guides/gum.html GUM], Guide to the Expression of Uncertainty in Measurement
*[http://infoscience.epfl.ch/record/97374/files/TR-98-01R3.pdf EPFL An Introduction to Error Propagation], derivation, meaning and examples of Cy = Fx Cx Fx'
*[http://packages.python.org/uncertainties/ uncertainties package], a program/library for transparently performing calculations with uncertainties (and error correlations)
*[http://pypi.python.org/pypi/soerp soerp package], a Python program/library for transparently performing ''second-order'' calculations with uncertainties (and error correlations)
*{{cite techreport |author=Joint Committee for Guides in Metrology |title=JCGM 102: Evaluation of Measurement Data – Supplement 2 to the "Guide to the Expression of Uncertainty in Measurement" – Extension to Any Number of Output Quantities |year=2011 |institution=JCGM |url=http://www.bipm.org/utils/common/documents/jcgm/JCGM_102_2011_E.pdf |accessdate=13 February 2013}}

[[Category:Algebra of random variables]]
[[Category:Numerical analysis]]
[[Category:Statistical approximations]]
[[Category:Uncertainty of numbers]]