In [[numerical analysis]], '''numerical differentiation''' describes [[algorithm]]s for estimating the [[derivative]] of a [[mathematical function]] or function [[subroutine]] using values of the function and perhaps other knowledge about the function.
 
[[Image:Derivative.png|right]]
 
==Finite difference formula==
The simplest method is to use finite difference approximations.
 
A simple two-point estimate is to compute the slope of a nearby [[secant line]] through the points (''x'',''f(x)'') and (''x+h'',''f(x+h)'').<ref>Richard L. Burden, J. Douglas Faires (2000), ''Numerical Analysis'', (7th Ed), Brooks/Cole. ISBN 0-534-38216-9</ref> Here ''h'' is a small number representing a small change in ''x''; it can be either positive or negative. The slope of this line is
:<math>{f(x+h)-f(x)\over h}.</math>
This expression is [[Isaac Newton|Newton]]'s [[difference quotient]].
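As an illustration, here is a minimal C sketch of the forward difference quotient; the test function (<code>sin</code>) and the step size are illustrative choices rather than part of the formula:
 #include <math.h>
 #include <stdio.h>
 
 /* Forward difference quotient (f(x+h) - f(x)) / h.
    The test function f = sin and the step h are illustrative choices. */
 static double forward_difference(double (*f)(double), double x, double h) {
     return (f(x + h) - f(x)) / h;
 }
 
 int main(void) {
     double x = 1.0, h = 1e-6;
     printf("estimate %.10f, exact %.10f\n", forward_difference(sin, x, h), cos(x));
     return 0;
 }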
 
The slope of this secant line differs from the slope of the tangent line by an amount that is approximately proportional to ''h''. As ''h'' approaches zero, the slope of the secant line approaches the slope of the tangent line. Therefore, the true '''derivative of''' '''''f''''' '''at''' '''''x''''' is the limit of the value of the difference quotient as the secant lines get closer and closer to being a tangent line:
:<math>f'(x)=\lim_{h\to 0}{f(x+h)-f(x)\over h}.</math>
 
Since immediately [[substitution (logic)|substituting]] 0 for ''h'' results in [[division by zero]], the derivative cannot be evaluated directly this way; instead it is estimated using a small but nonzero ''h''.
 
Equivalently, the slope could be estimated by employing the positions (''x'' − ''h'') and ''x''.
 
Another two-point formula is to compute the slope of a nearby secant line through the points (''x-h'',''f(x-h)'') and (''x+h'',''f(x+h)''). The slope of this line is
:<math>{f(x+h)-f(x-h)\over 2h}.</math>
 
In this case the first-order errors cancel, so the slope of these secant lines differs from the slope of the tangent line by an amount that is approximately proportional to <math>h^2</math>. Hence for small values of ''h'' this is a more accurate approximation of the tangent line than the one-sided estimate. Note, however, that although the slope is being computed at ''x'', the value of the function at ''x'' is not involved.
 
The estimation error is given by:
 
:<math>R = {{-f^{(3)}(c)}\over {6}}h^2</math>,
 
where <math>c</math> is some point between <math>x-h</math> and <math>x+h</math>.
This error does not include the [[rounding error]] due to numbers being represented and calculations being performed in limited precision.
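A minimal C sketch comparing the forward and central difference quotients above illustrates the difference in accuracy; the test function <code>sin</code> and the step sizes are illustrative choices:
 #include <math.h>
 #include <stdio.h>
 
 /* Compare the one-sided and central difference quotients for f = sin at
    x = 1; the forward error shrinks like h, the central error like h^2.
    The test function and step sizes are illustrative choices. */
 int main(void) {
     double x = 1.0, exact = cos(x);
     for (int k = 1; k <= 5; k++) {
         double h = pow(10.0, -k);
         double forward = (sin(x + h) - sin(x)) / h;
         double central = (sin(x + h) - sin(x - h)) / (2.0 * h);
         printf("h=%.0e  forward error=%.2e  central error=%.2e\n",
                h, fabs(forward - exact), fabs(central - exact));
     }
     return 0;
 }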
 
===Practical considerations using floating point arithmetic===
[[Image:AbsoluteErrorNumericalDifferentiationExample.png|thumb|300px|Example showing the difficulty of choosing <math>h</math> due to both rounding error and formula error]]
 
An important consideration in practice when the function is evaluated using [[floating point]] arithmetic is the choice of step size ''h''. If chosen too small, the subtraction will yield a large [[rounding error]]. In fact, all the finite-difference formulae are [[ill-conditioned]]<ref name=Fornberg1>Numerical Differentiation of Analytic Functions, B Fornberg - ACM Transactions on Mathematical Software (TOMS), 1981</ref> and due to cancellation will produce a value of zero if ''h'' is small enough.<ref name=SquireTrapp1>Using Complex Variables to Estimate Derivatives of Real Functions, W Squire, G Trapp - SIAM REVIEW, 1998</ref> If chosen too large, the subtraction is computed accurately, but the secant line becomes a poorer approximation to the tangent, so the formula (truncation) error grows.
 
A choice for ''h'' which is small without producing a large rounding error is <math>\sqrt{\varepsilon}x</math> where the [[machine epsilon]] ''&epsilon;'' is typically of the order 2.2&times;10<sup>−16</sup>.
<ref>Following ''[[Numerical Recipes]] in C'', [http://www.nrbook.com/a/bookcpdf/c5-7.pdf Chapter 5.7]</ref>  This epsilon is for double precision (64-bit) variables: such calculations in single precision are rarely useful. The resulting value is unlikely to be a "round" number in binary, so it is important to realise that although ''x'' is a machine-[[Floating_point#Representable numbers, conversion and rounding|representable]] number, ''x'' + ''h'' almost certainly will not be. This means that ''x'' + ''h'' will be changed (via rounding or truncation) to a nearby machine-representable number, with the consequence that (''x'' + ''h'') - ''x'' will ''not'' equal ''h''; the two function evaluations will not be exactly ''h'' apart. In this regard, since most decimal fractions are recurring sequences in binary (just as 1/3 is in decimal) a seemingly round step such as ''h'' = 0.1 will not be a round number in binary; it is 0.000110011001100... A possible approach is as follows:
  h:=sqrt(eps)*x;
  xph:=x + h;
  dx:=xph - x;
  slope:=(F(xph) - F(x))/dx;
However, [[compiler optimization]] facilities may fail to attend to the details of actual computer arithmetic and instead apply the axioms of mathematics to deduce that ''dx'' and ''h'' are the same. With C and similar languages, declaring ''xph'' as a [[volatile variable]] will prevent this simplification.
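A minimal C sketch of this approach, using the double-precision machine epsilon from the standard header <code>float.h</code> (the function-pointer interface is an illustrative choice):
 #include <float.h>
 #include <math.h>
 
 /* Slope estimate with a step scaled by the square root of the machine
    epsilon.  Declaring xph volatile keeps the optimizer from simplifying
    (x + h) - x to h, so dx is the step that was actually taken.
    The function-pointer interface is an illustrative choice. */
 double slope(double (*F)(double), double x) {
     double h = sqrt(DBL_EPSILON) * x;
     volatile double xph = x + h;
     double dx = xph - x;
     return (F(xph) - F(x)) / dx;
 }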
 
===Higher-order methods===
{{further|Finite difference coefficients}}
Higher-order methods for approximating the derivative, as well as methods for higher derivatives, exist.
 
Given below is the five point method for the first derivative ([[five-point stencil]] in one dimension).<ref>Abramowitz & Stegun, Table 25.2</ref>
:<math>f'(x) = \frac{-f(x+2 h)+8 f(x+h)-8 f(x-h)+f(x-2h)}{12 h}+\frac{h^4}{30}f^{(5)}(c)</math>
where <math>c\in[x-2h,x+2h]</math>.
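A minimal C sketch of this five-point formula, with the error term omitted (the function-pointer interface and the test case are illustrative choices):
 #include <math.h>
 #include <stdio.h>
 
 /* Five-point stencil for the first derivative; the truncation error is
    proportional to h^4.  The function-pointer interface is an
    illustrative choice. */
 double five_point_derivative(double (*f)(double), double x, double h) {
     return (-f(x + 2.0*h) + 8.0*f(x + h) - 8.0*f(x - h) + f(x - 2.0*h)) / (12.0*h);
 }
 
 int main(void) {
     printf("estimate %.12f, exact %.12f\n",
            five_point_derivative(sin, 1.0, 1e-3), cos(1.0));
     return 0;
 }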
 
==Differential quadrature==
[[Differential quadrature]] is the approximation of derivatives by using weighted sums of function values.<ref>Differential Quadrature and Its Application in Engineering: Engineering Applications, Chang Shu, Springer, 2000, ISBN 978-1-85233-209-9</ref><ref>Advanced Differential Quadrature Methods, Yingyan Zhang, CRC Press, 2009, ISBN 978-1-4200-8248-7</ref> The name is in analogy with ''quadrature'', meaning [[numerical integration]], where weighted sums are used in methods such as [[Simpson's method]] or the [[trapezium rule]]. There are various methods for determining the weight coefficients, one of which is sketched below. Differential quadrature is used to solve [[partial differential equations]].
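One common choice is to differentiate the [[Lagrange polynomial|Lagrange interpolating polynomial]] through the grid points; a minimal C sketch under that assumption (the grid and test function are illustrative choices):
 #include <stdio.h>
 
 #define N 5
 
 /* First-derivative differential-quadrature weights a[i][j] such that
    f'(x[i]) ~= sum_j a[i][j] * f(x[j]), obtained by differentiating the
    Lagrange interpolating polynomial through the grid points. */
 void dq_weights(const double x[N], double a[N][N]) {
     double m[N];                       /* m[i] = prod_{k != i} (x[i] - x[k]) */
     for (int i = 0; i < N; i++) {
         m[i] = 1.0;
         for (int k = 0; k < N; k++)
             if (k != i) m[i] *= x[i] - x[k];
     }
     for (int i = 0; i < N; i++) {
         double row_sum = 0.0;
         for (int j = 0; j < N; j++) {
             if (j != i) {
                 a[i][j] = m[i] / ((x[i] - x[j]) * m[j]);
                 row_sum += a[i][j];
             }
         }
         a[i][i] = -row_sum;            /* rows sum to zero (exact for constants) */
     }
 }
 
 int main(void) {
     double x[N] = {0.0, 0.25, 0.5, 0.75, 1.0}, a[N][N];
     dq_weights(x, a);
     /* approximate the derivative of f(x) = x*x at each grid point;
        the exact value is 2*x[i] */
     for (int i = 0; i < N; i++) {
         double d = 0.0;
         for (int j = 0; j < N; j++) d += a[i][j] * x[j] * x[j];
         printf("f'(%.2f) ~= %.6f\n", x[i], d);
     }
     return 0;
 }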
 
==Complex variable methods==
 
The classical finite difference approximations for numerical differentiation are ill-conditioned. However, if <math>f</math> is a [[holomorphic function]], real-valued on the real line, which can be evaluated at points in the complex plane near <math>x</math> then there are [[Numerical stability|stable]] methods. For example,<ref name=SquireTrapp1/> the first derivative can be calculated by the complex-step derivative formula:<ref>{{cite journal | last1 = Martins | first1 = JRRA | first2 = P | last2 = Sturdza | first3 = JJ | last3 = Alonso | year = 2003 | id = {{citeseerx|10.1.1.141.8002}} | title = The Complex-Step Derivative Approximation | journal = ACM Transactions on Mathematical Software | volume = 29 | issue = 3 | pages = 245–262 }}</ref>
 
:<math>f'(x)\approx \Im(f(x + ih))/h</math>.
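A minimal C sketch of this complex-step estimate, using the C99 complex type (the test function <code>sin</code> and the step size are illustrative choices):
 #include <complex.h>
 #include <math.h>
 #include <stdio.h>
 
 /* Complex-step estimate f'(x) ~= Im(f(x + i*h)) / h.  Because there is no
    subtraction, h can be taken extremely small without cancellation error.
    The test function and step size are illustrative choices. */
 int main(void) {
     double x = 1.0, h = 1e-20;
     double complex z = x + h * I;
     double estimate = cimag(csin(z)) / h; /* f = sin, so f' = cos */
     printf("estimate %.15f, exact %.15f\n", estimate, cos(x));
     return 0;
 }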
 
The complex-step formula is only valid for calculating a first-order derivative. A generalization for calculating derivatives of any order employs [[multicomplex numbers]], resulting in multicomplex derivatives.<ref>http://russell.ae.utexas.edu/FinalPublications/ConferencePapers/2010Feb_SanDiego_AAS-10-218_mulicomplex.pdf</ref>
 
In general, derivatives of any order can be calculated using [[Cauchy's integral formula]]:
:<math>f^{(n)}(a) = {n! \over 2\pi i} \oint_\gamma {f(z) \over (z-a)^{n+1}}\, \mathrm{d}z</math>,
where the integration is done [[Numerical integration|numerically]].
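For instance, taking the contour to be a circle of radius ''r'' around ''a'' and applying the trapezoidal rule (which, over a full period, reduces to an equally weighted sum) gives the sketch below; the radius, number of sample points and test function are illustrative choices:
 #include <complex.h>
 #include <math.h>
 #include <stdio.h>
 
 /* n-th derivative of an analytic function f at a, from Cauchy's integral
    formula with the contour taken as a circle of radius r around a and the
    integral approximated by the trapezoidal rule with m equally spaced
    samples.  r and m are illustrative choices. */
 double complex cauchy_derivative(double complex (*f)(double complex),
                                  double complex a, int n, double r, int m) {
     const double pi = acos(-1.0);
     double complex sum = 0.0;
     double factorial = 1.0;
     for (int k = 2; k <= n; k++) factorial *= k;
     for (int k = 0; k < m; k++) {
         double theta = 2.0 * pi * k / m;
         sum += f(a + r * cexp(I * theta)) * cexp(-I * n * theta);
     }
     return factorial * sum / (m * pow(r, n));
 }
 
 int main(void) {
     /* third derivative of exp at 0; the exact value is 1 */
     printf("%.12f\n", creal(cauchy_derivative(cexp, 0.0, 3, 0.5, 64)));
     return 0;
 }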
 
Using complex variables for numerical differentiation was started by Lyness and Moler in 1967.<ref name=LynessMoler1>{{cite journal | first1 = J. N. | last1 = Lyness | first2 = C. B. | last2 = Moler | title = Numerical differentiation of analytic functions | journal = SIAM J.Numer. Anal. | volume = 4 | year = 1967 | pages = 202–210}}</ref> A method based on numerical inversion of a complex [[Laplace transform]] was developed by Abate and Dubner.<ref>{{cite journal | title = A New Method for Generating Power Series Expansions of Functions | first1 = J | last1 = Abate | first2 = H | last2 = Dubner | journal = SIAM J. Numer. Anal. | volume =5 | issue = 1 | pages = 102–112 |date=March 1968 }}</ref> An algorithm which can be used without requiring knowledge about the method or the character of the function was developed by Fornberg.<ref name=Fornberg1/>
 
==See also==
*[[Automatic differentiation]]
*[[Finite difference]]
*[[Five-point stencil]]
*[[Numerical integration]]
*[[Numerical ordinary differential equations]]
*[[Numerical smoothing and differentiation]]
*[[List of numerical analysis software]]
 
==References==
{{reflist}}
 
== External links ==
{{wikibooks|Numerical Methods}}
* http://mathworld.wolfram.com/NumericalDifferentiation.html
* http://math.fullerton.edu/mathews/n2003/NumericalDiffMod.html
*[http://numericalmethods.eng.usf.edu/topics/continuous_02dif.html Numerical Differentiation Resources: Textbook notes, PPT, Worksheets, Audiovisual YouTube Lectures] at [http://numericalmethods.eng.usf.edu/ Numerical Methods for STEM Undergraduate]
*ftp://math.nist.gov/pub/repository/diff/src/DIFF Fortran code for the numerical differentiation of a function using Neville's process to extrapolate from a sequence of simple polynomial approximations.
* [http://www.nag.co.uk/numeric/fl/nagdoc_fl24/html/D04/d04conts.html NAG Library numerical differentiation routines]
{{DEFAULTSORT:Numerical Differentiation}}
[[Category:Numerical analysis]]
[[Category:Differential calculus]]
