Edgeworth series: Difference between revisions

From formulasearchengine
{{about|the comparison of functions as inputs approach infinity|asymptotes in geometry|asymptotic curve}}
 
In [[mathematical analysis]], '''asymptotic analysis''' is a method of describing [[Limit (mathematics)|limiting]] behavior. The methodology has applications across science. Examples are
* in [[computer science]] in the [[analysis of algorithms]], considering the performance of algorithms when applied to very large input datasets.
* the behavior of [[physical system]]s when they are very large.
* in [[accident analysis]] when identifying the causes of crashes through count modeling with a large number of crash counts in a given time and space.
The simplest example is the following: when considering a function ''f''(''n''), there is a need to describe its properties when ''n'' becomes very large.  Thus, if ''f''(''n'') = ''n''<sup>2</sup>+3''n'', the term 3''n'' becomes insignificant compared to ''n''<sup>2</sup> when ''n'' is very large.  The function ''f''(''n'') is said to be "asymptotically equivalent to ''n''<sup>2</sup> as ''n'' → ∞", and this is written symbolically as ''f''(''n'') ~ ''n''<sup>2</sup>.
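
This can be checked numerically; the following short Python sketch (an illustration, not part of the standard presentation) shows the ratio ''f''(''n'')/''n''<sup>2</sup> approaching 1:

```python
# Numerical illustration: for f(n) = n^2 + 3n, the ratio f(n)/n^2
# approaches 1 as n grows, which is exactly what f(n) ~ n^2 asserts.
def f(n):
    return n**2 + 3*n

for n in (10, 1_000, 100_000):
    print(n, f(n) / n**2)   # 1.3, then 1.003, then 1.00003
```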
 
== Definition ==
{{Cleanup|reason=general case <math>f \sim g \; (n \to a \in \R)</math> missing|date=June 2013}}
 
Formally, given complex-valued functions ''f'' and ''g'' of a natural number variable ''n'', one writes
 
:<math>f \sim g \quad (\text{as } n\to\infty)</math>
 
to express the fact, stated in terms of [[Big_O_notation#Little-o_notation|little-o notation]], that
 
:<math>f(n) = g(n) + o(g(n))\quad(\text{as }n\to\infty),\,</math>
 
or equivalently
 
:<math>f(n) = (1+o(1))g(n)\quad(\text{as }n\to\infty).</math>
 
Explicitly this means that for every positive constant ''ε'' there exists a constant ''N'' such that
 
:<math>|f(n)-g(n)|\leq\varepsilon|g(n)|\qquad\text{for all }n\geq N~</math>.
 
Provided ''g''(''n'') is nonzero (at least for all sufficiently large ''n''), so that the quotient below is defined, this statement is also equivalent to
 
:<math>\lim_{n\to\infty} \frac{f(n)}{g(n)} = 1.</math>
 
This relation is an equivalence relation on the set of functions of ''n''. The equivalence class of ''f'' informally consists of all functions ''g'' which are approximately equal to ''f'' in a relative sense, in the limit.
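
The little-o formulation can also be observed numerically; in the sketch below (the particular pair of functions is an illustrative choice), the relative difference |''f''(''n'') − ''g''(''n'')|/|''g''(''n'')| shrinks toward 0, witnessing ''f'' ~ ''g'':

```python
import math

# For f(n) = sqrt(n^2 + n) and g(n) = n, the relative difference
# |f(n) - g(n)| / |g(n)| tends to 0, so f ~ g in the sense defined above.
def rel_diff(n):
    f = math.sqrt(n * n + n)
    return abs(f - n) / n

for n in (10, 1_000, 100_000):
    print(n, rel_diff(n))   # roughly 1/(2n): about 0.049, 0.0005, 0.000005
```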
 
== Asymptotic expansion ==
 
An [[asymptotic expansion]] of a function ''f''(''x'') is in practice an expression of that function in terms of a [[series (mathematics)|series]], the [[partial sum]]s of which do not necessarily converge, but such that taking any initial partial sum provides an asymptotic formula for ''f''. The idea is that successive terms provide a more and more accurate description of the order of growth of ''f''. An example is [[Stirling's approximation]].
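
Stirling's approximation ''n''! ~ √(2π''n'')(''n''/''e'')<sup>''n''</sup> can be checked directly; a small Python sketch: the ratio of the two sides tends to 1 even though their absolute difference grows without bound.

```python
import math

# Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)^n.
# The ratio of the two sides tends to 1; their difference does not tend to 0.
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 20, 50):
    print(n, math.factorial(n) / stirling(n))   # ratios approach 1 from above
```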
 
In symbols, it means we have
 
:<math>f \sim g_1</math>
 
but also
 
:<math>f \sim g_1 + g_2</math>
 
and
 
:<math>f \sim g_1 + \cdots +  g_k</math>
 
for each ''fixed'' ''k'', while some limit is taken, usually with the requirement that ''g''<sub>''k''+1</sub> = o(''g<sub>k</sub>''), which means the (''g<sub>k</sub>'') form an [[asymptotic scale]]. The requirement that the successive sums improve the approximation may then be expressed as <math>f - (g_1 + \cdots + g_k) = o(g_k).</math>
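
The Stirling series for ln ''n''! furnishes such an asymptotic scale; in the sketch below (truncating after four terms is an arbitrary choice), each added term shrinks the error by a further factor:

```python
import math

# The Stirling series ln n! ~ n ln n - n + (1/2) ln(2*pi*n) + 1/(12n) - 1/(360n^3) + ...
# is built on an asymptotic scale: each term is o() of the one before it,
# and each additional partial sum is a sharper asymptotic formula.
def stirling_terms(n):
    return [n * math.log(n) - n,
            0.5 * math.log(2 * math.pi * n),
            1 / (12 * n),
            -1 / (360 * n**3)]

n = 40
exact = math.lgamma(n + 1)   # ln(40!)
s = 0.0
for g in stirling_terms(n):
    s += g
    print(abs(s - exact))    # each error is much smaller than the last
```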
 
When the asymptotic expansion does not converge, then for any particular value of the argument there is a particular partial sum which provides the best approximation, and adding further terms decreases the accuracy.  However, this optimal partial sum will usually contain more terms as the argument approaches the limit value.
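
A classical illustration is Euler's divergent series for a Stieltjes-type integral. The Python sketch below (the quadrature scheme and the value ''x'' = 0.1 are arbitrary choices) shows the error of the partial sums first shrinking, reaching a best truncation near 1/''x'' terms, and then growing:

```python
import math

# I(x) = integral from 0 to infinity of e^(-t) / (1 + x t) dt has the divergent
# asymptotic expansion I(x) ~ sum_n (-1)^n n! x^n as x -> 0+.  For fixed x the
# partial sums first approach I(x), then move away again.

def I(x, tmax=60.0, steps=200_000):
    # Trapezoidal rule on [0, tmax]; the integrand decays like e^(-t).
    h = tmax / steps
    total = 0.5 * (1.0 + math.exp(-tmax) / (1 + x * tmax))
    for i in range(1, steps):
        t = i * h
        total += math.exp(-t) / (1 + x * t)
    return total * h

def partial(x, k):
    # Sum of the first k terms of the asymptotic series.
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(k))

x = 0.1
exact = I(x)
errors = [abs(partial(x, k) - exact) for k in range(1, 25)]
best = min(range(24), key=lambda i: errors[i])
print(best + 1, errors[best], errors[-1])   # best truncation has about 1/x terms
```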
 
Asymptotic expansions typically arise in the approximation of certain integrals ([[Laplace's method]], [[saddle-point method]], [[method of steepest descent]]) or in the approximation of probability distributions ([[Edgeworth series]]). The famous [[Feynman graphs]] in [[quantum field theory]] are another example of asymptotic expansions which often do not converge.
 
== Use in applied mathematics ==
 
Asymptotic analysis is a key tool for exploring the [[ordinary differential equation|ordinary]] and [[partial differential equation|partial]] differential equations which arise in the [[mathematical model]]ling of real-world phenomena.<ref name="Howison">S. Howison, ''Practical Applied Mathematics'', Cambridge University Press, Cambridge, 2005. ISBN 0-521-60369-2</ref> An illustrative example is the derivation of the [[Boundary layer#Boundary layer equations|boundary layer equations]] from the full [[Navier-Stokes equations]] governing fluid flow. In many cases, the asymptotic expansion is in powers of a small parameter, <math>\epsilon</math>: in the boundary layer case, this is the [[dimensional analysis|nondimensional]] ratio of the boundary layer thickness to a typical lengthscale of the problem. Indeed, applications of asymptotic analysis in mathematical modelling often<ref name="Howison" /> centre around a nondimensional parameter which has been shown, or assumed, to be small through a consideration of the scales of the problem at hand.
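
The flavour of an expansion in powers of a small parameter can be seen in a toy regular-perturbation problem (an illustrative choice, not the boundary-layer computation itself): solving ''x'' = 1 + ε''x''<sup>3</sup> for small ε.

```python
# Toy regular-perturbation example: solve x = 1 + eps*x**3 for small eps.
# Writing x = x0 + eps*x1 + eps**2*x2 + ... and matching powers of eps gives
# x0 = 1, x1 = 1, x2 = 3, so x is approximately 1 + eps + 3*eps**2, with an
# O(eps**3) error.

def root(eps, iters=100):
    # Fixed-point iteration x <- 1 + eps*x**3; a contraction for small eps.
    x = 1.0
    for _ in range(iters):
        x = 1.0 + eps * x ** 3
    return x

eps = 0.01
print(root(eps), 1 + eps + 3 * eps ** 2)   # agree to about eps**3
```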
 
==Method of dominant balance==
The method of dominant balance is used to determine the asymptotic behavior of solutions to an [[Ordinary differential equation|ODE]] without solving it. The process is iterative in that the result obtained by performing the method once can be used as input when the method is repeated, to obtain as many terms in the [[asymptotic expansion]] as desired.
 
The process is as follows:
 
1. Assume that the asymptotic behavior has the form
::<math>y(x) \sim e^{S(x)}\,</math>.
 
2. Make a clever guess as to which terms in the ODE may be negligible in the limit we are interested in.
 
3. Drop those terms and solve the resulting ODE.
 
4. Check that the solution is consistent with step 2. If this is the case, then we have the controlling factor of the asymptotic behavior. Otherwise, we need to try dropping different terms in step 2.
 
5. Repeat the process using our result as the first term of the solution.
 
As an example of this process, consider the differential equation

::<math>xy''+(c-x)y'-ay=0\,</math>

where ''c'' and ''a'' are arbitrary constants. This differential equation cannot be solved exactly. However, it may be useful to know how its solutions behave for large ''x''.
 
We start by assuming <math>y\sim e^{S(x)}\,</math> as <math>x\rightarrow \infty</math>. We do this with the benefit of hindsight, to make things quicker.
Since we only care about the behavior of y in the large x limit, we set y equal to <math>e^{S(x)}\,</math>, and re-express the ODE in terms of S(x):
::<math>xS''+xS'^2+(c-x)S'-a=0\,</math>,  or
 
::<math>S''+S'^2+\left(\frac{c}{x}-1\right)S'-\frac{a}{x}=0\,</math>
 
::where we have used the [[product rule]] and [[chain rule]] to find the derivatives of y.
Now let us suppose that a solution to this new ODE satisfies
::<math>S'^2\sim S'\,</math> as <math>x\to\infty</math>
 
::<math>S''=o(S'^2),\qquad \frac{c}{x}S'=o(S'),\qquad \frac{a}{x}=o(S')\,</math> as <math>x\to\infty</math>
 
We get the dominant asymptotic behaviour by setting
::<math>S_0'^2=S_0'\,</math>
If <math>S_0</math> satisfies the above asymptotic conditions, then everything is consistent. The terms we dropped will indeed have been negligible with respect to the ones we kept. <math>S_0</math> is not a solution to the ODE for S, but it represents the dominant asymptotic behaviour, which is what we are interested in. Let us check that this choice for <math>S_0</math> is consistent:
::<math>S_0'=1\,</math>
::<math>S_0'^2=1\,</math>
::<math>S_0''=0=o(S_0')\,</math>
::<math>\frac{c}{x}S_0'=\frac{c}{x}=o(S_0')\,</math>
 
::<math>\frac{a}{x}=o(S_0')\,</math>
 
Everything is indeed consistent. Thus we find the dominant asymptotic behaviour of a solution to our ODE:
::<math>S_0=x\,</math>
::<math>y\sim e^x\,</math>
 
By convention, the asymptotic series is written as:
::<math>y\sim Ax^p e^{\lambda x^r}\left(1+\frac{u_1}{x}+\frac{u_2}{x^2}\cdots\right)\,</math>
so to obtain at least the first term of this series we must take another step to check whether there is a power of ''x'' in front of the exponential factor.
 
We proceed by making an [[ansatz]] that we can write
::<math>S(x)=S_0(x)+C(x)\,</math>
and then attempt to find asymptotic solutions for C(x). Substituting into the ODE for S(x) we find
::<math>C''+C'^2+C'+\frac{c}{x}C'+\frac{c-a}{x}=0\,</math>
Repeating the same process as before, we keep C' and (c-a)/x and find that
::<math>C_0=\log x^{a-c}\,</math>
The leading asymptotic behaviour is therefore
::<math>y\sim x^{a-c}e^x\,</math>
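
The differential equation above is Kummer's equation, and this result can be sanity-checked numerically. The sketch below (the sample values a = 1, c = 2, the initial conditions, and the step size are all arbitrary choices) integrates the ODE with a hand-rolled RK4 scheme and watches the scaled solution y(x)·x<sup>c−a</sup>·e<sup>−x</sup>, which should level off at a constant:

```python
import math

# Check the dominant-balance result y ~ x^(a-c) e^x for
#     x y'' + (c - x) y' - a y = 0
# by integrating numerically and watching r(x) = y(x) * x^(c-a) * e^(-x),
# which should approach a constant as x grows.
a, c = 1.0, 2.0   # arbitrary sample values

def ypp(x, y, yp):
    # y'' solved from the ODE.
    return ((x - c) * yp + a * y) / x

def integrate(x1, x0=1.0, y0=1.0, yp0=0.0, h=0.005):
    # Classical RK4 on the first-order system (y, y').
    x, y, yp = x0, y0, yp0
    for _ in range(int(round((x1 - x0) / h))):
        k1y, k1p = yp, ypp(x, y, yp)
        k2y, k2p = yp + h/2*k1p, ypp(x + h/2, y + h/2*k1y, yp + h/2*k1p)
        k3y, k3p = yp + h/2*k2p, ypp(x + h/2, y + h/2*k2y, yp + h/2*k2p)
        k4y, k4p = yp + h*k3p, ypp(x + h, y + h*k3y, yp + h*k3p)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        yp += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        x += h
    return y

def r(x1):
    return integrate(x1) * x1 ** (c - a) * math.exp(-x1)

print(r(15.0), r(20.0), r(25.0))   # nearly identical values
```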
 
==See also==
*[[Asymptotic computational complexity]]
*[[Asymptotic theory]]
 
==References==
<references />
* {{cite doi|10.1023/A:1006145903624}}
 
{{DEFAULTSORT:Asymptotic Analysis}}
[[Category:Asymptotic analysis|*]]

Revision as of 03:25, 19 February 2014
