The '''Remez algorithm''' or '''Remez exchange algorithm''', published by [[Evgeny Yakovlevich Remez]] in 1934,<ref>E. Ya. Remez, "Sur la détermination des polynômes d'approximation de degré donnée", Comm. Soc. Math. Kharkov '''10''', 41 (1934);<br>"Sur un procédé convergent d'approximations successives pour déterminer les polynômes d'approximation", Compt. Rend. Acad. Sc. '''198''', 2063 (1934);<br>"Sur le calcul effectif des polynômes d'approximation des Tschebyscheff", Compt. Rend. Acad. Sc. '''199''', 337 (1934).</ref> is an iterative algorithm used to find simple approximations to functions, specifically, approximations by functions in a [[Chebyshev space]] that are the best in the [[uniform norm]] ''L''<sub>∞</sub> sense.
 
A typical example of a Chebyshev space is the subspace of [[Chebyshev polynomials]] of order ''n'' in the [[Vector space|space]] of real [[continuous function]]s on an [[interval (mathematics)|interval]], ''C''[''a'', ''b''].
The polynomial of best approximation within a given subspace is defined to be the one that minimizes the maximum [[absolute difference]] between the polynomial and the function. In this case, the form of the solution is characterized by the [[equioscillation theorem]].
 
==Procedure==
The Remez algorithm starts with the function ''f'' to be approximated and a set ''X'' of <math>n + 2</math> sample points <math> x_1, x_2, ...,x_{n+2}</math> in the approximation interval, usually the [[Chebyshev nodes]] linearly mapped to the interval. The steps are:
 
# Solve the linear system of equations
:<math> b_0 + b_1 x_i + \cdots + b_n x_i^n + (-1)^i E = f(x_i) </math> (where <math> i = 1, 2, \ldots, n+2 </math>),
:for the unknowns <math>b_0, b_1, \ldots, b_n</math> and ''E''.
# Use the <math> b_i </math> as coefficients to form a polynomial <math>P_n</math>.
# Find the set ''M'' of points of local maximum error <math>|P_n(x) - f(x)| </math>.
# If the errors at every <math> m \in M </math> are of equal magnitude and alternate in sign, then <math>P_n</math> is the minimax approximation polynomial.  If not, replace ''X'' with ''M'' and repeat the steps above.
 
The result is called the polynomial of best approximation, the Chebyshev approximation, or the [[minimax approximation]].
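
As an illustration, the following is a minimal sketch of the linear-system step in Python with NumPy. The function name <code>remez_step</code>, the test function <math>f = \exp</math>, and the 0-based indexing of the alternating signs are our own illustrative choices, not part of any standard implementation; a complete implementation would wrap this in the exchange loop of steps 3–4.

<syntaxhighlight lang="python">
import numpy as np

def remez_step(f, nodes):
    """One trial step of the Remez exchange: solve the linear system
    b_0 + b_1 x_i + ... + b_n x_i^n + (-1)^i E = f(x_i) for the n+1
    coefficients and the levelled error E, given n+2 ordered nodes."""
    m = len(nodes)                       # m = n + 2
    n = m - 2
    # Vandermonde columns 1, x, ..., x^n plus the alternating-sign column.
    A = np.vander(nodes, n + 1, increasing=True)
    A = np.hstack([A, ((-1.0) ** np.arange(m)).reshape(-1, 1)])
    sol = np.linalg.solve(A, f(nodes))
    return sol[:-1], sol[-1]             # coefficients b_0..b_n, and E

# Example: one step for a degree-5 approximation of exp on [-1, 1],
# initialized at n + 2 = 7 Chebyshev nodes.
n = 5
k = np.arange(n + 2)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (n + 2)))[::-1]   # increasing order
coeffs, E = remez_step(np.exp, nodes)
print("levelled error E =", E)
</syntaxhighlight>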
 
A review of technicalities in implementing the Remez algorithm is given by W. Fraser.<ref>{{cite journal |doi=10.1145/321281.321282 |first=W. |last=Fraser |title=A Survey of Methods of Computing Minimax and Near-Minimax Polynomial Approximations for Functions of a Single Independent Variable |journal=J. ACM |volume=12 |issue= |pages=295 |year=1965 }}</ref>
 
===On the choice of initialization===
The Chebyshev nodes are a common choice for the initial approximation because of their role in the theory of polynomial interpolation. When the optimization problem for the function ''f'' is initialized with the Lagrange interpolant ''L''<sub>n</sub>(''f''), it can be shown that this initial approximation is bounded by
 
:<math>\lVert f - L_n(f)\rVert_\infty \le (1 + \lVert L_n\rVert_\infty) \inf_{p \in P_n} \lVert f - p\rVert_\infty</math>
 
with the norm or [[Lebesgue constant (interpolation)|Lebesgue constant]] of the Lagrange interpolation operator ''L''<sub>''n''</sub> of the nodes (''t''<sub>1</sub>, ..., ''t''<sub>''n''&nbsp;+&nbsp;1</sub>) being
 
:<math>\lVert L_n\rVert_\infty = \overline{\Lambda}_n(T) = \max_{-1 \le x \le 1} \lambda_n(T; x),</math>
 
''T'' being the zeros of the Chebyshev polynomials, and the Lebesgue functions being
 
:<math>\lambda_n(T; x) = \sum_{j = 1}^{n + 1} \left| l_j(x) \right|, \quad l_j(x) = \prod_{\stackrel{i = 1}{i \ne j}}^{n + 1} \frac{(x - t_i)}{(t_j - t_i)}.</math>
 
Theodore A. Kilgore,<ref>{{cite journal |doi=10.1016/0021-9045(78)90013-8 |first=T. A. |last=Kilgore |title=A characterization of the Lagrange interpolating projection with minimal Tchebycheff norm |journal=J. Approx. Theory |volume=24 |issue= |pages=273 |year=1978 }}</ref> Carl de Boor, and Allan Pinkus<ref>{{cite journal |doi=10.1016/0021-9045(78)90014-X |first=C. |last=de Boor |first2=A. |last2=Pinkus |title=Proof of the conjectures of Bernstein and Erdös concerning the optimal nodes for polynomial interpolation |journal=[[Journal of Approximation Theory]] |volume=24 |issue= |pages=289 |year=1978 }}</ref> proved that for each ''n'' there exists a unique set of nodes minimizing <math>\lVert L_n\rVert_\infty</math>, although it is not known explicitly for (ordinary) polynomials. Similarly, let <math>\underline{\Lambda}_n(T)</math> denote the smallest of the local maxima of the Lebesgue function; then <math>\overline{\Lambda}_n - \underline{\Lambda}_n \ge 0</math> always holds, and the optimal nodes are characterized by equality, i.e. by an equioscillating Lebesgue function.
 
For Chebyshev nodes, which provide a suboptimal but analytically explicit choice, the asymptotic behavior is known:<ref>{{cite journal |first=F. W. |last=Luttmann |first2=T. J. |last2=Rivlin |title=Some numerical experiments in the theory of polynomial interpolation |journal=IBM J. Res. Develop. |volume=9 |issue= |pages=187 |year=1965 |doi= }}</ref>
 
:<math>\overline{\Lambda}_n(T) = \frac{2}{\pi} \log(n + 1) + \frac{2}{\pi}\left(\gamma + \log\frac{8}{\pi}\right) + \alpha_{n + 1}</math>
 
(''γ'' being the [[Euler-Mascheroni constant]]) with
 
:<math>0 < \alpha_n < \frac{\pi}{72 n^2}</math> for <math>n \ge 1,</math>
 
and the upper bound<ref>T. Rivlin, "The Lebesgue constants for polynomial interpolation", in ''Proceedings of the Int. Conf. on Functional Analysis and Its Application'', edited by H. G. Garnier ''et al.'' (Springer-Verlag, Berlin, 1974), p. 422; ''The Chebyshev polynomials'' (Wiley-Interscience, New York, 1974).</ref>
 
:<math>\overline{\Lambda}_n(T) \le \frac{2}{\pi} \log(n + 1) + 1</math>
 
Lev Brutman<ref>{{cite journal |doi=10.1137/0715046 |first=L. |last=Brutman |title=On the Lebesgue Function for Polynomial Interpolation |journal=SIAM J. Numer. Anal. |volume=15 |issue= |pages=694 |year=1978 }}</ref> obtained, for <math>n \ge 3</math> and <math>\hat{T}</math> the zeros of the expanded Chebyshev polynomials, the bound:
 
:<math>\overline{\Lambda}_n(\hat{T}) - \underline{\Lambda}_n(\hat{T}) < \overline{\Lambda}_3 - \frac{1}{6} \cot \frac{\pi}{8} + \frac{\pi}{64} \frac{1}{\sin^2(3\pi/16)} - \frac{2}{\pi}(\gamma - \log\pi)\approx 0.201.</math>
 
Rüdiger Günttner<ref>{{cite journal |doi=10.1137/0717043 |first=R. |last=Günttner |title=Evaluation of Lebesgue Constants |journal=SIAM J. Numer. Anal. |volume=17 |issue= |pages=512 |year=1980 }}</ref> obtained, from a sharper estimate, the improved bound for <math>n \ge 40</math>:
 
:<math>\overline{\Lambda}_n(\hat{T}) - \underline{\Lambda}_n(\hat{T}) < 0.0196.</math>
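
These estimates are straightforward to check numerically. The following sketch (our own illustration, not taken from the cited sources) evaluates the Lebesgue function on a fine grid for the Chebyshev zeros and compares the resulting <math>\overline{\Lambda}_n(T)</math> with the asymptotic formula and the upper bound above:

<syntaxhighlight lang="python">
import numpy as np

def lebesgue_max(nodes, grid):
    """Approximate the Lebesgue constant max_x sum_j |l_j(x)| on a grid."""
    lam = np.zeros_like(grid)
    for j, tj in enumerate(nodes):
        others = np.delete(nodes, j)
        # l_j(x) = prod_{i != j} (x - t_i) / (t_j - t_i)
        lam += np.abs(np.prod((grid[:, None] - others) / (tj - others), axis=1))
    return lam.max()

n = 20
k = np.arange(n + 1)
T = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))   # zeros of T_{n+1}
grid = np.linspace(-1.0, 1.0, 20001)
gamma = 0.5772156649015329                        # Euler-Mascheroni constant
asymptotic = 2/np.pi * np.log(n + 1) + 2/np.pi * (gamma + np.log(8/np.pi))
bound = 2/np.pi * np.log(n + 1) + 1
# Expect a value just above the asymptotic estimate and below the bound.
print(lebesgue_max(T, grid), asymptotic, bound)
</syntaxhighlight>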
 
==Detailed discussion==
Here we provide more information on the steps outlined above.  In this section we let the index ''i'' run from 0 to ''n''+1.
 
'''Step 1:''' Given <math>x_0, x_1, \ldots, x_{n+1}</math>, solve the linear system of ''n''+2 equations
:<math> b_0 + b_1 x_i + \cdots + b_n x_i^n + (-1)^i E = f(x_i) </math> (where <math> i = 0, 1, \ldots, n+1 </math>),
:for the unknowns <math>b_0, b_1, \ldots, b_n</math> and ''E''.
 
It should be clear that <math>(-1)^i E</math> in this equation makes sense only if the nodes <math>x_0, \ldots, x_{n+1}</math> are ''ordered'', either strictly increasing or strictly decreasing. Then this linear system has a unique solution. (As is well known, not every linear system has a solution.) Moreover, the solution can be obtained with only <math>O(n^2)</math> arithmetic operations, whereas a standard library solver would take <math>O(n^3)</math> operations. Here is a simple proof:
 
Compute the standard ''n''-th degree interpolant <math>p_1(x)</math> to <math>f(x)</math> at the first ''n''+1 nodes, and also the standard ''n''-th degree interpolant <math>p_2(x)</math> to the ordinates <math>(-1)^i</math>:
:<math>p_1(x_i) = f(x_i), \quad p_2(x_i) = (-1)^i, \quad i = 0, \ldots, n.</math>
Each can be obtained by Newton's interpolation formula, using the divided differences of order <math>0, \ldots, n</math> and <math>O(n^2)</math> arithmetic operations.
 
The polynomial <math>p_2(x)</math> has its ''i''-th zero between <math>x_{i-1}</math> and <math>x_i,\ i=1, \ldots, n</math>, and thus no further zeros between <math>x_n</math> and <math>x_{n+1}</math>: <math>p_2(x_n)</math> and <math>p_2(x_{n+1})</math> have the same sign <math>(-1)^n</math>.
 
The linear combination
<math>p(x) := p_1 (x) - p_2(x)\!\cdot\!E</math> is also a polynomial of degree ''n'' and
:<math>p(x_i) = p_1(x_i) - p_2(x_i)\!\cdot\! E \ = \ f(x_i) - (-1)^i E,\ \ \ \  i =0, \ldots, n.</math>
This is the same as the equation above for <math>i = 0, ... ,n</math> and for any choice of ''E''.
The same equation for ''i'' = ''n''+1 is
:<math>p(x_{n+1}) \ = \ p_1(x_{n+1}) - p_2(x_{n+1})\!\cdot\!E \ = \ f(x_{n+1}) - (-1)^{n+1} E</math>
and needs special reasoning:  solved for the variable ''E'', it is the ''definition'' of ''E'':
:<math>E \ := \ \frac{p_1(x_{n+1}) - f(x_{n+1})}{p_2(x_{n+1}) + (-1)^n}.</math>
As mentioned above, the two terms in the denominator have the same sign, so
''E'' and thus <math>p(x) \equiv b_0 + b_1x + \ldots + b_nx^n</math> are always well-defined.
 
The error at the given ''n''+2 ordered nodes is positive and negative in turn because
:<math>p(x_i) - f(x_i) \ = \ -(-1)^i E,\ \ i = 0, ... , n\!+\!1. </math>
 
The Theorem of ''de La Vallée Poussin'' states that under this
condition no polynomial of degree ''n'' exists with error less than ''E''. Indeed, if such a polynomial existed, call it <math>\tilde p(x)</math>, then the difference
<math>p(x)-\tilde p(x) = (p(x) - f(x)) - (\tilde p(x) - f(x))</math> would still
alternate in sign at the ''n''+2 nodes <math>x_i</math> and therefore have at least ''n''+1 zeros, which is impossible for a polynomial of degree ''n''.
Thus, this ''E'' is a lower bound for the minimum error which can be
achieved with polynomials of degree ''n''.
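
As a sketch of this computation in Python (the helper names are our own; Newton's divided differences as described above, <math>O(n^2)</math> in total):

<syntaxhighlight lang="python">
import numpy as np

def newton_coeffs(x, y):
    """Divided-difference coefficients of the Newton interpolant, O(n^2)."""
    c = np.array(y, dtype=float)
    for k in range(1, len(x)):
        c[k:] = (c[k:] - c[k - 1:-1]) / (x[k:] - x[:-k])
    return c

def newton_eval(c, x, t):
    """Evaluate the Newton form at t by a Horner-like recursion, O(n)."""
    v = c[-1]
    for k in range(len(c) - 2, -1, -1):
        v = v * (t - x[k]) + c[k]
    return v

def levelled_error(f, nodes):
    """E = (p1(x_{n+1}) - f(x_{n+1})) / (p2(x_{n+1}) + (-1)^n), with p1, p2
    interpolating f(x_i) and (-1)^i at the first n+1 of the n+2 nodes."""
    x, xlast = nodes[:-1], nodes[-1]
    n = len(x) - 1
    c1 = newton_coeffs(x, f(x))                        # p1 interpolates f
    c2 = newton_coeffs(x, (-1.0) ** np.arange(n + 1))  # p2 interpolates (-1)^i
    return ((newton_eval(c1, x, xlast) - f(xlast))
            / (newton_eval(c2, x, xlast) + (-1.0) ** n))
</syntaxhighlight>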
 
'''Step 2''' changes the notation from
<math>b_0 + b_1x + ... + b_nx^n</math> to <math>p(x)</math>.
 
'''Step 3''' improves upon the input nodes <math>x_0, ..., x_{n+1}</math>
and their errors <math>\pm E</math> as follows.
 
The error <math>p(x) - f(x)</math> alternates in sign at the nodes, so the approximation interval splits into regions where it is positive (''P-regions'') and regions where it is negative (''N-regions''). In each P-region, the current node <math>x_i</math> is replaced with the local
maximizer <math>\bar{x}_i</math> of the error, and in each N-region <math>x_i</math> is replaced with the
local minimizer. (Expect <math>\bar{x}_0</math> at the left endpoint ''A'' of the interval, each <math>\bar{x}_i</math> near <math>x_i</math>, and <math>\bar{x}_{n+1}</math> at the right endpoint ''B''.) No high precision is required here;
the standard ''line search'' with a couple of ''quadratic fits'' should
suffice. (See <ref>David G. Luenberger: ''Introduction to Linear and
Nonlinear Programming'', Addison-Wesley Publishing Company 1973.</ref>)
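
As an illustration of such a refinement (a minimal sketch; the function name and the three-point bracket are our own choices), a single quadratic fit through three points of the error function estimates the local extremum:

<syntaxhighlight lang="python">
def quadratic_refine(err, a, m, b):
    """One quadratic-fit step for an extremum of err in [a, b], with a < m < b.

    Fits a parabola through (a, err(a)), (m, err(m)), (b, err(b)) and
    returns its stationary point, clipped to the bracket."""
    fa, fm, fb = err(a), err(m), err(b)
    # Standard three-point formula for the vertex of the parabola.
    num = (m - a) ** 2 * (fm - fb) - (m - b) ** 2 * (fm - fa)
    den = (m - a) * (fm - fb) - (m - b) * (fm - fa)
    if den == 0:
        return m                     # parabola degenerate: keep the midpoint
    return min(max(m - 0.5 * num / den, a), b)
</syntaxhighlight>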
 
Let <math>z_i := p(\bar{x}_i) - f(\bar{x}_i)</math>. Each amplitude <math>|z_i|</math> is greater than or equal to ''E''. The Theorem of ''de La Vallée Poussin'' and its proof also
apply to <math>z_0, ... ,z_{n+1}</math> with <math>\min\{|z_i|\} \geq E</math> as the new
lower bound for the best error possible with polynomials of degree ''n''.
 
Moreover, <math>\max\{|z_i|\}</math> comes in handy as an obvious upper bound for that best possible error.
 
'''Step 4:''' With <math>\min\,\{|z_i|\}</math> and <math>\max\,\{|z_i|\}</math> as lower and upper
bound for the best possible approximation error, one has a reliable
stopping criterion: repeat the steps until <math>\max\{|z_i|\} - \min\{|z_i|\}</math> is sufficiently small or no longer decreases. These bounds also indicate the progress of the iteration.
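
In code, such a stopping test might look as follows (a sketch; the tolerance is an arbitrary illustration):

<syntaxhighlight lang="python">
def converged(z, tol=1e-12):
    """Stop when the bracket [min|z_i|, max|z_i|] on the best error is tight."""
    amps = [abs(zi) for zi in z]
    return max(amps) - min(amps) < tol * max(amps)
</syntaxhighlight>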
 
==Variants==
Sometimes more than one sample point is replaced at the same time with the locations of nearby maximum absolute differences.
 
Sometimes [[relative error]] is used to measure the difference between the approximation and the function, especially if the approximation will be used to compute the function on a computer which uses [[floating point]] arithmetic.
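
In this variant, only the error functional being levelled changes; a minimal sketch (assuming ''f'' is bounded away from zero on the interval):

<syntaxhighlight lang="python">
def relative_error(p, f, x):
    """Relative error |p(x) - f(x)| / |f(x)|, used in place of the
    absolute error when f does not vanish on the interval."""
    return abs(p(x) - f(x)) / abs(f(x))
</syntaxhighlight>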
 
==See also==
* [[Approximation theory]]
 
==Notes==
{{reflist}}
 
==External links==
*[http://www.bores.com/courses/intro/filters/4_equi.htm Intro to DSP]
*{{MathWorld|urlname=RemezAlgorithm|title=Remez Algorithm|author=Aarts, Ronald M.; Bond, Charles; Mendelsohn, Phil; and Weisstein, Eric W.}}
 
[[Category:Polynomials]]
[[Category:Approximation theory]]
[[Category:Numerical analysis]]
