In [[numerical analysis]], '''pairwise summation''', also called '''cascade summation''', is a technique to sum a sequence of finite-[[arithmetic precision|precision]] [[floating-point]] numbers that substantially reduces the accumulated [[round-off error]] compared to naively accumulating the sum in sequence.<ref name=Higham93>{{Citation | title=The accuracy of floating point summation |
first1=Nicholas J. | last1=Higham | journal=[[SIAM Journal on Scientific Computing]] |
volume=14 | issue=4 | pages=783–799 | doi=10.1137/0914050 | year=1993
}}</ref>  Although there are other techniques such as [[Kahan summation]] that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost&mdash;it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.
 
In particular, pairwise summation of a sequence of ''n'' numbers ''x<sub>i</sub>'' works by [[recursion (computer science)|recursively]] breaking the sequence into two halves, summing each half, and adding the two sums: a [[divide and conquer algorithm]].  Its roundoff errors grow [[Big O notation|asymptotically]] as at most ''O''(ε&nbsp;log&nbsp;''n''), where ε is the [[machine precision]] (assuming a fixed [[condition number]], as discussed below).<ref name=Higham93/>  In comparison, the naive technique of accumulating the sum in sequence (adding each ''x<sub>i</sub>'' one at a time for ''i''&nbsp;=&nbsp;1,&nbsp;...,&nbsp;''n'') has roundoff errors that grow at worst as ''O''(ε''n'').<ref name=Higham93/>  [[Kahan summation]] has a [[error bound|worst-case error]] of roughly ''O''(ε), independent of ''n'', but requires several times more arithmetic operations.<ref name=Higham93/>  If the roundoff errors are random, and in particular have random signs, then they form a [[random walk]] and the error growth is reduced to an average of <math>O(\varepsilon \sqrt{\log n})</math> for pairwise summation.<ref name=Tasche>Manfred Tasche and Hansmartin Zeuner, ''Handbook of Analytic-Computational Methods in Applied Mathematics'' (Boca Raton, FL: CRC Press, 2000).</ref>
 
A very similar recursive structure of summation is found in many [[fast Fourier transform]] (FFT) algorithms, and is responsible for the same slow growth of accumulated roundoff in those FFTs.<ref name=Tasche/><ref name=JohnsonFrigo08>S. G. Johnson and M. Frigo, "[http://cnx.org/content/m16336/latest/ Implementing FFTs in practice]," in ''[http://cnx.org/content/col10550/ Fast Fourier Transforms]'', edited by [[C. Sidney Burrus]] (2008).</ref>
 
==The algorithm==
 
In [[pseudocode]], the pairwise summation algorithm for an [[Array data type|array]] ''x'' of length ''n'' can be written:
 
''s'' = '''pairwise'''(''x''[1&hellip;''n''])
      if ''n'' &le; ''N''                    ''base case: naive summation for a sufficiently small array''
          ''s'' = ''x''[1]
          for ''i'' = 2 to ''n''
              ''s'' = ''s'' + ''x''[''i'']
      else                        ''divide and conquer: recursively sum two halves of the array''
          ''m'' = [[Floor and ceiling functions|floor]](''n'' / 2)
          ''s'' = '''pairwise'''(''x''[1&hellip;''m'']) + '''pairwise'''(''x''[''m''+1&hellip;''n''])
      endif
 
For some sufficiently small ''N'', this algorithm switches to a naive loop-based summation as a [[base case]], whose error bound is ''O''(ε''N'').  Therefore, the entire sum has a worst-case error that grows asymptotically as ''O''(ε''N''&nbsp;log&nbsp;''n'') for large ''n'', for a given condition number (see below), and the smallest error bound is attained for&nbsp;''N''&nbsp;=&nbsp;1.
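As a concrete illustration, the pseudocode above can be sketched in Python; the function name and the default cutoff <code>base</code> (playing the role of ''N'') are illustrative choices, not part of the original algorithm statement.

```python
def pairwise_sum(x, lo=0, hi=None, base=128):
    """Sum x[lo:hi] by recursive halving, with a naive base case.

    `base` is the cutoff N from the pseudocode above.
    """
    if hi is None:
        hi = len(x)
    n = hi - lo
    if n <= base:
        # base case: naive sequential summation of a small slice
        s = 0.0
        for i in range(lo, hi):
            s += x[i]
        return s
    # divide and conquer: recursively sum the two halves
    m = lo + n // 2
    return pairwise_sum(x, lo, m, base) + pairwise_sum(x, m, hi, base)
```

Passing index bounds rather than slicing avoids copying subarrays at each level of the recursion.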
 
In an algorithm of this sort (as for [[Divide and conquer algorithm#Choosing the base cases|divide and conquer algorithm]]s in general<ref>Radu Rugina and Martin Rinard, "[http://people.csail.mit.edu/rinard/paper/lcpc00.pdf Recursion unrolling for divide and conquer programs]," in ''Languages and Compilers for Parallel Computing'', chapter 3, pp. 34–48.  ''Lecture Notes in Computer Science'' vol. 2017 (Berlin: Springer, 2001).</ref>), it is desirable to use a larger base case in order to [[Amortized analysis|amortize]] the overhead of the recursion. If ''N''&nbsp;=&nbsp;1, then there is roughly one recursive subroutine call for every input, but more generally there is one recursive call for (roughly) every ''N''/2 inputs if the recursion stops at exactly&nbsp;''n''&nbsp;=&nbsp;''N''.  By making ''N'' sufficiently large, the overhead of recursion can be made negligible (precisely this technique of a large base case for recursive summation is employed by high-performance FFT implementations<ref name=JohnsonFrigo08/>).
 
Regardless of ''N'', exactly ''n''&minus;1 additions are performed in total, the same as for naive summation, so if the recursion overhead is made negligible then pairwise summation has essentially the same computational cost as for naive summation.
 
A variation on this idea is to break the sum into ''b'' blocks at each recursive stage, summing each block recursively, and then summing the results, which was dubbed a "superblock" algorithm by its proposers.<ref name=Castaldo08>Anthony M. Castaldo, R. Clint Whaley, and Anthony T. Chronopoulos, "Reducing floating-point error in dot product using the superblock family of algorithms," ''SIAM J. Sci. Comput.'', vol. 32, pp. 1156–1174 (2008).</ref>  The above pairwise algorithm corresponds to ''b''&nbsp;=&nbsp;2 for every stage except for the last stage which is&nbsp;''b''&nbsp;=&nbsp;''N''.
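A minimal sketch of this ''b''-block idea follows, with hypothetical names and arbitrary choices of ''b'' and base-case size; the blocking strategy of the cited superblock algorithm may differ in detail.

```python
def superblock_sum(x, b=8, base=64):
    """Sum x by splitting into b blocks per recursive stage
    (a sketch of the b-block variation; b=2 recovers pairwise summation)."""
    n = len(x)
    if n <= base:
        # base case: naive sequential summation
        s = 0.0
        for v in x:
            s += v
        return s
    step = -(-n // b)  # ceil(n / b): length of each block
    s = 0.0
    for lo in range(0, n, step):
        s += superblock_sum(x[lo:lo + step], b, base)
    return s
```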
 
Pairwise summation can be done non-recursively using a working array of partial sums, where <code>sum[i]</code> holds the sum of 2<sup>i</sup> values. For example, when 11 values have been accumulated, <code>sum[3]</code> will hold the sum of the first 8, <code>sum[2]</code> will not be in use, <code>sum[1]</code> will hold the sum of the next 2 values, and <code>sum[0]</code> will hold the one remaining value:
 
'''for''' ''i'' = 0 to ''n'' - 1:
    ''value'' = ''x''[''i'']
    ''place'' = 0
    '''while''' (''i'' & (1 << ''place'')):
        ''value'' += ''sum''[''place'']
        ++''place''
    ''sum''[''place''] = ''value''
''value'' = 0
''place'' = 0
'''while''' (1 << ''place'') &le; ''n''
    '''if''' (''n'' & (1 << ''place'')):
        ''value'' += ''sum''[''place'']
    ++''place''
'''return''' ''value''
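In Python, this working-array scheme can be written as follows; the helper name and the use of <code>bit_length</code> to size the array are implementation choices.

```python
def pairwise_sum_iter(x):
    """Non-recursive pairwise summation: sum_[k] holds a partial sum
    of 2**k consecutive input values (a binary-counter carry scheme)."""
    n = len(x)
    sum_ = [0.0] * (n.bit_length() + 1)
    for i in range(n):
        value = x[i]
        place = 0
        # merge completed blocks, like the carry chain of a binary counter
        while i & (1 << place):
            value += sum_[place]
            place += 1
        sum_[place] = value
    # combine the partial sums selected by the binary digits of n
    value = 0.0
    place = 0
    while (1 << place) <= n:
        if n & (1 << place):
            value += sum_[place]
        place += 1
    return value
```

This version needs only O(log ''n'') auxiliary storage and accumulates the input one value at a time, which is convenient when the data arrive as a stream.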
 
==Accuracy==
 
Suppose that one is summing ''n'' values ''x''<sub>''i''</sub>, for ''i''&nbsp;=&nbsp;1,&nbsp;...,&nbsp;''n''.  The exact sum is:
:<math>S_n = \sum_{i=1}^n x_i</math>
 
(computed with infinite precision).
 
With pairwise summation for a base case ''N''&nbsp;=&nbsp;1, one instead obtains <math>S_n + E_n</math>, where the error <math>E_n</math> is bounded above by:<ref name=Higham93/>
 
:<math>|E_n| \leq \frac{\varepsilon \log_2 n}{1 - \varepsilon \log_2 n} \sum_{i=1}^n |x_i| </math>
 
where ε is the [[machine precision]] of the arithmetic being employed (e.g. ε&nbsp;&asymp;&nbsp;10<sup>&minus;16</sup> for standard [[double precision]] floating point). Usually, the quantity of interest is the [[relative error]] <math>|E_n|/|S_n|</math>, which is therefore bounded above by:
:<math>\frac{|E_n|}{|S_n|} \leq \frac{\varepsilon \log_2 n}{1 - \varepsilon \log_2 n} \left(\frac{\sum_{i=1}^n |x_i|}{\left| \sum_{i=1}^n x_i \right|}\right). </math>
 
In the expression for the relative error bound, the fraction (&Sigma;|''x<sub>i</sub>''|/|&Sigma;''x<sub>i</sub>''|) is the [[condition number]] of the summation problem. Essentially, the condition number represents the ''intrinsic'' sensitivity of the summation problem to errors, regardless of how it is computed.<ref>L. N. Trefethen and D. Bau, ''Numerical Linear Algebra'' (SIAM: Philadelphia, 1997).</ref> The relative error bound of ''every'' ([[backwards stable]]) summation method computed by a fixed algorithm in fixed precision (i.e. not those that use [[arbitrary precision]] arithmetic, nor algorithms whose memory and time requirements change based on the data) is proportional to this condition number.<ref name=Higham93/>  An ''ill-conditioned'' summation problem is one in which this ratio is large, and in this case even pairwise summation can have a large relative error. For example, if the summands ''x<sub>i</sub>'' are uncorrelated random numbers with zero mean, the sum is a [[random walk]] and the condition number will grow proportional to <math>\sqrt{n}</math>. On the other hand, for random inputs with nonzero mean the condition number asymptotes to a finite constant as <math>n\to\infty</math>.  If the inputs are all [[non-negative]], then the condition number is 1.
 
Note that the <math>1 - \varepsilon \log_2 n</math> denominator is effectively 1 in practice, since <math>\varepsilon \log_2 n</math> is much smaller than 1 until ''n'' becomes of order 2<sup>1/ε</sup>, which is roughly 10<sup>10<sup>15</sup></sup> in double precision.
 
In comparison, the relative error bound for naive summation (simply adding the numbers in sequence, rounding at each step) grows as <math>O(\varepsilon n)</math> multiplied by the condition number.<ref name=Higham93/>  In practice, it is much more likely that the rounding errors have a random sign, with zero mean, so that they form a random walk; in this case, naive summation has a [[root mean square]] relative error that grows as <math>O(\varepsilon \sqrt{n})</math> and pairwise summation has an error that grows as <math>O(\varepsilon \sqrt{\log n})</math> on average.<ref name=Tasche/>
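The contrast in error growth can be observed empirically. The following Python sketch compares the relative error of naive and pairwise summation of 10<sup>5</sup> random values against the correctly rounded reference <code>math.fsum</code>; the input distribution, seed, and base-case size are arbitrary choices, and the asserted tolerances come from the worst-case bounds above, not from measured behavior.

```python
import math
import random

random.seed(0)
x = [random.uniform(0.0, 1.0) for _ in range(10**5)]

exact = math.fsum(x)  # correctly rounded reference sum

# naive sequential accumulation
naive = 0.0
for xi in x:
    naive += xi

# recursive pairwise summation with a small naive base case
def pairwise(xs):
    if len(xs) <= 32:
        s = 0.0
        for v in xs:
            s += v
        return s
    m = len(xs) // 2
    return pairwise(xs[:m]) + pairwise(xs[m:])

err_naive = abs(naive - exact) / abs(exact)
err_pairwise = abs(pairwise(x) - exact) / abs(exact)
```

Since the inputs are all positive, the condition number is 1 here, so the relative errors reflect the algorithms themselves rather than cancellation in the data.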
 
==References==
<references/>
 
[[Category:Computer arithmetic]]
[[Category:Numerical analysis]]
[[Category:Articles with example pseudocode]]
