{{For|the theorem in probability theory|Law of the iterated logarithm}}
In [[computer science]], the '''iterated logarithm''' of ''n'', written {{log-star}} ''n'' (usually read "'''log star'''"), is the number of times the [[logarithm]] function must be iteratively applied before the result is less than or equal to 1. The simplest formal definition is the result of this recursive function:
:<math>
\log^* n :=
\begin{cases}
0 & \mbox{if } n \le 1; \\
1 + \log^*(\log n) & \mbox{if } n > 1
\end{cases}
</math>
On the positive real numbers, the continuous [[super-logarithm]] (inverse [[tetration]]) is essentially equivalent:
:<math>\log^* n = \lceil \text{slog}_e(n) \rceil</math>
but on the negative real numbers, log-star is 0, whereas <math>\lceil \text{slog}_e(-x)\rceil = -1</math> for positive ''x'', so the two functions differ for negative arguments.
[[Image:Iterated logarithm.png|right|300px|thumb|'''Figure 1.''' Demonstrating lg* 4 = 2]]
In computer science, {{lg-star}} is often used to indicate the binary iterated logarithm, which iterates the [[binary logarithm]] instead. The iterated logarithm accepts any positive [[real number]] and yields an [[integer]]. Graphically, it can be understood as the number of "zig-zags" needed in Figure 1 to reach the interval [0, 1] on the ''x''-axis.
Mathematically, the iterated logarithm is well-defined not only for base 2 and base ''e'', but for any base greater than <math>e^{1/e}\approx1.444667</math>.
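The recursive definition above translates directly into a short loop. A minimal Python sketch (the name <code>log_star</code> and the <code>base</code> parameter are our choices; note that floating-point logarithms can be slightly imprecise exactly at interval boundaries):

```python
import math

def log_star(n, base=2):
    """Iterated logarithm: how many times log_base must be applied
    to n before the result is <= 1 (0 for n <= 1)."""
    count = 0
    while n > 1:
        n = math.log(n, base)
        count += 1
    return count

print(log_star(10, math.e))  # ln 10 ~ 2.30, ln 2.30 ~ 0.83, so 2
```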
==Analysis of algorithms==
The iterated logarithm is useful in [[analysis of algorithms]] and [[computational complexity theory|computational complexity]], appearing in the time and space complexity bounds of some algorithms, such as:
* Finding the [[Delaunay triangulation]] of a set of points knowing the [[Euclidean minimum spanning tree]]: randomized [[Big-O notation|O]](''n'' {{log-star}} ''n'') time<ref>Olivier Devillers, "Randomization yields simple O(n log* n) algorithms for difficult Ω(n) problems". ''International Journal of Computational Geometry & Applications'' '''2''':1 (1992), pp. 97–111.</ref>
* [[Fürer's algorithm]] for integer multiplication: O(''n'' log ''n'' 2<sup>{{lg-star}} ''n''</sup>)
* Finding an approximate maximum (an element at least as large as the median): {{lg-star}} ''n'' − 4 to {{lg-star}} ''n'' + 2 parallel operations<ref>Noga Alon and Yossi Azar, "Finding an Approximate Maximum". ''SIAM Journal on Computing'' '''18''':2 (1989), pp. 258–267.</ref>
* Richard Cole and [[Uzi Vishkin]]'s [[Graph coloring#Parallel and distributed algorithms|distributed algorithm for 3-coloring an ''n''-cycle]]: ''O''({{log-star}} ''n'') synchronous communication rounds<ref>Richard Cole and Uzi Vishkin, "Deterministic coin tossing with applications to optimal parallel list ranking". ''Information and Control'' '''70''':1 (1986), pp. 32–53.</ref><ref>{{Introduction to Algorithms|1}} Section 30.5.</ref>
The iterated logarithm grows extremely slowly, much more slowly than the logarithm itself. For all values of ''n'' relevant to the running times of algorithms implemented in practice (i.e., ''n'' ≤ 2<sup>65536</sup>, which far exceeds the number of atoms in the observable universe), the iterated logarithm with base 2 has a value of at most 5.
{| class="wikitable"
! ''x'' !! {{lg-star}} ''x''
|-
| (−∞, 1] || 0
|-
| (1, 2] || 1
|-
| (2, 4] || 2
|-
| (4, 16] || 3
|-
| (16, 65536] || 4
|-
| (65536, 2<sup>65536</sup>] || 5
|}
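The pattern in the table can be checked mechanically: each row's upper endpoint is 2 raised to the previous endpoint, so {{lg-star}} increases by exactly one per row. A brief Python sketch (the variable name is illustrative):

```python
# Upper endpoints of the intervals in the table: each is 2 raised to
# the previous one, forming the tower 1, 2, 4, 16, 65536, 2**65536.
endpoints = [1]
for _ in range(5):
    endpoints.append(2 ** endpoints[-1])

print(endpoints[:5])  # [1, 2, 4, 16, 65536]
```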
Higher bases give smaller iterated logarithms. Indeed, the only function commonly used in complexity theory that grows more slowly is the [[Ackermann function#Inverse|inverse Ackermann function]].
==Other applications==
The iterated logarithm is closely related to the generalized logarithm function used in [[symmetric level-index arithmetic]]. It is also proportional to the additive [[persistence of a number]], the number of times one must replace a number by the sum of its digits before reaching its [[digital root]].
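Additive persistence is easy to compute directly. A minimal sketch (the function name is ours):

```python
def additive_persistence(n):
    """Count how many times n must be replaced by the sum of its
    digits before a single digit (the digital root) remains."""
    count = 0
    while n >= 10:
        n = sum(int(d) for d in str(n))
        count += 1
    return count

print(additive_persistence(199))  # 199 -> 19 -> 10 -> 1, so 3
```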
Santhanam<ref>Rahul Santhanam, [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.2392 "On Separators, Segregators and Time versus Space"].</ref> shows that [[DTIME]] and [[NTIME]] are distinct up to <math>n\sqrt{\log^* n}</math>.
==Notes==
{{reflist}}
==References==
*{{Introduction to Algorithms|2|chapter=3.2: Standard notations and common functions|pages=55–56}}

[[Category:Asymptotic analysis]]
[[Category:Logarithms]]