{{Probabilistic}}
A '''randomized algorithm''' is an [[algorithm]] which employs a degree of [[randomness]] as part of its logic. The algorithm typically uses [[Uniform distribution (discrete)|uniformly random]] bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of random bits. Formally, the algorithm's performance will be a [[random variable]] determined by the random bits; thus either the running time, the output, or both are random variables.


One has to distinguish between algorithms that use the random input to reduce the expected running time or memory usage but always terminate with a correct result ([[Las Vegas algorithm]]s), and '''probabilistic algorithms''', which, depending on the random input, have a chance of producing an incorrect result ([[Monte Carlo algorithm]]s) or of failing to produce a result at all, either by signalling a failure or by failing to terminate.


In the second case, random performance and random output, the term "algorithm" for a procedure is somewhat questionable. In the case of random output, it is no longer formally [[Effective method|effective]].<ref>"Probabilistic algorithms should not be mistaken with methods (which I refuse to call algorithms), which produce a result which has a high probability of being correct. It is essential that an algorithm produces correct results (discounting human or computer errors), even if this happens after a very long time." Henri Cohen (2000). ''A Course in Computational Algebraic Number Theory''. Springer-Verlag, p. 2.</ref>
However, in some cases, probabilistic algorithms are the only practical means of solving a problem.<ref>"In [[primality test|testing primality]] of very large numbers chosen at random, the chance of stumbling upon a value that fools the [[Fermat primality test|Fermat test]] is less than the chance that [[cosmic radiation]] will cause the computer to make an error in carrying out a 'correct' algorithm. Considering an algorithm to be inadequate for the first reason but not for the second illustrates the difference between mathematics and engineering." [[Hal Abelson]] and [[Gerald J. Sussman]] (1996). ''[[Structure and Interpretation of Computer Programs]]''. [[MIT Press]], [http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html#footnote_Temp_80 section 1.2].</ref>
 
In common practice, randomized algorithms are approximated using a [[pseudorandom number generator]] in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior.
 
==Motivation==
 
As a motivating example, consider the problem of finding an ‘''a''’ in an [[Array data structure|array]] of ''n'' elements.
 
'''Input''': An array of ''n''≥2 elements, in which half are ‘''a''’s and the other half are ‘''b''’s.
 
'''Output''': Find an ‘''a''’ in the array.
 
We give two versions of the algorithm, one [[Las Vegas algorithm]] and one [[Monte Carlo algorithm]].
 
Las Vegas algorithm:
 
<source lang="pascal">
findingA_LV(array A, n)
begin
    repeat
        Randomly select one element out of n elements.
    until 'a' is found
end
</source>
 
This algorithm succeeds with probability 1. The run time of a single call varies and can be arbitrarily large, but since each random selection finds an ‘''a''’ with probability 1/2, the expected number of iterations is 2, so the expected run time is <math>\Theta(1)</math>. (See [[Big O notation]].)
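A runnable version of this pseudocode can be sketched in Python (a minimal illustration; the function name is carried over from the pseudocode above, and the input is assumed to satisfy the half-‘''a''’ promise):
<source lang="python">
import random

def findingA_LV(A):
    """Las Vegas search: always returns the index of an 'a';
    only the number of iterations is random."""
    n = len(A)
    while True:
        i = random.randrange(n)  # pick a uniformly random index
        if A[i] == 'a':
            return i
</source>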
 
Monte Carlo algorithm:
<source lang="pascal">
findingA_MC(array A, n, k)
begin
    i=0
    repeat
        Randomly select one element out of n elements.
        i = i + 1
    until i=k or 'a' is found
end
</source>
If an ‘''a''’ is found, the algorithm succeeds, else the algorithm fails. After ''k'' iterations, the probability of finding an ‘''a''’ is:
<div style="text-align:center;">
<math>
\Pr[\mathrm{find~a}]=1-(1/2)^k
</math>
</div>
 
This algorithm does not guarantee success, but the run time is bounded. The selection is executed at most ''k'' times, so the run time is <math>O(k)</math>, which is <math>\Theta(1)</math> for fixed ''k''.
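The Monte Carlo version admits a similarly direct sketch in Python (again a minimal illustration, with <code>None</code> standing in for the failure signal):
<source lang="python">
import random

def findingA_MC(A, k):
    """Monte Carlo search: at most k trials; may return None (failure)
    even though an 'a' is present."""
    n = len(A)
    for _ in range(k):
        i = random.randrange(n)  # pick a uniformly random index
        if A[i] == 'a':
            return i
    return None  # fails with probability (1/2)**k on the promised input
</source>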
 
Randomized algorithms are particularly useful when faced with a malicious "adversary" or [[attacker]] who deliberately tries to feed a bad input to the algorithm (see [[worst-case complexity]] and [[competitive analysis (online algorithm)]]) such as in the [[Prisoner's dilemma]]. It is for this reason that [[randomness]] is ubiquitous in [[cryptography]]. In cryptographic applications, pseudo-random numbers cannot be used, since the adversary can predict them, making the algorithm effectively deterministic. Therefore either a source of truly random numbers or a [[cryptographically secure pseudo-random number generator]] is required.  Another area in which randomness is inherent is [[quantum computer|quantum computing]].
 
In the example above, the Las Vegas algorithm always outputs the correct answer, but its running time is a random variable.  The Monte Carlo algorithm (related to the [[Monte Carlo method]] for simulation) completes in a fixed amount of time (as a function of the input size), but allows a ''small probability of error''. Observe that any Las Vegas algorithm can be converted into a Monte Carlo algorithm (via [[Markov's inequality]]), by having it output an arbitrary, possibly incorrect answer if it fails to complete within a specified time. Conversely, if an efficient verification procedure exists to check whether an answer is correct, then a Monte Carlo algorithm can be converted into a Las Vegas algorithm by running the Monte Carlo algorithm repeatedly until a correct answer is obtained.
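Both conversions can be made concrete with the toy search above (a hedged sketch: the trial budget ''k'' and the <code>verify</code> checker are illustrative stand-ins, not part of the original text, and <code>findingA_MC</code> is the capped search sketched earlier):
<source lang="python">
def las_vegas_to_monte_carlo(A, k):
    # Cap the Las Vegas search at k trials; on failure return an
    # arbitrary (possibly incorrect) answer, here index 0.
    result = findingA_MC(A, k)
    return result if result is not None else 0

def monte_carlo_to_las_vegas(A, k, verify):
    # Repeat the Monte Carlo search until an efficient verifier accepts
    # its answer; the running time becomes random, the output exact.
    while True:
        result = findingA_MC(A, k)
        if result is not None and verify(A, result):
            return result
</source>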
 
==Computational complexity==<!-- [[Probabilistic complexity]] and  [[Probabilistic computational complexity]] redirect here -->
 
[[Computational complexity theory]] models randomized algorithms as ''[[probabilistic Turing machine|probabilistic]] [[Turing machine]]s''. Both  [[Las Vegas algorithm|Las Vegas]] and [[Monte Carlo algorithm]]s are considered, and several [[complexity class]]es are studied. The most basic randomized complexity class is [[RP (complexity)|RP]], which is the class of [[decision problem]]s for which there is an efficient (polynomial time) randomized algorithm (or probabilistic Turing machine) which recognizes NO-instances with absolute certainty and recognizes YES-instances with a probability of at least 1/2. The complement class for RP is co-RP. Problem classes having (possibly nonterminating) algorithms with [[polynomial time]] average case running time whose output is always correct are said to be in [[ZPP (complexity)|ZPP]].
 
The class of problems for which both YES and NO-instances are allowed to be identified with some error is called [[Bounded-error probabilistic polynomial|BPP]]. This class acts as the randomized equivalent of [[P (complexity)|P]], i.e. BPP represents the class of efficient randomized algorithms.
 
==History==
Historically, the first randomized algorithm was a method developed by [[Michael O. Rabin]] for the [[closest pair of points problem|closest pair problem]] in [[computational geometry]].<ref>Smid, Michiel. ''Closest point problems in computational geometry''. Max-Planck-Institut für Informatik, 1995.</ref>
The study of randomized algorithms was spurred by the 1977 discovery of a [[Solovay-Strassen primality test|randomized primality test]] (i.e., determining the [[primality test|primality]] of a number) by [[Robert M. Solovay]] and [[Volker Strassen]]. Soon afterwards Michael O. Rabin demonstrated that the 1976 [[Miller-Rabin primality test|Miller's primality test]] can be turned into a randomized algorithm. At that time, no practical [[deterministic algorithm]] for primality was known.
 
The Miller-Rabin primality test relies on a binary relation between two positive integers ''k'' and ''n'' that can be expressed by saying that ''k'' "is a witness to the compositeness of" ''n''.  It can be shown that
* If there is a witness to the compositeness of ''n'', then ''n'' is composite (i.e., ''n'' is not [[prime number|prime]]), and
* If ''n'' is composite then at least three-fourths of the natural numbers less than ''n'' are witnesses to its compositeness, and
* There is a fast algorithm that, given ''k'' and ''n'', ascertains whether ''k'' is a witness to the compositeness of ''n''.
Observe that this implies that the primality problem is in Co-[[RP (complexity)|RP]].
 
If one [[random]]ly chooses 100 numbers less than a composite number ''n'', then the probability of failing to find such a "witness" is at most (1/4)<sup>100</sup>, so that for most practical purposes, this is a good primality test.  If ''n'' is big, there may be no other test that is practical. The probability of error can be reduced to an arbitrary degree by performing enough independent tests.
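A sketch of this repeated witness test in Python (the witness condition shown is the standard Miller–Rabin check; the default trial count of 100 mirrors the discussion above):
<source lang="python">
import random

def is_witness(a, n):
    """Return True if a is a Miller-Rabin witness to the compositeness of n."""
    d, s = n - 1, 0
    while d % 2 == 0:           # write n - 1 = 2**s * d with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return False            # a is not a witness
    for _ in range(s - 1):
        x = (x * x) % n
        if x == n - 1:
            return False
    return True                 # a certifies that n is composite

def probably_prime(n, trials=100):
    """Declare n prime unless some random a is a witness; for composite n
    the error probability is at most (1/4)**trials."""
    if n < 4:
        return n in (2, 3)
    return not any(is_witness(random.randrange(2, n - 1), n)
                   for _ in range(trials))
</source>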
 
Therefore, in practice, there is no penalty associated with accepting a small probability of error, since with a little care the probability of error can be made astronomically small.  Indeed, even though a deterministic polynomial-time primality test has since been found (see [[AKS primality test]]), it has not replaced the older probabilistic tests in [[cryptography|cryptographic]] [[computer software|software]] nor is it expected to do so for the foreseeable future.
 
==Applications==
 
===Quicksort===
[[Quicksort]] is a familiar, commonly used algorithm in which randomness can be useful. Any deterministic version of this algorithm requires ''[[Big O notation|O]]''(''n''<sup>2</sup>) time to sort ''n'' numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing in ''O''(''n''&nbsp;log&nbsp;''n'') time regardless of the characteristics of the input.
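A short Python sketch of the randomized variant (written out-of-place for clarity; a production in-place quicksort would differ only in how it partitions, not in the random pivot choice):
<source lang="python">
import random

def randomized_quicksort(A):
    """Quicksort with a uniformly random pivot; expected O(n log n)
    time on every input, including already sorted arrays."""
    if len(A) <= 1:
        return A
    pivot = random.choice(A)
    less    = [x for x in A if x < pivot]
    equal   = [x for x in A if x == pivot]
    greater = [x for x in A if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
</source>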
 
===Randomized incremental constructions in geometry===
In [[computational geometry]], a standard technique to build a structure like a [[convex hull]] or [[Delaunay triangulation]] is to randomly permute the input points and then insert them one by one into the existing structure. The randomization ensures that the expected number of changes to the structure caused by an insertion is small, and so the expected running time of the algorithm can be upper bounded. This technique is known as [[randomized incremental construction]].<ref>Seidel R. [http://www.cs.berkeley.edu/~jrs/meshpapers/Seidel.ps.gz Backwards Analysis of Randomized Geometric Algorithms].</ref>
 
===Verifying matrix multiplication===
{{main|Freivalds' algorithm}}
'''Input''': Matrix ''A'' ∈ ''R''<sup>''m'' &times; ''p''</sup>, ''B'' ∈ ''R''<sup>''p'' &times; ''n''</sup>, and ''C'' ∈ ''R''<sup>''m'' &times; ''n''</sup>.
 
'''Output''': True if  ''C'' = ''A'' · ''B''; false if  ''C'' ≠ ''A'' · ''B''
 
We give a Monte Carlo algorithm to solve the problem.<ref>[[Michael Mitzenmacher]], [[Eli Upfal]]. ''Probability and Computing: Randomized Algorithms and Probabilistic Analysis''. Cambridge University Press, April 2005.</ref>
  '''begin'''
    i=0
    '''repeat'''
        Choose r=(r<sub>1</sub>,...,r<sub>n</sub>) ∈ {0,1}<sup>n</sup> at random.
        Compute C · r and A · (B · r)
        '''if''' C · r ≠ A · (B · r)
            '''return''' FALSE
        '''endif'''
        i = i + 1
    '''until''' i=k
    '''return''' TRUE
  '''end'''
The running time of the algorithm is <math>O(kn^2)</math>.
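The same procedure in runnable Python (a sketch over integer matrices given as lists of rows; dimensions follow the problem statement above):
<source lang="python">
import random

def freivalds(A, B, C, k):
    """Monte Carlo check of C == A*B with one-sided error at most (1/2)**k."""
    n = len(C[0])

    def matvec(M, v):
        # multiply matrix M by column vector v
        return [sum(m * x for m, x in zip(row, v)) for row in M]

    for _ in range(k):
        r = [random.randint(0, 1) for _ in range(n)]  # r in {0,1}^n
        if matvec(C, r) != matvec(A, matvec(B, r)):   # compare C*r with A*(B*r)
            return False   # definitely C != A*B
    return True            # probably C == A*B
</source>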
 
'''Theorem''': The algorithm is correct with probability at least <math>1-(\frac{1}{2})^k</math>.
 
We will prove that if <math>A \cdot B \neq C</math> then <math>\Pr[A \cdot B \cdot r=C \cdot r]\leq 1/2</math>.
 
If <math>A \cdot B\neq C</math>, by definition we have <math>D=A \cdot B-C \neq 0</math>. Without loss of generality,
we assume that <math>d_{11} \neq 0</math>.
 
On the other hand, <math>\Pr[A \cdot B \cdot r=C \cdot r] = \Pr[(A \cdot B-C) \cdot r=0] = \Pr[D \cdot r=0]</math>.
 
If <math>D \cdot r=0</math>, then the first entry of <math>D \cdot r</math> is 0, that is
 
<div style="text-align:center;"><math>\sum_{j=1}^n d_{1j}r_j=0</math></div>
 
Since <math>d_{11} \neq 0</math>, we can solve for <math>r_1</math>:
 
<div style="text-align:center;"><math>r_1=\frac{-\sum_{j=2}^n d_{1j}r_j}{d_{11}}</math></div>
 
If we fix all <math>r_j</math> except <math>r_1</math>, the equality holds for at most one of the two choices for <math>r_1\in \{0,1\}</math>. Therefore, <math>\Pr[ABr=Cr]\leq 1/2</math>.
 
We run the loop ''k'' times. If <math>C=A \cdot B</math>, the algorithm is always correct; if <math>C\neq A \cdot B</math>, the probability of getting the correct answer is at least <math>1-(\frac{1}{2})^k</math>.
 
===Min cut===
{{Main|Karger’s algorithm}}
[[File:Single run of Karger’s Mincut algorithm.svg|thumb|Figure 2: Successful run of Karger’s algorithm on a 10-vertex graph. The minimum cut has size 3 and is indicated by the vertex colours.]]
 
'''Input''': A [[Graph theory|graph]] ''G''(''V'',''E'')
 
'''Output''': A [[Cut (graph theory)|cut]] partitioning the vertices into ''L'' and ''R'', with the minimum number of edges between ''L'' and ''R''.
 
Recall that the [[Edge contraction|contraction]] of two nodes, ''u'' and ''v'', in a (multi-)graph yields a new node ''u'' ' with edges that are the union of the edges incident on either ''u'' or ''v'', except for any edge(s) connecting ''u'' and ''v''. Figure 1 gives an example of the contraction of vertices ''A'' and ''B''.
After contraction, the resulting graph may have parallel edges, but contains no self loops.
[[File:Contraction vertices.jpg|300px|thumb|center|Figure 1: Contraction of vertices A and B]]
 
Karger's basic algorithm:<ref>A. A. Tsay, W. S. Lovejoy, David R. Karger, Random Sampling in Cut, Flow, and Network Design Problems, Mathematics of Operations Research, 24(2):383–413, 1999.</ref>
  '''begin'''
    i=0
    '''repeat'''
      '''repeat'''
        Take a random edge (u,v)∈ E in G
        replace u and v with the contraction u'
      '''until''' only 2 nodes remain
      obtain the corresponding cut result C<sub>i</sub>
      i=i+1
    '''until''' i=m
    output the minimum cut among C<sub>1</sub>,C<sub>2</sub>,...,C<sub>m</sub>.
  '''end'''
In each execution of the outer loop, the algorithm repeats the inner loop until only 2 nodes remain and then records the corresponding cut. The run time of one execution is <math>O(n)</math>, where ''n'' denotes the number of vertices.
After ''m'' executions of the outer loop, we output the minimum cut among all the results. Figure 2 gives an
example of one execution of the algorithm; after execution, we get a cut of size 3.
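A compact Python sketch of the contraction process (the edge-list representation and the <code>find</code> helper for tracking merged supernodes are implementation choices, not part of the pseudocode above; the input graph is assumed connected):
<source lang="python">
import random

def karger_min_cut(edges, m):
    """Run the contraction algorithm m times on a connected multigraph,
    given as a list of edges (u, v), and return the smallest cut found."""
    best = None
    for _ in range(m):
        parent = {}                      # records contractions performed so far

        def find(u):                     # follow links to u's current supernode
            while parent.get(u, u) != u:
                u = parent[u]
            return u

        live = list(edges)               # edges between distinct supernodes
        remaining = len({v for e in edges for v in e})
        while remaining > 2:
            u, v = random.choice(live)   # uniformly random remaining edge
            parent[find(v)] = find(u)    # contract v's supernode into u's
            remaining -= 1
            live = [(a, b) for (a, b) in live
                    if find(a) != find(b)]          # drop self-loops
        cut = [(a, b) for (a, b) in edges if find(a) != find(b)]
        if best is None or len(cut) < len(best):
            best = cut
    return best
</source>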
 
'''Lemma 1''': Let ''k'' be the min cut size, and let ''C'' = {''e''<sub>1</sub>,''e''<sub>2</sub>,...,''e''<sub>''k''</sub>} be the min cut. If, during iteration ''i'', no edge ''e'' ∈  ''C'' is selected for contraction, then ''C''<sub>''i''</sub>&nbsp;=&nbsp;''C''.
 
'''Proof''': If ''G'' is not connected, then ''G'' can be partitioned into ''L'' and ''R'' without any edge between them, so the min cut of a disconnected graph is 0. Now assume ''G'' is connected. Let ''V''=''L''∪''R'' be the partition of ''V'' induced by ''C'': ''C''&nbsp;=&nbsp;{ {''u'',''v''} ∈ ''E'' : ''u'' ∈ ''L'', ''v'' ∈ ''R'' } (well-defined since ''G'' is connected). Consider an edge ''e''&nbsp;=&nbsp;{''u'',''v''} of ''C''. Initially, ''u'' and ''v'' are distinct vertices. As long as we pick an edge ''f''&nbsp;&ne;&nbsp;''e'', ''u'' and ''v'' do not get merged. Thus, at the end of the algorithm, we have two compound nodes covering the entire graph, one consisting of the vertices of ''L'' and the other consisting of the vertices of ''R'', and the cut obtained is exactly ''C''. For example, if the min cut had size 1 with ''C''&nbsp;=&nbsp;{(''A'',''B'')}, then as long as (''A'',''B'') is never selected for contraction, the algorithm would return exactly that min cut.
 
'''Lemma 2''': If ''G'' is a multigraph with ''p'' vertices and whose min cut has size ''k'', then ''G'' has at least ''pk''/2 edges.
 
'''Proof''':
Because the min cut is ''k'', every vertex ''v'' must satisfy degree(''v'') ≥ ''k''. Therefore, the sum of the degree is at least ''pk''. But it is well known that the sum of vertex degrees equals 2|''E''|. The lemma follows.
 
'''Analysis of algorithm'''
 
The probability that the algorithm succeeds is 1&nbsp;&minus;&nbsp;the probability that all attempts fail. By independence, the probability that all attempts fail is
<div style="text-align:center;">
<math>
\prod_{i=1}^m \Pr(C_i\neq C)=\prod_{i=1}^m(1-\Pr(C_i=C)).
</math>
</div>
By Lemma 1, the probability that ''C''<sub>''i''</sub>&nbsp;=&nbsp;''C'' is the probability that no edge of ''C'' is selected during iteration ''i''. Consider the inner loop and let ''G''<sub>''j''</sub> denote the graph after ''j'' edge contractions, where ''j''&nbsp;∈&nbsp;{0,1,...,''n''&nbsp;&minus;&nbsp;3}. ''G''<sub>''j''</sub> has ''n''&nbsp;&minus;&nbsp;''j'' vertices. We use the chain rule of [[Conditional probability|conditional probabilities]].
The probability that the edge chosen at iteration ''j'' is not in ''C'', given that no edge of ''C'' has been chosen before, is <math>1-\frac{k}{|E(G_j)|}</math>. Note that ''G''<sub>''j''</sub> still has min cut of size ''k'', so by Lemma 2, it still has at least <math>\frac{(n-j)k}{2}</math> edges.
 
Thus, <math>1-\frac{k}{|E(G_j)|}\geq 1-\frac{2}{n-j}=\frac{n-j-2}{n-j}</math>.
 
So by the chain rule, the probability of finding the min cut ''C'' is
<math>
    \Pr[C_i=C] \geq \left(\frac{n-2}{n}\right)\left(\frac{n-3}{n-1}\right)\left(\frac{n-4}{n-2}\right)\ldots\left(\frac{3}{5}\right)\left(\frac{2}{4}\right)\left(\frac{1}{3}\right).
</math>
 
Cancellation gives <math>\Pr[C_i=C]\geq \frac{2}{n(n-1)}</math>. Thus the probability that the algorithm succeeds is at least <math>1-\left(1-\frac{2}{n(n-1)}\right)^m</math>. For <math>m=\frac{n(n-1)}{2}\ln n</math>, this is at least <math>1-\frac{1}{n}</math> (using the inequality <math>1-x\leq e^{-x}</math>). The algorithm finds the min cut with probability at least <math>1-\frac{1}{n}</math>, in time <math>O(mn)=O(n^3\log n)</math>.
 
==Derandomization==
 
Randomness can be viewed as a resource, like space and time. Derandomization is then the process of ''removing'' randomness (or using as little of it as possible). From the viewpoint of [[computational complexity]], derandomizing an efficient randomized algorithm amounts to the question of whether [[P (complexity)|P]] = [[Bounded-error probabilistic polynomial|BPP]].
 
There are also specific methods that can be employed to derandomize particular randomized algorithms:
* the [[method of conditional probabilities]], and its generalization, [[pessimistic estimator]]s
* [[discrepancy theory]] (which is used to derandomize geometric algorithms)
* the exploitation of limited independence in the random variables used by the algorithm, such as the [[pairwise independence]] used in [[universal hashing]] (see the sketch after this list)
* the use of [[expander graph]]s (or [[disperser]]s in general) to ''amplify'' a limited amount of initial randomness (this last approach is also referred to as generating [[pseudorandom]] bits from a random source, and leads to the related topic of pseudorandomness)
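As a concrete instance of the limited-independence idea in the list above, the classic Carter–Wegman hash family needs only two random numbers, however many keys are hashed (a sketch; the prime ''p'' must exceed every possible key, and ''m'' is the table size):
<source lang="python">
import random

def make_universal_hash(p, m):
    """Sample h(x) = ((a*x + b) mod p) mod m from the Carter-Wegman family;
    for distinct keys x != y, Pr[h(x) == h(y)] <= 1/m."""
    a = random.randrange(1, p)   # a in {1, ..., p-1}
    b = random.randrange(p)      # b in {0, ..., p-1}
    return lambda x: ((a * x + b) % p) % m
</source>
Because the analysis of hashing uses only pairwise collision probabilities, the polynomially many choices of (''a'',&nbsp;''b'') can even be enumerated exhaustively, which is one way such arguments are derandomized.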
 
==Where randomness helps==
 
When the model of computation is restricted to [[Turing machine]]s, it is currently an open question whether the ability to make random choices allows some problems to be solved in polynomial time that cannot be solved in polynomial time without this ability; this is the question of whether P = BPP. However, in other contexts, there are specific examples of problems where randomization yields strict improvements.
* Based on the initial motivating example: given an exponentially long string of 2<sup>''k''</sup> characters, half a's and half b's, a [[random access machine]] requires at least 2<sup>''k''&minus;1</sup> lookups in the worst-case to find the index of an ''a''; if it is permitted to make random choices, it can solve this problem in an expected polynomial number of lookups.
* In [[communication complexity]], the equality of two strings can be verified using <math>\log n</math> bits of communication with a randomized protocol (a fingerprint-based sketch of such a protocol appears after this list). Any deterministic protocol requires <math>\Theta(n)</math> bits.
* The volume of a convex body can be estimated by a randomized algorithm to arbitrary precision in polynomial time.<ref>{{citation|last1=Dyer|first1=M.|last2=Frieze|first2=A.|last3=Kannan|first3=R.|title=A random polynomial-time algorithm for approximating the volume of convex bodies|journal=[[Journal of the ACM]]|volume=38|issue=1|year=1991|pages=1–17|doi=10.1145/102782.102783|url=http://portal.acm.org/citation.cfm?id=102783}}</ref> [[Imre Bárány|Bárány]] and [[Zoltán Füredi|Füredi]] showed that no deterministic algorithm can do the same.<ref>{{citation|last1=Füredi|first1=Z.|author1-link=Zoltán Füredi|last2=Bárány|first2=I.|year=1986|contribution=Computing the volume is difficult|title=[[Symposium on Theory of Computing|Proc. 18th ACM Symposium on Theory of Computing]] (Berkeley, California, May 28–30, 1986)|publisher=ACM|location=New York, NY|pages=442–447|doi=10.1145/12130.12176|isbn=0-89791-193-8}}</ref> This is true unconditionally, i.e. without relying on any complexity-theoretic assumptions.
* A more complexity-theoretic example of a place where randomness appears to help is the class [[IP (complexity)|IP]]. IP consists of all languages that can be accepted (with high probability) by a polynomially long interaction between an all-powerful prover and a verifier that implements a BPP algorithm. IP = [[PSPACE]].<ref>{{citation|last=Shamir|first=A.|authorlink=Adi Shamir|title=IP = PSPACE|journal=Journal of the ACM|volume=39|issue=4|year=1992|pages=869–877|doi=10.1145/146585.146609}}</ref> However, if it is required that the verifier be deterministic, then IP = [[NP (complexity)|NP]].
* In a [[chemical reaction network]] (a finite set of reactions like A+B → 2C + D operating on a finite number of molecules), the ability to ever reach a given target state from an initial state is decidable, while even approximating the probability of ever reaching a given target state (using the standard concentration-based probability for which reaction will occur next) is undecidable.  More specifically, a Turing machine can be simulated with arbitrarily high probability of running correctly for all time, only if a random chemical reaction network is used.  With a simple nondeterministic chemical reaction network (any possible reaction can happen next), the computational power is limited to [[Primitive recursive|primitive recursive functions]].
* The inherent randomness of algorithms such as [[Hyper-encryption]], [[Bayesian network]]s, [[Random neural network]]s and [[Probabilistic Cellular Automata]] was harnessed by [[Krishna Palem]] et al. to design highly efficient hardware systems using Probabilistic CMOS or [[PCMOS]] technology that were shown to achieve gains as high as a multiplicative factor of 560 when compared to competing energy-efficient [[CMOS]]-based realizations.<ref>{{Cite web|url=http://www.pubzone.org/dblp/conf/date/ChakrapaniACKPS06|title= Ultra Efficient Embedded SOC Architectures based on Probabilistic CMOS (PCMOS) Technology|author= Lakshmi N. Chakrapani, Bilge E. S. Akgul, Suresh Cheemalavagu, Pinar Korkmaz, Krishna V. Palem and Balasubramanian Seshasayee|publisher=Design Automation and Test in Europe Conference (DATE), 2006}}</ref>
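Returning to the communication-complexity item above: the following polynomial-fingerprint sketch sends only a random point and one field element, i.e. ''O''(log&nbsp;''q'') bits, instead of a whole ''n''-bit string (a hedged illustration; ''q'' is assumed to be a prime larger than ''n''<sup>2</sup>):
<source lang="python">
import random

def fingerprint(bits, r, q):
    # evaluate the polynomial whose coefficients are the bits, at point r, mod q
    acc = 0
    for b in bits:
        acc = (acc * r + b) % q
    return acc

def equality_protocol(alice_bits, bob_bits, q):
    """If the strings are equal, always returns True; if they differ, two
    distinct degree-(n-1) polynomials agree on at most n-1 points of the
    field, so the error probability is at most (n-1)/q."""
    r = random.randrange(q)                        # Alice's random evaluation point
    message = (r, fingerprint(alice_bits, r, q))   # all that Alice transmits
    r, fa = message
    return fa == fingerprint(bob_bits, r, q)       # Bob's local comparison
</source>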
 
==See also==
*[[Probabilistic analysis of algorithms]]
 
==Notes==
{{reflist}}
 
==References==
* [[Thomas H. Cormen]], [[Charles E. Leiserson]], [[Ronald L. Rivest]], and [[Clifford Stein]]. ''[[Introduction to Algorithms]]'', Second Edition. MIT Press and McGraw–Hill, 2001. ISBN 0-262-03293-7. Chapter 5: Probabilistic Analysis and Randomized Algorithms, pp.&nbsp;91–122.
* [[Jon Kleinberg]] and [[Éva Tardos]]. ''Algorithm Design''. Chapter 13: "Randomized algorithms".
* Don Fallis. 2000. [http://dx.doi.org/10.1093/bjps/51.2.255 "The Reliability of Randomized Algorithms."] ''British Journal for the Philosophy of Science'' 51:255–71.
* [[Michael Mitzenmacher|M. Mitzenmacher]] and [[Eli Upfal|E. Upfal]]. ''Probability and Computing: Randomized Algorithms and Probabilistic Analysis''. Cambridge University Press, New York (NY), 2005.
* [[Rajeev Motwani]] and P. Raghavan. Randomized Algorithms. Cambridge University Press, New York (NY), 1995.
* Rajeev Motwani and P. Raghavan. [http://portal.acm.org/citation.cfm?id=234313.234327 Randomized Algorithms]. A survey on randomized algorithms.
* {{Citation|author = [[Christos Papadimitriou]] | year = 1993 | title = Computational Complexity | publisher = Addison Wesley | edition = 1st | isbn = 0-201-53082-1}} Chapter 11: Randomized computation, pp.&nbsp;241–278.
* M. O. Rabin. (1980), "Probabilistic Algorithm for Testing Primality." ''Journal of Number Theory'' 12:128–38.
*A. A. Tsay, W. S. Lovejoy, David R. Karger, Random Sampling in Cut, Flow, and Network Design Problems, Mathematics of Operations Research, 24(2):383–413, 1999.
 
{{DEFAULTSORT:Randomized Algorithm}}
[[Category:Randomized algorithms| ]]
[[Category:Stochastic algorithms]]
[[Category:Analysis of algorithms]]
[[Category:Probabilistic complexity theory]]
