{{About|an algorithm for the greatest common divisor|other use of "Euclidean"|Euclidean (disambiguation)}}
| <!-- FOR REASONS OF ACCESSIBILITY TO VISUALLY-IMPAIRED READERS (see [[WP:ACCESS]]), THIS ARTICLE AVOIDS MATH MODE, UNLESS IT'S NECESSARY. PLEASE DO NOT ADD MATH-MODE FORMULAE, UNLESS YOU ALSO ADD THE CORRESPONDING ALT TEXT AS WELL, E.G., <math alt="description">. EXAMPLES CAN BE FOUND BELOW, E.G., IN THE "Matrix method" SECTION. -->
| |
| [[File:Euclid's algorithm Book VII Proposition 2 3.png|300px|thumb|right|Euclid's method for finding the greatest common divisor (GCD) of two starting lengths BA and DC, both defined to be multiples of a common "unit" length. The length DC being shorter, it is used to "measure" BA, but only once because remainder EA is less than CD. EA now measures (twice) the shorter length DC, with remainder FC shorter than EA. Then FC measures (three times) length EA. Because there is no remainder, the process ends with FC being the GCD. On the right [[Nicomachus]]' example with numbers 49 and 21 resulting in their GCD of 7 (derived from Heath 1908:300).]]
| |
| | |
| In [[mathematics]], the '''Euclidean algorithm'''{{Ref_label|a|a|none}}, or '''Euclid's algorithm''', is a method for computing the [[greatest common divisor]] (GCD) of two (usually positive) integers, also known as the greatest common factor (GCF) or highest common factor (HCF). It is named after the [[Greeks|Greek]] [[mathematician]] [[Euclid]], who described it in Books VII and X of his ''[[Euclid's Elements|Elements]]''.<ref>[[Thomas L. Heath]], ''The Thirteen Books of Euclid's Elements'', 2nd ed. [Facsimile. Original publication: Cambridge University Press, 1925], 1956, [[Dover Publications]]</ref>
| |
| | |
| The GCD of two positive integers is the largest integer that divides both of them without leaving a [[remainder]] (the GCD of two integers in general is defined in a more subtle way).
| |
| | |
| In its simplest form, Euclid's algorithm starts with a pair of positive integers, and forms a new pair that consists of the smaller number and the difference between the larger and smaller numbers. The process repeats until the numbers in the pair are equal. That number then is the greatest common divisor of the original pair of integers.
| |
| | |
| The main principle is that the GCD does not change if the smaller number is subtracted from the larger number. For example, the GCD of 252 and 105 is exactly the GCD of 147 (= 252 − 105) and 105. Since the larger of the two numbers is reduced, repeating this process gives successively smaller numbers, so this repetition will necessarily stop sooner or later — when the numbers are equal (if the process is attempted once more, one of the numbers will become 0).
| |
| | |
| The earliest surviving description of the Euclidean algorithm is in Euclid's ''Elements'' (c. 300 BC), making it one of the oldest numerical [[algorithm]]s still in common use. The original algorithm was described only for natural numbers and geometric lengths (real numbers), but the algorithm was generalized in the 19th century to other types of numbers, such as [[Gaussian integer]]s and [[polynomial]]s in one variable. This led to modern [[abstract algebra]]ic notions, such as [[Euclidean domain]]s. The Euclidean algorithm has been generalized further to other mathematical structures, such as [[knot (mathematics)|knots]] and [[multivariate polynomial]]s.
| |
| | |
The algorithm has many theoretical and practical applications. It may be used to generate almost all the most important traditional [[Euclidean rhythm|musical rhythms]] used in different cultures throughout the world.<ref>{{Citation |last=Toussaint |first=Godfried |author-link=Godfried Toussaint |url=http://cgm.cs.mcgill.ca/~godfried/publications/banff.pdf |title=The Euclidean algorithm generates traditional musical rhythms |journal=Proceedings of BRIDGES: Mathematical Connections in Art, Music, and Science |location=Banff, Alberta, Canada |date=July 31 to August 3, 2005 |pages=47–56 |doi= }}</ref> It is a key element of the [[RSA algorithm]], a [[public-key encryption]] method widely used in [[electronic commerce]]. It is used to solve [[Diophantine equations]], such as finding numbers that satisfy multiple congruences ([[Chinese remainder theorem]]) or finding [[multiplicative inverse]]s in a [[finite field]]. It can also be used to construct [[continued fraction]]s, in the [[Sturm chain]] method for finding real roots of a polynomial, and in several modern [[integer factorization]] algorithms. Finally, it is a basic tool for proving theorems in modern [[number theory]], such as [[Lagrange's four-square theorem]] and the [[fundamental theorem of arithmetic]] (unique factorization).
| |
| | |
| If implemented using [[remainder]]s of [[Euclidean division]] rather than subtractions, Euclid's algorithm computes the GCD of large numbers efficiently: it never requires more division steps than five times the number of digits (in base 10) of the smaller integer. This was proved by [[Gabriel Lamé]] in 1844, and marks the beginning of [[computational complexity theory]]. Methods for improving the algorithm's efficiency were developed in the 20th century.
| |
| | |
| By [[extended Euclidean algorithm|reversing the steps in the Euclidean algorithm]], the GCD can be expressed as a [[linear combination|sum]] of the two original numbers each multiplied by a positive or negative [[integer]], e.g., the GCD of 252 and 105 is 21, and {{nowrap|21 {{=}} [5 × 105] + [(−2) × 252].}} This important property is known as [[Bézout's identity]].
| |
| | |
| ==Background — Greatest common divisor==
| |
| {{main|Greatest common divisor}}
| |
| | |
| The Euclidean algorithm calculates the greatest common divisor (GCD) of two [[natural number]]s ''a'' and ''b''. The greatest common divisor ''g'' is the largest natural number that divides both ''a'' and ''b'' without leaving a remainder. Synonyms for the GCD include the ''greatest common factor'' (GCF), the ''highest common factor'' (HCF), and the ''greatest common measure'' (GCM). The greatest common divisor is often written as gcd(''a'', ''b'') or, more simply, as (''a'', ''b''),<ref>{{Harvnb|Stark|1978|p=16}}</ref> although the latter notation is also used for other mathematical concepts, such as two-dimensional [[coordinate vector|vectors]].
| |
| | |
| If gcd(''a'', ''b'') = 1, then ''a'' and ''b'' are said to be [[coprime]] (or relatively prime).<ref>{{Harvnb|Stark|1978|p=21}}</ref> This property does not imply that ''a'' or ''b'' are themselves [[prime number]]s.<ref>{{Harvnb|LeVeque|1996|p=32}}</ref> For example, neither 6 nor 35 is a prime number, since they both have two prime factors: 6 = 2 × 3 and 35 = 5 × 7. Nevertheless, 6 and 35 are coprime. No natural number other than 1 divides both 6 and 35, since they have no prime factors in common.
| |
| | |
| [[File:24x60.svg|thumb|upright|alt="Tall, slender rectangle divided into a grid of squares. The rectangle is two squares wide and five squares tall."|A 24-by-60 rectangle is covered with ten 12-by-12 square tiles, where 12 is the GCD of 24 and 60. More generally, an ''a''-by-''b'' rectangle can be covered with square tiles of side-length ''c'' only if ''c'' is a common divisor of ''a'' and ''b''.]]
| |
| | |
| Let ''g'' = gcd(''a'', ''b''). Since ''a'' and ''b'' are both multiples of ''g'', they can be written ''a'' = ''mg'' and ''b'' = ''ng'', and there is no larger number ''G'' > ''g'' for which this is true. The natural numbers ''m'' and ''n'' must be coprime, since any common factor could be factored out of ''m'' and ''n'' to make ''g'' greater. Thus, any other number ''c'' that divides both ''a'' and ''b'' must also divide ''g''. The greatest common divisor ''g'' of ''a'' and ''b'' is the unique (positive) common divisor of ''a'' and ''b'' that is divisible by any other common divisor ''c''.<ref>{{Harvnb|LeVeque|1996|p=31}}</ref>
| |
| | |
| The GCD can be visualized as follows.<ref>{{cite book | author = Grossman JW | year = 1990 | title = Discrete Mathematics | publisher = Macmillan | location = New York | isbn = 0-02-348331-8 | page = 213}}</ref> Consider a rectangular area ''a'' by ''b'', and any common divisor ''c'' that divides both ''a'' and ''b'' exactly. The sides of the rectangle can be divided into segments of length ''c'', which divides the rectangle into a grid of squares of side length ''c''. The greatest common divisor ''g'' is the largest value of ''c'' for which this is possible. For illustration, a 24-by-60 rectangular area can be divided into a grid of: 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares or 12-by-12 squares. Therefore, 12 is the greatest common divisor of 24 and 60. A 24-by-60 rectangular area can be divided into a grid of 12-by-12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5).
| |
| | |
The GCD of two numbers ''a'' and ''b'' is the product of the prime factors shared by the two numbers, where the same prime factor may be used multiple times, but only as long as the product of these factors divides both ''a'' and ''b''.<ref name="Schroeder_21" >{{Harvnb|Schroeder|2005|pp=21–22}}</ref> For example, since 1386 can be factored into 2 × 3 × 3 × 7 × 11, and 3213 can be factored into 3 × 3 × 3 × 7 × 17, the greatest common divisor of 1386 and 3213 equals 63 = 3 × 3 × 7, the product of their shared prime factors. If two numbers have no prime factors in common, their greatest common divisor is 1 (obtained here as an instance of the [[empty product]]); in other words, they are coprime. A key advantage of the Euclidean algorithm is that it can find the GCD efficiently without having to compute the prime factors.<ref>{{Harvnb|Schroeder|2005|p=19}}</ref><ref>{{cite book | author = Ogilvy CS, Anderson JT | year = 1966 | title = Excursions in number theory | publisher = [[Oxford University Press]] | location = New York | pages = 27–29 | lccn = 6614484}}</ref> [[integer factorization|Factorization]] of large integers is believed to be a computationally very difficult problem, and the security of many modern cryptography systems is based upon its infeasibility.<ref name="Schroeder_216" >{{Harvnb|Schroeder|2005|pp=216–219}}</ref>
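
The following Python sketch (an illustration only, using the standard-library math.gcd and a simple trial-division factorization) makes this definition concrete: it computes the GCD of 1386 and 3213 as the product of their shared prime factors and compares the result with the one found by the Euclidean algorithm, which avoids factoring altogether.

 from collections import Counter
 from math import gcd
 
 def prime_factors(n):
     """Prime factorization of n as a Counter of prime -> exponent (trial division)."""
     factors = Counter()
     d = 2
     while d * d <= n:
         while n % d == 0:
             factors[d] += 1
             n //= d
         d += 1
     if n > 1:
         factors[n] += 1
     return factors
 
 def gcd_by_factoring(a, b):
     """Product of the prime factors shared by a and b (with multiplicity)."""
     shared = prime_factors(a) & prime_factors(b)   # minimum of the two exponents
     result = 1
     for prime, exponent in shared.items():
         result *= prime ** exponent
     return result
 
 print(gcd_by_factoring(1386, 3213))   # 63 = 3 × 3 × 7
 print(gcd(1386, 3213))                # 63, found without factoring either number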
| |
| | |
| Another definition of the GCD is helpful in advanced mathematics, particularly [[ring theory]].<ref name="Leveque_p33" /> The greatest common divisor ''g'' of two nonzero numbers ''a'' and ''b'' is also their smallest positive integral linear combination, that is, the smallest positive number of the form ''ua'' + ''vb'' where ''u'' and ''v'' are integers. The set of all integral linear combinations of ''a'' and ''b'' is actually the same as the set of all multiples of ''g'' (''mg'', where ''m'' is an integer). In modern mathematical language, the [[ideal (ring theory)|ideal]] generated by ''a'' and ''b'' is the ideal generated by ''g'' alone (an ideal generated by a single element is called a [[principal ideal]], and all ideals of the integers are principal ideals). Some properties of the GCD are in fact easier to see with this description, for instance the fact that any common divisor of ''a'' and ''b'' also divides the GCD (it divides both terms of ''ua'' + ''vb''). The equivalence of this GCD definition with the other definitions is described below.
| |
| | |
| The GCD of three or more numbers equals the product of the prime factors common to all the numbers,<ref>{{Harvnb|Stark|1978|p=25}}</ref> but it can also be calculated by repeatedly taking the GCDs of pairs of numbers.<ref>{{Harvnb|Ore|1948|pp=47–48}}</ref> For example,
| |
| | |
| : {{math|1=gcd(''a'', ''b'', ''c'') = gcd(''a'', gcd(''b'', ''c'')) = gcd(gcd(''a'', ''b''), ''c'') = gcd(gcd(''a'', ''c''), ''b'').}}
| |
| | |
| Thus, Euclid's algorithm, which computes the GCD of two integers, suffices to calculate the GCD of arbitrarily many integers.
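
As a brief illustration (a Python sketch using the standard-library functools.reduce and math.gcd, not part of the cited sources), folding the pairwise GCD over a list computes the GCD of arbitrarily many integers:

 from functools import reduce
 from math import gcd
 
 def gcd_many(*numbers):
     """GCD of any number of integers, computed by repeated pairwise GCDs."""
     return reduce(gcd, numbers)
 
 print(gcd_many(24, 60, 36))      # 12
 print(gcd_many(1386, 3213, 63))  # 63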
| |
| | |
| ==Description==
| |
| The simple form of Euclid's algorithm uses only subtraction and comparison. Starting with a pair of positive integers, form a new pair consisting of the smaller number and the difference between the larger number and the smaller number. This process repeats until the numbers in the new pair are equal to each other; that value is the greatest common divisor of the original pair. If one number is much smaller than the other, many subtraction steps will be needed before the larger number is reduced to a value less than or equal to the other number in the pair.
| |
| | |
| The common form of Euclid's algorithm replaces subtracting the small positive number from the big number (possibly many times) with finding the remainder in long division. This form of Euclid's algorithm also starts with a pair of positive integers, then forms a new pair consisting of the smaller number and the remainder obtained by dividing the larger number by the smaller number. The process repeats until one number is zero. The other number then is the greatest common divisor of the original pair.
| |
| | |
| ===Procedure===
| |
| The Euclidean algorithm proceeds in a series of steps such that the output of each step is used as an input for the next one. Let ''k'' be an integer that counts the steps of the algorithm, starting with zero. Thus, the initial step corresponds to ''k'' = 0, the next step corresponds to ''k'' = 1, and so on.
| |
| | |
| Each step begins with two nonnegative remainders ''r''<sub>''k''−1</sub> and ''r''<sub>''k''−2</sub>. Since the algorithm ensures that the remainders decrease steadily with every step, ''r''<sub>''k''−1</sub> is less than its predecessor ''r''<sub>''k''−2</sub>. The goal of the ''k''th step is to find a [[quotient]] ''q''<sub>''k''</sub> and [[remainder]] ''r''<sub>''k''</sub> such that the equation is satisfied
| |
| | |
| : {{math|1=''r''<sub>''k''−2</sub> = ''q''<sub>''k''</sub> ''r''<sub>''k''−1</sub> + ''r''<sub>''k''</sub>}}
| |
| | |
where ''r''<sub>''k''</sub> < ''r''<sub>''k''−1</sub>. In other words, multiples of the smaller number ''r''<sub>''k''−1</sub> are subtracted from the larger number ''r''<sub>''k''−2</sub> until the remainder is smaller than ''r''<sub>''k''−1</sub>.
| |
| | |
| In the initial step (''k'' = 0), the remainders ''r''<sub>−2</sub> and ''r''<sub>−1</sub> equal ''a'' and ''b'', the numbers for which the GCD is sought. In the next step (''k'' = 1), the remainders equal ''b'' and the remainder ''r''<sub>0</sub> of the initial step, and so on. Thus, the algorithm can be written as a sequence of equations
| |
| | |
| : {{math|1=''a'' = ''q''<sub>0</sub> ''b'' + ''r''<sub>0</sub>}}
| |
| : {{math|1=''b'' = ''q''<sub>1</sub> ''r''<sub>0</sub> + ''r''<sub>1</sub>}}
| |
| : {{math|1=''r''<sub>0</sub> = ''q''<sub>2</sub> ''r''<sub>1</sub> + ''r''<sub>2</sub>}}
| |
| : {{math|1=''r''<sub>1</sub> = ''q''<sub>3</sub> ''r''<sub>2</sub> + ''r''<sub>3</sub>}}
| |
| : {{math|1=…}}
| |
| | |
| If ''a'' is smaller than ''b'', the first step of the algorithm swaps the numbers. For example, if ''a'' < ''b'', the initial quotient ''q''<sub>0</sub> equals zero, and the remainder ''r''<sub>0</sub> is ''a''. Thus, ''r''<sub>''k''</sub> is smaller than its predecessor ''r''<sub>''k''−1</sub> for all ''k'' ≥ 0.
| |
| | |
| Since the remainders decrease with every step but can never be negative, a remainder ''r''<sub>''N''</sub> must eventually equal zero, at which point the algorithm stops.<ref>{{Harvnb|Stark|1978|p=18}}</ref> The final nonzero remainder ''r''<sub>''N''−1</sub> is the greatest common divisor of ''a'' and ''b''. The number ''N'' cannot be infinite because there are only a finite number of nonnegative integers between the initial remainder ''r''<sub>0</sub> and zero.
| |
| | |
| ===Proof of validity===
| |
| The validity of the Euclidean algorithm can be proven by a two-step argument.<ref>{{Harvnb|Stark|1978|pp=16–20}}</ref> In the first step, the final nonzero remainder ''r''<sub>''N''−1</sub> is shown to divide both ''a'' and ''b''. Since it is a common divisor, it must be less than or equal to the greatest common divisor ''g''. In the second step, it is shown that any common divisor of ''a'' and ''b'', including ''g'', must divide ''r''<sub>''N''−1</sub>; therefore, ''g'' must be less than or equal to ''r''<sub>''N''−1</sub>. These two conclusions are inconsistent unless ''r''<sub>''N''−1</sub> = ''g''.
| |
| | |
To demonstrate that ''r''<sub>''N''−1</sub> divides both ''a'' and ''b'' (the first step), note that ''r''<sub>''N''−1</sub> divides its predecessor ''r''<sub>''N''−2</sub>
| |
| | |
| : {{math|1=''r''<sub>''N''−2</sub> = ''q''<sub>''N''</sub> ''r''<sub>''N''−1</sub>}}
| |
| | |
| since the final remainder ''r''<sub>''N''</sub> is zero. ''r''<sub>''N''−1</sub> also divides its next predecessor ''r''<sub>''N''−3</sub>
| |
| | |
| : {{math|1=''r''<sub>''N''−3</sub> = ''q''<sub>''N''−1</sub> ''r''<sub>''N''−2</sub> + ''r''<sub>''N''−1</sub>}}
| |
| | |
| because it divides both terms on the right-hand side of the equation. Iterating the same argument, ''r''<sub>''N''−1</sub> divides all the preceding remainders, including ''a'' and ''b''. None of the preceding remainders ''r''<sub>''N''−2</sub>, ''r''<sub>''N''−3</sub>, etc. divide ''a'' and ''b'', since they leave a remainder. Since ''r''<sub>''N''−1</sub> is a common divisor of ''a'' and ''b'', ''r''<sub>''N''−1</sub> ≤ ''g''.
| |
| | |
| In the second step, any natural number ''c'' that divides both ''a'' and ''b'' (in other words, any common divisor of ''a'' and ''b'') divides the remainders ''r''<sub>''k''</sub>. By definition, ''a'' and ''b'' can be written as multiples of ''c'': ''a'' = ''mc'' and ''b'' = ''nc'', where ''m'' and ''n'' are natural numbers. Therefore, ''c'' divides the initial remainder ''r''<sub>0</sub>, since ''r''<sub>0</sub> = ''a'' − ''q''<sub>0</sub>''b'' = ''mc'' − ''q''<sub>0</sub>''nc'' = (''m'' − ''q''<sub>0</sub>''n'')''c''. An analogous argument shows that ''c'' also divides the subsequent remainders ''r''<sub>1</sub>, ''r''<sub>2</sub>, etc. Therefore, the greatest common divisor ''g'' must divide ''r''<sub>''N''−1</sub>, which implies that ''g'' ≤ ''r''<sub>''N''−1</sub>. Since the first part of the argument showed the reverse (''r''<sub>''N''−1</sub> ≤ ''g''), it follows that ''g'' = ''r''<sub>''N''−1</sub>. Thus, ''g'' is the greatest common divisor of all the succeeding pairs:<ref>Knuth, p. 320.</ref><ref name="Lovasz_2003">{{cite book | author = Lovász L, Pelikán J, Vesztergombi K | year = 2003 | title = Discrete Mathematics: Elementary and Beyond | publisher = Springer-Verlag | location = New York | isbn = 0-387-95584-4 | pages = 100–101}}</ref>
| |
| | |
| : {{math|1=''g'' = gcd(''a'', ''b'') = gcd(''b'', ''r''<sub>0</sub>) = gcd(''r''<sub>0</sub>, ''r''<sub>1</sub>) = … = gcd(''r''<sub>''N''−2</sub>, ''r''<sub>''N''−1</sub>) = ''r''<sub>''N''−1</sub>.}}
| |
| | |
| ===Worked example===
| |
| [[File:Euclidean algorithm 1071 462.gif|upright|thumb|alt=Animation in which progressively smaller square tiles are added to cover a rectangle completely.|Subtraction-based animation of the Euclidean algorithm. The initial green rectangle has dimensions ''a'' = 1071 and ''b'' = 462. Square 462×462 orange tiles are added until a green 462×147 rectangle remains. The green 462×147 rectangle is tiled with square 147×147 blue tiles until a green 21×147 rectangle remains. The green 21×147 rectangle is tiled with 21×21 square red tiles, leaving no green remaining. Thus, 21 is the greatest common divisor of 1071 and 462.]]
| |
| | |
| For illustration, the Euclidean algorithm can be used to find the greatest common divisor of ''a'' = 1071 and ''b'' = 462. To begin, multiples of 462 are subtracted from 1071 until the remainder is less than 462. Two such multiples can be subtracted (''q''<sub>0</sub> = 2), leaving a remainder of 147
| |
| | |
| : {{math|1=1071 = 2 × 462 + 147.}}
| |
| | |
| Then multiples of 147 are subtracted from 462 until the remainder is less than 147. Three multiples can be subtracted (''q''<sub>1</sub> = 3), leaving a remainder of 21
| |
| | |
| : {{math|1=462 = 3 × 147 + 21.}}
| |
| | |
| Then multiples of 21 are subtracted from 147 until the remainder is less than 21. Seven multiples can be subtracted (''q''<sub>2</sub> = 7), leaving no remainder
| |
| | |
| : {{math|1=147 = 7 × 21 + 0.}}
| |
| | |
Since the last remainder is zero, the algorithm ends with 21 as the greatest common divisor of 1071 and 462. This agrees with the result obtained by prime factorization: 1071 = 3 × 3 × 7 × 17 and 462 = 2 × 3 × 7 × 11, so their shared prime factors give gcd(1071, 462) = 3 × 7 = 21. In tabular form, the steps are
| |
| | |
| {| class="wikitable" id="basic_Euclidean_algorithm" style="margin-left:auto; margin-right:auto; text-align:center"
| |
| !Step ''k''!!Equation!!Quotient and remainder
| |
| |-
| |
| | 0 || {{math|1=1071 = ''q''<sub>0</sub> 462 + ''r''<sub>0</sub>}} || {{math|1=''q''<sub>0</sub> = 2 and ''r''<sub>0</sub> = 147}}
| |
| |-
| |
| | 1 || {{math|1=462 = ''q''<sub>1</sub> 147 + ''r''<sub>1</sub>}} || {{math|1=''q''<sub>1</sub> = 3 and ''r''<sub>1</sub> = 21}}
| |
| |-
| |
| | 2 || {{math|1=147 = ''q''<sub>2</sub> 21 + ''r''<sub>2</sub>}} || {{math|1=''q''<sub>2</sub> = 7 and ''r''<sub>2</sub> = 0}}; algorithm ends
| |
| |}
| |
| | |
| ===Visualization===
| |
| The Euclidean algorithm can be visualized in terms of the tiling analogy given above for the greatest common divisor.<ref name="Kimberling_1983">{{cite journal | author = Kimberling C | year = 1983 | title = A Visual Euclidean Algorithm | journal = Mathematics Teacher | volume = 76 | pages = 108–109}}</ref> Assume that we wish to cover an ''a''-by-''b'' rectangle with square tiles exactly, where ''a'' is the larger of the two numbers. We first attempt to tile the rectangle using ''b''-by-''b'' square tiles; however, this leaves an ''r''<sub>0</sub>-by-''b'' residual rectangle untiled, where ''r''<sub>0</sub> < ''b''. We then attempt to tile the residual rectangle with ''r''<sub>0</sub>-by-''r''<sub>0</sub> square tiles. This leaves a second residual rectangle ''r''<sub>1</sub>-by-''r''<sub>0</sub>, which we attempt to tile using ''r''<sub>1</sub>-by-''r''<sub>1</sub> square tiles, and so on. The sequence ends when there is no residual rectangle, i.e., when the square tiles cover the previous residual rectangle exactly. The length of the sides of the smallest square tile is the GCD of the dimensions of the original rectangle. For example, the smallest square tile in the adjacent figure is 21-by-21 (shown in red), and 21 is the GCD of 1071 and 462, the dimensions of the original rectangle (shown in green).
| |
| | |
| ===Euclidean division===
| |
| {{Main|Euclidean division}}
| |
| | |
| At every step ''k'', the Euclidean algorithm computes a quotient ''q''<sub>''k''</sub> and remainder ''r''<sub>''k''</sub> from two numbers ''r''<sub>''k''−1</sub> and ''r''<sub>''k''−2</sub>
| |
| | |
| : {{math|1=''r''<sub>''k''−2</sub> = ''q''<sub>''k''</sub> ''r''<sub>''k''−1</sub> + ''r''<sub>''k''</sub>}}
| |
| | |
| where the [[absolute value|magnitude]] of ''r''<sub>''k''</sub> is strictly less than that of ''r''<sub>''k''−1</sub>. The theorem which underlies the definition of the [[Euclidean division]] ensures that such a quotient and remainder always exist and are unique.<ref name="Cohn_1962">{{Harvnb|Cohn|1962|pp=104–110}}</ref>
| |
| | |
| In Euclid's original version of the algorithm, the quotient and remainder are found by repeated subtraction; that is, ''r''<sub>''k''−1</sub> is subtracted from ''r''<sub>''k''−2</sub> repeatedly until the remainder ''r''<sub>''k''</sub> is smaller than ''r''<sub>''k''−1</sub>. After that ''r''<sub>''k''</sub> and ''r''<sub>''k''−1</sub> are exchanged and the process is iterated. Euclidean division reduces all the steps between two exchanges into a single step, which is thus more efficient. Moreover, the quotients are not needed, thus one may replace Euclidean division by the [[modulo operation]], which gives only the remainder. Thus the iteration of the Euclidean algorithm becomes simply
| |
| | |
| : {{math|1=''r''<sub>''k''</sub> = ''r''<sub>''k''−2</sub> mod ''r''<sub>''k''−1</sub>.}}
| |
| | |
| ===Implementations===
| |
| Implementations of the algorithm may be expressed in [[pseudocode]]. For example, the division-based version may be [[computer programming|programmed]] as<ref>Knuth, pp. 319–320.</ref>
| |
| | |
| '''function''' gcd(a, b)
| |
| '''while''' b ≠ 0
| |
| t := b
| |
| b := a '''mod''' b
| |
| a := t
| |
| '''return''' a
| |
| | |
| At the beginning of the ''k''th iteration, the variable ''b'' holds the latest remainder ''r''<sub>''k''−1</sub>, whereas the variable ''a'' holds its predecessor, ''r''<sub>''k''−2</sub>. The step ''b'' := ''a'' mod ''b'' is equivalent to the above recursion formula ''r''<sub>''k''</sub> ≡ ''r''<sub>''k''−2</sub> mod ''r''<sub>''k''−1</sub>. The [[bound variable|dummy variable]] ''t'' holds the value of ''r''<sub>''k''−1</sub> while the next remainder ''r''<sub>''k''</sub> is being calculated. At the end of the loop iteration, the variable ''b'' holds the remainder ''r''<sub>''k''</sub>, whereas the variable ''a'' holds its predecessor, ''r''<sub>''k''−1</sub>.
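
A direct Python translation of the division-based pseudocode might look as follows (a sketch assuming integer inputs; the temporary variable ''t'' is replaced by Python's simultaneous assignment):

 def gcd(a, b):
     """Division-based Euclidean algorithm, as in the pseudocode above."""
     while b != 0:
         a, b = b, a % b   # replace (a, b) by (b, a mod b)
     return a
 
 print(gcd(1071, 462))     # 21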
| |
| | |
In the subtraction-based version, which was Euclid's original version, the remainder calculation (''b'' = ''a'' mod ''b'') is replaced by repeated subtraction.<ref>Knuth, pp. 318–319.</ref> Unlike the division-based version, which works with arbitrary integers as input, the subtraction-based version assumes that the input consists of positive integers and stops when ''a'' = ''b'':
| |
| | |
| '''function''' gcd(a, b)
| |
| '''while''' a ≠ b
| |
| '''if''' a > b
| |
| a := a − b
| |
| '''else'''
| |
| b := b − a
| |
| '''return''' a
| |
| | |
| The variables ''a'' and ''b'' alternate holding the previous remainders ''r''<sub>''k''−1</sub> and ''r''<sub>''k''−2</sub>. Assume that ''a'' is larger than ''b'' at the beginning of an iteration; then ''a'' equals ''r''<sub>''k''−2</sub>, since ''r''<sub>''k''−2</sub> > ''r''<sub>''k''−1</sub>. During the loop iteration, ''a'' is reduced by multiples of the previous remainder ''b'' until ''a'' is smaller than ''b''. Then ''a'' is the next remainder ''r''<sub>''k''</sub>. Then ''b'' is reduced by multiples of ''a'' until it is again smaller than ''a'', giving the next remainder ''r''<sub>''k''+1</sub>, and so on.
| |
| | |
| The recursive version<ref>{{Harvnb|Stillwell|1997|p=14}}</ref> is based on the equality of the GCDs of successive remainders and the stopping condition gcd(''r''<sub>''N''−1</sub>, 0) = ''r''<sub>''N''−1</sub>.
| |
| | |
| '''function''' gcd(a, b)
| |
| '''if''' b = 0
| |
| '''return''' a
| |
| '''else'''
| |
| '''return''' gcd(b, a '''mod''' b)
| |
| | |
| For illustration, the gcd(1071, 462) is calculated from the equivalent gcd(462, 1071 mod 462) = gcd(462, 147). The latter GCD is calculated from the gcd(147, 462 mod 147) = gcd(147, 21), which in turn is calculated from the gcd(21, 147 mod 21) = gcd(21, 0) = 21.
| |
| | |
| ===Method of least absolute remainders===
| |
| In another version of Euclid's algorithm, the quotient at each step is increased by one if the resulting negative remainder is smaller in magnitude than the typical positive remainder.<ref name="Ore_least_abs_remainders" >{{Harvnb|Ore|1948|p=43}}</ref><ref name="Stewart_1964">{{cite book | author = Stewart BM | year = 1964 | title = Theory of Numbers | edition = 2nd | publisher = Macmillan | location = New York | pages = 43–44 | lccn = 6410964}}</ref> Previously, the equation
| |
| | |
| : {{math|1=''r''<sub>''k''−2</sub> = ''q''<sub>''k''</sub> ''r''<sub>''k''−1</sub> + ''r''<sub>''k''</sub>}}
| |
| | |
| assumed that {{math|1={{!}}''r''<sub>''k''−1</sub>{{!}} > ''r''<sub>''k''</sub> > 0}}. However, an alternative negative remainder {{math|1=''e''<sub>''k''</sub>}} can be computed:
| |
| | |
| : {{math|1=''r''<sub>''k''−2</sub> = (''q''<sub>''k''</sub> + 1) ''r''<sub>''k''−1</sub> + ''e''<sub>''k''</sub>}}
| |
| if {{math|1=''r''<sub>''k''−1</sub> > 0}} or
| |
| : {{math|1=''r''<sub>''k''−2</sub> = (''q''<sub>''k''</sub> − 1) ''r''<sub>''k''−1</sub> + ''e''<sub>''k''</sub>}}
| |
| if {{math|1=''r''<sub>''k''−1</sub> < 0}}.
| |
| | |
| If {{math|1={{!}}''e''<sub>''k''</sub>{{!}} < {{!}}''r''<sub>''k''</sub>{{!}}}}, then {{math|1=''r''<sub>''k''</sub>}} is replaced by {{math|1=''e''<sub>''k''</sub>.}} As {{math|1={{!}}''r''<sub>''k''−1</sub>{{!}} = ''r''<sub>''k''</sub> − ''e''<sub>''k''</sub>}}, this new {{math|1=''r''<sub>''k''</sub>}} satisfies
| |
| : {{math|1={{!}}''r''<sub>''k''</sub>{{!}} < {{!}}''r''<sub>''k''−1</sub>{{!}} / 2. }}
| |
[[Leopold Kronecker]] showed that this version requires the fewest steps of any version of Euclid's algorithm.<ref name="Ore_least_abs_remainders" /><ref name="Stewart_1964" />
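
A Python sketch of this variant (an illustration; it keeps every remainder nonnegative by replacing ''r'' with ''b'' − ''r'' whenever the negative remainder would be smaller in magnitude, which does not change the GCD) lets the number of steps be compared with the ordinary division-based version:

 def gcd_least_abs(a, b):
     """Euclid's algorithm with least absolute remainders; returns (gcd, steps)."""
     a, b = abs(a), abs(b)
     steps = 0
     while b != 0:
         r = a % b
         if 2 * r > b:       # the negative remainder r - b is smaller in magnitude
             r = b - r       # keep its absolute value; the GCD is unaffected
         a, b = b, r
         steps += 1
     return a, steps
 
 def gcd_standard(a, b):
     """Ordinary division-based algorithm, for comparison; returns (gcd, steps)."""
     steps = 0
     while b != 0:
         a, b = b, a % b
         steps += 1
     return a, steps
 
 print(gcd_least_abs(144, 89))   # (1, 6)
 print(gcd_standard(144, 89))    # (1, 10): consecutive Fibonacci numbers are the worst case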
| |
| | |
| ==Historical development==
| |
| [[File:Euklid.jpg|thumb|alt="Depiction of Euclid as a bearded man holding a pair of dividers to a tablet."|The Euclidean algorithm was probably invented centuries before [[Euclid]], shown here holding [[Caliper#Divider caliper|dividers]]]]
| |
| | |
The Euclidean algorithm is one of the oldest algorithms still in common use.<ref name="Knuth, p. 318">Knuth, p. 318.</ref> It appears in [[Euclid's Elements|Euclid's ''Elements'']] (c. 300 BC), specifically in Book 7 (Propositions 1–2) and Book 10 (Propositions 2–3). In Book 7, the algorithm is formulated for integers, whereas in Book 10, it is formulated for lengths of line segments. (In modern usage, one would say it was formulated there for [[real numbers]]; however, lengths, areas, and volumes, represented as real numbers in modern usage, are not measured in the same units, there is no natural unit of length, area, or volume, and the concept of real numbers was unknown at that time.) The latter algorithm is geometrical. The GCD of two lengths ''a'' and ''b'' corresponds to the greatest length ''g'' that measures ''a'' and ''b'' evenly; in other words, the lengths ''a'' and ''b'' are both integer multiples of the length ''g''.
| |
| | |
| The algorithm was probably not discovered by [[Euclid]], who compiled results from earlier mathematicians in his ''Elements''.<ref name="Weil_1983">{{cite book | author = [[André Weil|Weil A]] | year = 1983 | title = Number Theory | publisher = Birkhäuser | location = Boston | isbn = 0-8176-3141-0 | pages = 4–6}}</ref><ref name="Jones_1994">{{cite book | author = Jones A | year = 1994 | chapter = Greek mathematics to AD 300 | title = Companion encyclopedia of the history and philosophy of the mathematical sciences | publisher = Routledge | location = New York | isbn = 0-415-09238-8 | pages = 46–48}}</ref> The mathematician and historian [[Bartel Leendert van der Waerden|B. L. van der Waerden]] suggests that Book VII derives from a textbook on [[number theory]] written by mathematicians in the school of [[Pythagoras]].<ref name="van_der_Waerden_1954">{{cite book | author = [[Bartel Leendert van der Waerden|van der Waerden BL]] | year = 1954 | title = Science Awakening | series = translated by Arnold Dresden | publisher = P. Noordhoff Ltd | location = Groningen | pages = 114–115}}</ref> The algorithm was probably known by [[Eudoxus of Cnidus]] (about 375 BC).<ref name="Knuth, p. 318"/><ref>{{cite journal | author = von Fritz K | year = 1945 | title = The Discovery of Incommensurability by Hippasus of Metapontum | journal = Annals of Mathematics | volume = 46 | pages = 242–264 | doi = 10.2307/1969021 | jstor = 1969021 | issue = 2}}</ref> The algorithm may even pre-date Eudoxus,<ref>{{cite book | author = [[T. L. Heath|Heath TL]] | year = 1949 | title = Mathematics in Aristotle | publisher = Oxford Press | pages = 80–83}}</ref><ref>{{cite book | author = [[David Fowler (mathematician)|Fowler DH]] | year = 1987 | title = The Mathematics of Plato's Academy: A New Reconstruction | publisher = Oxford University Press | location = Oxford | isbn = 0-19-853912-6 | pages = 31–66}}</ref> judging from the use of the technical term ἀνθυφαίρεσις (''anthyphairesis'', reciprocal subtraction) in works by Euclid and Aristotle.<ref>{{cite journal | author = Becker O | year = 1933 | title = Eudoxus-Studien I. Eine voreuklidische Proportionslehre und ihre Spuren bei Aristoteles und Euklid | volume = 2 | pages = 311–333 | journal = Quellen und Studien zur Geschichte der Mathematik B}}</ref>
| |
| | |
| Centuries later, Euclid's algorithm was discovered independently both in India and in China,<ref name="Stillwell, p. 31">{{Harvnb|Stillwell|1997|p=31}}</ref> primarily to solve [[Diophantine equations]] that arise in astronomy and making accurate calendars. In the late 5th century, the Indian mathematician and astronomer [[Aryabhata]] described the algorithm as the "pulverizer",<ref name="Tattersall, p. 70">{{Harvnb|Tattersall|2005|p=70}}</ref> perhaps because of its effectiveness in solving Diophantine equations.<ref>{{Harvnb|Rosen|2000|pp=86–87}}</ref> Although a special case of the [[Chinese remainder theorem]] had already been described by Chinese mathematician and astronomer [[Sun Tzu (mathematician)|Sun Tzu]],<ref>{{Harvnb|Ore|1948|pp=247–248}}</ref> the general solution was published by [[Qin Jiushao]] in his 1247 book ''Shushu Jiuzhang'' (數書九章 [[Mathematical Treatise in Nine Sections]]).<ref>{{Harvnb|Tattersall|2005|pp=72, 184–185}}</ref> The Euclidean algorithm was first described in Europe in the second edition of [[Claude Gaspard Bachet de Méziriac|Bachet's]] ''Problèmes plaisants et délectables'' (''Pleasant and enjoyable problems'', 1624).<ref name="Tattersall, p. 70"/> In Europe, it was likewise used to solve Diophantine equations and in developing [[continued fraction]]s. The [[extended Euclidean algorithm]] was published by the English mathematician [[Nicholas Saunderson]], who attributed it to [[Roger Cotes]] as a method for computing continued fractions efficiently.<ref>{{Harvnb|Tattersall|2005|pp=72–76}}</ref>
| |
| | |
| In the 19th century, the Euclidean algorithm led to the development of new number systems, such as [[Gaussian integer]]s and [[Eisenstein integer]]s. In 1815, [[Carl Friedrich Gauss|Carl Gauss]] used the Euclidean algorithm to demonstrate unique factorization of [[Gaussian integer]]s, although his work was first published in 1832.<ref name="Gauss_1832" /> Gauss mentioned the algorithm in his ''[[Disquisitiones Arithmeticae]]'' (published 1801), but only as a method for [[continued fraction]]s.<ref name="Stillwell, p. 31"/> [[Peter Gustav Lejeune Dirichlet]] seems to have been the first to describe the Euclidean algorithm as the basis for much of number theory.<ref>{{Harvnb|Stillwell|1997|pp=31–32}}</ref> Lejeune Dirichlet noted that many results of number theory, such as unique factorization, would hold true for any other system of numbers to which the Euclidean algorithm could be applied.<ref>{{Harvnb|Dirichlet|1894|pp=29–31}}</ref> Lejeune Dirichlet's lectures on number theory were edited and extended by [[Richard Dedekind]], who used Euclid's algorithm to study [[algebraic integer]]s, a new general type of number. For example, Dedekind was the first to prove [[Fermat's theorem on sums of two squares|Fermat's two-square theorem]] using the unique factorization of Gaussian integers.<ref>[[Richard Dedekind]] in {{Harvnb|Dirichlet|1894|loc=Supplement XI}}</ref> Dedekind also defined the concept of a [[Euclidean domain]], a number system in which a generalized version of the Euclidean algorithm can be defined (as described [[#Euclidean domains|below]]). In the closing decades of the 19th century, however, the Euclidean algorithm gradually became eclipsed by Dedekind's more general theory of [[ideal (ring theory)|ideals]].<ref>{{Harvnb|Stillwell|2003|pp=41–42}}</ref>
| |
| | |
| {| class="toccolours" style="float: left; margin-left: 1em; margin-right: 1em; font-size: 85%; background:#c6dbf7; color:black; width:23em; max-width: 25%;" cellspacing="5"
| |
| | style="text-align: left;" |
| |
| "[The Euclidean algorithm] is the granddaddy of all algorithms, because it is the oldest nontrivial algorithm that has survived to the present day."
| |
| |-
| |
| | style="text-align: left;" | [[Donald Knuth]], ''The Art of Computer Programming, Vol. 2: Seminumerical Algorithms'', 2nd edition (1981), p. 318.
| |
| |}
| |
| | |
| Other applications of Euclid's algorithm were developed in the 19th century. In 1829, [[Jacques Charles François Sturm|Charles Sturm]] showed that the algorithm was useful in the [[Sturm's theorem|Sturm chain]] method for counting the real roots of polynomials in any given interval.<ref>{{cite journal | author = Sturm C | year = 1829 | title = Mémoire sur la résolution des équations numériques | journal = Bull. des sciences de Férussac | volume = 11 | pages = 419–422}}</ref>
| |
| | |
| The Euclidean algorithm was the first [[integer relation algorithm]], which is a method for finding integer relations between commensurate real numbers. Several novel integer relation algorithms have been developed in recent years, such as the [[Ferguson–Forcade algorithm]] (1979) of [[Helaman Ferguson]] and R.W. Forcade,<ref>{{MathWorld|urlname=IntegerRelation|title=Integer Relation}}</ref> and its relatives, the [[Lenstra–Lenstra–Lovász lattice basis reduction algorithm|LLL algorithm]], the [[HJLS algorithm]], and the [[PSLQ algorithm]].<ref>{{cite journal | author = Peterson I | date = 12 August 2002 | title = Jazzing Up Euclid's Algorithm | journal = ScienceNews | url=http://www.sciencenews.org/view/generic/id/172/title/Math_Trek__Jazzing_Up_Euclids_Algorithm}}</ref><ref>{{cite journal | author = Cipra BA | title = The Best of the 20th Century: Editors Name Top 10 Algorithms | journal = SIAM News | volume = 33 | date = 16 May 2000 | publisher = [[Society for Industrial and Applied Mathematics]] | issue = 4 | url=http://amath.colorado.edu/resources/archive/topten.pdf}}</ref>
| |
| | |
| In 1969, Cole and Davie developed a two-player game based on the Euclidean algorithm, called ''The Game of Euclid'',<ref>{{cite journal | author = Cole AJ, Davie AJT | year = 1969 | title = A game based on the Euclidean algorithm and a winning strategy for it | journal = Math. Gaz. | volume = 53 | pages = 354–357 | doi = 10.2307/3612461 | jstor = 3612461 | issue = 386}}</ref> which has an optimal strategy.<ref>{{cite journal | doi = 10.2307/2689037 | author = Spitznagel EL | year = 1973 | title = Properties of a game based on Euclid's algorithm | jstor = 2689037 | journal = Math. Mag. | volume = 46 | issue = 2 | pages = 87–92}}</ref> The players begin with two piles of ''a'' and ''b'' stones. The players take turns removing ''m'' multiples of the smaller pile from the larger. Thus, if the two piles consist of ''x'' and ''y'' stones, where ''x'' is larger than ''y'', the next player can reduce the larger pile from ''x'' stones to ''x'' − ''my'' stones, as long as the latter is a nonnegative integer. The winner is the first player to reduce one pile to zero stones.<ref>{{Harvnb|Rosen|2000|p=95}}</ref><ref>{{cite book | author = Roberts J | year = 1977 | title = Elementary Number Theory: A Problem Oriented Approach | publisher = [[MIT Press]] | location = Cambridge, MA | isbn = 0-262-68028-9 | pages = 1–8}}</ref>
| |
| | |
| ==Mathematical applications==
| |
| | |
| ===Bézout's identity===
| |
| [[Bézout's identity]] states that the greatest common divisor ''g'' of two integers ''a'' and ''b'' can be represented as a linear sum of the original two numbers ''a'' and ''b''.<ref>{{cite book | author = Jones GA, Jones JM | year = 1998 | chapter = Bezout's Identity | title = Elementary Number Theory | publisher = Springer-Verlag | location = New York | pages = 7–11}}</ref> In other words, it is always possible to find integers ''s'' and ''t'' such that ''g'' = ''sa'' + ''tb''.<ref>{{Harvnb|Rosen|2000|p=81}}</ref><ref>{{Harvnb|Cohn|1962|p=104}}</ref>
| |
| | |
| The integers ''s'' and ''t'' can be calculated from the quotients ''q''<sub>0</sub>, ''q''<sub>1</sub>, etc. by reversing the order of equations in Euclid's algorithm.<ref>{{Harvnb|Rosen|2000|p=91}}</ref> Beginning with the next-to-last equation, ''g'' can be expressed in terms of the quotient ''q''<sub>''N''−1</sub> and the two preceding remainders, ''r''<sub>''N''−2</sub> and ''r''<sub>''N''−3</sub>.
| |
| | |
| : {{math|1=''g'' = ''r''<sub>''N''−1</sub> = ''r''<sub>''N''−3</sub> − ''q''<sub>''N''−1</sub> ''r''<sub>''N''−2</sub>}}
| |
| | |
| Those two remainders can be likewise expressed in terms of their quotients and preceding remainders,
| |
| | |
| : {{math|1=''r''<sub>''N''−2</sub> = ''r''<sub>''N''−4</sub> − ''q''<sub>''N''−2</sub> ''r''<sub>''N''−3</sub>}}
| |
| : {{math|1=''r''<sub>''N''−3</sub> = ''r''<sub>''N''−5</sub> − ''q''<sub>''N''−3</sub> ''r''<sub>''N''−4</sub>.}}
| |
| | |
| Substituting these formulae for ''r''<sub>''N''−2</sub> and ''r''<sub>''N''−3</sub> into the first equation yields ''g'' as a linear sum of the remainders ''r''<sub>''N''−4</sub> and ''r''<sub>''N''−5</sub>. The process of substituting remainders by formulae involving their predecessors can be continued until the original numbers ''a'' and ''b'' are reached
| |
| | |
| : {{math|1=''r''<sub>2</sub> = ''r''<sub>0</sub> − ''q''<sub>2</sub> ''r''<sub>1</sub>}}
| |
| : {{math|1=''r''<sub>1</sub> = ''b'' − ''q''<sub>1</sub> ''r''<sub>0</sub>}}
| |
| : {{math|1=''r''<sub>0</sub> = ''a'' − ''q''<sub>0</sub> ''b''.}}
| |
| | |
| After all the remainders ''r''<sub>0</sub>, ''r''<sub>1</sub>, etc. have been substituted, the final equation expresses ''g'' as a linear sum of ''a'' and ''b'': ''g'' = ''sa'' + ''tb''. [[Bézout's identity]], and therefore the previous algorithm, can both be generalized to the context of [[Euclidean domain]]s.
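
The back-substitution described above can be carried out mechanically; the following Python sketch (an illustration, not the only possible formulation) records the quotients on a forward pass and then substitutes backwards to recover ''s'' and ''t'':

 def bezout_backwards(a, b):
     """Return (g, s, t) with g = s*a + t*b, by recording the quotients of
     Euclid's algorithm and then substituting backwards through the equations."""
     steps = []                    # list of (dividend, divisor, quotient)
     x, y = a, b
     while y != 0:
         q, r = divmod(x, y)
         steps.append((x, y, q))
         x, y = y, r
     g = x
     s, t = 1, 0                   # g = 1*g + 0*0 for the pair after the last step
     for dividend, divisor, q in reversed(steps):
         # divisor and (dividend - q*divisor) were the next pair, so rewrite
         # g = s*divisor + t*(dividend - q*divisor) in terms of (dividend, divisor)
         s, t = t, s - q * t
     return g, s, t
 
 print(bezout_backwards(252, 105))  # (21, -2, 5): 21 = (-2)×252 + 5×105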
| |
| | |
| ===Principal ideals and related problems===
| |
| Bézout's identity provides yet another definition of the greatest common divisor ''g'' of two numbers ''a'' and ''b''.<ref name="Leveque_p33" >{{Harvnb|LeVeque|1996|p=33}}</ref> Consider the set of all numbers ''ua'' + ''vb'', where ''u'' and ''v'' are any two integers. Since ''a'' and ''b'' are both divisible by ''g'', every number in the set is divisible by ''g''. In other words, every number of the set is an integer multiple of ''g''. This is true for every common divisor of ''a'' and ''b''. However, unlike other common divisors, the greatest common divisor is a member of the set; by Bézout's identity, choosing ''u'' = ''s'' and ''v'' = ''t'' gives ''g''. A smaller common divisor cannot be a member of the set, since every member of the set must be divisible by ''g''. Conversely, any multiple ''m'' of ''g'' can be obtained by choosing ''u'' = ''ms'' and ''v'' = ''mt'', where ''s'' and ''t'' are the integers of Bézout's identity. This may be seen by multiplying Bézout's identity by ''m'',
| |
| | |
| : {{math|1=''mg'' = ''msa'' + ''mtb''.}}
| |
| | |
| Therefore, the set of all numbers ''ua'' + ''vb'' is equivalent to the set of multiples ''m'' of ''g''. In other words, the set of all possible sums of integer multiples of two numbers (''a'' and ''b'') is equivalent to the set of multiples of gcd(''a'', ''b''). The GCD is said to be the generator of the [[ideal (ring theory)|ideal]] of ''a'' and ''b''. This GCD definition led to the modern [[abstract algebra]]ic concepts of a [[principal ideal]] (an ideal generated by a single element) and a [[principal ideal domain]] (a [[domain (ring theory)|domain]] in which every ideal is a principal ideal).
| |
| | |
| Certain problems can be solved using this result.<ref>{{Harvnb|Schroeder|2005|p=23}}</ref> For example, consider two measuring cups of volume ''a'' and ''b''. By adding/subtracting ''u'' multiples of the first cup and ''v'' multiples of the second cup, any volume ''ua'' + ''vb'' can be measured out. These volumes are all multiples of ''g'' = gcd(''a'', ''b'').
| |
| | |
| ===Extended Euclidean algorithm===
| |
| {{Main|Extended Euclidean algorithm}}
| |
| | |
| The integers ''s'' and ''t'' of Bézout's identity can be computed efficiently using the [[extended Euclidean algorithm]]. This extension adds two recursive equations to Euclid's algorithm<ref>{{Harvnb|Rosen|2000|pp=90–93}}</ref>
| |
| | |
| : {{math|1=''s''<sub>''k''</sub> = ''s''<sub>''k''−2</sub> − ''q''<sub>''k''</sub>''s''<sub>''k''−1</sub>}}
| |
| : {{math|1=''t''<sub>''k''</sub> = ''t''<sub>''k''−2</sub> − ''q''<sub>''k''</sub>''t''<sub>''k''−1</sub>}}
| |
| | |
| with the starting values
| |
| | |
| : {{math|1=''s''<sub>−2</sub> = 1, ''t''<sub>−2</sub> = 0}}
| |
| : {{math|1=''s''<sub>−1</sub> = 0, ''t''<sub>−1</sub> = 1.}}
| |
| | |
| Using this recursion, Bézout's integers ''s'' and ''t'' are given by ''s'' = ''s''<sub>''N''</sub> and ''t'' = ''t''<sub>''N''</sub>, where ''N'' is the step on which the algorithm terminates with ''r''<sub>''N''</sub> = 0.
| |
| | |
| The validity of this approach can be shown by induction. Assume that the recursion formula is correct up to step ''k'' − 1 of the algorithm; in other words, assume that
| |
| | |
| : {{math|1=''r''<sub>''j''</sub> = ''s''<sub>''j''</sub> ''a'' + ''t''<sub>''j''</sub> ''b''}}
| |
| | |
| for all ''j'' less than ''k''. The ''k''th step of the algorithm gives the equation
| |
| | |
| : {{math|1=''r''<sub>''k''</sub> = ''r''<sub>''k''−2</sub> − ''q''<sub>''k''</sub>''r''<sub>''k''−1</sub>.}}
| |
| | |
| Since the recursion formula has been assumed to be correct for ''r''<sub>''k''−2</sub> and ''r''<sub>''k''−1</sub>, they may be expressed in terms of the corresponding ''s'' and ''t'' variables
| |
| | |
| : {{math|1=''r''<sub>''k''</sub> = (''s''<sub>''k''−2</sub> ''a'' + ''t''<sub>''k''−2</sub> ''b'') − ''q''<sub>''k''</sub>(''s''<sub>''k''−1</sub> ''a'' + ''t''<sub>''k''−1</sub> ''b'').}}
| |
| | |
| Rearranging this equation yields the recursion formula for step ''k'', as required
| |
| | |
| : {{math|1=''r''<sub>''k''</sub> = ''s''<sub>''k''</sub> ''a'' + ''t''<sub>''k''</sub> ''b'' = (''s''<sub>''k''−2</sub> − ''q''<sub>''k''</sub>''s''<sub>''k''−1</sub>) ''a'' + (''t''<sub>''k''−2</sub> − ''q''<sub>''k''</sub>''t''<sub>''k''−1</sub>) ''b''.}}
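
A minimal Python sketch of this forward recursion (an illustration; the variables follow the ''s''<sub>''k''</sub>, ''t''<sub>''k''</sub> and ''r''<sub>''k''</sub> of the equations above, with the stated starting values) is:

 def extended_gcd(a, b):
     """Extended Euclidean algorithm: returns (g, s, t) with g = s*a + t*b,
     carrying the coefficients s_k and t_k alongside each remainder r_k."""
     r_prev, r = a, b          # r_(-2), r_(-1)
     s_prev, s = 1, 0          # s_(-2), s_(-1)
     t_prev, t = 0, 1          # t_(-2), t_(-1)
     while r != 0:
         q = r_prev // r
         r_prev, r = r, r_prev - q * r
         s_prev, s = s, s_prev - q * s
         t_prev, t = t, t_prev - q * t
     return r_prev, s_prev, t_prev
 
 g, s, t = extended_gcd(1071, 462)
 print(g, s, t, s * 1071 + t * 462)   # 21 -3 7 21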
| |
| | |
| === Matrix method ===
| |
| | |
| The integers ''s'' and ''t'' can also be found using an equivalent [[matrix (mathematics)|matrix]] method.<ref name="Koshy_2002">{{cite book | author = Koshy T | year = 2002 | title = Elementary Number Theory with Applications | publisher = Harcourt/Academic Press | location = Burlington, MA | isbn = 0-12-421171-2 | pages = 167–169}}</ref> The sequence of equations of Euclid's algorithm
| |
| | |
| : {{math|1=''a'' = ''q''<sub>0</sub> ''b'' + ''r''<sub>0</sub>}}
| |
| : {{math|1=''b'' = ''q''<sub>1</sub> ''r''<sub>0</sub> + ''r''<sub>1</sub>}}
| |
| : {{math|1=…}}
| |
| : {{math|1=''r''<sub>''N''−2</sub> = ''q''<sub>''N''</sub> ''r''<sub>''N''−1</sub> + 0}}
| |
| | |
| can be written as a product of 2-by-2 quotient matrices multiplying a two-dimensional remainder vector
| |
| | |
| :<math alt="A series of equations consisting of two-dimensional vectors multiplied by an ever-increasing number of 2-by-2 matrices. The vector a b equals the matrix q sub zero 1 1 0 times the vector b r sub zero. It also equals the matrix q sub zero 1 1 0 times the matrix q sub one 1 1 0 times the vector r sub zero r sub one. Continuing to the last step N of the algorithm, it equals the product of all 2-by-2 matrices of the form q sub i 1 1 0 times the vector r sub N minus one r sub N. The index i ranges from 0 to N and the last remainder r sub N is zero.">
| |
| \begin{pmatrix} a \\ b \end{pmatrix} =
| |
| \begin{pmatrix} q_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} b \\ r_{0} \end{pmatrix} =
| |
| \begin{pmatrix} q_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} q_1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} r_0 \\ r_1 \end{pmatrix} =
| |
| \cdots =
| |
| \prod_{i=0}^N \begin{pmatrix} q_i & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} r_{N-1} \\ 0 \end{pmatrix}
| |
| </math>
| |
| | |
| Let '''M''' represent the product of all the quotient matrices
| |
| | |
| :<math alt="The 2-by-2 matrix M has four components, m sub 1 1, m sub 1 2, m sub 2 1, and m sub 2 2. It is defined as the product of all 2-by-2 matrices of the form q sub i 1 1 0, where the index i ranges from 0 to N.">
| |
| \mathbf{M} = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix} =
| |
| \prod_{i=0}^N \begin{pmatrix} q_i & 1 \\ 1 & 0 \end{pmatrix} =
| |
| \begin{pmatrix} q_0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} q_1 & 1 \\ 1 & 0 \end{pmatrix} \cdots \begin{pmatrix} q_{N} & 1 \\ 1 & 0 \end{pmatrix}
| |
| </math>
| |
| | |
| This simplifies the Euclidean algorithm to the form
| |
| | |
| :<math alt="The two-dimensional vector a b equals the matrix M times the final vector, r sub N minus one zero. The final non-zero remainder is the greatest common divisor g. Therefore, the vector a b equals the matrix M times the vector g zero.">
| |
| \begin{pmatrix} a \\ b \end{pmatrix} =
| |
| \mathbf{M} \begin{pmatrix} r_{N-1} \\ 0 \end{pmatrix} =
| |
| \mathbf{M} \begin{pmatrix} g \\ 0 \end{pmatrix}
| |
| </math>
| |
| | |
| To express ''g'' as a linear sum of ''a'' and ''b'', both sides of this equation can be multiplied by the [[invertible matrix|inverse]] of the matrix '''M'''.<ref name="Koshy_2002" /><ref name="Bach_1996">{{cite book | author = Bach E, Shallit J | year = 1996 | title = Algorithmic number theory | publisher = MIT Press | location = Cambridge, MA | isbn = 0-262-02405-5 | pages = 70–73}}</ref> The [[determinant]] of '''M''' equals (−1)<sup>''N''+1</sup>, since it equals the product of the determinants of the quotient matrices, each of which is negative one. Since the determinant of '''M''' is never zero, the vector of the final remainders can be solved using the inverse of '''M'''
| |
| | |
| :<math alt="The two-dimensional vector g 0 equals the inverse of matrix M times the vector a b. This equals minus one to the Nth plus one power, times the matrix with components m sub 2 2, minus m sub 1 2, minus m sub 2 1, and m sub 1 1, times the vector a b.">
| |
| \begin{pmatrix} g \\ 0 \end{pmatrix} =
| |
| \mathbf{M}^{-1} \begin{pmatrix} a \\ b \end{pmatrix} =
| |
| (-1)^{N+1} \begin{pmatrix} m_{22} & -m_{12} \\ -m_{21} & m_{11} \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix}
| |
| </math>
| |
| | |
| Since the top equation gives
| |
| | |
| : {{math|1=''g'' = (−1)<sup>''N''+1</sup> ( ''m''<sub>22</sub> ''a'' − ''m''<sub>12</sub> ''b'')}}
| |
| | |
| the two integers of Bézout's identity are ''s'' = (−1)<sup>''N''+1</sup>''m''<sub>22</sub> and ''t'' = (−1)<sup>''N''</sup>''m''<sub>12</sub>. The matrix method is as efficient as the equivalent recursion, with two multiplications and two additions per step of the Euclidean algorithm.
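
The matrix description translates directly into code; the Python sketch below (an illustration using plain integer arithmetic rather than a matrix library) accumulates the product '''M''' of the quotient matrices and reads off ''s'' and ''t'' with the formulas above:

 def bezout_by_matrices(a, b):
     """Return (g, s, t) with g = s*a + t*b, by accumulating the product M of the
     2-by-2 quotient matrices [[q, 1], [1, 0]] and inverting it as described above."""
     m11, m12, m21, m22 = 1, 0, 0, 1   # M starts as the identity matrix
     sign = 1                          # becomes (-1)^(N+1) after N+1 quotient matrices
     x, y = a, b
     while y != 0:
         q, r = divmod(x, y)
         # multiply M on the right by the quotient matrix [[q, 1], [1, 0]]
         m11, m12 = m11 * q + m12, m11
         m21, m22 = m21 * q + m22, m21
         sign = -sign
         x, y = y, r
     g = x
     return g, sign * m22, -sign * m12   # s = (-1)^(N+1) m22, t = (-1)^N m12
 
 print(bezout_by_matrices(1071, 462))    # (21, -3, 7)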
| |
| | |
| ===Euclid's lemma and unique factorization===
| |
| Bézout's identity is essential to many applications of Euclid's algorithm, such as demonstrating the [[unique factorization]] of numbers into prime factors.<ref>{{Harvnb|Stark|1978|pp=26–36}}</ref> To illustrate this, suppose that a number ''L'' can be written as a product of two factors ''u'' and ''v'', that is, ''L'' = ''uv''. If another number ''w'' also divides ''L'' but is coprime with ''u'', then ''w'' must divide ''v'', by the following argument: If the greatest common divisor of ''u'' and ''w'' is 1, then integers ''s'' and ''t'' can be found such that
| |
| | |
| : {{math|1=1 = ''su'' + ''tw''}}
| |
| | |
| by Bézout's identity. Multiplying both sides by ''v'' gives the relation
| |
| | |
| : {{math|1=''v'' = ''suv'' + ''twv'' = ''sL'' + ''twv''}}
| |
| | |
| Since ''w'' divides both terms on the right-hand side, it must also divide the left-hand side, ''v''. This result is known as [[Euclid's lemma]].<ref name="Ore, p. 44">{{Harvnb|Ore|1948|p=44}}</ref> Specifically, if a prime number divides ''L'', then it must divide at least one factor of ''L''. Conversely, if a number ''w'' is coprime to each of a series of numbers ''a''<sub>1</sub>, ''a''<sub>2</sub>, …, ''a''<sub>''n''</sub>, then ''w'' is also coprime to their product, ''a''<sub>1</sub> × ''a''<sub>2</sub> × … × ''a''<sub>''n''</sub>.<ref name="Ore, p. 44"/>
| |
| | |
| Euclid's lemma suffices to prove that every number has a unique factorization into prime numbers.<ref>{{Harvnb|Stark|1978|pp=281–292}}</ref> To see this, assume the contrary, that there are two independent factorizations of ''L'' into ''m'' and ''n'' prime factors, respectively
| |
| | |
| : {{math|1=''L'' = ''p''<sub>1</sub>''p''<sub>2</sub>…''p''<sub>''m''</sub> = ''q''<sub>1</sub>''q''<sub>2</sub>…''q''<sub>''n''</sub>}}
| |
| | |
| Since each prime ''p'' divides ''L'' by assumption, it must also divide one of the ''q'' factors; since each ''q'' is prime as well, it must be that ''p'' = ''q''. Iteratively dividing by the ''p'' factors shows that each ''p'' has an equal counterpart ''q''; the two prime factorizations are identical except for their order. The unique factorization of numbers into primes has many applications in mathematical proofs, as shown below.
| |
| | |
| ===Linear Diophantine equations===
| |
[[File:Diophante Bezout.svg|thumb|alt="A diagonal line running from the upper left corner to the lower right. Fifteen circles are spaced at regular intervals along the line. Perpendicular x-y coordinate axes have their origin in the lower left corner; the line crosses the y-axis at the upper left and crosses the x-axis at the lower right."|Plot of a linear [[Diophantine equation]], 9''x'' + 12''y'' = 483. The solutions are shown as blue circles.]]
| |
| | |
| [[Diophantine equation]]s are equations in which the solutions are restricted to integers; they are named after the 3rd-century Alexandrian mathematician [[Diophantus]].<ref>{{Harvnb|Rosen|2000|pp=119–125}}</ref> A typical ''linear'' Diophantine equation seeks integers ''x'' and ''y'' such that<ref>{{Harvnb|Schroeder|2005|pp=106–107}}</ref>
| |
| | |
| : {{math|1=''ax'' + ''by'' = ''c''}}
| |
| | |
| where ''a'', ''b'' and ''c'' are given integers. This can be written as an equation for ''x'' in [[modular arithmetic]]
| |
| | |
| : {{math|1=''ax'' ≡ ''c'' mod ''b''.}}
| |
| | |
| Let ''g'' be the greatest common divisor of ''a'' and ''b''. Both terms in ''ax'' + ''by'' are divisible by ''g''; therefore, ''c'' must also be divisible by ''g'', or the equation has no solutions. By dividing both sides by ''c''/''g'', the equation can be reduced to Bézout's identity
| |
| | |
| : {{math|1=''sa'' + ''tb'' = ''g''}}
| |
| | |
| where ''s'' and ''t'' can be found by the [[extended Euclidean algorithm]].<ref>{{Harvnb|Schroeder|2005|pp=108–109}}</ref> This provides one solution to the Diophantine equation, ''x''<sub>1</sub> = ''s'' (''c''/''g'') and ''y''<sub>1</sub> = ''t'' (''c''/''g'').
| |
| | |
| In general, a linear Diophantine equation has no solutions, or an infinite number of solutions.<ref>{{Harvnb|Rosen|2000|pp=120–121}}</ref> To find the latter, consider two solutions, (''x''<sub>1</sub>, ''y''<sub>1</sub>) and (''x''<sub>2</sub>, ''y''<sub>2</sub>)
| |
| | |
| : {{math|1=''ax''<sub>1</sub> + ''by''<sub>1</sub> = ''c'' = ''ax''<sub>2</sub> + ''by''<sub>2</sub>}}
| |
| | |
| or equivalently
| |
| | |
| : {{math|1=''a''(''x''<sub>1</sub> − ''x''<sub>2</sub>) = ''b''(''y''<sub>2</sub> − ''y''<sub>1</sub>).}}
| |
| | |
| Dividing both sides by ''g'' gives (''a''/''g'')(''x''<sub>1</sub> − ''x''<sub>2</sub>) = (''b''/''g'')(''y''<sub>2</sub> − ''y''<sub>1</sub>); since ''a''/''g'' and ''b''/''g'' are coprime, ''b''/''g'' must divide ''x''<sub>1</sub> − ''x''<sub>2</sub>, and ''a''/''g'' must divide ''y''<sub>2</sub> − ''y''<sub>1</sub>. Therefore, the smallest difference between two ''x'' solutions is ''b''/''g'', whereas the smallest difference between two ''y'' solutions is ''a''/''g''. Thus, the solutions may be expressed as
| |
| | |
| : {{math|1=''x'' = ''x''<sub>1</sub> − ''bt''/''g''}}
| |
| : {{math|1=''y'' = ''y''<sub>1</sub> + ''at''/''g''}}.
| |
| | |
| By allowing ''t'' to vary over all possible integers, an infinite family of solutions can be generated from a single solution (''x''<sub>1</sub>, ''y''<sub>1</sub>). If the solutions are required to be ''positive'' integers (''x'' > 0, ''y'' > 0), only a finite number of solutions may be possible. This restriction on the acceptable solutions allows systems of Diophantine equations to be solved with more unknowns than equations;<ref>{{Harvnb|Stark|1978|p=47}}</ref> this is impossible for a [[system of linear equations]] when the solutions can be any [[real number]].
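The reduction described above can be carried out mechanically. The following Python sketch uses illustrative helper names; the coefficients 9, 12 and 483 are the ones from the plot caption above. It finds one solution and then generates further solutions from it.

<syntaxhighlight lang="python">
def ext_gcd(a, b):
    """Return (g, s, t) with s*a + t*b = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

def one_solution(a, b, c):
    """One integer solution (x1, y1) of a*x + b*y = c, or None when gcd(a, b) does not divide c."""
    g, s, t = ext_gcd(a, b)
    if c % g != 0:
        return None
    return s * (c // g), t * (c // g)

a, b, c = 9, 12, 483                     # the equation plotted above: 9x + 12y = 483
x1, y1 = one_solution(a, b, c)
g = ext_gcd(a, b)[0]
for k in range(-2, 3):                   # the whole family: shift x by b/g and y by a/g
    x, y = x1 - (b // g) * k, y1 + (a // g) * k
    assert a * x + b * y == c
</syntaxhighlight>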
| |
| | |
| ===Multiplicative inverses and the RSA algorithm===
| |
| A [[finite field]] is a set of numbers with four generalized operations. The operations are called addition, subtraction, multiplication and division and have their usual properties, such as [[commutativity]], [[associativity]] and [[distributivity]]. An example of a finite field is the set of 13 numbers {0, 1, 2, …, 12} using [[modular arithmetic]]. In this field, the result of any mathematical operation (addition/subtraction/multiplication/division) is reduced [[modulo operation|modulo]] 13; that is, multiples of 13 are added or subtracted until the result is brought within the range 0–12. For example, the result of 5 × 7 = 35 reduces to 9 mod 13. Such finite fields can be defined for any prime ''p''; using more sophisticated definitions, they can also be defined for any prime power ''p''<sup> ''m''</sup>. Finite fields are often called [[Évariste Galois|Galois]] fields, and are abbreviated as GF(''p'') or GF(''p''<sup> ''m''</sup>).
| |
| | |
| In such a field with ''m'' numbers, every nonzero element ''a'' has a unique [[modular multiplicative inverse|modular]] [[multiplicative inverse]], ''a''<sup>−1</sup> such that ''aa''<sup>−1</sup> = ''a''<sup>−1</sup>''a'' ≡ 1 mod ''m''. This inverse can be found by solving the congruence equation ''ax'' ≡ 1 mod ''m'',<ref>{{Harvnb|Schroeder|2005|pp=107–109}}</ref> or the equivalent linear Diophantine equation<ref>{{Harvnb|Stillwell|1997|pp=186–187}}</ref>
| |
| | |
| : {{math|1=''ax'' + ''my'' = 1.}}
| |
| | |
| This equation can be solved by the Euclidean algorithm, as described [[#Linear Diophantine equations|above]]. Finding multiplicative inverses is an essential step in the [[RSA algorithm]], which is widely used in [[electronic commerce]]; specifically, the equation determines the integer used to decrypt the message.<ref>{{Harvnb|Schroeder|2005|p=134}}</ref> Note that although the RSA algorithm uses [[ring (mathematics)|rings]] rather than fields, the Euclidean algorithm can still be used to find a multiplicative inverse where one exists. The Euclidean algorithm also has other applications in [[error-correcting code]]s; for example, it can be used as an alternative to the [[Berlekamp–Massey algorithm]] for decoding [[BCH code|BCH]] and [[Reed–Solomon code]]s, which are based on Galois fields.<ref>"Error correction coding: mathematical methods and algorithms", page 266, Todd K. Moon, John Wiley and Sons, 2005, ISBN 0-471-64800-0</ref>
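As a concrete illustration, the following sketch computes a modular inverse with the extended Euclidean algorithm; the tiny primes and exponent are illustrative values only, not a realistic RSA key.

<syntaxhighlight lang="python">
def ext_gcd(a, b):
    """Return (g, s, t) with s*a + t*b = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

def mod_inverse(a, m):
    """Solve a*x ≡ 1 (mod m); only possible when gcd(a, m) = 1."""
    g, s, _ = ext_gcd(a, m)
    if g != 1:
        raise ValueError("no inverse: a and m are not coprime")
    return s % m

# toy RSA-style key: p = 5, q = 11, so phi = 40; with public exponent e = 3,
# the decryption exponent d is the inverse of e modulo phi
d = mod_inverse(3, 40)
assert d == 27 and (3 * d) % 40 == 1
# Python 3.8 and later exposes the same computation as pow(3, -1, 40)
</syntaxhighlight>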
| |
| | |
| ===Chinese remainder theorem===
| |
| Euclid's algorithm can also be used to solve multiple linear Diophantine equations.<ref>{{Harvnb|Rosen|2000|pp=143–170}}</ref> Such equations arise in the [[Chinese remainder theorem]], which describes a novel method to represent an integer ''x''. Instead of representing an integer by its digits, it may be represented by its remainders ''x''<sub>''i''</sub> modulo a set of ''N'' coprime numbers ''m''<sub>''i''</sub>.<ref>{{Harvnb|Schroeder|2005|pp=194–195}}</ref>
| |
| | |
| : {{math|1=''x''<sub>1</sub> ≡ ''x'' mod ''m''<sub>1</sub>}}
| |
| : {{math|1=''x''<sub>2</sub> ≡ ''x'' mod ''m''<sub>2</sub>}}
| |
| :…
| |
| : {{math|1=''x''<sub>''N''</sub> ≡ ''x'' mod ''m''<sub>''N''</sub>}}
| |
| | |
| The goal is to determine ''x'' from its ''N'' remainders ''x''<sub>''i''</sub>. The solution is to combine the multiple equations into a single linear Diophantine equation with a much larger modulus ''M'' that is the product of all the individual moduli ''m''<sub>''i''</sub>, and define the ''M''<sub>''i''</sub>
| |
| | |
| : {{math|1=''M''<sub>''i''</sub> = ''M'' / ''m''<sub>''i''</sub>}}
| |
| | |
| Thus, each ''M''<sub>''i''</sub> is the product of all the moduli ''except'' ''m''<sub>''i''</sub>. The solution depends on finding ''N'' new numbers ''h''<sub>''i''</sub> such that
| |
| | |
| :''M''<sub>''i''</sub>''h''<sub>''i''</sub> ≡ 1 mod ''m''<sub>''i''</sub>
| |
| | |
| With these numbers ''h''<sub>''i''</sub>, any integer ''x'' can be reconstructed from its remainders ''x''<sub>''i''</sub> by the equation
| |
| | |
| : {{math|1=x ≡ (''x''<sub>1</sub>''M''<sub>1</sub>''h''<sub>1</sub> + ''x''<sub>2</sub>''M''<sub>2</sub>''h''<sub>2</sub> + … + ''x''<sub>''N''</sub>''M''<sub>''N''</sub>''h''<sub>''N''</sub> ) mod ''M''}}
| |
| | |
| Since these numbers ''h''<sub>''i''</sub> are the multiplicative inverses of the ''M''<sub>''i''</sub>, they may be found using Euclid's algorithm as described in the previous subsection.
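A direct transcription of this reconstruction into Python might look as follows; the moduli 3, 5, 7 and the remainders are illustrative values, not taken from the cited sources.

<syntaxhighlight lang="python">
from math import prod

def ext_gcd(a, b):
    """Return (g, s, t) with s*a + t*b = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

def chinese_remainder(remainders, moduli):
    """Reconstruct x (mod M) from its remainders x_i modulo pairwise coprime m_i."""
    M = prod(moduli)
    x = 0
    for x_i, m_i in zip(remainders, moduli):
        M_i = M // m_i                    # product of all moduli except m_i
        h_i = ext_gcd(M_i, m_i)[1] % m_i  # M_i * h_i ≡ 1 (mod m_i), found by Euclid's algorithm
        x += x_i * M_i * h_i
    return x % M

# x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7)  -->  x ≡ 23 (mod 105)
assert chinese_remainder([2, 3, 2], [3, 5, 7]) == 23
</syntaxhighlight>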
| |
| | |
| ===Stern–Brocot tree===
| |
| The sequence of subtractions used by the Euclidean algorithm gives a path from the root of the [[Stern–Brocot tree]] to any given rational number. This fact can be used to prove that there is a one-to-one correspondence between the vertices of the tree and the positive rational numbers.
| |
| | |
| For example, 3/4 can be found by starting at the root, going to the left once, then to the right twice.
| |
| | |
| [[Image:SternBrocotTree.svg|thumb|400px|The Stern–Brocot tree, and the Stern–Brocot sequences of order ''i'' for ''i'' = 1, 2, 3, 4.]]
| |
| | |
| :<math>
| |
| \begin{align}
| |
| & \gcd(3,4) & \leftarrow \\
| |
| = & \gcd(3,1) & \rightarrow \\
| |
| = & \gcd(2,1) & \rightarrow \\
| |
| = & \gcd(1,1)
| |
| \end{align}
| |
| </math>
| |
| | |
| The Euclidean algorithm has almost the same relationship to the [[Calkin–Wilf tree]]. The difference is that the path is reversed: instead of producing a path from the root of the tree to a target, it produces a path from the target to the root.
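The correspondence can be made explicit with a short sketch that runs the subtractive form of Euclid's algorithm and records the turns; the function name is illustrative.

<syntaxhighlight lang="python">
def stern_brocot_path(p, q):
    """Path of left/right moves from the root (1/1) of the Stern–Brocot tree to p/q."""
    path = []
    while p != q:                 # mirrors the subtractive form of Euclid's algorithm
        if p < q:
            path.append("L")      # go left when the target is smaller than the current vertex
            q -= p
        else:
            path.append("R")
            p -= q
    return "".join(path)

assert stern_brocot_path(3, 4) == "LRR"   # left once, then right twice, as described above
</syntaxhighlight>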
| |
| | |
| ===Continued fractions===
| |
| The Euclidean algorithm has a close relationship with [[continued fraction]]s.<ref name="Vinogradov_1954">{{cite book | author = [[Ivan Matveyevich Vinogradov|Vinogradov IM]] | year = 1954 | title = Elements of Number Theory | publisher = Dover | location = New York | pages = 3–13}}</ref> The sequence of equations can be written in the form
| |
| | |
| :<math>
| |
| \begin{align}
| |
| \frac{a}{b} &= q_0 + \frac{r_0}{b} \\
| |
| \frac{b}{r_0} &= q_1 + \frac{r_1}{r_0} \\
| |
| \frac{r_0}{r_1} &= q_2 + \frac{r_2}{r_1} \\
| |
| & {}\ \vdots \\
| |
| \frac{r_{k-2}}{r_{k-1}} &= q_k + \frac{r_k}{r_{k-1}} \\
| |
| & {}\ \vdots \\
| |
| \frac{r_{N-2}}{r_{N-1}} &= q_N
| |
| \end{align}
| |
| </math>
| |
| | |
| The last term on the right-hand side always equals the inverse of the left-hand side of the next equation. Thus, the first two equations may be combined to form
| |
| | |
| :<math>\frac{a}{b} = q_0 + \cfrac{1}{q_1 + \cfrac{r_1}{r_0}} </math>
| |
| | |
| The third equation may be used to substitute the denominator term ''r''<sub>1</sub>/''r''<sub>0</sub>, yielding
| |
| | |
| :<math>\frac{a}{b} = q_0 + \cfrac{1}{q_1 + \cfrac{1}{q_2 + \cfrac{r_2}{r_1}}} </math>
| |
| | |
| The final ratio of remainders ''r''<sub>''k''</sub>/''r''<sub>''k''−1</sub> can always be replaced using the next equation in the series, up to the final equation. The result is a continued fraction
| |
| | |
| :<math>\frac{a}{b} = q_0 + \cfrac{1}{q_1 + \cfrac{1}{q_2 + \cfrac{1}{\ddots + \cfrac{1}{q_N}}}} = [ q_0; q_1, q_2, \ldots , q_N ] </math>
| |
| | |
| In the worked example [[#Worked example|above]], the gcd(1071, 462) was calculated, and the quotients ''q''<sub>''k''</sub> were 2, 3 and 7, respectively. Therefore, the fraction 1071/462 may be written
| |
| | |
| :<math>\frac{1071}{462} = 2 + \cfrac{1}{3 + \cfrac{1}{7}} = [2; 3, 7]</math>
| |
| | |
| as can be confirmed by calculation.
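The quotients produced by Euclid's algorithm are exactly the entries of the continued fraction, as the following Python sketch illustrates with the same numbers; the helper names are illustrative.

<syntaxhighlight lang="python">
from fractions import Fraction

def continued_fraction(a, b):
    """Quotients q_0, q_1, ..., q_N produced by Euclid's algorithm on a and b."""
    quotients = []
    while b != 0:
        quotients.append(a // b)
        a, b = b, a % b
    return quotients

def evaluate(quotients):
    """Collapse [q0; q1, ..., qN] back into a single fraction."""
    value = Fraction(quotients[-1])
    for q in reversed(quotients[:-1]):
        value = q + 1 / value
    return value

assert continued_fraction(1071, 462) == [2, 3, 7]       # the worked example above
assert evaluate([2, 3, 7]) == Fraction(1071, 462)       # = 2 + 1/(3 + 1/7)
</syntaxhighlight>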
| |
| | |
| ===Factorization algorithms===
| |
| Calculating a greatest common divisor is an essential step in several [[integer factorization]] algorithms,<ref>{{cite book | author = Crandall R, Pomerance C | year = 2001 | title = Prime Numbers: A Computational Perspective | edition = 1st | publisher = Springer-Verlag | location = New York | isbn = 0-387-94777-9 | pages = 225–349}}</ref> such as [[Pollard's rho algorithm]],<ref>Knuth, pp. 369–371.</ref> [[Shor's algorithm]],<ref>{{cite journal | author = [[Peter Shor|Shor PW]] | year = 1997 | title = Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer | journal = SIAM Journal on Scientific and Statistical Computing | volume = 26 | pages = 1484}}</ref> [[Dixon's factorization method]]<ref>{{cite journal | author = Dixon JD | year = 1981 | title = Asymptotically fast factorization of integers | journal = Math. Comput. | volume = 36 | pages = 255–260 | doi = 10.2307/2007743 | jstor = 2007743 | issue = 153}}</ref> and the [[Lenstra elliptic curve factorization]].<ref>{{cite journal | author = [[Hendrik Lenstra|Lenstra Jr. HW]] | year = 1987 | title = Factoring integers with elliptic curves | journal = Annals of Mathematics | volume = 126 | pages = 649–673 | doi = 10.2307/1971363 | jstor = 1971363 | issue = 3}}</ref> The Euclidean algorithm may be used to find this GCD efficiently. [[Continued fraction factorization]] uses continued fractions, which are determined using Euclid's algorithm.<ref>Knuth, pp. 380–384.</ref>
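As one illustration of the role played by the GCD, here is a minimal sketch of Pollard's rho method; the number 8051 and the iteration ''x''² + 1 are conventional illustrative choices rather than anything prescribed by the cited references.

<syntaxhighlight lang="python">
from math import gcd

def pollard_rho(n, c=1):
    """Pollard's rho factor search: the gcd of n and |x - y| exposes a factor of n."""
    x, y, d = 2, 2, 1
    while d == 1:
        x = (x * x + c) % n          # "tortoise" iterate
        y = (y * y + c) % n          # "hare" iterate, advanced two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d                          # a nontrivial factor, or n if the cycle happened to fail

assert pollard_rho(8051) in (83, 97)  # 8051 = 83 * 97
</syntaxhighlight>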
| |
| | |
| ==Algorithmic efficiency==
| |
| [[Image:Euclidean algorithm running time X Y.png|thumb|alt="A set of colored lines radiating outwards from the origin of an ''x''-''y'' coordinate system. Each line corresponds to a set of number pairs requiring the same number of steps in the Euclidean algorithm."|Number of steps in the Euclidean algorithm for gcd(''x'',''y''). Red points indicate relatively few steps (quick), whereas yellow, green and blue points indicate successively more steps (slow). The largest blue area follows the line ''y'' = Φ''x'', where Φ represents the [[Golden ratio]].]]
| |
| | |
| The computational efficiency of Euclid's algorithm has been studied thoroughly.<ref>Knuth, pp. 339–364.</ref> This efficiency can be described by the number of division steps the algorithm requires, multiplied by the computational expense of each step. The first known analysis of Euclid's algorithm is due to A.-A.-L. Reynaud in 1811,<ref>{{cite book | author = Reynaud A.-A.-L. | year = 1811 | title = Traité d'arithmétique à l'usage des élèves qui se destinent à l'École Polytechnique | publisher = Courcier }}</ref> who showed that the number of division steps on input (''u'', ''v'') is bounded by ''v''; later he improved this to ''v''/2 + 2. Later, in 1841, P.-J.-E. Finck showed<ref>{{cite book | author = Finck P.-J.-E. | title = Traité élémentaire d'arithmétique à l'usage des candidats aux écoles spéciales | publisher = Derivaux | year = 1841}}</ref> that the number of division steps is at most 2 log<sub>2</sub> ''v'' + 1, and hence Euclid's algorithm runs in time polynomial in the size of the input (see also Shallit 1994).<ref>{{cite journal | author = [[Jeffrey Shallit|Shallit J]] | year = 1994 | title = Origins of the analysis of the Euclidean algorithm | journal = Historia Math. | volume = 21 | pages = 401–419}}</ref> His analysis was refined by [[Gabriel Lamé]] in 1844,<ref>{{cite journal | author = [[Gabriel Lamé|Lamé G]] | year = 1844 | title = Note sur la limite du nombre des divisions dans la recherche du plus grand commun diviseur entre deux nombres entiers | journal = Comptes Rendus Acad. Sci. | volume = 19 | pages = 867–870}}</ref> who showed that the number of steps required for completion is never more than five times the number ''h'' of base-10 digits of the smaller number ''b''.<ref>{{cite journal | author = Grossman H | year = 1924 | title = On the Number of Divisions in Finding a G.C.D | journal = The American Mathematical Monthly | volume = 31 | page = 443 | doi = 10.2307/2298146 | jstor = 2298146 | issue = 9}}</ref><ref>{{cite book | author = Honsberger R | year = 1976 | title = Mathematical Gems II | publisher = The [[Mathematical Association of America]] | isbn = 0-88385-302-7 | pages = 54–57}}</ref> Since the computational expense of each step is also typically of order ''h'', the overall expense grows like ''h''<sup>2</sup>.
| |
| | |
| ===Number of steps===
| |
| The number of steps to calculate the GCD of two natural numbers, ''a'' and ''b'', may be denoted by ''T''(''a'', ''b'').<ref name="Knuth, p. 344">Knuth, p. 344.</ref> If ''g'' is the GCD of ''a'' and ''b'', then ''a'' = ''mg'' and ''b'' = ''ng'' for two coprime numbers ''m'' and ''n''. Then
| |
| | |
| : {{math|1=''T''(''a'', ''b'') = ''T''(''m'', ''n'')}}
| |
| | |
| as may be seen by dividing all the steps in the Euclidean algorithm by ''g''.<ref>{{Harvnb|Ore|1948|p=45}}</ref> By the same argument, the number of steps remains the same if ''a'' and ''b'' are multiplied by a common factor ''w'': ''T''(''a'', ''b'') = ''T''(''wa'', ''wb''). Therefore, the number of steps ''T'' may vary dramatically between neighboring pairs of numbers, such as T(''a'', ''b'') and T(''a'', ''b'' + 1), depending on the size of the two GCDs.
| |
| | |
| The recursive nature of the Euclidean algorithm gives another equation
| |
| | |
| : {{math|1=''T''(''a'', ''b'') = 1 + ''T''(''b'', ''r''<sub>0</sub>) = 2 + ''T''(''r''<sub>0</sub>, ''r''<sub>1</sub>) = … = ''N'' + ''T''(''r''<sub>''N''−2</sub>, ''r''<sub>''N''−1</sub>) = ''N'' + 1}}
| |
| | |
| where ''T''(''x'', 0) = 0 by assumption.<ref name="Knuth, p. 344"/>
| |
| | |
| ====Worst-case number of steps====
| |
| | |
| If the Euclidean algorithm requires ''N'' steps for a pair of natural numbers ''a'' > ''b'' > 0, the smallest values of ''a'' and ''b'' for which this is true are the [[Fibonacci numbers]] ''F''<sub>''N''+2</sub> and ''F''<sub>''N''+1</sub>, respectively.<ref name="Knuth, p. 343">Knuth, p. 343.</ref> This can be shown by [[mathematical induction|induction]].<ref>{{Harvnb|Mollin|2008|p=21}}</ref> If ''N'' = 1, ''b'' divides ''a'' with no remainder; the smallest natural numbers for which this is true are ''b'' = 1 and ''a'' = 2, which are ''F''<sub>2</sub> and ''F''<sub>3</sub>, respectively. Now assume that the result holds for all values of ''N'' up to ''M'' − 1. The first step of the ''M''-step algorithm is ''a'' = ''q''<sub>0</sub>''b'' + ''r''<sub>0</sub>, and the second step is ''b'' = ''q''<sub>1</sub>''r''<sub>0</sub> + ''r''<sub>1</sub>. Since the algorithm is recursive, it requires ''M'' − 1 steps to find gcd(''b'', ''r''<sub>0</sub>), and the smallest values of ''b'' and ''r''<sub>0</sub> are ''F''<sub>''M''+1</sub> and ''F''<sub>''M''</sub>. The smallest value of ''a'' is therefore attained when ''q''<sub>0</sub> = 1, which gives ''a'' = ''b'' + ''r''<sub>0</sub> = ''F''<sub>''M''+1</sub> + ''F''<sub>''M''</sub> = ''F''<sub>''M''+2</sub>. This proof, published by [[Gabriel Lamé]] in 1844, represents the beginning of [[computational complexity theory]],<ref>{{Harvnb|LeVeque|1996|p=35}}</ref> and also the first practical application of the Fibonacci numbers.<ref name="Knuth, p. 343"/>
| |
| | |
| This result suffices to show that the number of steps in Euclid's algorithm can never be more than five times the number of base-10 digits of the smaller number ''b''.<ref>{{Harvnb|Mollin|2008|pp=21–22}}</ref> For if the algorithm requires ''N'' steps, then ''b'' is greater than or equal to ''F''<sub>''N''+1</sub> which in turn is greater than or equal to ''φ''<sup>''N''−1</sup>, where ''φ'' is the [[golden ratio]]. Since ''b'' ≥ ''φ''<sup>''N''−1</sup>, then ''N'' − 1 ≤ log<sub>''φ''</sub>''b''. Since log<sub>10</sub>''φ'' > 1/5, (''N'' − 1)/5 < log<sub>10</sub>''φ'' log<sub>''φ''</sub>''b'' = log<sub>10</sub>''b''. Thus, ''N'' ≤ 5 log<sub>10</sub>''b''. Hence, the Euclidean algorithm never requires more than [[Big O notation|''O''(''h'')]] divisions, where ''h'' is the number of digits in the smaller number ''b''.
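These bounds are easy to check empirically; the following sketch counts division steps and confirms the Fibonacci worst case for small ''N''. The helper names and the range of ''N'' tested are arbitrary choices.

<syntaxhighlight lang="python">
def euclid_steps(a, b):
    """Number of division steps T(a, b) taken by the Euclidean algorithm."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# T(a, b) is unchanged when both arguments are scaled by a common factor
assert euclid_steps(1071, 462) == euclid_steps(1071 // 21, 462 // 21)

# consecutive Fibonacci numbers F_{N+2}, F_{N+1} are the smallest worst-case inputs
fib = [1, 1]                               # fib[0] = F_1, fib[1] = F_2, ...
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])
for n in range(1, 15):
    a, b = fib[n + 1], fib[n]              # a = F_{n+2}, b = F_{n+1}
    assert euclid_steps(a, b) == n         # exactly n steps, in line with Lamé's analysis
</syntaxhighlight>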
| |
| | |
| ====Average number of steps====
| |
| The average number of steps taken by the Euclidean algorithm has been defined in three different ways. The first definition is the average time ''T''(''a'') required to calculate the GCD of a given number ''a'' and a smaller natural number ''b'' chosen with equal probability from the integers 0 to ''a'' − 1<ref name="Knuth, p. 344"/>
| |
| | |
| :<math alt="The function T of a equals one over a times the sum of the function T of a comma b over all nonnegative values of b less than a." >
| |
| T(a) = \frac{1}{a} \sum_{0 \leq b<a} T(a, b).
| |
| </math>
| |
| | |
| However, since ''T''(''a'', ''b'') fluctuates dramatically with the GCD of the two numbers, the averaged function ''T''(''a'') is likewise "noisy".<ref>Knuth, p. 353.</ref>
| |
| | |
| To reduce this noise, a second average τ(''a'') is taken over all numbers coprime with ''a''
| |
| | |
| :<math alt="The function tau of a equals one over the function phi of a times the sum of the function T of a comma b over all nonnegative values of b less than a, where b is coprime to a." >
| |
| \tau(a) = \frac{1}{\varphi(a)} \sum_{0 \leq b<a, \mathrm{gcd}(a, b) = 1} T(a, b).
| |
| </math>
| |
| | |
| There are ''φ''(''a'') coprime integers less than ''a'', where ''φ'' is [[Euler's totient function]]. This tau average grows smoothly with ''a''<ref>Knuth, p. 357.</ref><ref>{{cite journal | author = Tonkov T | year = 1974 | title = On the average length of finite continued fractions | journal = Acta arithmetica | volume = 26 | pages = 47–57}}</ref>
| |
| | |
| :<math>\tau(a) = \frac{12}{\pi^{2}}\ln 2 \ln a + C + O(a^{-1/6-\epsilon})</math>
| |
| | |
| with the residual error being of order ''a''<sup>−(1/6) + ε</sup>, where ε is arbitrarily small. The constant ''C'' (''Porter's Constant''<ref>http://mathworld.wolfram.com/PortersConstant.html</ref>) in this formula equals
| |
| :<math>C= -\frac{1}{2} + \frac{6 \ln 2}{\pi^{2}}\left(4\gamma -\frac{24}{\pi^{2}}\zeta'(2) + 3\ln 2 - 2\right) \approx 1.467</math>
| |
| | |
| where γ is the [[Euler–Mascheroni constant]] and ζ' is the [[derivative]] of the [[Riemann zeta function]].<ref>{{cite journal | author = Porter JW | year = 1975 | title = On a Theorem of Heilbronn | journal = Mathematika | volume = 22 | pages = 20–28 | doi = 10.1112/S0025579300004459}}</ref><ref>{{cite journal | author = Knuth DE | year = 1976 | title = Evaluation of Porter's Constant | journal = Computers and Mathematics with Applications | volume = 2 | pages = 137–139 | doi = 10.1016/0898-1221(76)90025-0 | issue = 2}}</ref> The leading coefficient (12/π<sup>2</sup>) ln 2 was determined by two independent methods.<ref>{{cite journal | author = Dixon JD | year = 1970 | title = The Number of Steps in the Euclidean Algorithm | journal = J. Number Theory | volume = 2 | pages = 414–422 | doi = 10.1016/0022-314X(70)90044-2 | issue = 4}}</ref><ref>{{cite book | author = Heilbronn HA | year = 1969 | chapter = On the Average Length of a Class of Finite Continued Fractions | title = Number Theory and Analysis | editor = Paul Turán | publisher = Plenum | location = New York | pages = 87–96 | lccn = 688991}}</ref>
| |
| | |
| Since the first average can be calculated from the tau average by summing over the divisors ''d'' of ''a''<ref>Knuth, p. 354.</ref>
| |
| | |
| :<math alt="The function T of a equals one over a times the sum of an argument summed over all divisors d of a. The argument equals the function phi of d multiplied by the function tau of d.">
| |
| T(a) = \frac{1}{a} \sum_{d | a} \varphi(d) \tau(d)
| |
| </math>
| |
| | |
| it can be approximated by the formula<ref name="Norton_1990">{{cite journal | author = Norton GH | year = 1990 | title = On the Asymptotic Analysis of the Euclidean Algorithm | journal = Journal of Symbolic Computation | volume = 10 | pages = 53–58 | doi = 10.1016/S0747-7171(08)80036-3}}</ref>
| |
| | |
| :<math>T(a) \approx C + \frac{12}{\pi^{2}} \ln 2 \left(\ln a - \sum_{d | a} \frac{\Lambda(d)}{d}\right)</math>
| |
| | |
| where Λ(''d'') is the [[von Mangoldt function]].<ref>Knuth, p. 355.</ref>
| |
| | |
| A third average ''Y''(''n'') is defined as the mean number of steps required when both ''a'' and ''b'' are chosen randomly (with uniform distribution) from 1 to ''n''<ref name="Norton_1990" />
| |
| | |
| :<math alt="The function Y of n equals one over n squared times the double sum of the function T of a comma b for all values of a and b ranging from 1 to n. This equals one over n times the sum of the function T of a over all values of a ranging from 1 to n.">
| |
| Y(n) = \frac{1}{n^{2}} \sum_{a=1}^n \sum_{b=1}^n T(a, b) = \frac{1}{n} \sum_{a=1}^n T(a).
| |
| </math>
| |
| | |
| Substituting the approximate formula for ''T''(''a'') into this equation yields an estimate for ''Y''(''n'')<ref>Knuth, p. 356.</ref>
| |
| :<math>Y(n) \approx \frac{12}{\pi^{2}} \ln 2 \ln n + 0.06.</math>
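This estimate can be checked numerically. The following sketch compares a random-sample average with the formula above; the sample size, seed and bound ''n'' = 10<sup>5</sup> are arbitrary choices, and the two values are only expected to agree approximately.

<syntaxhighlight lang="python">
import random
from math import log, pi

def euclid_steps(a, b):
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

random.seed(1)
n = 10**5
samples = [euclid_steps(random.randint(1, n), random.randint(1, n)) for _ in range(20000)]
empirical = sum(samples) / len(samples)
predicted = (12 * log(2) / pi**2) * log(n) + 0.06      # the estimate for Y(n) given above
print(round(empirical, 2), round(predicted, 2))        # the two values should be close
</syntaxhighlight>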
| |
| | |
| ===Computational expense per step===
| |
| In each step ''k'' of the Euclidean algorithm, the quotient ''q''<sub>''k''</sub> and remainder ''r''<sub>''k''</sub> are computed for a given pair of integers ''r''<sub>''k''−2</sub> and ''r''<sub>''k''−1</sub>
| |
| | |
| : {{math|1=''r''<sub>''k''−2</sub> = ''q''<sub>''k''</sub> ''r''<sub>''k''−1</sub> + ''r''<sub>''k''</sub>.}}
| |
| | |
| The computational expense per step is associated chiefly with finding ''q''<sub>''k''</sub>, since the remainder ''r''<sub>''k''</sub> can be calculated quickly from ''r''<sub>''k''−2</sub>, ''r''<sub>''k''−1</sub>, and ''q''<sub>''k''</sub>
| |
| | |
| : {{math|1=''r''<sub>''k''</sub> = ''r''<sub>''k''−2</sub> − ''q''<sub>''k''</sub> ''r''<sub>''k''−1</sub>.}}
| |
| | |
| The computational expense of dividing ''h''-bit numbers scales as ''O''(''h''(''ℓ''+1)), where ''ℓ'' is the length of the quotient.<ref>Knuth, pp. 257–261.</ref>
| |
| | |
| For comparison, Euclid's original subtraction-based algorithm can be much slower. A single integer division with quotient ''q'' is equivalent to ''q'' subtractions. If the ratio of ''a'' and ''b'' is very large, the quotient is large and many subtractions will be required. On the other hand, it has been shown that the quotients are very likely to be small integers. The probability of a given quotient ''q'' is approximately log<sub>2</sub>|''u''/(''u'' − 1)| where ''u'' = (''q'' + 1)<sup>2</sup>.<ref>Knuth, p. 352.</ref> For illustration, the probability of a quotient of 1, 2, 3, or 4 is roughly 41.5%, 17.0%, 9.3%, and 5.9%, respectively. Since the operation of subtraction is faster than division, particularly for large numbers,<ref>{{cite book | author = Wagon S | year = 1999 | title = Mathematica in Action | publisher = Springer-Verlag | location = New York | isbn = 0-387-98252-3 | pages = 335–336}}</ref> the subtraction-based Euclid's algorithm is competitive with the division-based version.<ref>{{Harvnb|Cohen|1993|p=14}}</ref> This is exploited in the [[binary GCD algorithm|binary version]] of Euclid's algorithm.<ref>{{Harvnb|Cohen|1993|pp=14–15, 17–18}}</ref>
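For reference, the subtraction-only form looks as follows; this is a bare sketch that assumes both arguments are positive integers.

<syntaxhighlight lang="python">
def gcd_by_subtraction(a, b):
    """Euclid's original form: repeatedly replace the larger number by the difference."""
    while a != b:
        if a > b:
            a -= b          # one division step with quotient q corresponds to q subtractions
        else:
            b -= a
    return a

assert gcd_by_subtraction(1071, 462) == 21
</syntaxhighlight>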
| |
| | |
| Combining the estimated number of steps with the estimated computational expense per step shows that the running time of Euclid's algorithm grows quadratically (''h''<sup>2</sup>) with the average number of digits ''h'' in the initial two numbers ''a'' and ''b''. Let ''h''<sub>0</sub>, ''h''<sub>1</sub>, …, ''h''<sub>''N''−1</sub> represent the number of digits in the successive remainders ''r''<sub>0</sub>, ''r''<sub>1</sub>, …, ''r''<sub>''N''−1</sub>. Since the number of steps ''N'' grows linearly with ''h'', the running time is bounded by
| |
| | |
| :<math alt="The order of the sum over all i less than N of h sub i times parenthesis h sub i minus h sub i plus one plus 2 close parenthesis is a subset of the order of h times the sum over all i less than N of h sub i minus h sub i plus one plus 2, which in turn is a subset of the order of h times h sub zero plus 2 N, which in turn is a subset of the order of h squared.">
| |
| O\Big(\sum_{i<N}h_i(h_i-h_{i+1}+2)\Big)\subseteq O\Big(h\sum_{i<N}(h_i-h_{i+1}+2)\Big)\subseteq O(h(h_0+2N))\subseteq O(h^2).</math>
| |
| | |
| ===Efficiency of alternative methods===
| |
| Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. For comparison, the efficiency of alternatives to Euclid's algorithm may be determined.
| |
| | |
| One inefficient approach to finding the GCD of two natural numbers ''a'' and ''b'' is to calculate all their common divisors; the GCD is then the largest common divisor. The common divisors can be found by dividing both numbers by successive integers from 2 to the smaller number ''b''. The number of steps of this approach grows linearly with ''b'', or exponentially in the number of digits. Another inefficient approach is to find the prime factors of one or both numbers. As noted [[#Greatest common divisor|above]], the GCD equals the product of the prime factors shared by the two numbers ''a'' and ''b''.<ref name="Schroeder_21" /> Present methods for [[integer factorization|prime factorization]] are also inefficient; many modern cryptography systems even rely on that inefficiency.<ref name="Schroeder_216" />
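The trial-division approach just described can be written directly, which makes its ''O''(''b'') cost apparent; this is a deliberately naive sketch for comparison only.

<syntaxhighlight lang="python">
def gcd_naive(a, b):
    """Largest common divisor found by trial division from 2 up to the smaller number."""
    largest = 1
    for d in range(2, min(a, b) + 1):
        if a % d == 0 and b % d == 0:
            largest = d
    return largest

assert gcd_naive(1071, 462) == 21
</syntaxhighlight>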
| |
| | |
| The [[binary GCD algorithm]] is an efficient alternative that substitutes division with faster operations by exploiting the [[binary numeral system|binary]] representation used by computers.<ref>Knuth, pp. 321–323.</ref><ref>{{cite journal | author = Stein J | year = 1967 | title = Computational problems associated with Racah algebra | journal = Journal of Computational Physics | volume = 1 | pages = 397–405 | doi = 10.1016/0021-9991(67)90047-2 | issue = 3}}</ref> However, this alternative also scales like [[big-O notation|''O''(''h''²)]]. It is generally faster than the Euclidean algorithm on real computers, even though it scales in the same way.<ref name="Crandall_2001">{{cite book | author = Crandall R, Pomerance C | year = 2001 | title = Prime Numbers: A Computational Perspective | edition = 1st | publisher = Springer-Verlag | location = New York | isbn = 0-387-94777-9 | pages = 77–79, 81–85, 425–431}}</ref> Additional efficiency can be gleaned by examining only the leading digits of the two numbers ''a'' and ''b''.<ref>Knuth, p. 328.</ref><ref>{{cite journal | author = Lehmer DH | year = 1938 | title = Euclid's Algorithm for Large Numbers | journal = The American Mathematical Monthly | volume = 45 | pages = 227–233 | doi = 10.2307/2302607 | jstor = 2302607 | issue = 4}}</ref> The binary algorithm can be extended to other bases (''k''-ary algorithms),<ref>{{cite journal | author = Sorenson J | year = 1994 | title = Two fast GCD algorithms | journal = J. Algorithms | volume = 16 | pages = 110–144 | doi = 10.1006/jagm.1994.1006}}</ref> with up to fivefold increases in speed.<ref>{{cite journal | author = Weber K | year = 1995 | title = The accelerated GCD algorithm | journal = ACM Trans. Math. Soft. | volume = 21 | pages = 111–122 | doi = 10.1145/200979.201042}}</ref>
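A compact sketch of the binary algorithm is shown below; it follows the usual shift-and-subtract formulation (often credited to Stein) rather than any specific pseudocode from the cited sources.

<syntaxhighlight lang="python">
def binary_gcd(a, b):
    """Binary GCD: uses only shifts, subtraction and parity tests on nonnegative integers."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:      # factor out the powers of two common to both numbers
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:            # make a odd
        a >>= 1
    while b != 0:
        while b & 1 == 0:        # remove remaining factors of two from b
            b >>= 1
        if a > b:
            a, b = b, a          # keep a <= b so the difference stays nonnegative
        b -= a
    return a << shift

assert binary_gcd(1071, 462) == 21
</syntaxhighlight>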
| |
| | |
| A recursive approach for very large integers (with more than 25,000 digits) leads to ''subquadratic integer GCD algorithms'',<ref>{{cite book | author = Aho A, Hopcroft J, Ullman J | year = 1974 | title = The Design and Analysis of Computer Algorithms | publisher = Addison–Wesley | location = New York | pages = 300–310 | isbn = 0-201-00029-6}}</ref> such as those of Schönhage,<ref>{{cite journal | author = Schönhage A | title = Schnelle Berechnung von Kettenbruchentwicklungen | journal = Acta Informatica | volume = 1 | pages = 139–144 | doi = 10.1007/BF00289520 | year = 1971 | issue = 2}}</ref><ref>{{cite book | author = Cesari G | year = 1998 | chapter = Parallel implementation of Schönhage's integer GCD algorithm | title = Algorithmic Number Theory: Proc. ANTS-III, Portland, OR | editor = G. Buhler | publisher = Springer-Verlag | location = New York | pages = 64–76}} Volume 1423 in ''Lecture notes in Computer Science''.</ref> and Stehlé and Zimmermann.<ref>{{cite book | author = Stehlé D, Zimmermann P | year = 2005 | chapter = Gal's accurate tables method revisited | title = Proceedings of the 17th IEEE Symposium on Computer Arithmetic (ARITH-17) | publisher = [[IEEE Computer Society Press]] | location = Los Alamitos, CA}}</ref> These algorithms exploit the 2×2 matrix form of the Euclidean algorithm given [[#Matrix method|above]]. These subquadratic methods generally scale as {{nowrap|''O''(''h'' (log ''h'')<sup>2</sup> (log log ''h'')).}}<ref name="Crandall_2001" /><ref>{{cite journal | author = Möller N | title = On Schönhage's algorithm and subquadratic integer gcd computation | journal = Mathematics of Computation | volume = 77 | pages = 589–607 | doi = 10.1090/S0025-5718-07-02017-0 | year = 2008 | url=http://www.lysator.liu.se/~nisse/archive/sgcd.pdf | issue = 261}}</ref>
| |
| | |
| ==Other number systems==
| |
| As described above, the Euclidean algorithm is used to find the greatest common divisor of two natural numbers (positive integers). However, it may be generalized to the real numbers, and to more exotic number systems such as [[polynomial]]s, [[quadratic integer]]s and [[Hurwitz quaternion]]s. In the latter cases, the Euclidean algorithm is used to demonstrate the crucial property of unique factorization, i.e., that such numbers can be factored uniquely into [[irreducible element]]s, the counterparts of prime numbers. Unique factorization is essential to many proofs of number theory.
| |
| | |
| ===Rational and real numbers===
| |
| Euclid's algorithm can be applied to [[real number]]s, as described by Euclid in Book 10 of his ''[[Euclid's Elements|Elements]]''. The goal of the algorithm is to identify a real number ''g'' such that two given real numbers, ''a'' and ''b'', are integer multiples of it: ''a'' = ''mg'' and ''b'' = ''ng'', where ''m'' and ''n'' are [[integer]]s.<ref name="Weil_1983" /> This identification is equivalent to finding an [[integer relation algorithm|integer relation]] among the real numbers ''a'' and ''b''; that is, it determines integers ''s'' and ''t'' such that ''sa'' + ''tb'' = 0. Euclid uses this algorithm to treat the question of [[Commensurability (mathematics)|incommensurable lengths]].<ref>{{cite book | author = Boyer CB, Merzbach UC | year = 1991 | title = A History of Mathematics | edition = 2nd | publisher = Wiley | location = New York | isbn = 0-471-54397-7 | pages = 116–117}}</ref><ref>{{cite book | author = [[Florian Cajori|Cajori F]] | year = 1894 | title = A History of Mathematics | publisher = Macmillan | location = New York | page = 70 | isbn = 0-486-43874-0}}</ref>
| |
| | |
| The real-number Euclidean algorithm differs from its integer counterpart in two respects. First, the remainders ''r''<sub>''k''</sub> are real numbers, although the quotients ''q''<sub>''k''</sub> are integers as before. Second, the algorithm is not guaranteed to end in a finite number ''N'' of steps. If it does, the fraction ''a''/''b'' is a rational number, i.e., the ratio of two integers
| |
| | |
| : {{math|1=''a''/''b'' = ''mg''/''ng'' = ''m''/''n''}}
| |
| | |
| and can be written as a finite continued fraction [''q''<sub>0</sub>; ''q''<sub>1</sub>, ''q''<sub>2</sub>, …, ''q''<sub>''N''</sub>]. If the algorithm does not stop, the fraction ''a''/''b'' is an [[irrational number]] and can be described by an infinite continued fraction [''q''<sub>0</sub>; ''q''<sub>1</sub>, ''q''<sub>2</sub>, …]. Examples of infinite continued fractions are the [[golden ratio]] ''φ'' = [1; 1, 1, …] and the [[square root of 2|square root of two]], √2 = [1; 2, 2, …]. Generally speaking, the algorithm is unlikely to stop, since [[almost all]] ratios ''a''/''b'' of two real numbers are irrational.
| |
| | |
| An infinite continued fraction may be truncated at a step ''k'' [''q''<sub>0</sub>; ''q''<sub>1</sub>, ''q''<sub>2</sub>, …, ''q''<sub>''k''</sub>] to yield an approximation to ''a''/''b'' that improves as ''k'' is increased. The approximation is described by [[convergent (continued fraction)|convergents]] ''m''<sub>''k''</sub>/''n''<sub>''k''</sub>; the numerator and denominator are coprime and obey the recursion
| |
| | |
| :''m''<sub>''k''</sub> = ''q''<sub>''k''</sub> ''m''<sub>''k''−1</sub> + ''m''<sub>''k''−2</sub>
| |
| :''n''<sub>''k''</sub> = ''q''<sub>''k''</sub> ''n''<sub>''k''−1</sub> + ''n''<sub>''k''−2</sub>
| |
| | |
| where ''m''<sub>−1</sub> = ''n''<sub>−2</sub> = 1 and ''m''<sub>−2</sub> = ''n''<sub>−1</sub> = 0 are the initial values of the recursion. The convergent ''m''<sub>''k''</sub>/''n''<sub>''k''</sub> is the best [[rational number]] approximation to ''a''/''b'' with denominator ''n''<sub>''k''</sub>:
| |
| | |
| :<math alt="The absolute value of the difference of two ratios (a over b minus m sub k over n sub k) is less than one over n sub k squared.">
| |
| \left|\frac{a}{b} - \frac{m_k}{n_k}\right| < \frac{1}{n_k^2}.
| |
| </math>
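The recursion for the convergents translates directly into code. The golden-ratio quotients [1; 1, 1, …] mentioned above make a convenient test, since its convergents are ratios of successive Fibonacci numbers; the function name is illustrative.

<syntaxhighlight lang="python">
from fractions import Fraction

def convergents(quotients):
    """Successive convergents m_k / n_k of the continued fraction [q0; q1, q2, ...]."""
    m_prev, m_prev2 = 1, 0            # m_{-1} = 1, m_{-2} = 0
    n_prev, n_prev2 = 0, 1            # n_{-1} = 0, n_{-2} = 1
    out = []
    for q in quotients:
        m = q * m_prev + m_prev2
        n = q * n_prev + n_prev2
        out.append(Fraction(m, n))
        m_prev, m_prev2 = m, m_prev
        n_prev, n_prev2 = n, n_prev
    return out

# the golden ratio [1; 1, 1, ...] yields 1, 2, 3/2, 5/3, 8/5, 13/8, ...
assert convergents([1] * 6)[-1] == Fraction(13, 8)
assert convergents([2, 3, 7])[-1] == Fraction(1071, 462)   # = 51/22, from the earlier example
</syntaxhighlight>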
| |
| | |
| ===Polynomials===
| |
| {{main|Greatest common divisor of two polynomials}}
| |
| Polynomials in a single variable ''x'' can be added, multiplied and factored into [[irreducible polynomial]]s, which are the analogs of the prime numbers for integers. The greatest common divisor polynomial ''g''(''x'') of two polynomials ''a''(''x'') and ''b''(''x'') is defined as the product of their shared irreducible polynomials, which can be identified using the Euclidean algorithm.<ref name="Lang_1984" >{{cite book | author = [[Serge Lang|Lang S]] | year = 1984 | title = Algebra | edition = 2nd | publisher = Addison–Wesley | location = Menlo Park, CA | isbn = 0-201-05487-6 | pages = 190–194}}</ref> The basic procedure is similar to that for integers. At each step ''k'', a quotient polynomial ''q''<sub>''k''</sub>(''x'') and a remainder polynomial ''r''<sub>''k''</sub>(''x'') are identified to satisfy the recursive equation
| |
| | |
| : {{math|1=''r''<sub>''k''−2</sub>(''x'') = ''q''<sub>''k''</sub>(''x'') ''r''<sub>''k''−1</sub>(''x'') + ''r''<sub>''k''</sub>(''x'')}}
| |
| | |
| where ''r''<sub>−2</sub>(''x'') = ''a''(''x'') and ''r''<sub>−1</sub>(''x'') = ''b''(''x''). The quotient polynomial is chosen so that the leading term of ''q''<sub>''k''</sub>(''x'') ''r''<sub>''k''−1</sub>(''x'') equals the leading term of ''r''<sub>''k''−2</sub>(''x''); this ensures that the degree of each remainder is smaller than the degree of its predecessor deg[''r''<sub>''k''</sub>(''x'')] < deg[''r''<sub>''k''−1</sub>(''x'')]. Since the degree is a nonnegative integer, and since it decreases with every step, the Euclidean algorithm concludes in a finite number of steps. The final nonzero remainder is the greatest common divisor of the original two polynomials, ''a''(''x'') and ''b''(''x'').<ref>{{Harvnb|Cox|Little|O'Shea|1997|pp=37–46}}</ref>
| |
| | |
| For example, consider the following two quartic polynomials, which each factor into two quadratic polynomials
| |
| | |
| : {{math|1=''a''(''x'') = ''x''<sup>4</sup> − 4''x''<sup>3</sup> + 4 ''x''<sup>2</sup> − 3''x'' + 14 = (''x''<sup>2</sup> − 5''x'' + 7)(''x''<sup>2</sup> + ''x'' + 2)}}
| |
| | |
| and
| |
| | |
| : {{math|1=''b''(''x'') = ''x''<sup>4</sup> + 8''x''<sup>3</sup> + 12''x''<sup>2</sup> + 17''x'' + 6 = (''x''<sup>2</sup> + 7''x'' + 3)(''x''<sup>2</sup> + ''x'' + 2).}}
| |
| | |
| [[polynomial long division|Dividing]] ''a''(''x'') by ''b''(''x'') yields a remainder ''r''<sub>0</sub>(''x'') = ''x''<sup>3</sup> + (2/3) ''x''<sup>2</sup> + (5/3) ''x'' − (2/3). In the next step, ''b''(''x'') is divided by ''r''<sub>0</sub>(''x'') yielding a remainder ''r''<sub>1</sub>(''x'') = ''x''<sup>2</sup> + ''x'' + 2. Finally, dividing ''r''<sub>0</sub>(''x'') by ''r''<sub>1</sub>(''x'') yields a zero remainder, indicating that ''r''<sub>1</sub>(''x'') is the greatest common divisor polynomial of ''a''(''x'') and ''b''(''x''), consistent with their factorization.
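The same computation can be reproduced with exact rational arithmetic. The following sketch represents a polynomial as a list of coefficients from the highest power down; the representation and helper names are illustrative choices, not a standard library interface.

<syntaxhighlight lang="python">
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of a(x) divided by b(x); coefficients are listed from the highest power down."""
    r = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(r) >= len(b):
        factor = r[0] / b[0]                       # cancel the leading term of r
        r = [rc - factor * bc
             for rc, bc in zip(r, b + [Fraction(0)] * (len(r) - len(b)))]
        r = r[1:]                                  # the leading coefficient is now zero
        while r and r[0] == 0:                     # drop any further leading zeros
            r = r[1:]
    return r

def poly_gcd(a, b):
    """Monic greatest common divisor of two polynomials."""
    while b:
        a, b = b, poly_rem(a, b)
    return [c / a[0] for c in a]

a = [1, -4, 4, -3, 14]      # x^4 - 4x^3 + 4x^2 - 3x + 14
b = [1, 8, 12, 17, 6]       # x^4 + 8x^3 + 12x^2 + 17x + 6
assert poly_gcd(a, b) == [1, 1, 2]    # x^2 + x + 2, matching the factorizations above
</syntaxhighlight>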
| |
| | |
| Many of the applications described above for integers carry over to polynomials.<ref>{{Harvnb|Schroeder|2005|pp=254–259}}</ref> The Euclidean algorithm can be used to solve linear Diophantine equations and Chinese remainder problems for polynomials; continued fractions of polynomials can also be defined.
| |
| | |
| The polynomial Euclidean algorithm has other applications as well, such as [[Sturm chain]]s, a method for counting the number of real roots of a polynomial within a given interval on the real axis. This has applications in several areas, such as the [[Routh–Hurwitz stability criterion]] in [[control theory]].
| |
| | |
| Finally, the coefficients of the polynomials need not be drawn from integers, real numbers or even the complex numbers. For example, the coefficients may be drawn from a general field, such as the finite fields GF(''p'') described above. The corresponding conclusions about the Euclidean algorithm and its applications hold even for such polynomials.<ref name="Lang_1984" />
| |
| | |
| ===Gaussian integers===
| |
| [[File:Gaussian primes.png|thumb|alt="A set of dots lying within a circle. The pattern of dots has fourfold symmetry, i.e., rotations by 90 degrees leave the pattern unchanged. The pattern can also be mirrored about four lines passing through the center of the circle: the vertical and horizontal axes, and the two diagonal lines at ±45 degrees."|Distribution of Gaussian primes ''u'' + ''vi'' in the complex plane, with norms ''u''<sup>2</sup> + ''v''<sup>2</sup> less than 500]]
| |
| | |
| The Gaussian integers are [[complex number]]s of the form α = ''u'' + ''vi'', where ''u'' and ''v'' are ordinary [[integer]]s and ''i'' is the [[imaginary unit|square root of negative one]].<ref name="Stillwell_2003" /> By defining an analog of the Euclidean algorithm, Gaussian integers can be shown to be uniquely factorizable, by the argument [[#Bézout's identity|above]].<ref name="Gauss_1832">{{cite journal | author = [[Carl Friedrich Gauss|Gauss CF]] | year = 1832 | title = Theoria residuorum biquadraticorum | journal = Comm. Soc. Reg. Sci. Gött. Rec. | volume = 4}} See also ''Werke'', '''2''':67–148.</ref> This unique factorization is helpful in many applications, such as deriving all [[Pythagorean triple]]s or proving [[Fermat's theorem on sums of two squares]].<ref name="Stillwell_2003">{{Harvnb|Stillwell|2003|pp=101–116}}</ref> In general, the Euclidean algorithm is convenient in such applications, but not essential; for example, the theorems can often be proven by other arguments.
| |
| | |
| The Euclidean algorithm developed for two Gaussian integers α and β is nearly the same as that for normal integers, but differs in two respects. As before, the task at each step ''k'' is to identify a quotient ''q''<sub>''k''</sub> and a remainder ''r''<sub>''k''</sub> such that
| |
| | |
| : {{math|1=''r''<sub>''k''</sub> = ''r''<sub>''k''−2</sub> − ''q''<sub>''k''</sub> ''r''<sub>''k''−1</sub>}}
| |
| | |
| where ''r''<sub>−2</sub> = α, ''r''<sub>−1</sub> = β, and every remainder is strictly smaller than its predecessor, |''r''<sub>''k''</sub>| < |''r''<sub>''k''−1</sub>|. The first difference is that the quotients and remainders are themselves Gaussian integers, and thus are [[complex number]]s. The quotients ''q''<sub>''k''</sub> are generally found by rounding the real and imaginary parts of the exact ratio (such as the complex number α/β) to the nearest integers. The second difference lies in the necessity of defining how one complex remainder can be "smaller" than another. To do this, a [[norm (mathematics)|norm function]] ''f''(''u'' + ''v''i) = ''u''<sup>2</sup> + ''v''<sup>2</sup> is defined, which converts every Gaussian integer ''u'' + ''vi'' into a normal integer. After each step ''k'' of the Euclidean algorithm, the norm of the remainder ''f''(''r''<sub>''k''</sub>) is smaller than the norm of the preceding remainder, ''f''(''r''<sub>''k''−1</sub>). Since the norm is a nonnegative integer and decreases with every step, the Euclidean algorithm for Gaussian integers ends in a finite number of steps. The final nonzero remainder is the gcd(α,β), the Gaussian integer of largest norm that divides both α and β; it is unique up to multiplication by a unit, ±1 or ±''i''.
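A sketch using exact integer arithmetic is given below; Gaussian integers are represented as pairs (''u'', ''v'') standing for ''u'' + ''vi'', and the example values 8 + ''i'' and 3 + ''i'' are illustrative rather than taken from the cited sources.

<syntaxhighlight lang="python">
def gaussian_gcd(alpha, beta):
    """Euclidean algorithm for Gaussian integers, given as pairs (u, v) meaning u + v*i."""
    def mul(x, y):                            # Gaussian-integer multiplication
        (a, b), (c, d) = x, y
        return (a * c - b * d, a * d + b * c)

    def nearest(n, d):                        # nearest integer to n/d for d > 0, exact arithmetic
        return (2 * n + d) // (2 * d)

    while beta != (0, 0):
        norm = beta[0] ** 2 + beta[1] ** 2
        # exact ratio alpha/beta = alpha * conjugate(beta) / norm(beta); round each part
        num = mul(alpha, (beta[0], -beta[1]))
        q = (nearest(num[0], norm), nearest(num[1], norm))
        qb = mul(q, beta)
        alpha, beta = beta, (alpha[0] - qb[0], alpha[1] - qb[1])   # the norm strictly decreases
    return alpha

# example: 2 - i divides both 8 + i and 3 + i, and it is their gcd up to a unit (±1 or ±i)
assert gaussian_gcd((8, 1), (3, 1)) in [(2, -1), (-2, 1), (1, 2), (-1, -2)]
</syntaxhighlight>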
| |
| | |
| Many of the other applications of the Euclidean algorithm carry over to Gaussian integers. For example, it can be used to solve linear Diophantine equations and Chinese remainder problems for Gaussian integers; continued fractions of Gaussian integers can also be defined.
| |
| | |
| ===Euclidean domains===
| |
| A set of elements under two [[binary operation]]s, + and ·, is called a [[Euclidean domain]] if it forms a [[commutative ring]] ''R'' and, roughly speaking, if a generalized Euclidean algorithm can be performed on its elements.<ref>{{Harvnb|Stark|1978|p=290}}</ref><ref>{{Harvnb|Cohn|1962|pp=104–105}}</ref> The two operations of such a ring need not be the addition and multiplication of ordinary arithmetic; rather, they can be more general, such as the operations of a [[group (mathematics)|mathematical group]] or [[monoid]]. Nevertheless, these general operations should respect many of the laws governing ordinary arithmetic, such as [[commutativity]], [[associativity]] and [[distributivity]].
| |
| | |
| The generalized Euclidean algorithm requires a ''Euclidean function'', i.e., a mapping ''f'' from ''R'' into the set of nonnegative integers such that, for any two nonzero elements ''a'' and ''b'' in ''R'', there exist ''q'' and ''r'' in ''R'' such that ''a'' = ''qb'' + ''r'' and ''f''(''r'') < ''f''(''b''). An example of this mapping is the norm function used to order the Gaussian integers [[#Gaussian integers|above]]. The function ''f'' can be the magnitude of the number, or the [[degree of a polynomial]]. The basic principle is that each step of the algorithm reduces ''f'' inexorably; hence, if ''f'' can be reduced only a finite number of times, the algorithm must stop in a finite number of steps. This principle relies heavily on the natural [[well-ordering]] of the non-negative integers; roughly speaking, this requires that every non-empty set of non-negative integers has a smallest member.
| |
| | |
| The [[fundamental theorem of arithmetic]] applies to any Euclidean domain: Any number from a Euclidean domain can be factored uniquely into [[irreducible element]]s. Any Euclidean domain is a [[unique factorization domain]] (UFD), although the converse is not true. The Euclidean domains and the UFD's are subclasses of the [[GCD domain]]s, domains in which a greatest common divisor of two numbers always exists. In other words, a greatest common divisor may exist (for all pairs of elements in a domain), although it may not be possible to find it using a Euclidean algorithm. A Euclidean domain is always a [[principal ideal domain]] (PID), an [[integral domain]] in which every [[ideal (ring theory)|ideal]] is a [[principal ideal]]. Again, the converse is not true: not every PID is a Euclidean domain.
| |
| | |
| The unique factorization of Euclidean domains is useful in many applications. For example, the unique factorization of the Gaussian integers is convenient in deriving formulae for all [[Pythagorean triple]]s and in proving [[Fermat's theorem on sums of two squares]].<ref name="Stillwell_2003" /> Unique factorization was also a key element in an attempted proof of [[Fermat's Last Theorem]] published in 1847 by Gabriel Lamé, the same mathematician who analyzed the efficiency of Euclid's algorithm, based on a suggestion of [[Joseph Liouville]].<ref>{{cite journal | author = Lamé G | year = 1847 | title = Mémoire sur la résolution, en nombres complexes, de l'équation A<sup>n</sup> + B<sup>n</sup> + C<sup>n</sup> = 0 | journal = J. Math. Pures Appl. | volume = 12 | pages = 172–184}}</ref> Lamé's approach required the unique factorization of numbers of the form ''x'' + ω''y'', where ''x'' and ''y'' are integers, and ω = ''e''<sup>2''i''π/''n''</sup> is an ''n''th root of 1, that is, ω<sup>''n''</sup> = 1. Although this approach succeeds for some values of ''n'' (such as ''n''=3, the [[Eisenstein integer]]s), in general such numbers do ''not'' factor uniquely. This failure of unique factorization in some [[cyclotomic field]]s led [[Ernst Kummer]] to the concept of [[ideal number]]s and, later, [[Richard Dedekind]] to [[ideal (ring theory)|ideals]].{{citation needed|date=March 2010}}
| |
| | |
| ====Unique factorization of quadratic integers====
| |
| | |
| [[File:Eisenstein primes.svg|thumb|alt="A set of dots lying within a circle. The pattern of dots has sixfold symmetry, i.e., rotations by 60 degrees leave the pattern unchanged. The pattern can also be mirrored about six lines passing through the center of the circle: the vertical and horizontal axes, and the four diagonal lines at ±30 and ±60 degrees."|Distribution of Eisenstein primes ''u'' + ''v''ω in the complex plane, with norms less than 500. The number ω equals the [[root of unity|cube root of 1]].]]
| |
| | |
| The [[quadratic integer ring]]s are helpful to illustrate Euclidean domains. Quadratic integers are generalizations of the Gaussian integers in which the [[imaginary unit]] ''i'' is replaced by a number ω. Thus, they have the form ''u'' + ''v'' ω, where ''u'' and ''v'' are integers and ω has one of two forms, depending on a parameter ''D''. If ''D'' does not equal a multiple of four plus one, then
| |
| | |
| : {{math|1=ω = √''D''.}}
| |
| | |
| If, however, ''D'' does equal a multiple of four plus one, then
| |
| | |
| : {{math|1=ω = (1 + √''D'')/2.}}
| |
| | |
| If the function ''f'' corresponds to a [[field norm|norm]] function, such as that used to order the Gaussian integers [[#Gaussian integers|above]], then the domain is known as ''[[Norm-Euclidean field|norm-Euclidean]]''. The norm-Euclidean rings of quadratic integers are exactly those where ''D'' = −11, −7, −3, −2, −1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57 or 73.<ref name="Cohn_1962" /><ref>{{cite book | last = LeVeque | first = William J. | authorlink = William J. LeVeque | title = Topics in Number Theory, Volumes I and II | publisher = Dover Publications | location = New York | year = 2002 | origyear = 1956 | isbn = 978-0-486-42539-9 | zbl=1009.11001 | pages=II:57,81}}</ref> The quadratic integers with ''D'' = −1 and −3 are known as the [[Gaussian integer]]s and [[Eisenstein integer]]s, respectively.
| |
| | |
| If ''f'' is allowed to be any Euclidean function, then the list of possible ''D'' values for which the domain is Euclidean is not yet known.<ref name="Clark_1994">{{cite journal | last=Clark | first=David A. | year = 1994 | title = A quadratic field which is Euclidean but not norm-Euclidean | journal = [[Manuscripta Mathematica]] | volume = 83 | pages = 327–330 | doi = 10.1007/BF02567617 | url=http://www.springerlink.com/content/6t9u2440402n1346/ | zbl=0817.11047 }}</ref> The first example of a Euclidean domain that was not norm-Euclidean (with ''D'' = 69) was published in 1994.<ref name="Clark_1994" /> In 1973, Weinberger proved that a quadratic integer ring with ''D'' > 0 is Euclidean if, and only if, it is a [[principal ideal domain]], provided that the [[generalized Riemann hypothesis]] holds.<ref>{{cite journal | author = Weinberger P | chapter = On Euclidean rings of algebraic integers | journal = Proc. Sympos. Pure Math. | volume = 24 | pages = 321–332}}</ref>
| |
| | |
| ===Noncommutative rings===
| |
| It is also possible to apply the Euclidean algorithm to noncommutative rings such as the set of [[Hurwitz quaternion]]s.<ref>{{Harvnb|Stillwell|2003|pp=151–152}}</ref> Let α and β represent two elements from such a ring. They have a common right divisor δ if α = ξδ and β = ηδ for some choice of ξ and η in the ring. Similarly, they have a common left divisor if α = δξ and β = δη for some choice of ξ and η in the ring. Since multiplication is not commutative, there are two versions of the Euclidean algorithm, one for right divisors and one for left divisors. Choosing the right divisors, the first step in finding the gcd(α, β) by the Euclidean algorithm can be written
| |
| | |
| : {{math|1=ρ<sub>0</sub> = α − ψ<sub>0</sub>β = (ξ − ψ<sub>0</sub>η)δ}}
| |
| | |
| where ψ<sub>0</sub> represents the quotient and ρ<sub>0</sub> the remainder. This equation shows that any common right divisor of α and β is likewise a common divisor of the remainder ρ<sub>0</sub>. The analogous equation for the left divisors would be
| |
| | |
| : {{math|1=ρ<sub>0</sub> = α − βψ<sub>0</sub> = δ(ξ − ηψ<sub>0</sub>)}}
| |
| | |
| With either choice, the process is repeated as above until the greatest common right or left divisor is identified. As in the Euclidean domain, the "size" of the remainder ρ<sub>0</sub> must be strictly smaller than that of β, and there must be only a finite number of possible sizes for ρ<sub>0</sub>, so that the algorithm is guaranteed to terminate.
| |
| | |
| Most of the results for the GCD carry over to noncommutative numbers. For example, [[Bézout's identity]] states that the right gcd(α, β) can be expressed as a linear combination of α and β. In other words, there are numbers σ and τ such that
| |
| | |
| : {{math|1=Γ<sub>right</sub> = σα + τβ}}
| |
| | |
| The analogous identity for the left GCD is nearly the same
| |
| | |
| : {{math|1=Γ<sub>left</sub> = ασ + βτ}}
| |
| | |
| Bézout's identity can be used to solve Diophantine equations.
| |
| | |
| ==Generalizations to other mathematical structures==
| |
| [[File:TorusKnot3D.png|thumb|alt="A cord wound seven times around a torus and reconnected to its beginning, forming a closed loop. In the process, the cord completes three circuits of the torus, forming a (3, 7) torus knot."|The Euclidean algorithm can be applied in [[knot theory]].<ref>{{cite arxiv | author = Yamada Y | year = 2007 | title = Generalized rational blow-down, torus knots, and Euclidean algorithm | publisher = | eprint=0708.2316 | class = math.GT}}</ref>]]
| |
| | |
| The Euclidean algorithm has three general features that together ensure it will not continue indefinitely. First, it can be written as a sequence of recursive equations
| |
| | |
| : {{math|1=''r''<sub>''k''</sub> = ''r''<sub>''k''−2</sub> − ''q''<sub>''k''</sub> ''r''<sub>''k''−1</sub>}}
| |
| | |
| where each remainder is strictly smaller than its predecessor, |''r''<sub>''k''</sub>| < |''r''<sub>''k''−1</sub>|. Second, the size of each remainder has a strict lower limit, such as |''r''<sub>''k''</sub>| ≥ 0. Third, there is only a finite number of sizes smaller than a given remainder |''r''<sub>''k''</sub>|. Generalizations of Euclid's algorithm with these basic features have been applied to other mathematical structures, such as [[tangle (mathematics)|tangles]]<ref>{{cite book|first=John Horton|last=Conway|author-link=John Horton Conway|contribution=An enumeration of knots and links, and some of their algebraic properties|publisher=Pergamon|year=1970|pages=329–358|title=Computational Problems in Abstract Algebra (Proc. Conf., Oxford, 1967)}}</ref> and [[transfinite number|transfinite]] [[ordinal number]]s.<ref>{{cite journal | author = Jategaonkar AV | year = 1969 | title = Rings with transfinite left division algorithm | journal = Bull. Amer. Math. Soc. | volume = 75 | pages = 559–561 | doi = 10.1090/S0002-9904-1969-12242-1 | url=http://projecteuclid.org/euclid.bams/1183530557 | issue = 3}}</ref>
| |
| | |
| An important generalization of the Euclidean algorithm is the concept of a [[Gröbner basis]] in [[algebraic geometry]]. As shown above, the GCD ''g'' of two integers ''a'' and ''b'' is the generator of their [[ideal (ring theory)|ideal]]. In other words, for any choice of the integers ''s'' and ''t'', there is another integer ''m'' such that
| |
| | |
| : {{math|1=''sa'' + ''tb'' = ''mg''.}}
| |
| | |
| Although this remains true when ''s'', ''t'', ''m'', ''a'' and ''b'' represent polynomials of a single variable, it is ''not'' true for rings of more than one variable.<ref>{{Harvnb|Cox|Little|O'Shea|1997|p=65}}</ref> In that case, a finite set of generator polynomials ''g''<sub>1</sub>, ''g''<sub>2</sub>, etc. can be defined such that any linear combination of two multivariable polynomials ''a'' and ''b'' can be expressed as multiples of the generators
| |
| | |
| : {{math|1=''sa'' + ''tb'' = Σ<sub>''k''</sub> ''m''<sub>''k''</sub>''g''<sub>''k''</sub>}}
| |
| | |
| where ''s'', ''t'' and ''m''<sub>''k''</sub> are multivariable polynomials.<ref>{{Harvnb|Cox|Little|O'Shea|1997|pp=73–79}}</ref> Any such multivariable polynomial ''f'' can be expressed as such a sum of generator polynomials plus a unique remainder polynomial ''r'', sometimes called the ''normal form'' of polynomial ''f''
| |
| | |
| : {{math|1=''f'' = ''r'' + Σ<sub>''k''</sub> ''q''<sub>''k''</sub>''g''<sub>''k''</sub>}}
| |
| | |
| although the quotient polynomials ''q''<sub>''k''</sub> may not be unique.<ref>{{Harvnb|Cox|Little|O'Shea|1997|pp=79–86}}</ref> The set of these generator polynomials is known as a Gröbner basis.<ref>{{Harvnb|Cox|Little|O'Shea|1997|p=74}}</ref>
| |
| | |
| ==See also==
| |
| * [[Binary GCD algorithm]]
| |
| * [[Greatest common divisor of two polynomials]]
| |
| * [[Integer relation algorithm]]
| |
| * [[Lehmer's GCD algorithm]]
| |
| | |
| ==Notes==
| |
| {{refbegin}}
| |
| * '''a.''' {{Note_label|a|a|none}} Some widely used textbooks, such as [[I. N. Herstein]]'s ''Topics in Algebra'' and [[Serge Lang]]'s ''Algebra'', use the term "Euclidean algorithm" to refer to [[Euclidean division]].
| |
| | |
| {{refend}}
==References==
{{reflist|30em}}

==Bibliography==
* {{cite book |last=Cohen |first=Henri |author-link=Henri Cohen (number theorist) |year=1993 |title=A Course in Computational Algebraic Number Theory |url=http://books.google.com/?id=hXGr-9l1DXcC |publisher=Springer-Verlag |location=New York |isbn=0-387-55640-0 |ref=harv}}
* {{cite book |last=Cohn |first=H. |year=1962 |title=Advanced Number Theory |url=http://books.google.com/?id=yMGeElJ8M0wC |publisher=Dover |location=New York |isbn=0-486-64023-X |ref=harv}}
* {{cite book |author=[[Thomas H. Cormen|Cormen TH]], [[Charles E. Leiserson|Leiserson CE]], [[Ronald L. Rivest|Rivest RL]], and [[Clifford Stein|Stein C]] |year=2001 |title=Introduction to Algorithms |url=http://books.google.com/?id=NLngYyWFl_YC |edition=2nd |publisher=MIT Press and McGraw–Hill |isbn=0-262-03293-7}}
* {{cite book |last1=Cox |first1=D |last2=Little |first2=J |last3=O'Shea |first3=D |year=1997 |title=Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra |url=http://books.google.com/?id=7eLkq0wQytAC |edition=2nd |publisher=Springer-Verlag |isbn=0-387-94680-2 |ref=harv}}
* {{cite book |last=Lejeune Dirichlet |first=Peter Gustav |author-link=Peter Gustav Lejeune Dirichlet |year=1894 |title=Vorlesungen über Zahlentheorie (Lectures on Number Theory) |editor-last=Dedekind |editor-first=Richard |editor-link=Richard Dedekind |publisher=Vieweg |location=Braunschweig |language=German |url=http://books.google.com/books?id=WZIKAAAAYAAJ&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false |lccn=03005859 |oclc=490186017 |ref=harv}}. See also [[Vorlesungen über Zahlentheorie]].
* {{cite book |author=[[G. H. Hardy|Hardy GH]], [[E. M. Wright|Wright EM]] [revised by D. R. Heath-Brown and J. H. Silverman] |year=2008 |title=An Introduction to the Theory of Numbers |url=http://books.google.com/?id=rey9wfSaJ9EC |edition=6th |publisher=Clarendon Press |location=Oxford |isbn=978-0-19-921986-5}}
* {{cite book |author=[[Donald Knuth|Knuth D]] |year=1997 |title=[[The Art of Computer Programming]], Volume 2: Seminumerical Algorithms |edition=3rd |publisher=Addison–Wesley |isbn=0-201-89684-2}}
* {{cite book |last=LeVeque |first=William J. |author-link=William J. LeVeque |origyear=1977 |year=1996 |title=Fundamentals of Number Theory |publisher=Dover |location=New York |isbn=0-486-68906-9 |ref=harv}}
* {{cite book |last=Mollin |first=R. A. |year=2008 |title=Fundamental Number Theory with Applications |edition=2nd |publisher=Chapman & Hall/CRC |location=Boca Raton |isbn=978-1-4200-6659-3 |ref=harv}}
* {{cite book |last=Ore |first=Øystein |author-link=Øystein Ore |year=1948 |title=Number Theory and Its History |publisher=McGraw–Hill |location=New York |ref=harv}}
* {{cite book |last=Rosen |first=K. H. |year=2000 |title=Elementary Number Theory and its Applications |edition=4th |publisher=Addison–Wesley |location=Reading, MA |isbn=0-201-87073-8 |ref=harv}}
* {{cite book |last=Schroeder |first=Manfred |author-link=Manfred R. Schroeder |year=2005 |title=Number Theory in Science and Communication |edition=4th |publisher=Springer-Verlag |isbn=0-387-15800-6 |ref=harv}}
* {{cite book |last=Stark |first=Harold |author-link=Harold Stark |year=1978 |title=An Introduction to Number Theory |publisher=MIT Press |isbn=0-262-69060-8 |ref=harv}}
* {{cite book |last=Stillwell |first=John |author-link=John Stillwell |year=1997 |title=Numbers and Geometry |publisher=Springer-Verlag |location=New York |isbn=0-387-98289-2 |ref=harv}}
* {{cite book |last=Stillwell |first=John |author-link=John Stillwell |year=2003 |title=Elements of Number Theory |publisher=Springer-Verlag |location=New York |isbn=0-387-95587-9 |ref=harv}}
* {{cite book |last=Tattersall |first=J. J. |year=2005 |title=Elementary Number Theory in Nine Chapters |publisher=[[Cambridge University Press]] |location=Cambridge |isbn=978-0-521-85014-8 |ref=harv}}
* {{cite book |author=Uspensky JV, Heaslet MA |year=1939 |title=Elementary Number Theory |publisher=McGraw–Hill |location=New York}}
==External links==
{{Commons category|Euclidean algorithm}}
* [http://www.math.sc.edu/~sumner/numbertheory/euclidean/euclidean.html Demonstrations of Euclid's algorithm]
* {{MathWorld |urlname=EuclideanAlgorithm |title=Euclidean Algorithm}}
* [http://www.cut-the-knot.org/blue/Euclid.shtml Euclid's Algorithm] at [[cut-the-knot]]
* {{PlanetMath |urlname=EuclidsAlgorithm |title=Euclid's algorithm}}
* [http://www.mathpages.com/home/kmath384.htm The Euclidean Algorithm] at MathPages
* [http://www.cut-the-knot.org/blue/EuclidAlg.shtml Euclid's Game] at [[cut-the-knot]]
* [http://plus.maths.org/issue40/features/wardhaugh/index.html Music and Euclid's algorithm]

{{number theoretic algorithms}}

{{featured article}}

{{DEFAULTSORT:Euclidean Algorithm}}
[[Category:Number theoretic algorithms]]
[[Category:Articles with example pseudocode]]
[[Category:Articles containing proofs]]
[[Category:Euclid|Algorithm]]

{{Link GA|zh}}