{{mergefrom|Row echelon form|date=March 2013}}

In [[linear algebra]], '''Gaussian elimination''' (also known as '''row reduction''') is an [[algorithm]] for solving [[system of linear equations|systems of linear equations]]. It is usually understood as a sequence of operations performed on the associated [[matrix (mathematics)|matrix]] of coefficients. The method can also be used to find the [[Rank (linear algebra)|rank]] of a matrix, to calculate the [[determinant]] of a matrix, and to calculate the inverse of an [[invertible matrix|invertible square matrix]]. The method is named after [[Carl Friedrich Gauss]], although it was known to Chinese mathematicians as early as 179 AD (see the [[#History|History section]]).

To perform row reduction on a matrix, one uses a sequence of [[elementary row operations]] to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations: 1) swapping two rows, 2) multiplying a row by a non-zero number, and 3) adding a multiple of one row to another row. Using these operations, a matrix can always be transformed into an [[Triangular matrix|upper triangular matrix]], and in fact one that is in [[row echelon form]]. Once all of the leading coefficients (the left-most non-zero entry in each row) are 1, and every column containing a leading coefficient has zeros elsewhere, the matrix is said to be in [[reduced row echelon form]]. This final form is unique; in other words, it is independent of the sequence of row operations used. For example, in the following sequence of row operations (where multiple elementary operations might be done at each step), the third and fourth matrices are the ones in row echelon form, and the final matrix is the unique reduced row echelon form.
:<math>\left[\begin{array}{rrr|r}
1 & 3 & 1 & 9 \\
1 & 1 & -1 & 1 \\
3 & 11 & 5 & 35
\end{array}\right]\to
\left[\begin{array}{rrr|r}
1 & 3 & 1 & 9 \\
0 & -2 & -2 & -8 \\
0 & 2 & 2 & 8
\end{array}\right]\to
\left[\begin{array}{rrr|r}
1 & 3 & 1 & 9 \\
0 & -2 & -2 & -8 \\
0 & 0 & 0 & 0
\end{array}\right]\to
\left[\begin{array}{rrr|r}
1 & 0 & -2 & -3 \\
0 & 1 & 1 & 4 \\
0 & 0 & 0 & 0
\end{array}\right] </math>

Using row operations to convert a matrix into reduced row echelon form is sometimes called '''Gauss–Jordan elimination'''. Some authors use the term ''Gaussian elimination'' to refer only to the process up to the point where the matrix is in upper triangular, or (unreduced) row echelon form. For computational reasons, when solving systems of linear equations, it is sometimes preferable to stop row operations before the matrix is completely reduced.
== Definitions and example of algorithm ==
The process of row reduction makes use of [[elementary row operations]], and can be divided into two parts. The first part (sometimes called forward elimination) reduces a given system to ''row echelon form'', from which one can tell whether there are no solutions, a unique solution, or infinitely many solutions. The second part (sometimes called [[Triangular matrix#Forward and back substitution|back substitution]]) continues to use row operations until the solution is found; in other words, it puts the matrix into ''reduced'' row echelon form.

Another point of view, which turns out to be very useful for analyzing the algorithm, is that row reduction produces a [[matrix decomposition]] of the original matrix. The elementary row operations may be viewed as multiplication on the left of the original matrix by [[elementary matrix|elementary matrices]]. Alternatively, a sequence of elementary operations that reduces a single column may be viewed as multiplication by a [[Frobenius matrix]]. Then the first part of the algorithm computes an [[LU decomposition]], while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row echelon matrix.
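
This decomposition view is what numerical libraries expose directly. As a quick illustration (a minimal sketch, assuming NumPy and SciPy are available; the example matrix is borrowed from the worked example below), the first part of the algorithm is exactly an LU factorization with partial pivoting:
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import lu  # LU factorization with partial pivoting

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
P, L, U = lu(A)  # A = P @ L @ U: permutation, unit lower triangular, upper triangular
print(np.allclose(A, P @ L @ U))  # True
</syntaxhighlight>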
===Row operations===
{{see also|Elementary matrix}}
There are three types of '''elementary row operations''' which may be performed on the rows of a matrix:
:'''Type 1''': Swap the positions of two rows.
:'''Type 2''': Multiply a row by a nonzero [[scalar (mathematics)|scalar]].
:'''Type 3''': Add to one row a scalar multiple of another.

If the matrix is associated to a system of linear equations, then these operations do not change the solution set. Therefore, if one's goal is to solve a system of linear equations, then using these row operations could make the problem easier.
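
In code, each operation is a one-liner. A minimal illustrative sketch in Python, operating on a matrix stored as a list of rows (the function names are ours, not from any library):
<syntaxhighlight lang="python">
def swap_rows(A, i, j):           # Type 1: swap the positions of two rows
    A[i], A[j] = A[j], A[i]

def scale_row(A, i, s):           # Type 2: multiply row i by a nonzero scalar s
    A[i] = [s * x for x in A[i]]

def add_multiple(A, i, j, s):     # Type 3: add s times row j to row i
    A[i] = [x + s * y for x, y in zip(A[i], A[j])]
</syntaxhighlight>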
===Echelon form===
{{main|Row echelon form}}
For each row in a matrix, if the row does not consist only of zeros, then the left-most non-zero entry is called the ''[[leading coefficient]]'' (or ''pivot'') of that row. So if two leading coefficients are in the same column, then a row operation of type 3 (see [[#Row operations|above]]) can be used to make one of those coefficients zero. Then by using the row swapping operation, one can always order the rows so that for every non-zero row, the leading coefficient is to the right of the leading coefficient of the row above. If this is the case, then the matrix is said to be in '''row echelon form'''. So the lower left part of the matrix contains only zeros, and all of the zero rows are below the non-zero rows. The word "echelon" is used here because one can roughly think of the rows as being ranked by their size, with the largest at the top and the smallest at the bottom.

For example, the following matrix is in row echelon form, and its leading coefficients are shown in red.
:<math>\left[ \begin{array}{cccc} 0&\color{red}{\mathbf{2}}&1&-1 \\ 0&0&\color{red}{\mathbf{3}}&1 \\ 0&0&0&0 \end{array} \right]</math>
It is in echelon form because the zero row is at the bottom, and the leading coefficient of the second row (in the third column) is to the right of the leading coefficient of the first row (in the second column).

A matrix is said to be in '''reduced row echelon form''' if furthermore all of the leading coefficients are equal to 1 (which can be achieved by using the elementary row operation of type 2), and in every column containing a leading coefficient, all of the other entries in that column are zero (which can be achieved by using elementary row operations of type 3).
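
The definition of row echelon form translates directly into a test: pivots must move strictly to the right as one goes down the rows, and zero rows must come last. A minimal sketch in Python, assuming exact (non-floating-point) entries; the helper name is ours:
<syntaxhighlight lang="python">
def is_row_echelon(A):
    """True if the pivot of each non-zero row is strictly right of the one above."""
    last_pivot = -1
    seen_zero_row = False
    for row in A:
        pivots = [j for j, x in enumerate(row) if x != 0]
        if not pivots:
            seen_zero_row = True          # zero rows must all be at the bottom
            continue
        if seen_zero_row or pivots[0] <= last_pivot:
            return False
        last_pivot = pivots[0]
    return True
</syntaxhighlight>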
===Example of the algorithm===
Suppose the goal is to find and describe the set of solutions to the following [[system of linear equations]]:
:<math>\begin{alignat}{7}
2x &&\; + \;&& y &&\; - \;&& z &&\; = \;&& 8 & \qquad (L_1) \\
-3x &&\; - \;&& y &&\; + \;&& 2z &&\; = \;&& -11 & \qquad (L_2) \\
-2x &&\; + \;&& y &&\; +\;&& 2z &&\; = \;&& -3 & \qquad (L_3)
\end{alignat}</math>
The table below shows the row reduction process applied simultaneously to the system of equations and to its associated [[augmented matrix]]. In practice, one does not usually deal with systems in terms of equations, but instead makes use of the augmented matrix, which is more suitable for computer manipulations. The row reduction procedure may be summarized as follows: eliminate ''x'' from all equations below <math>L_1</math>, and then eliminate ''y'' from all equations below <math>L_2</math>. This will put the system into [[triangular form]]. Then, using back-substitution, each unknown can be solved for.
{| class="wikitable" style="width: 700px; background-color: white;"
|-
! System of equations !! Row operations !! Augmented matrix
|- align="center"
| <math>\begin{alignat}{7}
2x &&\; + \;&& y &&\; - \;&& z &&\; = \;&& 8 & \\
-3x &&\; - \;&& y &&\; + \;&& 2z &&\; = \;&& -11 & \\
-2x &&\; + \;&& y &&\; +\;&& 2z &&\; = \;&& -3 &
\end{alignat}</math> || || <math>
\left[ \begin{array}{ccc|c}
2 & 1 & -1 & 8 \\
-3 & -1 & 2 & -11 \\
-2 & 1 & 2 & -3
\end{array} \right]
</math>
|- align="center"
| <math>\begin{alignat}{7}
2x &&\; + && y &&\; - &&\; z &&\; = \;&& 8 & \\
&& && \frac{1}{2}y &&\; + &&\; \frac{1}{2}z &&\; = \;&& 1 & \\
&& && 2y &&\; + &&\; z &&\; = \;&& 5 &
\end{alignat}</math> || <math>L_2 + \frac{3}{2}L_1 \rightarrow L_2</math> <br> <math>L_3 + L_1 \rightarrow L_3</math> ||
<math>\left[ \begin{array}{ccc|c}
2 & 1 & -1 & 8 \\
0 & 1/2 & 1/2 & 1 \\
0 & 2 & 1 & 5
\end{array} \right]</math>
|- align="center"
| <math>\begin{alignat}{7}
2x &&\; + && y \;&& - &&\; z \;&& = \;&& 8 & \\
&& && \frac{1}{2}y \;&& + &&\; \frac{1}{2}z \;&& = \;&& 1 & \\
&& && && &&\; -z \;&&\; = \;&& 1 &
\end{alignat}</math> || <math>L_3 - 4L_2 \rightarrow L_3</math> || <math>\left[ \begin{array}{ccc|c}
2 & 1 & -1 & 8 \\
0 & 1/2 & 1/2 & 1 \\
0 & 0 & -1 & 1
\end{array} \right] </math>
|-
| colspan="3" align="center" | The matrix is now in echelon form (also called triangular form)
|- align="center"
| <math>\begin{alignat}{7}
2x &&\; + && y \;&& &&\; \;&& = \;&& 7 & \\
&& && \frac{1}{2}y \;&& &&\; \;&& = \;&& 3/2 & \\
&& && && &&\; -z \;&&\; = \;&& 1 &
\end{alignat}</math> || <math>L_2 + \frac{1}{2}L_3 \rightarrow L_2</math> <br> <math>L_1 - L_3 \rightarrow L_1 </math> || <math>\left[ \begin{array}{ccc|c}
2 & 1 & 0 & 7 \\
0 & 1/2 & 0 & 3/2 \\
0 & 0 & -1 & 1
\end{array} \right] </math>
|- align="center"
| <math>\begin{alignat}{7}
2x &&\; + && y \;&& &&\; \;&& = \;&& 7 & \\
&& && y \;&& &&\; \;&& = \;&& 3 & \\
&& && && &&\; z \;&&\; = \;&& -1 &
\end{alignat}</math> || <math>2L_2 \rightarrow L_2</math> <br> <math>-L_3 \rightarrow L_3 </math> || <math>\left[ \begin{array}{ccc|c}
2 & 1 & 0 & 7 \\
0 & 1 & 0 & 3 \\
0 & 0 & 1 & -1
\end{array} \right] </math>
|- align="center"
| <math>\begin{alignat}{7}
x &&\; && \;&& &&\; \;&& = \;&& 2 & \\
&& && y \;&& &&\; \;&& = \;&& 3 & \\
&& && && &&\; z \;&&\; = \;&& -1 &
\end{alignat}</math> || <math>L_1 - L_2 \rightarrow L_1</math> <br> <math>\frac{1}{2}L_1 \rightarrow L_1 </math> || <math>\left[ \begin{array}{ccc|c}
1 & 0 & 0 & 2 \\
0 & 1 & 0 & 3 \\
0 & 0 & 1 & -1
\end{array} \right] </math>
|}
The second column describes which row operations have just been performed. So for the first step, ''x'' is eliminated from <math>L_2</math> by adding <math>\begin{matrix}\frac{3}{2}\end{matrix} L_1</math> to <math>L_2</math>. Next, ''x'' is eliminated from <math>L_3</math> by adding <math>L_1</math> to <math>L_3</math>. These row operations are labelled in the table as
:<math>L_2 + \frac{3}{2}L_1 \rightarrow L_2</math>
:<math>L_3 + L_1 \rightarrow L_3.</math>
Once ''y'' is also eliminated from the third row, the result is a system of linear equations in triangular form, and so the first part of the algorithm is complete. From a computational point of view, it is faster to solve for the variables in reverse order, a process known as back-substitution. One sees that the solution is {{nowrap|z {{=}} -1}}, {{nowrap|y {{=}} 3}}, and {{nowrap|x {{=}} 2}}. So there is a unique solution to the original system of equations.

Instead of stopping once the matrix is in echelon form, one could continue until the matrix is in ''reduced'' row echelon form, as is done in the table. The process of row reducing until the matrix is reduced is sometimes referred to as '''Gauss–Jordan elimination''', to distinguish it from stopping after reaching echelon form.
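
The whole procedure can be carried out mechanically. Here is a minimal sketch in Python (our own illustration, not from any library) that applies forward elimination with partial pivoting and then back-substitution to this example:
<syntaxhighlight lang="python">
def solve(aug):
    """Solve a linear system given as an augmented matrix [A | b]."""
    n = len(aug)
    # Forward elimination: reduce to row echelon (triangular) form.
    for k in range(n):
        # Partial pivoting: bring the largest entry in column k to the pivot row.
        i_max = max(range(k, n), key=lambda i: abs(aug[i][k]))
        aug[k], aug[i_max] = aug[i_max], aug[k]
        for i in range(k + 1, n):
            f = aug[i][k] / aug[k][k]
            for j in range(k, n + 1):
                aug[i][j] -= f * aug[k][j]
    # Back-substitution: solve for the unknowns in reverse order.
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (aug[i][n] - sum(aug[i][j] * x[j] for j in range(i + 1, n))) / aug[i][i]
    return x

print(solve([[2, 1, -1, 8], [-3, -1, 2, -11], [-2, 1, 2, -3]]))  # ≈ [2.0, 3.0, -1.0]
</syntaxhighlight>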
== History ==
The method of Gaussian elimination appears in the important Chinese mathematical text [[Rod calculus#System of linear equations|Chapter Eight ''Rectangular Arrays'']] of ''[[The Nine Chapters on the Mathematical Art]]''. Its use is illustrated in eighteen problems, with two to five equations each. The first reference to the book by this title is dated to 179 CE, but parts of it were written as early as approximately 150 BCE.<ref>{{harvtxt|Calinger|1999}}, pp. 234–236</ref><ref name="princeton">{{cite book|author1=Timothy Gowers|author2=June Barrow-Green|author3=Imre Leader|title=The Princeton Companion to Mathematics|accessdate=28 September 2012|date=8 September 2008|publisher=Princeton University Press|isbn=978-0-691-11880-2|page=607}}</ref> It was commented on by [[Liu Hui]] in the 3rd century.
The method in Europe stems from the notes of [[Isaac Newton]].<ref>{{harvtxt|Grcar|2011a}}, pp. 169–172</ref><ref>{{harvtxt|Grcar|2011b}}, pp. 783–785</ref> In 1670, he wrote that all the algebra books known to him lacked a lesson for solving simultaneous equations, which Newton then supplied. Cambridge University eventually published the notes as ''Arithmetica Universalis'' in 1707, long after Newton had left academic life. The notes were widely imitated, which made (what is now called) Gaussian elimination a standard lesson in algebra textbooks by the end of the 18th century. In 1810, [[Carl Friedrich Gauss]] devised a notation{{which?|date=January 2014}} for symmetric elimination that was adopted in the 19th century by professional [[Human computer|hand computers]] to solve the normal equations of least-squares problems. The algorithm that is taught in high school was named for Gauss only in the 1950s, as a result of confusion over the history of the subject.<ref>{{harvtxt|Grcar|2011b}}, p. 789</ref>

Some authors use the term ''Gaussian elimination'' to refer only to the procedure until the matrix is in echelon form, and use the term '''Gauss–Jordan elimination''' to refer to the procedure which ends in reduced echelon form. The name is used because it is a variation of Gaussian elimination as described by [[Wilhelm Jordan (geodesist)|Wilhelm Jordan]] in 1887. However, the method also appears in an article by Clasen published in the same year. Jordan and Clasen probably discovered Gauss–Jordan elimination independently.<ref>{{Citation | last1=Althoen | first1=Steven C. | last2=McLaughlin | first2=Renate | title=Gauss–Jordan reduction: a brief history | doi=10.2307/2322413 | year=1987 | journal=[[American Mathematical Monthly|The American Mathematical Monthly]] | issn=0002-9890 | volume=94 | issue=2 | pages=130–142 | jstor=2322413 | publisher=Mathematical Association of America}}</ref>
== Applications ==
Historically, the first application of the row reduction method was to solve [[systems of linear equations]]. Here are some other important applications of the algorithm.
=== Computing determinants ===
To explain how Gaussian elimination allows the computation of the determinant of a square matrix, we have to recall how the elementary row operations change the determinant:
* Swapping two rows multiplies the determinant by −1.
* Multiplying a row by a nonzero scalar multiplies the determinant by the same scalar.
* Adding to one row a scalar multiple of another does not change the determinant.
If Gaussian elimination applied to a square matrix ''A'' produces a row echelon matrix ''B'', let ''d'' be the product of the scalars by which the determinant has been multiplied, using the above rules. Then the determinant of ''A'' is the quotient by ''d'' of the product of the elements of the diagonal of ''B''.
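
In symbols, writing <math>b_{11}, \ldots, b_{nn}</math> for the diagonal entries of <math>B</math>, the rule just stated reads
:<math>\det(A) = \frac{b_{11} b_{22} \cdots b_{nn}}{d}.</math>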
It should be emphasized that, for an ''n'' × ''n'' matrix, this method needs only [[O notation|''O''(''n''<sup>3</sup>)]] arithmetic operations, while the elementary methods usually taught in introductory courses need ''O''(2<sup>''n''</sup>) or ''O''(''n''!) operations. Even on the fastest computers, these elementary methods are impracticable for ''n'' ≥ 21.
=== Finding the inverse of a matrix ===
{{see also|Invertible matrix}}
A variant of Gaussian elimination, called Gauss–Jordan elimination, can be used to find the inverse of an ''n'' by ''n'' square matrix ''A'', if it exists. First, the ''n'' by ''n'' [[identity matrix]] is augmented to the right of ''A'', forming an ''n'' by 2''n'' [[block matrix]] [''A'' | ''I'']. Now through application of elementary row operations, find the reduced echelon form of this ''n'' by 2''n'' matrix. The matrix ''A'' is invertible if and only if the left block can be reduced to the identity matrix ''I''; in this case the right block of the final matrix is ''A''<sup>−1</sup>. If the algorithm is unable to reduce the left block to ''I'', then ''A'' is not invertible.
For example, consider the following matrix:
:<math> A =
\begin{bmatrix}
2 & -1 & 0 \\
-1 & 2 & -1 \\
0 & -1 & 2
\end{bmatrix}.
</math>
To find the inverse of this matrix, one takes the matrix augmented by the identity and row-reduces it as a 3 by 6 matrix:
:<math> [ A | I ] =
\left[ \begin{array}{rrr|rrr}
2 & -1 & 0 & 1 & 0 & 0\\
-1 & 2 & -1 & 0 & 1 & 0\\
0 & -1 & 2 & 0 & 0 & 1
\end{array} \right].
</math>
By performing row operations, one can check that the reduced row echelon form of this augmented matrix is
:<math> [ I | B ] =
\left[ \begin{array}{rrr|rrr}
1 & 0 & 0 & \frac{3}{4} & \frac{1}{2} & \frac{1}{4}\\[3pt]
0 & 1 & 0 & \frac{1}{2} & 1 & \frac{1}{2}\\[3pt]
0 & 0 & 1 & \frac{1}{4} & \frac{1}{2} & \frac{3}{4}
\end{array} \right].
</math>
The matrix on the left is the identity, which shows that ''A'' is invertible. The 3 by 3 matrix on the right, ''B'', is the inverse of ''A''. This procedure for finding the inverse works for square matrices of any size.
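
A minimal sketch of this procedure in Python (our own illustration; exact rational arithmetic is used so the fractions above come out exactly, and the function name is ours):
<syntaxhighlight lang="python">
from fractions import Fraction

def invert(A):
    """Invert a square matrix by Gauss–Jordan elimination on [A | I]."""
    n = len(A)
    # Augment A with the identity matrix, using exact rational arithmetic.
    M = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        piv = next((i for i in range(k, n) if M[i][k] != 0), None)
        if piv is None:
            raise ValueError("matrix is not invertible")
        M[k], M[piv] = M[piv], M[k]
        M[k] = [x / M[k][k] for x in M[k]]           # make the pivot 1
        for i in range(n):
            if i != k and M[i][k] != 0:              # clear the rest of column k
                f = M[i][k]
                M[i] = [x - f * y for x, y in zip(M[i], M[k])]
    return [row[n:] for row in M]                    # the right block is the inverse

inv = invert([[2, -1, 0], [-1, 2, -1], [0, -1, 2]])
print(inv[0])  # [Fraction(3, 4), Fraction(1, 2), Fraction(1, 4)], the first row of B above
</syntaxhighlight>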
=== Computing ranks and bases ===
The Gaussian elimination algorithm can be applied to any <math>m \times n</math> matrix <math>A</math>. In this way, for example, some <math>6 \times 9</math> matrices can be transformed to a matrix that has a row echelon form like
:<math> T=
\begin{bmatrix}
a & * & * & * & * & * & * & * & * \\
0 & 0 & b & * & * & * & * & * & * \\
0 & 0 & 0 & c & * & * & * & * & * \\
0 & 0 & 0 & 0 & 0 & 0 & d & * & * \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & e \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
</math>
where the *s are arbitrary entries and ''a'', ''b'', ''c'', ''d'', ''e'' are nonzero entries. This echelon matrix <math>T</math> contains a wealth of information about <math>A</math>: the [[rank of a matrix|rank]] of <math>A</math> is 5, since there are 5 non-zero rows in <math>T</math>; the [[vector space]] spanned by the columns of <math>A</math> has a basis consisting of the first, third, fourth, seventh and ninth columns of <math>A</math> (the columns containing ''a'', ''b'', ''c'', ''d'', ''e'' in <math>T</math>), and the *s tell how the other columns of <math>A</math> can be written as linear combinations of the basis columns. This is a consequence of the distributivity of the [[dot product]] in the expression of a linear map [[Linear map#Matrices|as a matrix]].

All of this applies also to the reduced row echelon form, which is a particular row echelon form.
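
A minimal sketch of reading off the rank and the pivot columns in Python, assuming exact rational entries (illustrative only; the function name is ours):
<syntaxhighlight lang="python">
from fractions import Fraction

def rank_and_pivot_columns(A):
    """Row-reduce a copy of A; return its rank and the pivot column indices."""
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, m) if M[i][c] != 0), None)
        if p is None:
            continue                     # no pivot in this column
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, m):
            f = M[i][c] / M[r][c]
            M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return r, pivots                     # the rank equals the number of pivots
</syntaxhighlight>
For a matrix with the echelon form <math>T</math> above, this would return rank 5 and the pivot columns 0, 2, 3, 6, 8 (0-based), i.e. the first, third, fourth, seventh and ninth columns.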
== Computational efficiency ==
The number of arithmetic operations required to perform row reduction is one way of measuring the algorithm's computational efficiency. For example, to solve a system of ''n'' equations for ''n'' unknowns by performing row operations on the matrix until it is in echelon form, and then solving for each unknown in reverse order, requires ''n''(''n'' − 1)/2 divisions, (2''n''<sup>3</sup> + 3''n''<sup>2</sup> − 5''n'')/6 multiplications, and (2''n''<sup>3</sup> + 3''n''<sup>2</sup> − 5''n'')/6 subtractions,<ref>{{harvtxt|Farebrother|1988}}, p. 12</ref> for a total of approximately 2''n''<sup>3</sup>/3 operations. Thus it has an [[Arithmetic#Arithmetic_operations|arithmetic]] complexity of O(''n''<sup>3</sup>); see [[Big O notation]]. This arithmetic complexity is a good measure of the time needed for the whole computation when the time for each arithmetic operation is approximately constant. This is the case when the coefficients are represented by [[floating point number]]s or when they belong to a [[finite field]]. If the coefficients are [[integer]]s or [[rational number]]s exactly represented, the intermediate entries can grow exponentially large, so the [[bit complexity]] is exponential.<ref>{{Cite conference | first1 = Xin Gui | last1 = Fang | first2 = George | last2 = Havas | title = On the worst-case complexity of integer Gaussian elimination | booktitle = Proceedings of the 1997 international symposium on Symbolic and algebraic computation | conference = ISSAC '97 | pages = 28–31 | publisher = ACM | year = 1997 | location = Kihei, Maui, Hawaii, United States | url = http://itee.uq.edu.au/~havas/fh97.pdf | doi = 10.1145/258726.258740 | isbn = 0-89791-875-4}}</ref>
However, there is a variant of Gaussian elimination, called the [[Bareiss algorithm]], that avoids this exponential growth of the intermediate entries and, with the same arithmetic complexity of O(''n''<sup>3</sup>), has a bit complexity of O(''n''<sup>5</sup>).
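
To show the fraction-free idea behind the Bareiss variant, here is a minimal determinant sketch for integer matrices (our own illustration, not a library routine): every division below is exact, so all intermediate entries remain integers of moderate size.
<syntaxhighlight lang="python">
def bareiss_determinant(M):
    """Determinant of an integer matrix by fraction-free (Bareiss) elimination."""
    M = [row[:] for row in M]            # work on a copy
    n, sign, prev = len(M), 1, 1
    for k in range(n - 1):
        if M[k][k] == 0:                 # find a nonzero pivot, tracking the sign
            i = next((i for i in range(k + 1, n) if M[i][k] != 0), None)
            if i is None:
                return 0
            M[k], M[i], sign = M[i], M[k], -sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Exact integer division: the numerator is divisible by prev.
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return sign * M[-1][-1]
</syntaxhighlight>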
This algorithm can be used on a computer for systems with thousands of equations and unknowns. However, the cost becomes prohibitive for systems with millions of equations. These large systems are generally solved using [[iterative method]]s. Specific methods exist for systems whose coefficients follow a regular pattern (see [[system of linear equations]]).

To put an ''n'' by ''n'' matrix into reduced echelon form by row operations, one needs <math>n^3</math> arithmetic operations, which is approximately 50% more computation steps than echelon form alone requires.<ref>J. B. Fraleigh and R. A. Beauregard, ''Linear Algebra''. Addison-Wesley Publishing Company, 1995, Chapter 10.</ref>

One possible problem is [[numerical stability|numerical instability]], caused by the possibility of dividing by very small numbers. If, for example, the leading coefficient of one of the rows is very close to zero, then to row-reduce the matrix one would need to divide by that number, so that the leading coefficient becomes 1. This means that any error which existed for the number that was close to zero would be amplified. Gaussian elimination is numerically stable for [[diagonally dominant]] or [[Positive-definite matrix|positive-definite]] matrices. For general matrices, Gaussian elimination is usually considered to be stable when [[Pivot element#Partial and complete pivoting|partial pivoting]] is used, even though there are examples of stable matrices for which it is unstable.<ref>{{harvtxt|Golub|Van Loan|1996}}, §3.4.6</ref>
=== Generalizations ===
Gaussian elimination can be performed over any [[field (mathematics)|field]], not just the real numbers.
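
For instance, over the finite field GF(''p'') with ''p'' prime, division by a pivot is replaced by multiplication by its modular inverse. A minimal illustrative sketch in Python (the function name is ours):
<syntaxhighlight lang="python">
def row_reduce_mod_p(A, p):
    """Row echelon form of an integer matrix over GF(p), with p prime."""
    A = [[x % p for x in row] for row in A]
    m, n = len(A), len(A[0])
    r = 0                                 # current pivot row
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], p - 2, p)      # modular inverse via Fermat's little theorem
        A[r] = [(x * inv) % p for x in A[r]]
        for i in range(r + 1, m):
            f = A[i][c]
            A[i] = [(x - f * y) % p for x, y in zip(A[i], A[r])]
        r += 1
    return A
</syntaxhighlight>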
Gaussian elimination does not generalize in any simple way to higher-order [[tensors]] (matrices are [[Array data structure|array]] representations of order-2 tensors); even computing the rank of a tensor of order greater than 2 is a difficult problem.
== Pseudocode ==
As explained above, Gaussian elimination writes a given ''m'' × ''n'' matrix ''A'' as the product of an invertible ''m'' × ''m'' matrix ''S'' and a row echelon matrix ''T''. Here, ''S'' is the product of the matrices corresponding to the row operations performed.

The formal algorithm to compute <math>T</math> from <math>A</math> follows. We write <math>A[i,j]</math> for the entry in row <math>i</math>, column <math>j</math> in matrix <math>A</math>, with 1 being the first index. The transformation is performed ''in place'', meaning that the original matrix <math>A</math> is lost and successively replaced by <math>T</math>.
<code>
'''for''' k = 1 ... min(m, n):
   ''Find the pivot for column k:''
   i_max := [[argmax]] (i = k ... m, abs(A[i, k]))
   '''if''' A[i_max, k] = 0
     '''error''' "Matrix is singular!"
   '''swap rows'''(k, i_max)
   ''Do for all rows below pivot:''
   '''for''' i = k + 1 ... m:
     ''Do for all remaining elements in current row:''
     '''for''' j = k + 1 ... n:
       A[i, j] := A[i, j] - A[k, j] * (A[i, k] / A[k, k])
     ''Fill lower triangular matrix with zeros:''
     A[i, k] := 0
</code>
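
For readers who want to run the algorithm, here is a direct Python transcription of the pseudocode above (0-based indexing; the function name is ours):
<syntaxhighlight lang="python">
def row_echelon_in_place(A):
    """Transform the matrix A (a list of rows of floats) to row echelon form in place."""
    m, n = len(A), len(A[0])
    for k in range(min(m, n)):
        # Find the pivot for column k (partial pivoting).
        i_max = max(range(k, m), key=lambda i: abs(A[i][k]))
        if A[i_max][k] == 0:
            raise ValueError("Matrix is singular!")
        A[k], A[i_max] = A[i_max], A[k]
        # Do for all rows below the pivot:
        for i in range(k + 1, m):
            f = A[i][k] / A[k][k]
            # Do for all remaining elements in the current row:
            for j in range(k + 1, n):
                A[i][j] -= A[k][j] * f
            # Fill the lower triangular part with zeros:
            A[i][k] = 0.0
    return A
</syntaxhighlight>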
This algorithm differs slightly from the one discussed earlier, because before eliminating a variable, it first exchanges rows to move the entry with the largest [[absolute value]] to the [[pivot element|pivot]] position. Such ''partial pivoting'' improves the [[numerical stability]] of the algorithm; some other variants are also used.

Upon completion of this procedure, the augmented matrix will be in [[row-echelon form]] and may be solved by back-substitution.

On modern computers, Gaussian elimination is not always the fastest algorithm to compute the row echelon form of a matrix. There are [[library (computing)|computer libraries]], like [[BLAS]], that exploit the specifics of the [[computer hardware]] and of the structure of the matrix to automatically choose the best algorithm.
== Notes ==
<references/>
== References ==
* {{Citation | last1=Atkinson | first1=Kendall A. | title=An Introduction to Numerical Analysis | publisher=[[John Wiley & Sons]] | location=New York | edition=2nd | isbn=978-0-471-50023-0 | year=1989}}.
* {{Citation | last1=Bolch | first1=Gunter | last2=Greiner | first2=Stefan | last3=de Meer | first3=Hermann | last4=Trivedi | first4=Kishor S. | title=Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications | publisher=[[Wiley-Interscience]] | edition=2nd | isbn=978-0-471-79156-0 | year=2006}}.
* {{Citation | last1=Calinger | first1=Ronald | title=A Contextual History of Mathematics | publisher=[[Prentice Hall]] | isbn=978-0-02-318285-3 | year=1999}}.
* {{Citation | last1=Farebrother | first1=R.W. | title=Linear Least Squares Computations | publisher=Marcel Dekker | series=STATISTICS: Textbooks and Monographs | isbn=978-0-8247-7661-9 | year=1988}}.
* {{Citation | last1=Golub | first1=Gene H. | author1-link=Gene H. Golub | last2=Van Loan | first2=Charles F. | author2-link=Charles F. Van Loan | title=Matrix Computations | publisher=Johns Hopkins | edition=3rd | isbn=978-0-8018-5414-9 | year=1996}}.
* {{Citation | last=Grcar | first=Joseph F. | title=How ordinary elimination became Gaussian elimination | journal=Historia Mathematica | year=2011a | volume=38 | issue=2 | pages=163–218 | doi=10.1016/j.hm.2010.06.003 | arxiv=0907.2397}}.
* {{Citation | last=Grcar | first=Joseph F. | title=Mathematicians of Gaussian elimination | journal=Notices of the American Mathematical Society | year=2011b | volume=58 | issue=6 | pages=782–792 | url=http://www.ams.org/notices/201106/rtx110600782p.pdf}}.
* {{Citation | last1=Higham | first1=Nicholas | author1-link=Nicholas Higham | title=Accuracy and Stability of Numerical Algorithms | publisher=[[Society for Industrial and Applied Mathematics|SIAM]] | edition=2nd | isbn=978-0-89871-521-7 | year=2002}}.
* {{Citation | last1=Katz | first1=Victor J. | title=A History of Mathematics, Brief Version | publisher=[[Addison-Wesley]] | isbn=978-0-321-16193-2 | year=2004}}.
* {{Cite web | last1=Kaw | first1=Autar | last2=Kalu | first2=Egwu | year=2010 | title=Numerical Methods with Applications | edition=1st | url=http://www.autarkaw.com}}. Chapter 5 deals with Gaussian elimination.
* {{Citation | last1=Lipson | first1=Marc | last2=Lipschutz | first2=Seymour | title=Schaum's Outline of Theory and Problems of Linear Algebra | publisher=[[McGraw-Hill]] | location=New York | isbn=978-0-07-136200-9 | year=2001 | pages=69–80}}.
* {{Citation | last1=Press | first1=WH | last2=Teukolsky | first2=SA | last3=Vetterling | first3=WT | last4=Flannery | first4=BP | year=2007 | title=Numerical Recipes: The Art of Scientific Computing | edition=3rd | publisher=Cambridge University Press | publication-place=New York | isbn=978-0-521-88068-8 | chapter=Section 2.2 | chapter-url=http://apps.nrbook.com/empanel/index.html?pg=46}}.
==External links==
{{wikibooks
|1= Linear Algebra
|2= Gauss' Method
|3= Gaussian elimination}}
*[http://sole.ooz.ie WebApp descriptively solving systems of linear equations with Gaussian elimination]
*[http://marekrychlik.com/cgi-bin/gauss.cgi A program that performs Gaussian elimination similarly to a human working on paper] Exact solutions to systems with rational coefficients.
*[http://www.math-linux.com/spip.php?article53 Gaussian elimination] at math-linux.com.
*[http://www25.brinkster.com/denshade/GaussElimination.html Gaussian elimination as a Java applet] Only takes natural coefficients.
*[http://people.revoledu.com/kardi/tutorial/LinearAlgebra/RREF.html#RREF Gaussian elimination calculator] with a linear algebra tutorial.
*[http://numericalmethods.eng.usf.edu/topics/gaussian_elimination.html Gaussian elimination] at the [http://numericalmethods.eng.usf.edu Holistic Numerical Methods Institute]
*[http://www.hlevkin.com/default.html#numalg LinearEquations.c] Gaussian elimination implemented in C
*[http://www.math.iupui.edu/~momran/m118/chapter6.htm Gauss–Jordan elimination] Step-by-step solution of 3 equations with 3 unknowns using the all-integer echelon method
*[http://www.idomaths.com/gauss_jordan.php Gauss–Jordan elimination calculator for ''n'' by ''m'' matrices, giving intermediate steps]
* {{YouTube|id=u-AhI4gNB_E|title=WildLinAlg13: Solving a system of linear equations}} provides a very clear, elementary presentation of the method of row reduction.

{{linear algebra}}

{{DEFAULTSORT:Gaussian Elimination}}
[[Category:Numerical linear algebra]]
[[Category:Articles with example pseudocode]]
[[Category:Exchange algorithms]]
[[Category:German inventions]]

{{Link GA|de}}