In [[numerical linear algebra]], the '''Gauss–Seidel method''', also known as the '''Liebmann method''' or the '''method of successive displacement''', is an [[iterative method]] used to solve a [[linear system of equations]]. It is named after the [[Germany|German]] [[mathematician]]s [[Carl Friedrich Gauss]] and [[Philipp Ludwig von Seidel]], and is similar to the [[Jacobi method]]. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either [[diagonally dominant matrix|diagonally dominant]], or [[Symmetric matrix|symmetric]] and [[Positive-definite matrix|positive definite]]. The method was mentioned only in a private letter from Gauss to his student [[Christian Ludwig Gerling|Gerling]] in 1823;<ref>{{harvnb|Gauss|1903|p=279}}; [http://gdz.sub.uni-goettingen.de/en/dms/loader/img/?PPN=PPN23601515X&DMDID=DMDLOG_0112&LOGID=LOG_0112&PHYSID=PHYS_0286 direct link].</ref> it was not published until 1874, by Seidel.
== Description ==
The Gauss–Seidel method is an [[Iterative method|iterative technique]] for solving a square system of ''n'' linear equations with unknown '''x''':
:<math>A\mathbf x = \mathbf b.</math>
It is defined by the iteration
:<math> L_* \mathbf{x}^{(k+1)} = \mathbf{b} - U \mathbf{x}^{(k)}, </math>
where the matrix ''A'' is decomposed into a [[triangular matrix|lower triangular]] component <math>L_*</math> and a [[triangular matrix#Strictly triangular matrix|strictly upper triangular]] component ''U'': <math> A = L_* + U </math>.<ref>{{harvnb|Golub|Van Loan|1996|p=511}}.</ref>
In more detail, write out ''A'', '''x''' and '''b''' in their components:
:<math>A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_{1} \\ x_2 \\ \vdots \\ x_n \end{bmatrix} , \qquad \mathbf{b} = \begin{bmatrix} b_{1} \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.</math>
Then the decomposition of ''A'' into its lower triangular component and its strictly upper triangular component is given by:
:<math>A=L_*+U \qquad \text{where} \qquad L_* = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \quad U = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\0 & 0 & \cdots & 0 \end{bmatrix}. </math>
The system of linear equations may be rewritten as:
:<math>L_* \mathbf{x} = \mathbf{b} - U \mathbf{x}. </math>
The Gauss–Seidel method now solves the left-hand side of this expression for '''x''', using the previous value of '''x''' on the right-hand side. Analytically, this may be written as:
:<math> \mathbf{x}^{(k+1)} = L_*^{-1} (\mathbf{b} - U \mathbf{x}^{(k)}). </math>
However, by taking advantage of the triangular form of <math>L_*</math>, the elements of '''x'''<sup>(''k''+1)</sup> can be computed sequentially using [[forward substitution]]:
:<math> x^{(k+1)}_i = \frac{1}{a_{ii}} \left(b_i - \sum_{j<i}a_{ij}x^{(k+1)}_j - \sum_{j>i}a_{ij}x^{(k)}_j \right),\quad i=1,2,\ldots,n.</math><ref>{{harvnb|Golub|Van Loan|1996|loc=eqn (10.1.3)}}.</ref>
The procedure is generally continued until the changes made by an iteration are below some tolerance, such as a sufficiently small [[Residual (numerical analysis)|residual]].
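The forward-substitution formula above can be sketched in a few lines of Python. This is a minimal illustration rather than a production solver; the 2×2 system is the one worked through in the examples below.

```python
def gauss_seidel_step(A, b, x):
    # One sweep of the element-wise update: newer values of x are
    # used as soon as they are available (forward substitution).
    n = len(A)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[16.0, 3.0], [7.0, -11.0]]
b = [11.0, 13.0]
x = [1.0, 1.0]
for _ in range(20):
    gauss_seidel_step(A, b, x)
# x now approximates the exact solution (0.8122, -0.6650)
```
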
=== Discussion ===
The element-wise formula for the Gauss–Seidel method is very similar to that of the [[Jacobi method]].
The computation of ''x''<sub>''i''</sub><sup>(''k''+1)</sup> uses only the elements of '''x'''<sup>(''k''+1)</sup> that have already been computed, and only the elements of '''x'''<sup>(''k'')</sup> that have not yet been advanced to iteration ''k''+1. This means that, unlike in the Jacobi method, only one storage vector is required, as elements can be overwritten as they are computed, which can be advantageous for very large problems.
However, unlike the Jacobi method, the computations for each element cannot be done in [[Parallel algorithm|parallel]]. Furthermore, the values at each iteration depend on the order of the original equations.
Gauss–Seidel is the same as [[Successive Over-relaxation|SOR (successive over-relaxation)]] with <math>\omega=1</math>.
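The connection to SOR can be made concrete by adding a relaxation factor <math>\omega</math> to the element-wise update; with <math>\omega = 1</math> the blending term vanishes and the sweep reduces to plain Gauss–Seidel. A minimal sketch (the 2×2 system in the usage note is purely illustrative):

```python
def sor_step(A, b, x, omega):
    # SOR blends the Gauss-Seidel candidate value with the previous
    # value of x[i]; omega = 1 recovers the Gauss-Seidel method.
    n = len(A)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        gs = (b[i] - s) / A[i][i]
        x[i] = (1 - omega) * x[i] + omega * gs
    return x
```

For example, `sor_step([[4.0, 1.0], [2.0, 3.0]], [1.0, 2.0], [0.0, 0.0], 1.0)` performs exactly one Gauss–Seidel sweep.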
==Convergence==
The convergence properties of the Gauss–Seidel method depend on the matrix ''A''. Namely, the procedure is known to converge if either:
* ''A'' is symmetric [[positive-definite matrix|positive-definite]],<ref>{{harvnb|Golub|Van Loan|1996|loc=Thm 10.1.2}}.</ref> or
* ''A'' is strictly or irreducibly [[diagonally dominant matrix|diagonally dominant]].
The Gauss–Seidel method sometimes converges even if these conditions are not satisfied.
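The strict diagonal dominance condition is easy to test directly from its definition. The following sketch checks it row by row (it does not cover the symmetric positive-definite criterion):

```python
def is_strictly_diagonally_dominant(A):
    # |a_ii| must exceed the sum of the absolute values of the other
    # entries in row i, for every row.
    n = len(A)
    return all(
        abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
        for i in range(n)
    )

is_strictly_diagonally_dominant([[16, 3], [7, -11]])  # True: convergence guaranteed
is_strictly_diagonally_dominant([[2, 3], [5, 7]])     # False: no guarantee
```

These two matrices are the ones used in the worked examples below; the first converges and the second diverges.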
== Algorithm ==
Since elements can be overwritten as they are computed in this algorithm, only one storage vector is needed, and vector indexing is omitted. The algorithm goes as follows:

 Inputs: {{var|A}}, {{var|b}}
 Output: <math>\phi</math>
 
 Choose an initial guess <math>\phi</math> to the solution
 '''repeat''' until convergence
     '''for''' {{var|i}} '''from''' 1 '''until''' {{var|n}} '''do'''
         <math>\sigma \leftarrow 0</math>
         '''for''' {{var|j}} '''from''' 1 '''until''' {{var|n}} '''do'''
             '''if''' {{var|j}} ≠ {{var|i}} '''then'''
                 <math> \sigma \leftarrow \sigma + a_{ij} \phi_j </math>
             '''end if'''
         '''end''' ({{var|j}}-loop)
         <math> \phi_i \leftarrow \frac 1 {a_{ii}} (b_i - \sigma)</math>
     '''end''' ({{var|i}}-loop)
     check if convergence is reached
 '''end''' (repeat)
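The pseudocode above translates almost line for line into Python; here the convergence check is a hypothetical tolerance on the largest change in <math>\phi</math> between sweeps.

```python
def gauss_seidel(A, b, phi, tol=1e-8, max_sweeps=1000):
    # Sweeps in place over a single storage vector phi, as in the
    # pseudocode: sigma accumulates the off-diagonal terms of row i.
    n = len(A)
    for _ in range(max_sweeps):
        max_change = 0.0
        for i in range(n):
            sigma = sum(A[i][j] * phi[j] for j in range(n) if j != i)
            new_phi = (b[i] - sigma) / A[i][i]
            max_change = max(max_change, abs(new_phi - phi[i]))
            phi[i] = new_phi
        if max_change < tol:  # check if convergence is reached
            break
    return phi

# The 4x4 system from the equation-version example below:
A = [[10.0, -1.0, 2.0, 0.0],
     [-1.0, 11.0, -1.0, 3.0],
     [2.0, -1.0, 10.0, -1.0],
     [0.0, 3.0, -1.0, 8.0]]
b = [6.0, 25.0, -11.0, 15.0]
phi = gauss_seidel(A, b, [0.0, 0.0, 0.0, 0.0])
# phi approaches the exact solution (1, 2, -1, 1)
```
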
==Examples==
===An example for the matrix version===
A linear system <math>A \mathbf{x} = \mathbf{b}</math> is given by:
:<math> A=
\begin{bmatrix}
16 & 3 \\
7 & -11 \\
\end{bmatrix}
</math> and <math> \mathbf{b} =
\begin{bmatrix}
11 \\
13
\end{bmatrix}.
</math>
We want to use the equation
:<math> \mathbf{x}^{(k+1)} = L_*^{-1} (\mathbf{b} - U \mathbf{x}^{(k)}) </math>
in the form
:<math> \mathbf{x}^{(k+1)} = T \mathbf{x}^{(k)} + C, </math>
where:
:<math>T = - L_*^{-1} U</math> and <math>C = L_*^{-1} \mathbf{b}.</math>
We must decompose <math>A</math> into the sum of a lower triangular component <math>L_*</math> and a strictly upper triangular component <math>U</math>:
:<math> L_*=
\begin{bmatrix}
16 & 0 \\
7 & -11 \\
\end{bmatrix}
</math> and <math> U =
\begin{bmatrix}
0 & 3 \\
0 & 0
\end{bmatrix}.</math>
The inverse of <math>L_*</math> is:
:<math> L_*^{-1} =
\begin{bmatrix}
16 & 0 \\
7 & -11
\end{bmatrix}^{-1}
=
\begin{bmatrix}
0.0625 & 0.0000 \\
0.0398 & -0.0909 \\
\end{bmatrix}.
</math>
Now we can find:
:<math> T = -
\begin{bmatrix}
0.0625 & 0.0000 \\
0.0398 & -0.0909
\end{bmatrix}
\times
\begin{bmatrix}
0 & 3 \\
0 & 0
\end{bmatrix}
=
\begin{bmatrix}
0.000 & -0.1875 \\
0.000 & -0.1193
\end{bmatrix}, </math>
:<math> C =
\begin{bmatrix}
0.0625 & 0.0000 \\
0.0398 & -0.0909
\end{bmatrix}
\times
\begin{bmatrix}
11 \\
13
\end{bmatrix}
=
\begin{bmatrix}
0.6875 \\
-0.7443
\end{bmatrix}. </math>
Now we have <math>T</math> and <math>C</math>, and we can use them to obtain the vectors <math>\mathbf{x}</math> iteratively.
First of all, we must choose <math>\mathbf{x}^{(0)}</math>; we can only guess. The better the guess, the faster the algorithm converges.
We suppose:
:<math> \mathbf{x}^{(0)} =
\begin{bmatrix}
1.0 \\
1.0
\end{bmatrix}.</math>
We can then calculate:
:<math> \mathbf{x}^{(1)} =
\begin{bmatrix}
0.000 & -0.1875 \\
0.000 & -0.1193
\end{bmatrix}
\times
\begin{bmatrix}
1.0 \\
1.0
\end{bmatrix}
+
\begin{bmatrix}
0.6875 \\
-0.7443
\end{bmatrix}
=
\begin{bmatrix}
0.5000 \\
-0.8636
\end{bmatrix}. </math>
Repeating the iteration <math>\mathbf{x}^{(k+1)} = T \mathbf{x}^{(k)} + C</math> in the same way gives:
:<math>
\mathbf{x}^{(2)} = \begin{bmatrix} 0.8494 \\ -0.6413 \end{bmatrix}, \quad
\mathbf{x}^{(3)} = \begin{bmatrix} 0.8077 \\ -0.6678 \end{bmatrix}, \quad
\mathbf{x}^{(4)} = \begin{bmatrix} 0.8127 \\ -0.6646 \end{bmatrix}, \quad
\mathbf{x}^{(5)} = \begin{bmatrix} 0.8121 \\ -0.6650 \end{bmatrix}, \quad
\mathbf{x}^{(6)} = \begin{bmatrix} 0.8122 \\ -0.6650 \end{bmatrix}, \quad
\mathbf{x}^{(7)} = \begin{bmatrix} 0.8122 \\ -0.6650 \end{bmatrix}. </math>
As expected, the algorithm converges to the exact solution:
:<math> \mathbf{x} = A^{-1} \mathbf{b} = \begin{bmatrix} 0.8122\\ -0.6650 \end{bmatrix}. </math>
In fact, the matrix ''A'' is strictly diagonally dominant (but not positive definite).
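The hand computation above can be checked numerically. This sketch builds <math>T = -L_*^{-1}U</math> and <math>C = L_*^{-1}\mathbf{b}</math> for the 2×2 system, using the closed-form inverse of a lower triangular 2×2 matrix, and then repeats <math>\mathbf{x} \leftarrow T\mathbf{x} + C</math>:

```python
# Closed-form inverse of a 2x2 lower triangular matrix [[l11, 0], [l21, l22]]:
# [[1/l11, 0], [-l21/(l11*l22), 1/l22]].
l11, l21, l22 = 16.0, 7.0, -11.0
Linv = [[1 / l11, 0.0], [-l21 / (l11 * l22), 1 / l22]]
U = [[0.0, 3.0], [0.0, 0.0]]
b = [11.0, 13.0]

# T = -Linv * U and C = Linv * b, written out for the 2x2 case.
T = [[-sum(Linv[i][k] * U[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
C = [Linv[0][0] * b[0] + Linv[0][1] * b[1],
     Linv[1][0] * b[0] + Linv[1][1] * b[1]]

x = [1.0, 1.0]
for _ in range(7):
    x = [T[0][0] * x[0] + T[0][1] * x[1] + C[0],
         T[1][0] * x[0] + T[1][1] * x[1] + C[1]]
# x matches the hand iteration: x^(7) ~ (0.8122, -0.6650)
```
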
===Another example for the matrix version===
Another linear system <math>A \mathbf{x} = \mathbf{b}</math> is given by:
:<math> A=
\begin{bmatrix}
2 & 3 \\
5 & 7 \\
\end{bmatrix}
</math> and <math> \mathbf{b} =
\begin{bmatrix}
11 \\
13 \\
\end{bmatrix}.
</math>
We want to use the equation
:<math> \mathbf{x}^{(k+1)} = L_*^{-1} (\mathbf{b} - U \mathbf{x}^{(k)}) </math>
in the form
:<math> \mathbf{x}^{(k+1)} = T \mathbf{x}^{(k)} + C, </math>
where:
:<math>T = - L_*^{-1} U</math> and <math>C = L_*^{-1} \mathbf{b}.</math>
We must decompose <math>A</math> into the sum of a lower triangular component <math>L_*</math> and a strictly upper triangular component <math>U</math>:
:<math> L_*=
\begin{bmatrix}
2 & 0 \\
5 & 7 \\
\end{bmatrix}
</math> and <math> U =
\begin{bmatrix}
0 & 3 \\
0 & 0 \\
\end{bmatrix}.</math>
The inverse of <math>L_*</math> is:
:<math> L_*^{-1} =
\begin{bmatrix}
2 & 0 \\
5 & 7 \\
\end{bmatrix}^{-1}
=
\begin{bmatrix}
0.500 & 0.000 \\
-0.357 & 0.143 \\
\end{bmatrix}.
</math>
Now we can find:
:<math> T = -
\begin{bmatrix}
0.500 & 0.000 \\
-0.357 & 0.143 \\
\end{bmatrix}
\times
\begin{bmatrix}
0 & 3 \\
0 & 0 \\
\end{bmatrix}
=
\begin{bmatrix}
0.000 & -1.500 \\
0.000 & 1.071 \\
\end{bmatrix}, </math>
:<math> C =
\begin{bmatrix}
0.500 & 0.000 \\
-0.357 & 0.143 \\
\end{bmatrix}
\times
\begin{bmatrix}
11 \\
13 \\
\end{bmatrix}
=
\begin{bmatrix}
5.500 \\
-2.071 \\
\end{bmatrix}. </math>
Now we have <math>T</math> and <math>C</math>, and we can use them to obtain the vectors <math>\mathbf{x}</math> iteratively.
First of all, we must choose <math>\mathbf{x}^{(0)}</math>; we can only guess. The better the guess, the faster the algorithm converges.
We suppose:
:<math> \mathbf{x}^{(0)} =
\begin{bmatrix}
1.1 \\
2.3 \\
\end{bmatrix}.</math>
We can then calculate:
:<math> \mathbf{x}^{(1)} =
\begin{bmatrix}
0 & -1.500 \\
0 & 1.071 \\
\end{bmatrix}
\times
\begin{bmatrix}
1.1 \\
2.3 \\
\end{bmatrix}
+
\begin{bmatrix}
5.500 \\
-2.071 \\
\end{bmatrix}
=
\begin{bmatrix}
2.050 \\
0.393 \\
\end{bmatrix}, </math>
:<math> \mathbf{x}^{(2)} =
\begin{bmatrix}
0 & -1.500 \\
0 & 1.071 \\
\end{bmatrix}
\times
\begin{bmatrix}
2.050 \\
0.393 \\
\end{bmatrix}
+
\begin{bmatrix}
5.500 \\
-2.071 \\
\end{bmatrix}
=
\begin{bmatrix}
4.911 \\
-1.651 \\
\end{bmatrix}, </math>
:<math> \mathbf{x}^{(3)} = \cdots. \, </math>
If we test for convergence, we find that the algorithm diverges. In fact, the matrix ''A'' is neither diagonally dominant nor positive definite.
Then, convergence to the exact solution
:<math> \mathbf{x} = A^{-1} \mathbf{b} = \begin{bmatrix} -38\\ 29 \end{bmatrix} </math>
is not guaranteed and, in this case, does not occur.
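The divergence is easy to observe numerically. Using the <math>T</math> and <math>C</math> computed above (written here as exact fractions), the iterates drift away from the fixed point <math>(-38, 29)</math> because the second diagonal entry of <math>T</math>, which is its spectral radius, is <math>15/14 > 1</math>:

```python
# T and C for A = [[2, 3], [5, 7]], b = [11, 13], in exact fractions:
# T = [[0, -3/2], [0, 15/14]], C = [11/2, -29/14].
T = [[0.0, -1.5], [0.0, 15.0 / 14.0]]
C = [5.5, -29.0 / 14.0]

x = [1.1, 2.3]
for _ in range(10):
    x = [T[0][0] * x[0] + T[0][1] * x[1] + C[0],
         T[1][0] * x[0] + T[1][1] * x[1] + C[1]]
# The distance |x[1] - 29| grows by a factor of 15/14 each step,
# so the iteration moves away from the solution (-38, 29).
```
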
===An example for the equation version===
Suppose we are given ''n'' equations in the unknowns ''x''<sub>1</sub>, ''x''<sub>2</sub>, …, ''x''<sub>''n''</sub>, together with a starting point '''x'''<sup>(0)</sup>.
From the first equation, solve for ''x''<sub>1</sub> in terms of <math>x_2, x_3, \dots, x_n.</math> In each subsequent equation, substitute the most recently computed values of the other ''x''s.
To make this clear, let's consider an example.
:<math>
\begin{align}
10x_1 - x_2 + 2x_3 & = 6, \\
-x_1 + 11x_2 - x_3 + 3x_4 & = 25, \\
2x_1 - x_2 + 10x_3 - x_4 & = -11, \\
3x_2 - x_3 + 8x_4 & = 15.
\end{align}
</math>
Solving for <math>x_1</math>, <math>x_2</math>, <math>x_3</math> and <math>x_4</math> gives:
:<math>
\begin{align}
x_1 & = x_2/10 - x_3/5 + 3/5, \\
x_2 & = x_1/11 + x_3/11 - 3x_4/11 + 25/11, \\
x_3 & = -x_1/5 + x_2/10 + x_4/10 - 11/10, \\
x_4 & = -3x_2/8 + x_3/8 + 15/8.
\end{align}
</math>
If we choose (0, 0, 0, 0) as the initial approximation, then the first approximate solution is given by
:<math>
\begin{align}
x_1 & = 3/5 = 0.6, \\
x_2 & = (3/5)/11 + 25/11 = 3/55 + 25/11 = 2.3272, \\
x_3 & = -(3/5)/5 + (2.3272)/10 - 11/10 = -3/25 + 0.23272 - 1.1 = -0.9873, \\
x_4 & = -3(2.3272)/8 + (-0.9873)/8 + 15/8 = 0.8789.
\end{align}
</math>
Using the approximations obtained, the iterative procedure is repeated until the desired accuracy has been reached. The following are the approximate solutions after four iterations.
{| class="wikitable" border="1"
|-
! <math>x_1</math>
! <math>x_2</math>
! <math>x_3</math>
! <math>x_4</math>
|-
| <math>0.6</math>
| <math>2.32727</math>
| <math>-0.987273</math>
| <math>0.878864</math>
|-
| <math>1.03018</math>
| <math>2.03694</math>
| <math>-1.01446</math>
| <math>0.984341</math>
|-
| <math>1.00659</math>
| <math>2.00356</math>
| <math>-1.00253</math>
| <math>0.998351</math>
|-
| <math>1.00086</math>
| <math>2.0003</math>
| <math>-1.00031</math>
| <math>0.99985</math>
|}
The exact solution of the system is (1, 2, −1, 1).
===An example using Python===
The following procedure simply runs six sweeps of the rearranged equations above to produce the solution vector.
<source lang="python">
# Each row of mat holds the constant term of one rearranged equation,
# followed by the coefficients of x1..x4 (a zero entry marks the
# variable being solved for in that row).
mat = [[3/5.0,    0.0,    1/10.0, -1/5.0,  0.0    ],
       [25/11.0,  1/11.0, 0.0,     1/11.0, -3/11.0],
       [-11/10.0, -1/5.0, 1/10.0,  0.0,     1/10.0],
       [15/8.0,   0.0,    -3/8.0,  1/8.0,   0.0   ]]

x = [0.0, 0.0, 0.0, 0.0]  # initial guess
for i in range(6):
    x[0] = mat[0][0] + mat[0][2]*x[1] + mat[0][3]*x[2] + mat[0][4]*x[3]
    x[1] = mat[1][0] + mat[1][1]*x[0] + mat[1][3]*x[2] + mat[1][4]*x[3]
    x[2] = mat[2][0] + mat[2][1]*x[0] + mat[2][2]*x[1] + mat[2][4]*x[3]
    x[3] = mat[3][0] + mat[3][1]*x[0] + mat[3][2]*x[1] + mat[3][3]*x[2]
    print('%f %f %f %f' % (x[0], x[1], x[2], x[3]))  # display each iteration
</source>
This produces the output:
<source lang="text">
0.600000 2.327273 -0.987273 0.878864
1.030182 2.036938 -1.014456 0.984341
1.006585 2.003555 -1.002527 0.998351
1.000861 2.000298 -1.000307 0.999850
1.000091 2.000021 -1.000031 0.999988
1.000008 2.000001 -1.000003 0.999999
</source>
===Program to solve an arbitrary number of equations using MATLAB===
<source lang="matlab">
disp('Give the input to solve the set of equations AX=B')
a=input('Input the square matrix A : \n');
b=input('Input the column matrix B : \n');
m=length(a);
%z is a two-dimensional array in which each row holds the values of X in a
%specific iteration and each column holds the values of a specific
%element of X across iterations
c=0; %accumulator, reset after each row
e=1; %'e' represents the maximum error
d=0; %accumulator, reset after each row
for u=1:m
    x(u)=b(u,1)/a(u,u);
    z(1,u)=0; %initializing the values for matrix X(x1;x2;...xm)
end
l=2; %'l' represents the iteration no.
%loop for finding the convergence factor (C.F.)
for r=1:m
    for s=1:m
        if r~=s
            p(r)=abs(a(r,s)/a(r,r))+d; %p(r) is the C.F. for equation no. r
            d=p(r);
        end
    end
    d=0;
end
if min(p)>=1 %at least one equation must satisfy the condition p<1
    fprintf('Roots will not converge for this set of equations')
else
    while(e>=1e-4)
        j1=1; %for the first element we use only the old values of X
        for i1=2:m
            q(j1)=(a(j1,i1)/a(j1,j1))*z(l-1,i1)+c;
            c=q(j1);
        end
        c=0;
        z(l,j1)=x(j1)-q(j1); %elements of z in iteration no. l
        x(j1)=z(l,j1);
        for u=1:m
            x(u)=b(u,1)/a(u,u);
            z(1,u)=0;
        end
        for j1=2:m-1 %for intermediate elements we mix updated and old values of X
            for i1=1:j1-1
                q(j1)=(a(j1,i1)/a(j1,j1))*z(l,i1)+c;
                c=q(j1);
            end
            for i1=j1+1:m
                q(j1)=(a(j1,i1)/a(j1,j1))*z(l-1,i1)+c;
                c=q(j1);
            end
            c=0;
            z(l,j1)=x(j1)-q(j1);
            x(j1)=z(l,j1);
            for u=1:m
                x(u)=b(u,1)/a(u,u);
                z(1,u)=0;
            end
        end
        j1=m; %for the last element we use only the updated values of X
        for i1=1:m-1
            q(j1)=(a(j1,i1)/a(j1,j1))*z(l,i1)+c;
            c=q(j1);
        end
        c=0;
        z(l,j1)=x(j1)-q(j1);
        for v=1:m
            t=abs(z(l,v)-z(l-1,v)); %calculates the error
        end
        e=max(t); %maximum error over the elements of X
        l=l+1; %iteration no. gets updated
        for i=1:m
            X(1,i)=z(l-1,i); %the final solution X
        end
    end
    %loop to show the iteration number along with the values of z
    for i=1:l-1
        for j=1:m
            w(i,j+1)=z(i,j);
        end
        w(i,1)=i;
    end
    disp('  It. no.    x1        x2        x3        x4 ')
    disp(w)
    disp('The final solution is ')
    disp(X)
    fprintf('The total number of iterations is %d',l-1)
end
</source>
Program output is
<source lang="text">
Give the input to solve the set of equations AX=B
Input the square matrix A :
[10 -2 -1 -1;-2 10 -1 -1;-1 -1 10 -2;-1 -1 -2 10]
Input the column matrix B :
[3;15;27;-9]
  It. no.    x1        x2        x3        x4
   1.0000         0         0         0         0
   2.0000    0.3000    1.5600    2.8860   -0.1368
   3.0000    0.8869    1.9523    2.9566   -0.0248
   4.0000    0.9836    1.9899    2.9924   -0.0042
   5.0000    0.9968    1.9982    2.9987   -0.0008
   6.0000    0.9994    1.9997    2.9998   -0.0001
   7.0000    0.9999    1.9999    3.0000   -0.0000
   8.0000    1.0000    2.0000    3.0000   -0.0000
The final solution is
    1.0000    2.0000    3.0000   -0.0000
The total number of iterations is 8
</source>
==See also==
*[[Jacobi method]]
*[[Successive over-relaxation]]
*[[Iterative_method#Linear_systems|Iterative method. Linear systems]]
*[[Belief_propagation#Gaussian_belief_propagation_.28GaBP.29|Gaussian belief propagation]]
*[[Matrix splitting]]

==Notes==
{{reflist}}

==References==
* {{citation | first = Carl Friedrich | last = Gauss | authorlink = Carl Friedrich Gauss | title = Werke | publisher = Köninglichen Gesellschaft der Wissenschaften | location = Göttingen | date = 1903 | volume = 9 | language = German}}.
* {{citation | first1=Gene H. | last1=Golub | author1-link=Gene H. Golub | first2=Charles F. | last2=Van Loan | author2-link=Charles F. Van Loan | year=1996 | title=Matrix Computations | edition=3rd | publisher=Johns Hopkins | place=Baltimore | isbn=978-0-8018-5414-9}}.
*{{MathWorld|urlname=Gauss-SeidelMethod|title=Gauss-Seidel Method|author=Black, Noel and Moore, Shirley}}
{{CFDWiki|name=Gauss-Seidel_method}}

==External links==
*{{springer|title=Seidel method|id=p/s083810}}
*[http://www.math-linux.com/spip.php?article48 Gauss–Seidel from www.math-linux.com]
*[http://math.fullerton.edu/mathews/n2003/GaussSeidelMod.html Module for Gauss–Seidel Iteration]
*[http://numericalmethods.eng.usf.edu/topics/gauss_seidel.html Gauss–Seidel] From Holistic Numerical Methods Institute
*[http://www.webcitation.org/query?url=http://www.geocities.com/rsrirang2001/Mathematics/NumericalMethods/gsiedel/gsiedel.htm&date=2009-10-26+01:52:27 Gauss–Seidel Iteration from www.geocities.com]
*[http://www.netlib.org/linalg/html_templates/node14.html#figgs The Gauss–Seidel Method]
*[http://arxiv.org/abs/0901.4192 Bickson]
*[http://matlabdb.mathematik.uni-stuttgart.de/gauss_seidel.m?MP_ID=406 Matlab code]
*[http://adrianboeing.blogspot.com/2010/02/solving-linear-systems.html C code example]

{{Numerical linear algebra}}

{{DEFAULTSORT:Gauss-Seidel Method}}
[[Category:Numerical linear algebra]]
[[Category:Articles with example pseudocode]]
[[Category:Relaxation (iterative methods)]]
Revision as of 20:18, 22 January 2014
In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonals, convergence is only guaranteed if the matrix is either diagonally dominant, or symmetric and positive definite. It was only mentioned in a private letter from Gauss to his student Gerling in 1823.[1] A publication was not delivered before 1874 by Seidel.
Description
The Gauss–Seidel method is an iterative technique for solving a square system of n linear equations with unknown x:
It is defined by the iteration
where the matrix A is decomposed into a lower triangular component , and a strictly upper triangular component U: .[2]
In more detail, write out A, x and b in their components:
Then the decomposition of A into its lower triangular component and its strictly upper triangular component is given by:
The system of linear equations may be rewritten as:
The Gauss–Seidel method now solves the left hand side of this expression for x, using previous value for x on the right hand side. Analytically, this may be written as:
However, by taking advantage of the triangular form of , the elements of x(k+1) can be computed sequentially using forward substitution:
The procedure is generally continued until the changes made by an iteration are below some tolerance, such as a sufficiently small residual.
Discussion
The element-wise formula for the Gauss–Seidel method is extremely similar to that of the Jacobi method.
The computation of xi(k+1) uses only the elements of x(k+1) that have already been computed, and only the elements of x(k) that have not yet to be advanced to iteration k+1. This means that, unlike the Jacobi method, only one storage vector is required as elements can be overwritten as they are computed, which can be advantageous for very large problems.
However, unlike the Jacobi method, the computations for each element cannot be done in parallel. Furthermore, the values at each iteration are dependent on the order of the original equations.
Gauss-Seidel is the same as SOR (successive over-relaxation) with .
Convergence
The convergence properties of the Gauss–Seidel method are dependent on the matrix A. Namely, the procedure is known to converge if either:
- A is symmetric positive-definite,[4] or
- A is strictly or irreducibly diagonally dominant.
The Gauss–Seidel method sometimes converges even if these conditions are not satisfied.
Algorithm
Since elements can be overwritten as they are computed in this algorithm, only one storage vector is needed, and vector indexing is omitted. The algorithm goes as follows:
Inputs: Template:Var, Template:Var Output: Choose an initial guess to the solution repeat until convergence for Template:Var from 1 until Template:Var do for Template:Var from 1 until Template:Var do if Template:Var ≠ Template:Var then end if end (Template:Var-loop) end (Template:Var-loop) check if convergence is reached end (repeat)
Examples
An example for the matrix version
A linear system shown as is given by:
We want to use the equation
in the form
where:
We must decompose into the sum of a lower triangular component and a strict upper triangular component :
Now we can find:
Now we have and and we can use them to obtain the vectors iteratively.
First of all, we have to choose : we can only guess. The better the guess, the quicker the algorithm will perform.
We suppose:
We can then calculate:
As expected, the algorithm converges to the exact solution:
In fact, the matrix A is strictly diagonally dominant (but not positive definite).
Another example for the matrix version
Another linear system shown as is given by:
We want to use the equation
in the form
where:
We must decompose into the sum of a lower triangular component and a strict upper triangular component :
Now we can find:
Now we have and and we can use them to obtain the vectors iteratively.
First of all, we have to choose : we can only guess. The better the guess, the quicker will perform the algorithm.
We suppose:
We can then calculate:
If we test for convergence we'll find that the algorithm diverges. In fact, the matrix A is neither diagonally dominant nor positive definite. Then, convergence to the exact solution
is not guaranteed and, in this case, will not occur.
An example for the equation version
Suppose given k equations where xn are vectors of these equations and starting point x0. From the first equation solve for x1 in terms of For the next equations substitute the previous values of xs.
To make it clear let's consider an example.
Suppose we choose (0, 0, 0, 0) as the initial approximation, then the first approximate solution is given by
Using the approximations obtained, the iterative procedure is repeated until the desired accuracy has been reached. The following are the approximated solutions after four iterations.
The exact solution of the system is (1, 2, −1, 1).
An example using Python
The following numerical procedure simply iterates through to produce the solution vector.
#initialize the matrix
mat = [ [3/5.0, 0.0, 1/10.0, -1/5.0, 0.0 ], \
[25/11.0, 1/11.0, 0.0, 1/11.0, -3/11.0], \
[-11/10.0, -1/5.0, 1/10.0, 0, 1/10.0 ], \
[15/8.0, 0.0, -3/8.0, 1/8.0, 0.0 ] ]
x = [0,0,0,0] #initial guess
for i in xrange(6):
x[0] = mat[0][0] + mat[0][1]*0 + mat[0][2]*x[1] + mat[0][3]*x[2] + mat[0][4]*x[3]
x[1] = mat[1][0] + mat[1][1]*x[0] + mat[1][2]*0 + mat[1][3]*x[2] + mat[1][4]*x[3]
x[2] = mat[2][0] + mat[2][1]*x[0] + mat[2][2]*x[1] + mat[2][3]*0 + mat[2][4]*x[3]
x[3] = mat[3][0] + mat[3][1]*x[0] + mat[3][2]*x[1] + mat[3][3]*x[2] + mat[3][4]*0
print '%f %f %f %f' %(x[0],x[1],x[2],x[3]) #display the iterations to the user
Produces the output,
0.600000 2.327273 -0.987273 0.878864
1.030182 2.036938 -1.014456 0.984341
1.006585 2.003555 -1.002527 0.998351
1.000861 2.000298 -1.000307 0.999850
1.000091 2.000021 -1.000031 0.999988
1.000008 2.000001 -1.000003 0.999999
Program to solve arbitrary no. of equations using Matlab
disp('Give the input to solve the set of equations AX=B')
% Gauss–Seidel iteration for the linear system A*x = b.
disp('Give the input to solve the set of equations AX=B')
a = input('Input the square matrix A : \n');
b = input('Input the column matrix B : \n');
m = length(a);
tol = 1e-4;  % iterate until the largest change in any component is below tol

% Convergence factor for each equation: the sum of |a(r,s)/a(r,r)| over s ~= r.
% If no equation has a factor below 1 (i.e. no row of A is strictly diagonally
% dominant), convergence is not guaranteed and the program gives up.
p = zeros(1, m);
for r = 1:m
    for s = [1:r-1, r+1:m]
        p(r) = p(r) + abs(a(r,s) / a(r,r));
    end
end

if min(p) >= 1
    fprintf('Roots will not converge for this set of equations\n')
else
    x = zeros(m, 1);  % current estimate of the solution
    z = x';           % z(l,:) holds the estimate after iteration l-1
    l = 2;            % iteration number
    e = 1;            % maximum change during the last sweep
    while e >= tol
        for i = 1:m
            % Gauss–Seidel update: the already-updated components x(1:i-1)
            % are used together with the old components x(i+1:m).
            others = [1:i-1, i+1:m];
            x(i) = (b(i) - a(i, others) * x(others)) / a(i,i);
        end
        z(l,:) = x';
        % Error of the sweep: the largest change in ANY component
        % (not just the last one).
        e = max(abs(z(l,:) - z(l-1,:)));
        l = l + 1;
    end
    X = z(end,:);  % the final solution

    % Table of the iteration number alongside the values of z.
    % (Column headers assume four unknowns, as in the example below.)
    w = [(1:l-1)', z];
    disp(' It. no.    x1        x2        x3        x4   ')
    disp(w)
    disp('The final solution is ')
    disp(X)
    fprintf('The total number of iterations is %d\n', l-1)
end
Program output:
Give the input to solve the set of equations AX=B
Input the square matrix A :
[10 -2 -1 -1;-2 10 -1 -1;-1 -1 10 -2;-1 -1 -2 10]
Input the column matrix B :
[3;15;27;-9]
It. no. x1 x2 x3 x4
1.0000 0 0 0 0
2.0000 0.3000 1.5600 2.8860 -0.1368
3.0000 0.8869 1.9523 2.9566 -0.0248
4.0000 0.9836 1.9899 2.9924 -0.0042
5.0000 0.9968 1.9982 2.9987 -0.0008
6.0000 0.9994 1.9997 2.9998 -0.0001
7.0000 0.9999 1.9999 3.0000 -0.0000
8.0000 1.0000 2.0000 3.0000 -0.0000
The final solution is
1.0000 2.0000 3.0000 -0.0000
The total number of iterations is 8
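The result above can be cross-checked against the matrix form of the iteration given in the definition, <math>L_* \mathbf{x}^{(k+1)} = \mathbf{b} - U \mathbf{x}^{(k)}</math>. The following NumPy sketch (an illustration, not part of the original program) applies that splitting to the same example system:

```python
import numpy as np

# Example system from the program output above.
A = np.array([[10., -2., -1., -1.],
              [-2., 10., -1., -1.],
              [-1., -1., 10., -2.],
              [-1., -1., -2., 10.]])
b = np.array([3., 15., 27., -9.])

# Split A into its lower-triangular part L_* (diagonal included) and its
# strictly upper-triangular part U, then iterate
#   x_{k+1} = L_*^{-1} (b - U x_k)
# until the largest change in any component falls below the tolerance.
L = np.tril(A)
U = np.triu(A, k=1)

x = np.zeros(4)
for k in range(100):
    x_new = np.linalg.solve(L, b - U @ x)
    done = np.max(np.abs(x_new - x)) < 1e-4
    x = x_new
    if done:
        break

print(x)  # close to the exact solution [1, 2, 3, 0]
```

Solving the triangular system with `L` at each step is algebraically the same as the component-by-component sweep performed by the MATLAB program, so both converge to the same solution.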
See also
- Jacobi method
- Successive over-relaxation
- Iterative method § Linear systems
- Gaussian belief propagation
- Matrix splitting
Notes
- Gauss 1903, p. 279; direct link.
- Golub & Van Loan 1996, p. 511.
References
- Gauss, Carl Friedrich (1903). Werke (in German). Vol. 9. Göttingen: Königliche Gesellschaft der Wissenschaften.
- Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: Johns Hopkins University Press. ISBN 0-8018-5414-8.
External links
- Gauss–Seidel from www.math-linux.com
- Module for Gauss–Seidel Iteration
- Gauss–Seidel From Holistic Numerical Methods Institute
- Gauss–Seidel Iteration from www.geocities.com
- The Gauss-Seidel Method
- Bickson
- Matlab code
- C code example