Iterative methods: Jacobi and Gauss-Seidel methods

Generalised analysis

\begin{equation}
\begin{split}
a_{11}x_{1}+a_{12}x_{2}+\cdots+ a_{1n}x_{n} &= b_{1}\\
a_{21}x_{1}+a_{22}x_{2}+\cdots+ a_{2n}x_{n} &= b_{2}\\
&\;\;\vdots\\
a_{n-1,1}x_{1}+a_{n-1,2}x_{2}+\cdots+ a_{n-1,n}x_{n} &= b_{n-1}\\
a_{n,1}x_{1}+a_{n,2}x_{2}+\cdots+ a_{n,n}x_{n} &= b_{n}
\end{split}
\end{equation}

Solution

Step 1:

x_{1}^{k+1} = \frac{b_{1} - a_{12}x_{2}^{k}-\cdots- a_{1n}x_{n}^{k}}{a_{11}}

Step 2: (Jacobi)

x_{2}^{k+1} = \frac{b_{2} - a_{21}x_{1}^{k}-a_{23}x_{3}^{k}-\cdots- a_{2n}x_{n}^{k}}{a_{22}}

Step 2: (Gauss-Seidel)

x_{2}^{k+1} = \frac{b_{2} - a_{21}x_{1}^{k+1} - a_{23}x_{3}^{k}-\cdots- a_{2n}x_{n}^{k}}{a_{22}}
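In code, the two updates differ only in whether values computed earlier in the same sweep are reused immediately. A minimal sketch (the function names and NumPy usage are my own, not from the lecture):

```python
import numpy as np

def jacobi_step(A, b, x_old):
    """One Jacobi sweep: every update uses only values from the previous iteration."""
    n = len(b)
    x_new = np.empty(n)
    for i in range(n):
        s = sum(A[i, j] * x_old[j] for j in range(n) if j != i)
        x_new[i] = (b[i] - s) / A[i, i]
    return x_new

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: each update immediately uses the latest values."""
    x = x.copy()
    for i in range(len(b)):
        # x[j] for j < i has already been updated in this sweep.
        s = sum(A[i, j] * x[j] for j in range(len(b)) if j != i)
        x[i] = (b[i] - s) / A[i, i]
    return x
```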

In a general way,

[A][X] =[b] \rightarrow ([L]+[D]+[U])[X]=[b] \text{ (not LU factorisation, but the lower-triangular, diagonal and upper-triangular parts of A)}

Jacobi

\begin{split}
(L+D+U)X = b \\
DX^{k+1} + (L+U)X^{k} = b\\
 X^{k+1}  = -D^{-1}(L+U)X^{k}+D^{-1}b\\\\
X^{k+1}  = MX^{k}+C\\
\textcolor{#808080}{\text{where,} \quad M = -D^{-1}(L+U), C =D^{-1}b}
\end{split}
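This matrix form transcribes directly into NumPy. A minimal sketch (forming M explicitly is fine for illustration, though practical codes update element-wise as in Step 1 above):

```python
import numpy as np

def jacobi(A, b, x0, iters=50):
    """Jacobi iteration in the form X(k+1) = M X(k) + C,
    with M = -D^{-1}(L+U) and C = D^{-1} b."""
    d = np.diag(A)                 # diagonal of A as a vector
    LU = A - np.diag(d)            # L + U: the off-diagonal part of A
    M = -LU / d[:, None]           # -D^{-1}(L+U): row i divided by a_ii
    C = b / d                      # D^{-1} b
    x = x0.copy()
    for _ in range(iters):
        x = M @ x + C
    return x
```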

Gauss-Seidel:

\begin{split}
(L+D+U)X = b \\
(L+D)X^{k+1} + UX^{k} = b\\
 X^{k+1}  = -(L+D)^{-1}UX^{k}+(L+D)^{-1}b\\\\
X^{k+1}  = MX^{k}+C\\
\textcolor{#808080}{\text{where,} \quad M = -(L+D)^{-1}U, C =(L+D)^{-1}b}
\end{split}
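The Gauss-Seidel form can be sketched the same way; rather than forming (L+D)^{-1} explicitly, it is cheaper to solve the triangular system each iteration (again a sketch, with names of my choosing):

```python
import numpy as np

def gauss_seidel(A, b, x0, iters=50):
    """Gauss-Seidel in the form X(k+1) = -(L+D)^{-1} U X(k) + (L+D)^{-1} b."""
    LD = np.tril(A)                # L + D: lower triangle including the diagonal
    U = np.triu(A, k=1)            # strictly upper triangle
    x = x0.copy()
    for _ in range(iters):
        # Equivalent to x = -(L+D)^{-1} U x + (L+D)^{-1} b,
        # but solves (L+D) x_new = b - U x instead of inverting.
        x = np.linalg.solve(LD, b - U @ x)
    return x
```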

Convergence and spectral radius

Expression for error
\begin{equation}
X^{k+1}  = MX^{k}+C\\
\end{equation}
\begin{equation}
X^{*}  = MX^{*}+C\\
\end{equation}

X^{*} represents the exact solution, which satisfies (3) as a fixed point. Subtracting (3) from (2) gives the error e^{k+1} = X^{k+1} - X^{*}:

\begin{split}
e^{k+1} &= Me^{k}\\
e^{k} &= M^{k}e^{0}
\end{split}

For convergence, the error must shrink with the iterations:

\frac{||e^{k}||}{||e^{0}||} < 1
Relating the error to the eigenvalues

If \lambda_{i} are the eigenvalues of M and v_{i} the corresponding eigenvectors, express e^{0} in terms of the v_{i}, with the \lambda_{i} ordered by descending magnitude:

e^{0} = \sum a_{i}v_{i}\\
Me^{0} = \sum a_{i}Mv_{i}\\
\text{but we know that } Mv_{i} = \lambda_{i} v_{i}\\
Me^{0} = \sum a_{i}\lambda_{i}v_{i}\\
\text{similarly,}\\
M^{k}e^{0} = \sum a_{i}\lambda_{i}^{k}v_{i}
Spectral radius and sufficient condition for convergence

If we consider the ratio of M^{k}e^{0} to e^{0}, only the first term of the summation matters for large k, since \lambda_{1}^{k} dominates all the other terms.

Hence \max|\lambda_{i}| must be less than one. \max_{i}|\lambda_{i}| is called the spectral radius \rho. It gives a sufficient condition for convergence: if the spectral radius is less than one, convergence is guaranteed. However, meeting this criterion is not a must; convergence could still be attained in some cases (for particular initial errors) even when the spectral radius is greater than one.
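A quick numerical check of this behaviour (the matrix below is an arbitrary choice of mine with \rho < 1): the per-iteration error contraction ratio settles to the spectral radius as the dominant eigenvector takes over.

```python
import numpy as np

M = np.array([[0.0, 0.4, -0.2],    # an arbitrary iteration matrix with rho < 1
              [0.1, 0.0,  0.5],
              [0.2, 0.3,  0.0]])
rho = max(abs(np.linalg.eigvals(M)))

e = np.ones(3)                     # some initial error e^0
for k in range(25):
    e_next = M @ e
    print(k, np.linalg.norm(e_next) / np.linalg.norm(e))  # tends to rho
    e = e_next
print("spectral radius:", rho)
```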

Rate of convergence

If m decimal accuracy is needed,

\begin{split}
\frac{||e^{k}||}{||e^{0}||} &< 10^{-m}\\
\rho^{k} &< 10^{-m}\\
k\log_{10}(\rho) &< -m\\
k\log_{10}\left(\frac{1}{\rho}\right) &> m\\
k &> \frac{m}{\log_{10}\left(\frac{1}{\rho}\right)}
\end{split}

Rate of convergence is defined as follows:

\begin{equation}
R = \log_{10}\left(\frac{1}{\rho}\right)\\
\end{equation}
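Equation (4) makes the iteration estimate a one-liner (a hypothetical helper of mine, not from the lecture):

```python
import math

def iterations_needed(rho, m):
    """Smallest k with k > m / log10(1/rho), i.e. m-decimal error reduction."""
    return math.ceil(m / math.log10(1.0 / rho))

print(iterations_needed(0.8, 4))   # -> 42, cf. the worked example below
```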
How to estimate spectral radius

Estimating the spectral radius by solving for the eigenvalues is impractical, as it is very expensive. A cheaper bound follows from any induced matrix norm:

\begin{split}
Mv_{i} &= \lambda_{i} v_{i}\\
||Mv_{i}|| &\le ||M||\,||v_{i}||\\
|\lambda_{i}|\,||v_{i}|| &\le ||M||\,||v_{i}||\\
|\lambda_{i}| &\le ||M||
\end{split}

Hence \rho \le ||M|| for any induced norm. In particular, \rho is bounded by both the first norm (maximum absolute column sum) and the infinity norm (maximum absolute row sum), and the smaller of the two gives the tighter bound. So,

\begin{equation}
\rho \le \min\big(\max(\text{column absolute sum}),\ \max(\text{row absolute sum})\big)
\end{equation}

Hence, it is enough that either the first norm or the infinity norm of M be less than one to ensure convergence.

For the Jacobi method, M = -D^{-1}(L+U), and its infinity norm can be written as:

\begin{equation}
||M||_{\infty} = \max_{i}\frac{\sum_{j\ne i} |a_{ij}|}{|a_{ii}|} = \max\frac{\sum |a_{nb}|}{|a_{p}|} \le 1
\end{equation}

This calls for diagonal dominance.
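The norm bound is cheap to evaluate in code. A sketch (`jacobi_rho_bound` is my own name): row scaling by the diagonal gives M itself, while column scaling gives -(L+U)D^{-1}, a matrix similar to M and hence with the same eigenvalues, so both norms bound \rho.

```python
import numpy as np

def jacobi_rho_bound(A):
    """Bound the spectral radius of M = -D^{-1}(L+U) without eigenvalues."""
    d = np.diag(A)
    off = A - np.diag(d)                                    # L + U
    inf_norm = np.abs(off / d[:, None]).sum(axis=1).max()   # max absolute row sum
    one_norm = np.abs(off / d[None, :]).sum(axis=0).max()   # max absolute column sum
    return min(inf_norm, one_norm)
```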

Scarborough criterion (diagonal dominance)

The Scarborough criterion is laid out as:

\begin{equation}
\begin{split}
\frac{\sum |a_{nb}|}{|a_{p}|} &\le 1 \text{ for all equations}\\
&< 1 \text{ for at least one equation}
\end{split}
\end{equation}

The Scarborough criterion is associated with the Gauss-Seidel method. If it is not satisfied, the Gauss-Seidel method is not guaranteed to converge. If it is satisfied, the system will converge for at least one iterative method.

An example of why the strict inequality (< 1) is required is 1D heat conduction with a heat-flux condition on both sides. Here \frac{\sum |a_{nb}|}{|a_{p}|}=1 for every control volume, but the system has infinitely many solutions. If the temperature is specified on at least one side, the ratio becomes less than one at that control volume and the system has a unique solution. A direct check of the criterion is sketched below.
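A minimal sketch of such a check (function name mine):

```python
import numpy as np

def satisfies_scarborough(A):
    """sum|a_nb| / |a_p| <= 1 for every row, and < 1 for at least one row."""
    d = np.abs(np.diag(A))
    ratios = (np.abs(A).sum(axis=1) - d) / d   # off-diagonal magnitude per row
    return bool(np.all(ratios <= 1.0) and np.any(ratios < 1.0))
```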

A numerical example

Consider the system of equations:

\begin{split}
2x_{1}+ 3x_{2}+10x_{3} = 10\\
5x_{1} - 2x_{2}+2x_{3} = 5\\
x_{1}+ 10x_{2}+5x_{3} = 6\\
\end{split}

This system, after re-ordering to make it diagonally dominant, becomes:

\begin{split}
5x_{1} - 2x_{2}+2x_{3} = 5\\
x_{1}+ 10x_{2}+5x_{3} = 6\\
2x_{1}+ 3x_{2}+10x_{3} = 10\\
\end{split}

Dividing each row by its diagonal element, the absolute row sums of the off-diagonal entries are 0.8, 0.6 and 0.5:

\begin{bmatrix}
   1 & -\frac{2}{5}& \frac{2}{5}\\
   \frac{1}{10}&1 & \frac{5}{10}\\
   \frac{2}{10}&\frac{3}{10} & 1
\end{bmatrix}

The infinity norm is 0.8.

Dividing instead each column by its diagonal element gives -(L+U)D^{-1}, which is similar to M and hence has the same eigenvalues. The absolute column sums of its off-diagonal entries are 0.6, 0.5 and 0.7:

\begin{bmatrix}
   1 & -\frac{2}{10}& \frac{2}{10}\\
   \frac{1}{5}&1 & \frac{5}{10} \\
   \frac{2}{5}&\frac{3}{10} & 1
\end{bmatrix}

The first norm is 0.7 and the infinity norm was 0.8; hence the spectral radius is definitely less than 0.8 (in fact, at most 0.7).

Taking the conservative value \rho = 0.8, the rate of convergence R as per (4) is \log_{10}\frac{1}{0.8} \approx 0.0969, so k > \frac{4}{0.0969} \approx 41.3; about 42 iterations are needed for accuracy up to 4 decimal places.
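Running Jacobi on this system confirms the estimate is conservative: since 0.8 is only an upper bound on \rho, convergence to 4 decimals arrives in fewer than 42 sweeps (the tolerance and iteration cap below are my choices):

```python
import numpy as np

A = np.array([[5.0, -2.0,  2.0],   # the re-ordered, diagonally dominant system
              [1.0, 10.0,  5.0],
              [2.0,  3.0, 10.0]])
b = np.array([5.0, 6.0, 10.0])

x = np.zeros(3)
for k in range(60):
    x_new = (b - (A @ x - np.diag(A) * x)) / np.diag(A)   # one Jacobi sweep
    if np.max(np.abs(x_new - x)) < 1e-4:                  # 4-decimal target
        print(f"converged in {k + 1} sweeps:", x_new)
        break
    x = x_new

print("direct solution:", np.linalg.solve(A, b))          # for comparison
```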

Source

  1. Prof. Suman Chakraborty’s lectures on CFD: Lecture 25 – Iterative Methods for Numerical Solution of Systems of Linear Algebraic Equations (26:00 onward) and Lecture 26
