
EEE 305 Lecture 9: Iterative Methods

Gauss-Seidel method


The Gauss-Seidel method is the most commonly used iterative method for solving linear algebraic
equations.
Assume that we are given a set of n equations:
[𝐴]{𝑥} = {𝑏}
Suppose that for conciseness we limit ourselves to a 3 × 3 set of equations. If the diagonal elements
are all nonzero, the first equation can be solved for x1, the second for x2, and the third for x3 to yield
x1 = (b1 − a12 x2 − a13 x3) / a11
x2 = (b2 − a21 x1 − a23 x3) / a22
x3 = (b3 − a31 x1 − a32 x2) / a33
The jth iteration values are found from the (j−1)th iteration values using:
x1^j = (b1 − a12 x2^(j−1) − a13 x3^(j−1)) / a11
x2^j = (b2 − a21 x1^j − a23 x3^(j−1)) / a22
x3^j = (b3 − a31 x1^j − a32 x2^j) / a33
Jacobi iteration
The Jacobi iteration is similar to the Gauss-Seidel method, except that only the (j−1)th values are used to update all variables in the jth iteration:
x1^j = (b1 − a12 x2^(j−1) − a13 x3^(j−1)) / a11
x2^j = (b2 − a21 x1^(j−1) − a23 x3^(j−1)) / a22
x3^j = (b3 − a31 x1^(j−1) − a32 x2^(j−1)) / a33

Figure 1 Graphical depiction of the difference between (a) the Gauss-Seidel and (b) the Jacobi iterative methods for
solving simultaneous linear algebraic equations.
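The distinction depicted in Figure 1 is easy to express in code. Below is a short illustrative sketch (Python is used here for compactness, although the codes in these notes are MATLAB; the 3 × 3 system is an arbitrary example, not from the lecture):

```python
import numpy as np

def jacobi_sweep(A, b, x):
    """One Jacobi sweep: every update uses only the previous iterate."""
    x_new = np.empty_like(x)
    for i in range(len(b)):
        # Sum of the off-diagonal terms, all taken from the old iterate.
        s = A[i] @ x - A[i, i] * x[i]
        x_new[i] = (b[i] - s) / A[i, i]
    return x_new

def gauss_seidel_sweep(A, b, x):
    """One Gauss-Seidel sweep: each new value is used immediately."""
    x = x.copy()
    for i in range(len(b)):
        # x already holds new values for indices < i and old values beyond.
        s = A[i] @ x - A[i, i] * x[i]
        x[i] = (b[i] - s) / A[i, i]
    return x

# Example: a small diagonally dominant system whose solution is [1, 2, 1].
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])
b = np.array([2.0, 6.0, 2.0])
x_j = x_gs = np.zeros(3)
for _ in range(50):
    x_j = jacobi_sweep(A, b, x_j)
    x_gs = gauss_seidel_sweep(A, b, x_gs)
```

Both sweeps converge here because the matrix is diagonally dominant; Gauss-Seidel typically needs fewer sweeps because it reuses fresh values within a sweep.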

Prepared BY
Shahadat Hussain Parvez

Convergence of iterative method


The convergence of an iterative method can be checked by computing the relative percent change of each element in {x}. For example, for the ith element in the jth iteration,
εa,i = |(xi^j − xi^(j−1)) / xi^j| × 100%
for all i, where j and j − 1 are the present and previous iterations.
The iteration is stopped when every element has converged to within a set tolerance, i.e. εa,i < εs for all i.

Example 11.3 from Chapra
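Since the worked example itself is not reproduced in these notes, here is a quick numerical cross-check in Python, assuming the system usually used in Chapra's Example 11.3 (3x1 − 0.1x2 − 0.2x3 = 7.85, 0.1x1 + 7x2 − 0.3x3 = −19.3, 0.3x1 − 0.2x2 + 10x3 = 71.4, with true solution x1 = 3, x2 = −2.5, x3 = 7):

```python
import numpy as np

# Coefficients assumed from Chapra's Example 11.3.
A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])

x = np.zeros(3)
for sweep in range(8):          # a handful of Gauss-Seidel sweeps
    for i in range(3):
        # Values already stored in x are reused immediately.
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]

print(x)  # approaches [3, -2.5, 7]
```

Because this system is strongly diagonally dominant, a few sweeps already agree with the true solution to many digits.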



Convergence Criterion for the Gauss-Seidel Method


The Gauss-Seidel method is similar in spirit to the technique of simple fixed-point iteration. Recall that
simple fixed-point iteration had two fundamental problems:
(1) it was sometimes non convergent and
(2) when it converged, it often did so very slowly.
The Gauss-Seidel method can also exhibit these shortcomings.
Convergence criteria can be developed by recalling that sufficient conditions for convergence of two
nonlinear equations, u(x, y) and v(x, y), are

|∂u/∂x| + |∂v/∂x| < 1  and  |∂u/∂y| + |∂v/∂y| < 1    (i)
These criteria also apply to linear equations of the sort we are solving with the Gauss-Seidel method.
For example, in the case of two simultaneous equations, the Gauss-Seidel algorithm can be
expressed as

u(x1, x2) = b1/a11 − (a12/a11) x2  and  v(x1, x2) = b2/a22 − (a21/a22) x1    (ii)
The partial derivatives can be evaluated with respect to each of the unknowns as
∂u/∂x1 = 0,  ∂u/∂x2 = −a12/a11  and  ∂v/∂x1 = −a21/a22,  ∂v/∂x2 = 0    (iii)
which can be substituted into equation (i) to give
|a21/a22| < 1  and  |a12/a11| < 1    (iv)
In other words, the absolute values of the slopes of equation (ii) must be less than unity to ensure convergence. This is displayed graphically in Fig. 2. Equation (iv) can also be reformulated as
|a11| > |a12|  and  |a22| > |a21|
That is, the absolute value of the diagonal element must be greater than that of the off-diagonal elements in each row. The extension of the above to n equations is straightforward and can be expressed as
|aii| > Σ |aij|, where the sum runs over j = 1, …, n with j ≠ i.

This condition is known as diagonal dominance. It is a sufficient, but not necessary, condition for convergence of the Gauss-Seidel method.
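A small helper makes the check concrete (an illustrative Python function, not part of the lecture notes):

```python
import numpy as np

def is_diagonally_dominant(A):
    """True when |a_ii| exceeds the sum of |a_ij| over j != i in every row."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off_diag = np.abs(A).sum(axis=1) - diag   # row sums minus the diagonal
    return bool(np.all(diag > off_diag))
```

For the system of Figure 2, the ordering [[11, 13], [11, -9]] fails this test while the swapped ordering [[11, -9], [11, 13]] passes it, matching the convergence behaviour shown in the figure.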

Figure 2 Illustration of (a) convergence and (b) divergence of the Gauss-Seidel method. Notice that the same functions are plotted in both cases (u: 11x1 + 13x2 = 286; v: 11x1 − 9x2 = 99). Thus, the order in which the equations are implemented (as depicted by the direction of the first arrow from the origin) dictates whether the computation converges.


Relaxation
To enhance convergence, an iterative program can introduce relaxation where the value at a
particular iteration is made up of a combination of the old value and the newly calculated value:
xi^new = λ xi^new + (1 − λ) xi^old
where λ is a weighting factor that is assigned a value between 0 and 2.
• 0 < λ < 1: underrelaxation
• λ = 1: no relaxation
• 1 < λ ≤ 2: overrelaxation
Pseudo code for Gauss-Seidel with relaxation (Chapra 306)
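The referenced pseudocode is not reproduced here, so the following is a minimal Python sketch of Gauss-Seidel with relaxation (the function name and defaults are illustrative choices, not Chapra's; A and b are expected as NumPy arrays):

```python
import numpy as np

def gauss_seidel_relaxed(A, b, lam=1.0, es=1e-5, maxit=50):
    """Gauss-Seidel where each update is blended with the old value:
    x_new = lam*x_new + (1 - lam)*x_old; lam = 1 means no relaxation."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(maxit):
        ea = 0.0
        for i in range(n):
            old = x[i]
            new = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
            x[i] = lam * new + (1.0 - lam) * old
            if x[i] != 0.0:
                # Track the largest relative percent change this sweep.
                ea = max(ea, abs((x[i] - old) / x[i]) * 100.0)
        if ea <= es:
            break
    return x

# Sample diagonally dominant system (solution [3, -2.5, 7]).
A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
x = gauss_seidel_relaxed(A, b)
```

With 0 < lam < 1 the updates are damped, which can tame an oscillating iteration; with 1 < lam ≤ 2 they are extrapolated to accelerate an already convergent one.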



Non-linear system
Nonlinear systems can also be solved using the same strategy as the Gauss-Seidel method: solve each equation for one of the unknowns and update each unknown using information from the previous iteration.
This is called successive substitution.
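A brief Python illustration of successive substitution, using a classic sample pair from Chapra (x1² + x1x2 = 10 and x2 + 3x1x2² = 57, which has a root at x1 = 2, x2 = 3). The particular rearrangement of each equation is an assumption; other rearrangements of the same system can diverge:

```python
import math

# Each equation is rearranged to isolate one unknown, then the unknowns
# are updated in turn, reusing the newest available values.
x1, x2 = 1.5, 3.5                             # initial guesses
for _ in range(40):
    x1 = math.sqrt(10.0 - x1 * x2)            # from x1^2 + x1*x2 = 10
    x2 = math.sqrt((57.0 - x2) / (3.0 * x1))  # from x2 + 3*x1*x2^2 = 57

print(x1, x2)  # approaches 2.0, 3.0
```

Convergence of successive substitution depends strongly on how each equation is solved for its unknown, just as with simple fixed-point iteration for a single equation.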
Newton-Raphson
Nonlinear systems may also be solved using the Newton-Raphson method for multiple variables.
For a two-variable system, the Taylor series approximation and resulting Newton-Raphson equations
are:
f 2,i f
f1,i  f 2,i 1,i
f1,i f x2 x2
f1,i 1  f1,i  x1,i 1  x1,i   x2,i 1  x2,i  1,i x1,i 1  x1,i 
x1 x2 f1,i f 2,i f1,i f 2,i

x1 x2 x2 x1
f f
f 2,i 1,i  f1,i 2,i
f 2,i f x1 x1
f 2,i 1  f 2,i  x1,i 1  x1,i   x2,i 1  x2,i  2,i x2,i 1  x2,i 
x1 x2 f1,i f 2,i f1,i f 2,i

x1 x2 x2 x1

The Newton-Raphson (NR) method is commonly used in power systems for load-flow analysis.
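As a concrete check of the two-variable formulas above, here is a small Python sketch applied to a sample pair (f1 = x1² + x1x2 − 10, f2 = x2 + 3x1x2² − 57, with a root at x1 = 2, x2 = 3; the system is an illustrative choice, not from the lecture):

```python
# Two-variable Newton-Raphson with the Jacobian written out explicitly.
x1, x2 = 1.5, 3.5                 # initial guesses
for _ in range(20):
    f1 = x1**2 + x1 * x2 - 10.0
    f2 = x2 + 3.0 * x1 * x2**2 - 57.0
    # Partial derivatives of f1 and f2 at the current iterate.
    df1_dx1, df1_dx2 = 2.0 * x1 + x2, x1
    df2_dx1, df2_dx2 = 3.0 * x2**2, 1.0 + 6.0 * x1 * x2
    det = df1_dx1 * df2_dx2 - df1_dx2 * df2_dx1   # Jacobian determinant
    # Simultaneous update: both formulas use iteration-i values.
    x1, x2 = (x1 - (f1 * df2_dx2 - f2 * df1_dx2) / det,
              x2 - (f2 * df1_dx1 - f1 * df2_dx1) / det)

print(x1, x2)  # converges to 2.0, 3.0
```

For larger systems the explicit 2 × 2 determinant is replaced by a linear solve of the Jacobian system, as the MATLAB newtmult function later in these notes does.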

Matrix analysis with MATLAB


The table below shows some of the functions necessary for matrix operations in MATLAB.



MATLAB implementation of the Gauss-Seidel method


function x = GaussSeidel(A,b,es,maxit)
% GaussSeidel: Gauss-Seidel method without relaxation
%   x = GaussSeidel(A,b,es,maxit)
% input:
%   A = coefficient matrix
%   b = right hand side vector
%   es = stop criterion (default = 0.00001%)
%   maxit = max iterations (default = 50)
% output:
%   x = solution vector
if nargin < 2, error('at least 2 input arguments required'), end
if nargin < 4 || isempty(maxit), maxit = 50; end
if nargin < 3 || isempty(es), es = 0.00001; end
[m,n] = size(A);
if m ~= n, error('Matrix A must be square'); end
% Normalize each row by its diagonal element: C holds the scaled
% off-diagonal coefficients and d the scaled right-hand side.
C = A;
d = zeros(n,1);
x = zeros(n,1);
for i = 1:n
  C(i,i) = 0;
  C(i,1:n) = C(i,1:n)/A(i,i);
  d(i) = b(i)/A(i,i);
end
ea = 100*ones(n,1);   % so max(ea) is defined on the first pass
iter = 0;
while (1)
  xold = x;
  for i = 1:n
    x(i) = d(i) - C(i,:)*x;   % newest values in x are used at once
    if x(i) ~= 0
      ea(i) = abs((x(i) - xold(i))/x(i)) * 100;
    end
  end
  iter = iter + 1;
  if max(ea) <= es || iter >= maxit, break, end
end


MATLAB implementation of the Newton-Raphson method for nonlinear systems


function [x,f,ea,iter] = newtmult(func,x0,es,maxit,varargin)
% newtmult: Newton-Raphson method for nonlinear systems
%   [x,f,ea,iter] = newtmult(func,x0,es,maxit,p1,p2,...):
%   uses the Newton-Raphson method to find the roots of
%   a system of nonlinear equations
% input:
%   func = handle to a function that returns the Jacobian J and f
%   x0 = initial guess
%   es = desired percent relative error (default = 0.0001%)
%   maxit = maximum allowable iterations (default = 50)
%   p1,p2,... = additional parameters used by func
% output:
%   x = vector of roots
%   f = vector of functions evaluated at roots
%   ea = approximate percent relative error (%)
%   iter = number of iterations
if nargin < 2, error('at least 2 input arguments required'), end
if nargin < 3 || isempty(es), es = 0.0001; end
if nargin < 4 || isempty(maxit), maxit = 50; end
iter = 0;
x = x0;
while (1)
  [J,f] = func(x,varargin{:});
  dx = J\f;                 % solve J*dx = f for the Newton step
  x = x - dx;
  iter = iter + 1;
  ea = 100*max(abs(dx./x));
  if iter >= maxit || ea <= es, break, end
end

1. Chapra examples
   a. 11.3
   b. 11.5
2. Chapra Chapter 11 exercises 11.8–11.13

