
Introduction to Galerkin and Variational Methods

We have seen that the idea of using the weak form of a differential equation, together with the idea
of looking for the solution of the weak form in a finite dimensional subspace, leads to an effective
means for generating approximate solutions to our model boundary value problem. This method of
generating numerical approximations is called the Galerkin method.
Before looking at further examples of the Galerkin method, I will list a few comments
intended to enhance your understanding of the method and how it relates to the concept of best
approximation in a subspace.
1. Energy inner product and norm. It is easy to see that the bilinear form a(u, v), u, v ∈ V,
satisfies all the requirements for an inner product on the linear space V. In particular, it is
clear that a(u, u) = ∫_0^ℓ (AE(u')² + cu²) dx ≥ 0, and if a(u, u) = 0 it must be true that u ≡ 0.
This is obvious when c > 0; if c = 0, then u' = 0 so u = b = constant. Then since u ∈ V entails
u(0) = 0, it follows that b = 0. Thus, the pair (V, a(·, ·)) forms an inner product space (we
call a the energy inner product), and we can define the energy norm ‖u‖_E = √(a(u, u)) so that
(V, ‖·‖_E) is a normed space.
2. Best approximation. We now see that the problem of finding u ∈ V such that a(u, v) =
(f, v) ∀v ∈ V takes place in a normed space, and we can now show that the Galerkin method
leads to the best approximation u_n to the solution from the finite dimensional subspace V_n.
That is, ‖u - u_n‖_E = min over w_n ∈ V_n of ‖u - w_n‖_E. This follows directly from the observations: a(u_n, v_n) =
(f, v_n) ∀v_n ∈ V_n ⊂ V, and a(u, v_n) = (f, v_n) for u the actual solution. Subtracting, we find that
a(u - u_n, v_n) = 0 ∀v_n ∈ V_n, so that u - u_n is perpendicular (in the energy inner product) to V_n. This property, as we've
previously seen, implies that u_n is the best approximation from V_n.
3. Need for sparse stiffness matrices. An objection to the use of polynomials as the basis
elements {φ_i} of our approximating subspace is that the resulting stiffness matrix, [a(φ_i, φ_j)],
may have many nonzero entries, and this fact can pose a heavy computational burden. For
example, with an approximating subspace of, say, dimension 100, a fully populated stiffness matrix
would have 10,000 elements to be computed and would, in addition, be difficult to invert. The
ideal basis functions should be easy to work with and lead to sparse stiffness matrices (i.e.,
matrices with many 0 elements).
4. Finite element basis functions. The finite element approach to choosing basis elements for
the Galerkin method is (in 1D) to divide the interval over which the equation is to be satisfied
into a finite number of parts, and in each of these parts to use a polynomial approximation to
the solution. E.g., choose points x_0 = 0 < x_1 < · · · < x_n < x_{n+1} = ℓ in the interval [0, ℓ], and
take V_n as the set of continuous functions that are linear in each subinterval [x_i, x_{i+1}], i =
0, . . . , n, and vanish at 0 and ℓ (i.e., continuous, piecewise linear functions). V_n is a subspace of
C([0, ℓ]) of dimension n, corresponding to the values of v ∈ V_n at the points x_i, i = 1, . . . , n. This
choice of basis functions leads to sparse stiffness matrices, as we will discover shortly (a small illustrative MATLAB fragment follows this list).
5. A technical difficulty. There is a problem that has to be mentioned with the finite element basis
functions just described. Consider the continuous piecewise linear functions with, say, n = 2
for simplicity. A typical v ∈ V_2 looks like

[sketch: a continuous, piecewise linear function with nodes at x_0, x_1, x_2, x_3]

It is clear that the function v is continuous but has only one derivative, and this derivative is
only piecewise continuous (actually piecewise constant). Thus, V_2 is not a subspace of V, whose
elements must have two continuous derivatives. The way around this dilemma is to extend the
notion of a solution, i.e., enlarge the space V so that it contains subspaces of finite element
piecewise polynomials. We won't worry about how to do this, and will verify by examples that
we haven't made the approximating subspaces too large. Thus, at least within the context of
this course, the difficulties just discovered may be forgotten, and we may assume whenever
necessary that the space V contains the finite element subspaces to be introduced.
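To make item 4 concrete, here is a small MATLAB fragment (illustrative only, not part of the original notes; the interpolated function sin(pi*x) and the value n = 8 are arbitrary choices) that builds a continuous, piecewise linear interpolant on a uniform grid of the kind just described:

n  = 8;                        % number of subintervals
x  = linspace(0, 1, n+1);      % uniform grid of nodes on [0, 1]
g  = @(x) sin(pi*x);           % an arbitrary smooth function vanishing at 0 and 1
vi = g(x);                     % the nodal values completely determine v
xx = linspace(0, 1, 400);      % fine grid for plotting
vv = interp1(x, vi, xx);       % continuous, piecewise linear interpolant
plot(xx, vv, 'k-', x, vi, 'ko', xx, g(xx), 'k:');

Note that the piecewise linear function is determined entirely by its nodal values; refining the grid simply adds more nodes.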
To present the basic ideas of the finite element method and at the same time generalize our sample
problem, we'll consider a boundary value problem (BVP) for an ordinary differential equation of
the type we've been studying, but with some extra features. Let p ≠ 0, p ∈ R, be given and set
V_p = {w ∈ C²([0, 1]) : w(0) = p}, i.e. functions having two continuous derivatives and taking a
specified value at the left end point. We formulate our BVP as follows:

Problem P1: find u ∈ V_p satisfying L(u) ≡ -u'' + u = f(x), 0 < x < 1, u'(1) = q,

where q ∈ R and f ∈ C([0, 1]) are given.


The new features of this BVP are: 1.) A nonzero boundary condition on the unknown function
has been added at the left end point. 2.) The right hand boundary condition has been changed to
require that the first derivative take a given value. (In the elastic rod case, this would correspond
to a prescribed value of stress at x = 1; recall σ = Eu'.) Note that the left end condition has
been incorporated into the definition of V_p (conditions imposed on the unknown function in this way are called essential
conditions), whereas the right hand condition is given explicitly.
Our approach to numerical approximation requires that we start with the weak form. How do we get
it for this new problem? Recall that we showed the equivalence of the weak form and the BVP simply
by using integration by parts. We can do the same with our new problem. Define the linear subspace
of C²([0, 1]) by V_0 = {w ∈ C²([0, 1]) : w(0) = 0} (note only the value at the left hand end point is
required to be 0). For u ∈ V_p satisfying P1 and any v ∈ V_0 we have

0 = ∫_0^1 v(Lu - f) dx = ∫_0^1 (-u''v + uv - fv) dx,
and integrating the first term by parts,

∫_0^1 (-u''v) dx = ∫_0^1 (-(u'v)' + u'v') dx = -u'v|_0^1 + ∫_0^1 u'v' dx = -qv(1) + ∫_0^1 u'v' dx,

where we've used the facts that v(0) = 0 and u'(1) = q. Substituting, we have

-qv(1) + ∫_0^1 (u'v' + uv - fv) dx = -qv(1) + a(u, v) - (f, v) = 0,

where as before

a(u, v) = ∫_0^1 (u'v' + uv) dx,   (f, v) = ∫_0^1 fv dx,   u ∈ V_p, v ∈ V_0.

In summary, u ∈ V_p satisfying P1 implies

a(u, v) = (f, v) + qv(1)

for all v ∈ V_0.

But now we can reverse the process just completed and find that P1 is equivalent to
Problem P2: find u ∈ V_p such that a(u, v) = (f, v) + qv(1) for all v ∈ V_0.
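(A brief sketch of the reverse direction, filling in a step the notes leave implicit: if u ∈ V_p satisfies P2, then integrating the first term of a(u, v) by parts in the opposite direction gives, for every v ∈ V_0,

0 = a(u, v) - (f, v) - qv(1) = ∫_0^1 (-u'' + u - f)v dx + (u'(1) - q)v(1).

Choosing first v ∈ V_0 with v(1) = 0 and otherwise arbitrary forces -u'' + u = f on (0, 1); then choosing any v with v(1) ≠ 0 forces u'(1) = q, so u solves P1.)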
Note that the only difference between V_p and V_0 is that functions in V_p are required to take the value
p ≠ 0 at x = 0, whereas those in V_0 vanish there. V_0 is a linear space (since v, w ∈ V_0 implies v + w ∈ V_0 and
αv ∈ V_0), but V_p is not: e.g. if v(0) = p then αv ∉ V_p unless α = 1. In fact, if v̄ ∈ V_p is any fixed
element of V_p, then all other elements v are of the form v = v̄ + w, w ∈ V_0 (since the difference of
any two elements of V_p belongs to V_0). Thus, V_p is just a translated version of the linear space V_0;
the dimension of V_p is defined as the dimension of V_0 (V_p is called an affine subspace of V rather
than a linear subspace).
Ignoring the fact that P1, and hence P2, can be solved exactly for any f, we will try to find an approximate
numerical solution. Using a small generalization of the previous Galerkin approach, we seek the
approximate solution u in a finite dimensional subset V_{p,n} ⊂ V_p, and we require that P2 be
satisfied for all v ∈ V_{0,n}, where V_{p,n} and V_{0,n} are n-dimensional subspaces of V (any u ∈ V_{p,n} is
of the form ū + v with ū a fixed element of V_{p,n} and v ∈ V_{0,n}). (u will usually be called the trial
function and any v will be called a test function.)
Of course, we could work with polynomials again, but now we introduce the finite element ideas.
We define a grid of points in the interval [0, 1], and call these points x_i, i = 1, . . . , n + 1, where
0 = x_1 < · · · < x_{n+1} = 1, and x_{i+1} - x_i ≡ h_i, i = 1, . . . , n, is the grid spacing. Usually these points
will be evenly spaced, h_i = 1/n, but at this point we won't enforce this restriction. We take V_{p,n}
to be the space of continuous, piecewise linear functions u on [0, 1] such that u(0) = p, u is linear
in each subinterval [x_i, x_{i+1}], with u(x_i) = u_i, i = 1, . . . , n + 1. We take V_{0,n} to be the same space but
with the value 0 specified at x = 0. Any function in either space is determined entirely by its values
at the n points x_i, i = 2, . . . , n + 1, and this implies that the spaces are n-dimensional. In fact we'll
now define a useful system of basis functions with n + 1 elements.
Consider three grid points x_{e-1}, x_e, x_{e+1}, and let v ∈ V_{0,n} (or V_{p,n}) have the values v_{e-1}, v_e, v_{e+1}
at these points as shown in the sketch below.

The trial or test functions are linear on each interval, or element, I^e = [x_e, x_{e+1}], of the grid.
Thus, on element e - 1 we have v = v_{e-1}(x_e - x)/h_{e-1} + v_e(x - x_{e-1})/h_{e-1}, and on element e,
v = v_e(x_{e+1} - x)/h_e + v_{e+1}(x - x_e)/h_e. That is, on each element I^e, a test or trial function v can
be expressed as a linear combination of two element basis functions H_1^e(x) = (x_{e+1} - x)/h_e and
H_2^e(x) = (x - x_e)/h_e, i.e., we can write v|_{I^e} = Σ_{j=1}^{2} v_{e+j-1} H_j^e, where the notation v|_{I^e} indicates the
restriction of v to element e. Combining the expressions for v on the separate elements, we arrive at
a formula which holds on all of [0, 1]:

v(x) = Σ_{i=1}^{n+1} φ_i(x) v_i
with

φ_1(x) = { H_1^1(x),  x_1 = 0 ≤ x < x_2,
         { 0,         x_2 ≤ x ≤ 1,

and

φ_{n+1}(x) = { 0,         0 ≤ x ≤ x_n,
             { H_2^n(x),  x_n < x ≤ 1 = x_{n+1},

and, for j = 2, . . . , n,

φ_j(x) = { 0,              0 ≤ x < x_{j-1},
         { H_2^{j-1}(x),   x_{j-1} ≤ x < x_j,
         { H_1^j(x),       x_j ≤ x < x_{j+1},
         { 0,              x_{j+1} ≤ x ≤ 1,

where v_1 = 0 for a test function and v_1 = p for the trial function. The φ_i's, sketched below, are
often called tent functions for obvious reasons.
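The tent functions are easy to generate numerically. The following MATLAB function is a minimal sketch (my own illustration, not part of the notes; the name tentphi is arbitrary) that evaluates φ_j on a grid xgrid at the points xx:

function phi = tentphi(xx, xgrid, j)
% evaluate the tent function phi_j of the grid xgrid at the points xx:
% phi_j is 1 at xgrid(j), falls linearly to 0 at the neighboring nodes,
% and vanishes outside [xgrid(j-1), xgrid(j+1)]
phi = zeros(size(xx));
if j > 1                                   % rising part, H_2^{j-1}
    h = xgrid(j) - xgrid(j-1);
    k = (xx >= xgrid(j-1)) & (xx <= xgrid(j));
    phi(k) = (xx(k) - xgrid(j-1))/h;
end
if j < length(xgrid)                       % falling part, H_1^j
    h = xgrid(j+1) - xgrid(j);
    k = (xx > xgrid(j)) & (xx <= xgrid(j+1));
    phi(k) = (xgrid(j+1) - xx(k))/h;
end

For example, with xg = linspace(0,1,6) and xx = linspace(0,1,200), plot(xx, tentphi(xx, xg, 3)) draws the tent centered at x = 0.4.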

Now substitute the trial function into the weak form and use the fact that a is linear in its second
argument, i.e., a(w, αu_1 + βu_2) = αa(w, u_1) + βa(w, u_2):

a(w, u) = a(w, Σ_{j=1}^{n+1} u_j φ_j) = Σ_{j=1}^{n+1} a(w, φ_j) u_j = F(w)   for all test functions w,

where F(w) = (f, w) + qw(1). Now any test function w is a linear combination of tent functions
(excluding φ_1, since w_1 = 0). In addition, a is linear in its first argument w and F is also linear in
w. This means that requiring a(w, u) = F(w) for every test function is equivalent to the requirement
that a(φ_i, u) = F(φ_i), i = 2, . . . , n + 1. Thus, we arrive at the system of n equations for the n
unknown quantities u_2, . . . , u_{n+1}:

Σ_{j=1}^{n+1} a(φ_i, φ_j) u_j = F(φ_i),   i = 2, . . . , n + 1.
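An alternative to carrying u_1 = p as an extra equation (the approach taken in the code below) is to eliminate it, moving the known contribution a(φ_i, φ_1) p to the right hand side. A minimal MATLAB sketch, assuming the full (n+1)×(n+1) matrix and load vector have already been assembled in arrays kk and ff (variable names chosen here for illustration):

% eliminate the known nodal value u_1 = p from the full system
ffred = ff(2:end) - kk(2:end,1)*p;   % move a(phi_i, phi_1)*p to the right hand side
kkred = kk(2:end, 2:end);            % n x n reduced stiffness matrix
ured  = kkred \ ffred;               % nodal values u_2, ..., u_{n+1}
uu    = [p; ured];                   % full nodal vector, including u_1 = p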

It is often convenient to ignore the fact that u_1 = p and allow i to range over 1, . . . , n + 1
in the above system. Prior to obtaining an algebraic solution, of course, the constraint on u_1 must
be imposed. Let us compute the coefficient matrix K = [a(φ_i, φ_j)] and the right hand side vector
F = [F(φ_1), . . . , F(φ_{n+1})]^T. As noted previously, K is often called the stiffness matrix and F the
load vector because of the structural origins of the FEM.
For the load vector we have (because only φ_{n+1} is nonzero at x = 1)

F_e = F(φ_e) = ∫_0^1 φ_e f(x) dx + δ_{e,n+1} q,

where

δ_{i,j} = { 1,  i = j,
          { 0,  otherwise,

is the Kronecker delta. If e = 1, then

F_1 = ∫_0^1 φ_1(x) f(x) dx = ∫_{I^1} H_1^1(x) f(x) dx.

Given the functional form of f, we can evaluate these integrals either analytically or numerically.
For our first example, we take f to be a constant. In this case,

F_e = { f h_1/2,               e = 1,
      { f (h_{e-1} + h_e)/2,   e = 2, . . . , n,
      { f h_n/2 + q,           e = n + 1.
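For a non-constant f, the entries F_e = ∫_0^1 φ_e f dx can be evaluated by numerical quadrature on the one or two elements where φ_e is nonzero. A minimal MATLAB sketch (illustrative only; it reuses the tentphi helper sketched earlier, and the data n = 10, q = 2, f(x) = 1 + x² are arbitrary choices):

n = 10;  q = 2;  ffun = @(x) 1 + x.^2;   % illustrative data
xg = linspace(0, 1, n+1);                % uniform grid x_1, ..., x_{n+1}
F  = zeros(n+1, 1);
for e = 1:n+1
    a = xg(max(e-1, 1));                 % left end of the support of phi_e
    b = xg(min(e+1, n+1));               % right end of the support of phi_e
    F(e) = integral(@(x) tentphi(x, xg, e).*ffun(x), a, b);
end
F(n+1) = F(n+1) + q;                     % natural boundary term q*phi_{n+1}(1) = q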
Next we turn to the stiffness matrix K. The element basis functions H_j^e(x) have constant derivatives

(H_1^e)' = -1/h_e,   (H_2^e)' = 1/h_e.

As a result the tent functions φ_e have piecewise constant derivatives

φ_1'(x) = { -1/h_1,  0 ≤ x < x_2,
          { 0,       x_2 ≤ x ≤ 1,

φ_{n+1}'(x) = { 0,      0 ≤ x ≤ x_n,
              { 1/h_n,  x_n < x ≤ 1,

and, for j = 2, . . . , n,

φ_j'(x) = { 0,          x ≤ x_{j-1},
          { 1/h_{j-1},  x_{j-1} < x < x_j,
          { -1/h_j,     x_j < x < x_{j+1},
          { 0,          x ≥ x_{j+1}.
The computation below allows the slightly more general equation -u'' + cu = f with constant c ≥ 0 (P1 corresponds to c = 1); the weak form is unchanged except that the term uv becomes cuv. Thus, if c = 0 and the grid spacing is constant, we have

K|_{c=0} = (1/h) ×
    [  1  -1   0   0  ...  0   0 ]
    [ -1   2  -1   0  ...  0   0 ]
    [  0  -1   2  -1  ...  0   0 ]
    [          .    .    .       ]
    [  0   0  ...  0  -1   2  -1 ]
    [  0   0  ...  0   0  -1   1 ]
If c ≠ 0, we must add the elements k_{c,ij} = c ∫_0^1 φ_i φ_j dx. We find for the diagonal elements

k_{c,ee} = c ( ∫_{I^{e-1}} H_2^{e-1}(x)² dx + ∫_{I^e} H_1^e(x)² dx ) = (c/3)(h_{e-1} + h_e),   e = 2, . . . , n,

and k_{c,11} = c h_1/3, k_{c,n+1,n+1} = c h_n/3. Of the off diagonal elements, only those on the first subdiagonal
and superdiagonal are nonzero, and we find

k_{c,e-1,e} = c ∫_{I^{e-1}} H_1^{e-1}(x) H_2^{e-1}(x) dx = c h_{e-1}/6,

and k_{c,e,e+1} = c h_e/6.

We now have all the elements necessary for obtaining the solution of the differential equation.
function fem_ex00(n, c, f, p, q)
% finite element solution of -u''+c*u=f, u(0)=p, u'(1)=q,
% using n intervals in [0,1]. Known value of u at x=0 is
% included as an additional equation so we start with n+1 unknowns.
x=linspace(0,1,n+1)'; h=1/n; % define grid points (column vector) and spacing
% construct the load vector (f is a constant here)
ff=zeros(n+1,1);
ff(2:n)=h*f; ff(n+1)=h*f/2+q;
kk=kstif(n,c);
% enforce boundary condition u(0)=u1=p at x=0
kk(1,:)=0; kk(1,1)=1; ff(1)=p;
uu=kk\ff; % solve for the nodal values
uex=uexac(x,c,f,p,q);
fprintf(1,'%10s%10s%10s%10s\n','x','uapprox','uexact','error');
disp([x, uu, uex, abs(uex-uu)]);
plot(x,uu,'k-', x,uex,'ko');
%===================================
function kmat=kstif(n,c)
% stiffness matrix for c=0:
% kk(1,1)=kk(n+1,n+1)=1/h, all other diagonal elements kk(i,i)=2/h,
% first sub and super diagonal elements kk=-1/h
% contribution for c~=0:
% kk(1,1)=kk(n+1,n+1)=c*h/3, all other diagonal elements kk(i,i)=2*c*h/3,
% first sub and super diagonal elements kk=c*h/6
% all other kk=0
h=1/n;
kmat=(2*c*h/3+2/h)*diag(ones(1,n+1),0)+(c*h/6-1/h)*(diag(ones(1,n),-1)+diag(ones(1,n),1));
kmat(1,1)=(c*h/3+1/h); kmat(n+1,n+1)=(c*h/3+1/h);
%===================================
function u=uexac(x,c,f,p,q)
% exact solution to -u''+c*u=f (constant f), u(0)=p, u'(1)=q (requires c>0)
rtc=sqrt(c);
c1=q/(rtc*cosh(rtc)); c2=(p-f/c)/cosh(rtc);
u=f/c+c1*sinh(rtc*x)+c2*cosh(rtc*(x-1));
The results of a sample computation with f = 1, c = 4, p = 1, q = 2 are given below:

[Figure: the computed solution (solid line) and the exact solution (circles) plotted on [0, 1].]

A more quantitative comparison is given in the output data:


>> fem_ex00(10,4,1,1,2)
         x   uapprox    uexact     error
         0    1.0000    1.0000    0.0000
    0.1000    0.9226    0.9230    0.0004
    0.2000    0.8723    0.8730    0.0007
    0.3000    0.8470    0.8480    0.0010
    0.4000    0.8458    0.8470    0.0012
    0.5000    0.8685    0.8700    0.0014
    0.6000    0.9162    0.9178    0.0016
    0.7000    0.9907    0.9925    0.0018
    0.8000    1.0951    1.0969    0.0019
    0.9000    1.2334    1.2354    0.0020
    1.0000    1.4114    1.4134    0.0020

The computation above is not arranged in the usual FEM fashion. We have not taken advantage
of the fact that all computations can be performed locally on elements. We will in our future
work arrange things so that the element stiffness matrix and load vectors are computed and then
assembled into the global stiffness matrix and load vector. For the problem just completed, this
element approach would not achieve any gains in simplicity, but in more complex problems the
assembly process becomes necessary for computational efficiency.
