
Lecture 4: Introduction to numerical methods for ODEs

We have learnt to solve ODEs with Mathematica, and for that purpose we don't need to know much about what Mathematica actually does to solve ODEs. However, Mathematica is not the final answer to all problems. The solution of very complex problems, such as three-dimensional fluid dynamics on very large computational grids, may require a C or Fortran code to achieve the necessary memory and computational efficiency. We will therefore venture a little into the world of numerical methods for the solution of ODEs, and still use Mathematica, as a learning tool, to program those methods.

Solving an ODE of order N is equivalent to performing N integrations. As the integral of a function corresponds to the surface area below the function, each integration involves a computation of that area. The accuracy of a numerical method for solving ODEs is therefore the accuracy with which we compute that surface area. For initial value problems, for example with time as the free variable, the basic idea of most numerical methods is to discretize time into a number of small intervals and advance the solution step by step. In other words, we are computing that surface area below the function by subdividing it into pieces. Each step of the computation then needs to evaluate the area of one of those pieces. The accuracy of the numerical solution increases as the time-step size decreases.
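The subdivision idea can be seen already in ordinary quadrature. As a minimal Python sketch (an illustration only; left_riemann is a hypothetical helper, not part of the lecture's code), a left Riemann sum approximates the area under a curve, and its error shrinks as the step shrinks:

```python
# The "area under the curve" idea behind ODE steppers: subdivide the
# interval into n pieces and sum them. Left Riemann sum for f(t) = t^2
# on [0, 1], whose exact area is 1/3.
def left_riemann(f, a, b, n):
    dt = (b - a) / n
    return sum(f(a + i * dt) for i in range(n)) * dt

for n in (10, 100, 1000):
    approx = left_riemann(lambda t: t * t, 0.0, 1.0, n)
    print(n, abs(approx - 1.0 / 3.0))  # the error shrinks with the step
```

The same trade-off (smaller pieces, better accuracy, more work) carries over directly to timestepping.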
The discretization of the problem is equivalent to defining a map that corresponds to the original function. Maps are known to present chaotic solutions at lower phase-space dimensionality than continuous flows: two-dimensional maps can be chaotic, while differential equations must explore at least a three-dimensional phase space to exhibit chaotic behavior. That should immediately alert you to a problem for our numerical solutions based on a discretization of the problem: you may start with a system that is not chaotic, and design a numerical scheme corresponding to a map that is instead chaotic for certain values of the parameters or initial conditions. This is far from a simple academic discussion. The stability of numerical methods for solving ODEs is a major concern, especially when advancing in time the solution of partial differential equations, where the other variables (e.g., the space dimensions) have also been discretized.
We will discuss here the Euler and the predictor-corrector schemes. The final problem of this week's homework is an implementation of a centered-difference scheme.

The Euler Method

Let's not be too ambitious for now and consider the simplest first-order ODE:

dv/dt = f(t, v),    v(0) = v0.
As we said above, the basic idea is to discretize time into small intervals of size Δt:

t_n = n Δt,    n = 0, 1, 2, 3, ...
Discretizing the problem means that, while the integral of the original function could be evaluated at any real value of time, we now instead evaluate the integral of the function only at a finite number of time steps:

∫_{t_0}^{t_1} f(t, v(t)) dt → ∫_{t_1}^{t_2} f(t, v(t)) dt → ... → ∫_{t_{n-1}}^{t_n} f(t, v(t)) dt

In this way we map v_1 → v_2, v_2 → v_3, ..., v_{n-1} → v_n. Designing the numerical scheme corresponds to defining such a map.

The integrals are not trivial to compute, because f depends on the unknown v, which we have not yet solved for at the times covered by the next integral. The way in which we approximate f over the next time interval defines our numerical method.

In Euler's method we simply take f = constant in the whole interval!

Each step of the integration of the simple first order ODE is

∫_{t_{n-1}}^{t_n} (dv/dt) dt = v(t_n) - v(t_{n-1}) = ∫_{t_{n-1}}^{t_n} f(t, v(t)) dt.

If we impose that f = constant in the whole time interval, we can simply choose its initial value, which can be evaluated because the initial value of v is known from the previous step or from the initial condition:

∫_{t_{n-1}}^{t_n} f(t, v(t)) dt ≈ Δt f(t_{n-1}, v(t_{n-1})) + O(Δt^2).

The error in this approximation scales as Δt^2 (same notation as in power series expansions). You can easily verify the order of approximation of that integral with a Taylor expansion.

With this approximation, the solution of each time step becomes:

v(t_n) = v(t_{n-1}) + Δt f(t_{n-1}, v(t_{n-1})) + O(Δt^2)

which is Euler's method. Now you can really see what the discrete map looks like: it is a recursion relation.
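Before turning to Mathematica, it may help to see the recursion as a plain loop. Here is a minimal Python sketch (an illustration, not the lecture's code), applied to the test problem dv/dt = t - v, v(0) = 0 used in the examples below, whose exact solution is v(t) = e^(-t) + t - 1:

```python
import math

# Euler's method as a plain loop:
# v_n = v_(n-1) + dt * f(t_(n-1), v_(n-1)).
def euler(f, v0, t_end, dt):
    t, v = 0.0, v0
    for _ in range(round(t_end / dt)):
        v += dt * f(t, v)
        t += dt
    return v

# Test problem: dv/dt = t - v, v(0) = 0, exact v(t) = exp(-t) + t - 1.
approx = euler(lambda t, v: t - v, 0.0, 4.0, 0.2)
exact = math.exp(-4.0) + 4.0 - 1.0
print(approx, exact)  # with dt = 0.2 this reproduces v(4) ≈ 3.01153
```

The value at t = 4 matches the last entry of the Mathematica table computed later with the same Δt = 0.2.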

Now you will suddenly appreciate the advantage of testing a numerical scheme like this one with Mathematica, because Mathematica has a trivial way of defining recursion relations: you basically write them down as they are!

t[n_] := n Δt;
v[n_] := v[n - 1] + Δt f[t[n - 1], v[n - 1]];
v[0] := v0

IMPORTANT: The delayed evaluation (:=) in the recursion relation is necessary.


COMPUTER EXAMPLE 1: Solve a simple ODE, play with different time steps, and plot the solution. Note the importance of the /; condition and its parentheses, and the importance of the extra = (the inner v[n] = ... assignment), which stores the value of each recursion step so that Mathematica does not recompute the whole recursion every time.

t[n_] := n Δt;
v[n_] := (v[n] = v[n - 1] + Δt f[t[n - 1], v[n - 1]]) /; n > 0 && n ∈ Integers;
v[0] := v0

Δt = 0.2;
f[t_, v_] = t - v;
v0 = 0;

solution = Table[{t[n], v[n]}, {n, 0, 4/Δt}]

{{0, 0}, {0.2, 0}, {0.4, 0.04}, {0.6, 0.112}, {0.8, 0.2096},
 {1., 0.32768}, {1.2, 0.462144}, {1.4, 0.609715}, {1.6, 0.767772},
 {1.8, 0.934218}, {2., 1.10737}, {2.2, 1.2859}, {2.4, 1.46872},
 {2.6, 1.65498}, {2.8, 1.84398}, {3., 2.03518}, {3.2, 2.22815},
 {3.4, 2.42252}, {3.6, 2.61801}, {3.8, 2.81441}, {4., 3.01153}}

a = ListPlot[solution, PlotStyle -> PointSize[0.015], DisplayFunction -> Identity];
b = Plot[E^-t + t - 1, {t, 0, 4}, DisplayFunction -> Identity];

Show[a, b, DisplayFunction -> $DisplayFunction];

[Plot: the Euler solution points overlaid on the exact solution E^-t + t - 1 for 0 ≤ t ≤ 4.]

vEuler = Interpolation[solution]

InterpolatingFunction[{{0., 4.}}, <>]


4 Lecture4.nb

vExact[t_] = E^-t + t - 1;

p1 = Plot[vEuler[t] - vExact[t], {t, 0, 4}];

[Plot: the error vEuler[t] - vExact[t], dipping to about -0.04 over 0 ≤ t ≤ 4.]

At this point, before venturing any further in Mathematica programming, we need to learn an important tool of Mathematica: the Module.
Mathematica modules are like functions in C or subroutines in Fortran: they are pieces of programs with specific functions that we can call, when needed, from another program. Modules are good ways to define new functions. The main advantage of using a Module is that the values defined as internal variables of the Module are local to the Module; their definitions are not remembered outside of the Module. If the variables inside the module have the same name as variables outside the module, there is no conflict, so there is no need to clear the variables used in the module before calling the module. This is crucial if you are planning to define lots of new functions that will form a large library of computational tools, because you don't want to worry about new variables defined by each function every time you call it. You really want the function you call to behave like a black box, giving out the correct result without interfering in any other way with the calling program, or the program interfering with the function. This is precisely what modules are made for.
This is the syntax:

Module[{internal variables}, statements] creates a module in Mathematica
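The scoping idea is not specific to Mathematica. As a rough Python analogy (an analogy only, not a statement about Module's exact semantics), names bound inside a function are local to it and do not leak into the surrounding program:

```python
# Rough analogy to Module's local variables: names assigned inside a
# function are local to that call and do not clobber globals.
t = "global t"

def solver():
    t = [0.0, 0.1, 0.2]   # local, like Module[{t}, ...]
    return len(t)

print(solver())  # 3
print(t)         # still "global t": the local t never leaked out
```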

As an example of creating a function using a module, we can directly put the Euler method into a function called Eulersol, created as a module:

COMPUTER EXAMPLE 2:

Eulersol[v0_, time_, Δt_] := Module[{t, v, solution},
  t[n_] := n Δt;
  v[n_] := (v[n] = v[n - 1] + Δt f[t[n - 1], v[n - 1]]) /; n > 0 && n ∈ Integers;
  v[0] := v0;
  solution = Table[{t[n], v[n]}, {n, 0, time/Δt}];
  vEuler = Interpolation[solution];]

This is how we can compute the solution and test its error relative to the exact solution for different timestep sizes:

Eulersol[0, 4, 0.1];
p2 = Plot[vEuler[t] - vExact[t], {t, 0, 4}, DisplayFunction -> Identity];

Eulersol[0, 4, 0.05];
p3 = Plot[vEuler[t] - vExact[t], {t, 0, 4}, DisplayFunction -> Identity];

Show[p1, p2, p3, DisplayFunction -> $DisplayFunction];

[Plot: the Euler errors for Δt = 0.2, 0.1, 0.05 together; the maximum error roughly halves each time Δt is halved.]

The error of a single step is second order, Δt^2, while the error of the final solution is first order, because we have used N timesteps to cover the time T, so N = T/Δt. The errors of all timesteps are summed up into the total error, giving N Δt^2 = T Δt. So the total error is first order in Δt.
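This first-order scaling is easy to verify empirically. A small Python sketch (an illustration with a hypothetical euler helper, using the same test problem dv/dt = t - v): halving Δt should roughly halve the error at the final time:

```python
import math

# Empirical check of Euler's first-order global error: halving dt
# should roughly halve the error at the final time T.
def euler(f, v0, t_end, n_steps):
    dt = t_end / n_steps
    t, v = 0.0, v0
    for _ in range(n_steps):
        v += dt * f(t, v)
        t += dt
    return v

f = lambda t, v: t - v
exact = math.exp(-4.0) + 4.0 - 1.0          # v(4) for v(0) = 0
e1 = abs(euler(f, 0.0, 4.0, 200) - exact)   # dt = 0.02
e2 = abs(euler(f, 0.0, 4.0, 400) - exact)   # dt = 0.01
print(e1 / e2)  # close to 2, consistent with O(dt)
```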

In general, the error of a numerical method of order N scales as

T Δt^N

(although the error of a single timestep scales as Δt^(N+1)), where T is the total integration time and Δt the timestep size.
Therefore, accuracy may be low for three different reasons:

i) Integrating for a very long time,

ii) Using a very large timestep size,

iii) Using a low order method (for example Euler's method!).



The 2nd Order Predictor-Corrector Method


We can definitely do better than the approximation f = constant. In fact, a very large number of timestepping methods of various orders (larger than 1st!) have been developed. The choice of method depends on the required accuracy, on the importance of memory efficiency, and on other considerations. The predictor-corrector method approximates the function f with the average of its values at the initial and final points of the integration interval. The value of f at the final point is unknown, because the value of v there is still unknown. The trick is to take a guess, and the guess is the value that would be computed with the Euler method. So the first order approximation is used only for the guess, while the method itself is more accurate. Averaging is almost always more accurate than assuming f = constant, with the obvious exception of a function that is indeed constant during most of the time interval and then decides to change suddenly just before the end of the timestep. Luckily, most functions are not aware of our intention to slice them up, so they are not set up for the worst case...

Here's the approximation used in the Predictor-Corrector method:

∫_{t_{n-1}}^{t_n} f(t, v(t)) dt ≈ Δt [f(t_{n-1}, v(t_{n-1})) + f(t_n, v(t_n))]/2 + O(Δt^3)

The solution of each timestep then becomes

v(t_n) = v(t_{n-1}) + Δt [f(t_{n-1}, v(t_{n-1})) + f(t_n, v(t_n))]/2 + O(Δt^3)

The error in each timestep scales as Δt^3, as you can easily verify with a Taylor series expansion, and therefore the error of the final solution scales like N Δt^3 = (T/Δt) Δt^3 = T Δt^2, showing that the method is second order accurate.

With the guess based on Euler's method,

v_E(t_n) = v(t_{n-1}) + Δt f(t_{n-1}, v(t_{n-1})),

the predictor-corrector recursion becomes:

v(t_n) = v(t_{n-1}) + Δt [f(t_{n-1}, v(t_{n-1})) + f(t_n, v_E(t_n))]/2
This is great, but notice the extra work: 1) we need to evaluate the function f at two points, t_{n-1} and t_n; 2) we also have an intermediate evaluation of the solution, which we need to store (more memory is used); 3) the program is more complex.
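The same scheme (often called Heun's method) can also be sketched as a plain loop in Python. This is an illustration on the scalar test problem dv/dt = t - v, not a replacement for the Mathematica module:

```python
import math

# One predictor-corrector (Heun) step: Euler guess, then the
# trapezoidal average of f at both ends of the interval.
def pc_step(f, t, v, dt):
    f0 = f(t, v)
    v_guess = v + dt * f0                            # predictor (Euler)
    return v + dt * (f0 + f(t + dt, v_guess)) / 2.0  # corrector

def pc_solve(f, v0, t_end, n_steps):
    dt = t_end / n_steps
    t, v = 0.0, v0
    for _ in range(n_steps):
        v = pc_step(f, t, v, dt)
        t += dt
    return v

f = lambda t, v: t - v
exact = math.exp(-4.0) + 4.0 - 1.0
err = abs(pc_solve(f, 0.0, 4.0, 20) - exact)   # dt = 0.2
print(err)  # far smaller than Euler's error at the same dt
```

Note the extra work mentioned above: two evaluations of f per step, plus the stored intermediate guess.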

Here is the Mathematica module:

PCsol[v0_, time_, Δt_] :=
 Module[{t, v, f0, v1, solution},
  t[n_] = n Δt;
  v[0] = v0;
  f0 := f[t[n - 1], v[n - 1]];
  v1 := v[n - 1] + Δt f0;
  v[n_] := v[n] = v[n - 1] + Δt (f0 + f[t[n], v1])/2;
  solution = Table[{t[n], v[n]}, {n, 0, time/Δt}];
  vPC = Interpolation[solution];]

COMPUTER EXAMPLE 3:

f[t_, v_] = t - v;
vExact[t_] = E^-t + t - 1;

PCsol[0, 4, 0.2];
p1 = Plot[vPC[t] - vExact[t], {t, 0, 4}, DisplayFunction -> Identity];

PCsol[0, 4, 0.1];
p2 = Plot[vPC[t] - vExact[t], {t, 0, 4}, DisplayFunction -> Identity];

PCsol[0, 4, 0.05];
p3 = Plot[vPC[t] - vExact[t], {t, 0, 4}, DisplayFunction -> Identity];

Show[p1, p2, p3, DisplayFunction -> $DisplayFunction];

[Plot: the predictor-corrector errors for Δt = 0.2, 0.1, 0.05; the maximum error is only about 0.0025 even at Δt = 0.2, and shrinks much faster than the Euler errors as Δt decreases.]

Numerical Solution of Systems of ODEs

The numerical schemes above have been developed for the solution of the most basic ODE,

dv/dt = f(t, v),    v(0) = v0

However, it is easy to generalize them to solve a system of such simple ODEs. The idea is that even if the ODEs are coupled, it doesn't really make a difference, because all you need to know for the numerical solution of each of them at every timestep is the value of the solution from the previous timestep. That is easily achieved simply by solving the equations at the same time, over the same time intervals. Coding the solver for this is trivial: you simply rewrite the solver with functions and arguments that are vectors, each of their components being one of the functions and one of the unknowns, respectively.

Furthermore, because higher order ODEs can be reduced to a system of first order ODEs, the same numerical solvers are also good for higher order ODEs. This is how you go from an ODE of order 2 to a system of 2 ODEs of order 1:

d^2x/dt^2 = f(t, x, dx/dt),    x(0) = x0,  (dx/dt)(0) = v0.

Defining v(t) = dx/dt, this becomes the first-order system

dx/dt = v(t),  dv/dt = f(t, x, v),    x(0) = x0,  v(0) = v0.

In vector notation, with

z(t) = {x(t), v(t)},    z(0) = z0 = {x0, v0},

the system reads

dz/dt = F(t, z),  where F(t, z) = {v(t), f(t, x, v)}

Then Euler's method becomes:

z(t_n) = z(t_{n-1}) + Δt F(t_{n-1}, z(t_{n-1}))

You can repeat the same story for any order N. Coding this with Mathematica involves little more than cutting and pasting the old solver. In the case of the Euler solver:

Clear["Global`*"];
Eulersol[z0_, time_, Δt_] := Module[{t, z, sol},
  t[n_] := n Δt;
  z[n_] := z[n] = z[n - 1] + Δt f[t[n - 1], z[n - 1]];
  z[0] := z0;
  sol = Table[Table[{t[n], z[n][[m]]}, {n, 0, time/Δt}],
    {m, 1, Length[z0]}];
  zEuler = Table[Interpolation[sol[[m]]], {m, 1, Length[z0]}];]

COMPUTER EXAMPLE 4: Let's apply the vector Euler solver to the damped oscillator (for fun you can couple two or more oscillators and see what you get... you should be able to generate chaos pretty easily; then maybe try to add damping and random forcing...):

f[t_, z_] := {v, -x - 0.1 v} /. {x -> z[[1]], v -> z[[2]]};

Eulersol[{1, 0}, 100, .02];
Plot[zEuler[[1]][t], {t, 0, 100}];

[Plot: zEuler[[1]][t], a slowly decaying oscillation of x(t) over 0 ≤ t ≤ 100.]

Notice the first line in the cell above, where one defines both the vector function f and the vector of unknowns z with that local replacement trick. The point is that the module above is written compactly, in vector notation, so f is a function of t and z and we must keep those as the arguments of f. If that line is confusing to you, don't worry: you could have written it directly like this:

f[t_, z_] := {z[[2]], -z[[1]] - 0.1 z[[2]]};

Eulersol[{1, 0}, 100, .02];
Plot[zEuler[[1]][t], {t, 0, 100}];

[Plot: the same decaying oscillation as above.]
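The damped-oscillator run can be cross-checked outside Mathematica. Here is a minimal Python sketch of the vector Euler step applied to the same system (an illustration; euler_system is a hypothetical helper mirroring the lecture's vector Eulersol, not its code):

```python
# Vector Euler for the damped oscillator x'' = -x - 0.1 x', written as
# z' = F(t, z) with z = (x, v), as in the lecture's example.
def euler_system(F, z0, t_end, dt):
    t, z = 0.0, list(z0)
    for _ in range(round(t_end / dt)):
        dz = F(t, z)
        z = [zi + dt * dzi for zi, dzi in zip(z, dz)]
        t += dt
    return z

F = lambda t, z: [z[1], -z[0] - 0.1 * z[1]]   # {v, -x - 0.1 v}
x, v = euler_system(F, [1.0, 0.0], 100.0, 0.02)
print(x, v)  # a slowly decaying oscillation: both well inside (-1, 1)
```

With damping 0.1 and this timestep, the amplitude at t = 100 is far below the initial value of 1, matching the decaying plot above.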
