We have learnt to solve ODEs with Mathematica, and for that purpose we don't need to know much about what Mathematica actually does to solve ODEs. However, Mathematica is not the final answer to all problems. The solution of very complex problems, such as three-dimensional fluid dynamics on very large computational grids, may require a C or Fortran code to achieve the necessary memory and computational efficiency. We will therefore venture a little into the world of numerical methods for the solution of ODEs, and still use Mathematica, as a learning tool, to program those methods.
Solving an ODE of order N is equivalent to performing N integrations. As the integral of a function corresponds to the
surface area below the function, each integration involves a computation of that area. The accuracy of a numerical method
for solving ODEs is therefore the accuracy with which we compute that surface area. For initial value problems, for exam-
ple with time as the free variable, the basic idea of most numerical methods is to discretize time into a number of small
intervals, and advance the solution step by step. In other words, we are computing that surface area below the function by
subdividing it into pieces. Each step of the computation then needs to evaluate the area of one of those pieces. The accuracy
of the numerical solution increases as the time-step size decreases.
The discretization of the problem is equivalent to defining a map that corresponds to the original function. Maps are known to present chaotic solutions at lower phase-space dimensionality than differential equations: two-dimensional maps can be chaotic, while differential equations must explore at least a three-dimensional phase space to exhibit chaotic behavior. That should immediately alert you to a problem for our numerical solutions based on a discretization of the problem: you may start with a system that is not chaotic, and design a numerical scheme corresponding to a map that is instead chaotic, for certain values of the parameters or initial conditions. This is far from a simple academic discussion. The stability of numerical methods for solving ODEs is a major concern, especially when advancing in time the solution of partial differential equations, where the other variables (e.g. the space dimensions) have also been discretized.
We will discuss here the Euler and the Predictor-Corrector schemes. The final problem of this week's homework is an implementation of a centered-difference scheme.
We want to solve a first order ODE of the form

dv/dt = f(t, v(t)),

with initial condition v(0) = v0.
As we said above, the basic idea is to discretize time into small intervals of size Δt:

t_n = n Δt ,   n = 0, 1, 2, 3, ....
Discretizing the problem means that, while the integral of the original function could be evaluated at any real value of time, we now instead evaluate the integral of the function only at a finite number of time steps:

∫_{t_0}^{t_1} f(t, v(t)) dt → ∫_{t_1}^{t_2} f(t, v(t)) dt → ... → ∫_{t_{n-1}}^{t_n} f(t, v(t)) dt

In this way we map v_0 → v_1, v_1 → v_2, ..., v_{n-1} → v_n. Designing the numerical scheme corresponds to defining such a map.
The integrals are not trivial to compute, because f depends on the unknown v, which we have not yet solved for over the time interval covered by the next integral. The way in which we approximate f over the next time interval defines our numerical method.
If we impose that f = constant over the whole time interval, we can simply choose its initial value, f(t_{n-1}, v(t_{n-1})), which can be evaluated because v(t_{n-1}) is known from the previous step; the integral over the interval then becomes Δt f(t_{n-1}, v_{n-1}).
The error in this approximation scales as Δt² (same notation as in power series expansions). You can easily verify the order of approximation of that integral with a Taylor expansion.
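The check goes like this: Taylor expanding f(t, v(t)) along the solution around t_{n-1},

```latex
\int_{t_{n-1}}^{t_n} f(t, v(t))\, dt
  = \Delta t\, f(t_{n-1}, v_{n-1})
  + \frac{\Delta t^2}{2}\,\left.\frac{df}{dt}\right|_{t_{n-1}}
  + O(\Delta t^3),
```

so keeping only the first term, as we did above, leaves an error of order Δt² in each step.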
This gives v_n = v_{n-1} + Δt f(t_{n-1}, v_{n-1}), which is Euler's method. Now you can really see what the discrete map looks like: it is a recursion relation.
Now you will suddenly appreciate the advantage of testing a numerical scheme like this one with Mathematica, because Mathematica has a trivial way of defining recursion relations: you basically write them down as they are!
t[n_] := n Dt;
v[n_] := v[n - 1] + Dt f[t[n - 1], v[n - 1]];
v[0] := v0
v[n_] := (v[n] = v[n - 1] + Dt f[t[n - 1], v[n - 1]]);  (* memoized: each v[n] is stored once computed *)
t[n_] := n Dt;
Dt = 0.2;
f[t_, v_] = t - v;
v0 = 0;
{{0, 0}, {0.2, 0}, {0.4, 0.04}, {0.6, 0.112}, {0.8, 0.2096},
 {1., 0.32768}, {1.2, 0.462144}, {1.4, 0.609715}, {1.6, 0.767772},
 {1.8, 0.934218}, {2., 1.10737}, {2.2, 1.2859}, {2.4, 1.46872},
 {2.6, 1.65498}, {2.8, 1.84398}, {3., 2.03518}, {3.2, 2.22815},
 {3.4, 2.42252}, {3.6, 2.61801}, {3.8, 2.81441}, {4., 3.01153}}
[Plot of the Euler solution points {t_n, v_n} for 0 ≤ t ≤ 4.]
vEuler = Interpolation[solution]
[Plot of the error of the Euler solution relative to the exact solution, of order a few times 10^-2.]
At this point, before venturing any further in Mathematica programming, we need to learn an important tool of Mathematica: the Module.
Mathematica modules are like modules in C or subroutines in Fortran: they are pieces of programs with specific functions that we can call, when needed, from another program. Modules are a good way to define new functions. The main advantage of using a Module is that its internal variables are local to the Module: their definitions are not remembered outside of it. If the variables inside the module have the same names as variables outside the module, there is no conflict, so there is no need to clear the variables used in the module before calling it. This is crucial if you are planning to define lots of new functions that will form a large library of computational tools, because you don't want to worry about new variables defined by each function every time you call it. You really want the function you call to behave like a black box, giving out the correct result without interfering in any other way with the calling program, or the program interfering with the function. This is precisely what modules are made for.
This is the syntax:
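The general form is Module[{locals}, body]: the symbols listed first are local to the body. A minimal illustration (the function and variable names here are made up for the example):

```mathematica
square[x_] := Module[{y},
  y = x^2;   (* y is local: it is created fresh at each call *)
  y          (* the last expression is the returned value *)
 ]
```

After evaluating square[3], the global symbol y remains undefined, even if a variable named y exists outside the module.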
As an example of creating a function using a module, we can directly put the Euler method into a function called Eulersol,
that is created as a module:
COMPUTER EXAMPLE 2:
Eulersol[v0_, time_, Dt_] := Module[{t, v, solution},
  t[n_] := n Dt;
  v[0] = v0;
  v[n_] := (v[n] = v[n - 1] + Dt f[t[n - 1], v[n - 1]]);
  solution = Table[{t[n], v[n]}, {n, 0, Round[time/Dt]}];  (* time/Dt timesteps *)
  vEuler = Interpolation[solution];]
This is how we can compute the solution and test its error relative to the exact solution for different timestep sizes:
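A minimal sketch of such a test, assuming the scalar module Eulersol[v0, time, Dt] defined above stores its result in vEuler (the timestep values 0.2 and 0.1 are just examples):

```mathematica
f[t_, v_] = t - v;
vExact[t_] = E^-t + t - 1;       (* exact solution for v(0) = 0 *)
Eulersol[0, 4, 0.2];
Plot[vEuler[t] - vExact[t], {t, 0, 4}]
Eulersol[0, 4, 0.1];
Plot[vEuler[t] - vExact[t], {t, 0, 4}]
```

Halving Δt should roughly halve the error, consistent with a first order method.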
[Plots of the error vEuler[t] - vExact[t] for the different timestep sizes.]
The error of a single step is second order, Δt², while the error of the final solution is first order, because we have used N timesteps to cover the time T, so N = T/Δt. The errors of all timesteps are summed up into the total error, giving N Δt² = T Δt. So the total error is first order in Δt.
In general, for a method of order N the error of the final solution scales as

T Δt^N

(although the error of a single timestep scales as Δt^{N+1}), where T is the total integration time, and Δt the timestep size. Therefore, accuracy may be low for three different reasons: the integration time T is long, the timestep Δt is large, or the order N of the method is low.
A better approximation of the integral over each timestep is the trapezoidal rule:

∫_{t_{n-1}}^{t_n} f(t, v(t)) dt ≈ Δt [f(t_{n-1}, v(t_{n-1})) + f(t_n, v(t_n))] / 2
The error in each timestep scales as Δt³, as you can easily verify with a Taylor series expansion, and therefore the error of the final solution scales like N Δt³ = (T/Δt) Δt³ = T Δt², showing that the method is second order accurate.
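The Taylor check of the per-step error goes as follows: expanding around t_{n-1},

```latex
\int_{t_{n-1}}^{t_n} f(t, v(t))\, dt
  = \frac{\Delta t}{2}\left[f(t_{n-1}, v_{n-1}) + f(t_n, v_n)\right]
  - \frac{\Delta t^3}{12}\left.\frac{d^2 f}{dt^2}\right|_{t_{n-1}}
  + O(\Delta t^4),
```

so each trapezoidal step carries an error of order Δt³.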
The problem is that f(t_n, v(t_n)) involves the still unknown v(t_n). The predictor-corrector scheme replaces it with an Euler estimate (the predictor), v_n^E = v(t_{n-1}) + Δt f(t_{n-1}, v(t_{n-1})), giving the corrector step

v(t_n) = v(t_{n-1}) + Δt [f(t_{n-1}, v(t_{n-1})) + f(t_n, v_n^E)] / 2
This is great, but notice the extra work: 1) we need to evaluate the function f at two points, (t_{n-1}, v_{n-1}) and (t_n, v_n^E); 2) we also have an intermediate evaluation of the solution, the predictor, which we need to remember (more memory is used); 3) the program will be more complex.
COMPUTER EXAMPLE 3:
f[t_, v_] = t - v;
vExact[t_] = E^-t + t - 1;
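A sketch of the corresponding predictor-corrector module, written by analogy with the Euler module above (the names PCsol and vPC, and the step count time/Dt, are illustrative assumptions, not the original cell):

```mathematica
PCsol[v0_, time_, Dt_] := Module[{t, v, solution},
  t[n_] := n Dt;
  v[0] = v0;
  v[n_] := (v[n] = Module[{vp},
     vp = v[n - 1] + Dt f[t[n - 1], v[n - 1]];               (* predictor: Euler estimate *)
     v[n - 1] + Dt (f[t[n - 1], v[n - 1]] + f[t[n], vp])/2   (* corrector: trapezoidal rule *)
    ]);
  solution = Table[{t[n], v[n]}, {n, 0, Round[time/Dt]}];
  vPC = Interpolation[solution];]
```

Calling PCsol[0, 4, 0.2] and then plotting vPC[t] - vExact[t] shows the much smaller error of this scheme.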
[Plot of the error of the predictor-corrector solution: at most a few times 10^-3, much smaller than the Euler error.]
However, it is easy to generalize them to solve a system of such simple ODEs. The idea is that even if the ODEs are coupled, it doesn't really make a difference, because all you need to know for the numerical solution of each of them at every timestep is the value of the solution from the previous timestep. That is easily achieved simply by solving the equations at the same time, over the same time intervals. Coding the solver for this is trivial: you simply rewrite the solver with functions and arguments that are vectors, each of their components being one of the functions and one of the unknowns, respectively.
Furthermore, because higher order ODEs can be reduced to a system of first order ODEs, the same numerical solvers are also good for higher order ODEs. This is how you go from an ODE of order 2 to a system of 2 ODEs of order 1:
d²x/dt² = f(t, x, dx/dt),   x(0) = x0,   dx/dt(0) = v0.

Defining the new unknown v = dx/dt, this becomes the system of 2 first order ODEs

dx/dt = v(t),   dv/dt = f(t, x, v),   x(0) = x0,   v(0) = v0,

which can be written in vector form as

dz/dt = f(t, z), where z = {x(t), v(t)} and f(t, z) = {v(t), f(t, x, v)}.
You can repeat the same story for any order N. Coding this with Mathematica involves little more than cutting and pasting the old solver. In the case of the Euler solver:
Clear["Global`*"];
Eulersol[z0_, time_, Dt_] := Module[{t, z, sol},
  t[n_] := n Dt;
  z[0] = z0;
  z[n_] := z[n] = z[n - 1] + Dt f[t[n - 1], z[n - 1]];
  sol = Table[Table[{t[n], z[n][[m]]}, {n, 0, Round[time/Dt]}],
    {m, 1, Length[z0]}];
  zEuler = Table[Interpolation[sol[[m]]], {m, 1, Length[z0]}];]
COMPUTER EXAMPLE 4: Let's apply the vector Euler solver to the damped oscillator (hey, for fun you can couple two or more oscillators and see what you get... You should be able to generate chaos pretty easily. Then maybe try to add damping and random forcing...):
[Plot of the damped oscillator solution for 0 ≤ t ≤ 100: oscillations of slowly decaying amplitude, between about -0.75 and 0.75.]
Notice the first line in the above cell, where one defines both the vector function f and the vector of unknowns z with that local replacement trick. The point is that the module above is written compactly, in vector notation, so f is a function of t and z and we must keep those as the arguments of f. If that line is confusing to you, don't worry: you could equally well have written the definition of f directly, component by component.
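For the damped oscillator, a direct, component-by-component definition might read (the parameter names w0 and gamma, and their values, are illustrative assumptions; use whatever the cell above used):

```mathematica
w0 = 1; gamma = 0.05;                               (* assumed oscillator parameters *)
f[t_, z_] := {z[[2]], -w0^2 z[[1]] - gamma z[[2]]}  (* z = {x, v} *)
```

Here the first component says dx/dt = v, and the second says dv/dt = -w0² x - gamma v, the damped-oscillator equation of motion.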