Email: dionisis.zel@gmail.com
CONTENTS
Introduction
Results
  i. Standard routine
  ii. Inverse power iteration with shift routine
  iii. Time duration of our methods
References
Introduction
When we want to treat a quantum mechanical system, we usually have to solve an
eigenvalue problem $H\psi = E\psi$, where $H$ is the Hamilton operator. For a one
dimensional problem:

$$H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)$$
As a test case we will use the one-dimensional harmonic oscillator Hamiltonian. This
problem can be solved analytically, and we will use that to check our numerical
approach. The importance of the harmonic oscillator problem stems
from the fact that whenever there is a local potential minimum, the harmonic
oscillator model gives the first approximation to the physics. If the potential $V(x)$ has
a minimum at $x = x_0$, we can expand it in a Taylor series around the minimum:
$$V(x) = V(x_0) + V'(x_0)(x - x_0) + \frac{1}{2}V''(x_0)(x - x_0)^2 + \dots \approx V(x_0) + \frac{1}{2}k(x - x_0)^2$$

where we have used that $V'(x_0) = 0$ since $x = x_0$ is a minimum. We have further put
$k = V''(x_0)$, so that $V(x) - V(x_0) \approx \frac{1}{2}k(x - x_0)^2$. Such an oscillator will oscillate with angular frequency
$\omega = \sqrt{k/m}$. Using $\omega$ instead of $k$, we can rewrite the Hamilton operator as:
$$H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + \frac{1}{2}m\omega^2 x^2$$

where we have put $x_0$ at the origin. Hence we have:

$$\left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + \frac{1}{2}m\omega^2 x^2\right)\psi = E\psi$$
We introduce the dimensionless variable $z = \sqrt{m\omega/\hbar}\,x$, thus we will work with:

$$\frac{\hbar\omega}{2}\left(-\frac{\partial^2}{\partial z^2} + z^2\right)\psi = E\psi$$

but we also know that $E_n = \hbar\omega\left(n + \frac{1}{2}\right)$, so:

$$\frac{1}{2}\left(-\frac{\partial^2}{\partial z^2} + z^2\right)\psi = \left(n + \frac{1}{2}\right)\psi \qquad (1)$$
The second derivative is approximated by a five-point finite difference formula:

$$-\frac{1}{2}\frac{\partial^2 f}{\partial x^2}\bigg|_{x_i} \approx c_0 f_i + c_1 f_{i+1} + c_{-1} f_{i-1} + c_2 f_{i+2} + c_{-2} f_{i-2}$$

where the factor $-\frac{1}{2}$ from the kinetic term of the dimensionless Hamiltonian has been included in the coefficients.
The constants are determined by Taylor expanding around $x_i$ and, after a few
calculations and solving the system of equations obtained, we find:

$$c_0 = \frac{15}{12h^2}, \qquad c_1 = -\frac{8}{12h^2}, \qquad c_2 = \frac{1}{24h^2}, \qquad c_{-1} = c_1, \qquad c_{-2} = c_2$$

where $h$ is the step size that we are using in our model.
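As a quick sanity check (a Python sketch added here for illustration; the report's own calculations are done in Matlab), these coefficients can be applied to a function with a known second derivative, e.g. $f(z) = \sin z$, for which $-\frac{1}{2}f''(z) = \frac{1}{2}\sin z$:

```python
import numpy as np

# Five-point coefficients with the -1/2 kinetic factor folded in.
h = 0.1
c0, c1, c2 = 15 / (12 * h**2), -8 / (12 * h**2), 1 / (24 * h**2)

z = 1.0
f = np.sin
approx = (c0 * f(z)
          + c1 * (f(z + h) + f(z - h))
          + c2 * (f(z + 2 * h) + f(z - 2 * h)))
exact = 0.5 * np.sin(z)
print(abs(approx - exact))  # error of order h**4
```

The residual error shrinks as $h^4$, which is why the five-point stencil is preferred over the simpler three-point one.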
$$H = \begin{pmatrix}
c_0 + V(x_1) & c_1 & c_2 & 0 & 0 & \cdots \\
c_1 & c_0 + V(x_2) & c_1 & c_2 & 0 & \cdots \\
c_2 & c_1 & c_0 + V(x_3) & c_1 & c_2 & \cdots \\
0 & c_2 & c_1 & c_0 + V(x_4) & c_1 & \cdots \\
0 & 0 & c_2 & c_1 & c_0 + V(x_5) & \cdots \\
\vdots & & & & & \ddots
\end{pmatrix}
\quad \text{and} \quad
f = \begin{pmatrix} f(x_1) \\ f(x_2) \\ f(x_3) \\ f(x_4) \\ \vdots \end{pmatrix}$$

The matrix above is banded and it is also symmetric since $c_n = c_{-n}$.
The eigenfunctions of the Hamilton operator are orthonormal:

$$\int \psi_i^*(x)\,\psi_j(x)\,dx = \delta_{ij}$$
Further, the eigenstates of a Hermitian operator form a complete set. The matrix $H$
we just obtained by discretization of $H$ is a Hermitian matrix (this means that it is
self-adjoint; $H = H^\dagger = (H^T)^*$) and its eigenvectors will be orthogonal in a similar
manner:

$$X_i^\dagger X_j = \delta_{ij}, \quad \text{when } E_i \neq E_j$$
Our matrix above actually belongs to the subclass of Hermitian matrices that are real
and symmetric. Then the eigenvectors $X_i$ will also be real. If $H$ is an $n \times n$ matrix, then
there will be $n$ eigenvectors. These eigenvectors, like the eigenvectors of the Hamilton
operator, form a complete set. The difference, though, is that while the eigenfunctions of
the operator $H$ can span any function $\psi(x)$ defined anywhere on the real $x$-axis, the
finite set of eigenvectors of our matrix $H$ can span any function defined on our grid.
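To make the construction concrete, here is a minimal sketch of how such a banded, real symmetric matrix could be assembled (Python/NumPy as a stand-in for the report's Matlab code; the grid $[-7, 7]$ with step $h = 0.1$ is the one used later in the Results section):

```python
import numpy as np

def build_hamiltonian(zmin=-7.0, zmax=7.0, h=0.1):
    """Banded symmetric Hamiltonian for V(z) = z^2 / 2 on a uniform grid."""
    z = np.arange(zmin, zmax + h / 2, h)          # grid points
    n = len(z)
    # five-point coefficients with the -1/2 kinetic factor folded in
    c0, c1, c2 = 15 / (12 * h**2), -8 / (12 * h**2), 1 / (24 * h**2)
    H = np.zeros((n, n))
    np.fill_diagonal(H, c0 + 0.5 * z**2)          # kinetic + potential
    np.fill_diagonal(H[1:, :], c1)                # first sub-diagonal
    np.fill_diagonal(H[:, 1:], c1)                # first super-diagonal
    np.fill_diagonal(H[2:, :], c2)                # second sub-diagonal
    np.fill_diagonal(H[:, 2:], c2)                # second super-diagonal
    return z, H

z, H = build_hamiltonian()
print(H.shape, np.allclose(H, H.T))               # banded and symmetric
```

Setting the elements beyond the second off-diagonals to zero implicitly forces the wave function to vanish outside the box, as discussed in the Results section.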
Diagonalization
There are several methods to solve the matrix eigenvalue equation. One method is
to diagonalize and find all eigenvalues and (optionally) all eigenvectors. The scheme
is then to find the similarity transformation of the matrix $H$ such that $X^{-1}HX = D$,
where $D$ is a diagonal matrix. The eigenvalues are then found on the diagonal of $D$,
and the eigenvectors are the columns of $X$. A method that uses the fact that the
matrix is banded and symmetric is much faster than a general diagonalizer. Another
possibility is to find a few eigenvalues (the lowest, the highest, or those in a specified energy
region) and their eigenvectors with an iterative method. These methods are
often faster when we want a particular solution and have a sparse matrix. One such
method is the (Inverse) Power Iteration method.
$$AX_n = \lambda_n X_n \quad \Rightarrow \quad A^{-1}X_n = \lambda_n^{-1} X_n$$
We will now solve the system of linear equations $AY_2 = Y_1$, where $Y_1$ is our first guess for an
eigenvector. In the next step we put the solution $Y_2$ on the right-hand side and solve again,
so we have an iterative scheme $AY_{i+1} = Y_i$. To analyze the situation we note that any vector
can be expanded in eigenvectors of the matrix. After the first step we have for instance:

$$Y_2 = A^{-1}Y_1 = A^{-1}\sum_n c_n X_n = \sum_n \lambda_n^{-1} c_n X_n.$$
It is clear that in the iterative procedure the solution $Y_{i+1}$ will converge towards the
eigenvector with the largest value of $\lambda_n^{-1}$, i.e. towards the eigenvector with the smallest
eigenvalue. At every step in the iteration the current approximation of the inverse of the
smallest eigenvalue is given by:

$$\frac{Y_i^\dagger Y_{i+1}}{Y_i^\dagger Y_i} = \frac{Y_i^\dagger A^{-1} Y_i}{Y_i^\dagger Y_i} \to \frac{1}{\min_n |\lambda_n|}, \quad \text{when } i \to \infty$$
Here too it is a good idea to normalize at every step. Finally, at every step in the
iteration we solve a system of linear equations $AY_{i+1} = Y_i$. The left-hand side matrix
is the same every time, but the right-hand side changes.
This is a typical situation where it is an advantage to first perform an LU decomposition, $A = LU$, for fast solutions in the following iterations.
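The scheme above, extended with the shift used later in the Results (solving $(A - \sigma I)Y_{i+1} = Y_i$ converges to the eigenvalue closest to the shift $\sigma$), could be sketched as follows. This is a hypothetical Python/SciPy implementation, not the report's Matlab routine:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power_iteration(A, shift=0.0, tol=1e-10, max_iter=200):
    """Eigenpair of A whose eigenvalue lies closest to `shift`."""
    n = A.shape[0]
    lu = lu_factor(A - shift * np.eye(n))   # A - shift*I = LU, done once
    y = np.random.default_rng(0).standard_normal(n)
    y /= np.linalg.norm(y)                  # normalized starting guess
    mu = 0.0
    for _ in range(max_iter):
        y_next = lu_solve(lu, y)            # solve, reusing the LU factors
        mu_next = y @ y_next                # approximates 1/(lambda - shift)
        y_next /= np.linalg.norm(y_next)    # normalize in every step
        converged = abs(mu_next - mu) < tol * abs(mu_next)
        y, mu = y_next, mu_next
        if converged:
            break
    return shift + 1.0 / mu, y              # eigenvalue, eigenvector
```

With the shift set near 1.5, for instance, such a routine would pick out the lowest odd oscillator state, in line with the Results section.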
Results
To begin with, we take a linear grid $x \in [-7, 7]$ and a step size $h = 0.1$. Our
potential is given by the formula $V(x) = \frac{1}{2}x^2$. Below we plot the potential as a
function of distance:
Standard routine
Then we use the built-in Matlab command eig in order to get the eigenvalues and
the eigenvectors of our Hamiltonian. We check whether the Hamiltonian matrix is
symmetric by calculating the quantity H(i,j) - H(j,i), where i, j are the row and column
indices of the matrix respectively. We find that this quantity is zero, hence the matrix is indeed
symmetric. We also check that the eigenvectors we have found are correct by
inserting them into the eigenvalue equation.
For the lowest energies we notice that equation (1) holds since, as we can see in
the table below, the energies (eigenvalues) are described by the formula (n + 1/2),
where n = 0, 1, 2, 3:
State (n)    Energy
0            0.5
1            1.5
2            2.5
3            3.4999
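This check can be reproduced with a short sketch, using NumPy's eigh in place of the Matlab eig call (eigh exploits the symmetry of H); the grid and five-point coefficients are assumptions carried over from the earlier sections:

```python
import numpy as np

# Dimensionless oscillator Hamiltonian on z in [-7, 7] with h = 0.1,
# five-point coefficients with the -1/2 kinetic factor included.
h = 0.1
z = np.arange(-7.0, 7.0 + h / 2, h)
n = len(z)
c0, c1, c2 = 15 / (12 * h**2), -8 / (12 * h**2), 1 / (24 * h**2)
H = (np.diag(c0 + 0.5 * z**2)
     + c1 * (np.eye(n, k=1) + np.eye(n, k=-1))
     + c2 * (np.eye(n, k=2) + np.eye(n, k=-2)))

E = np.linalg.eigh(H)[0]        # eigenvalues in ascending order
print(E[:4])                    # lowest energies, close to n + 1/2
```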
We also present the eigenfunctions for the quantum harmonic oscillator for the first
4 states.
We notice that the wave functions for higher n have more humps within the
potential well. This corresponds to a shorter wavelength and therefore, by the de
Broglie relation, a higher momentum and hence a higher energy.
Below, we present the two highest-energy solutions, with E = 284.1615. We
expect the highest-energy solutions to be unphysical and to depend on the
approximations, for instance the finite box, the grid, and the representation of
the derivative.
Doubling our grid to [-14, 14], we plot the highest-energy solutions, with
E = 353.57515.
Inverse power iteration with shift routine
Below, we present the lowest odd solution, obtained after 3 iterations with the shift
set to 1.5.
In addition, we present the lowest even solution, obtained after five iterations with
the shift set to 2.
We conclude that the convergence rate depends on the value of the shift. For
instance, if we set the shift in the previous case to 2.5, we get the right answer after
one iteration, while we need 5 iterations if we set it to 2.
In order to examine the shift dependence further, we present a table varying the
shift from 1.9 to 2.5 for the lowest odd solution and note the number of iterations
needed in order to find the exact solution.
Shift        1.9   2.0   2.1   2.2   2.3   2.4   2.5
Iterations   6     5     4     3     2     2     1
As expected, the closer the shift is to the eigenvalue, the fewer iterations our routine needs in
order to return the right eigenvalue and its corresponding eigenvector.
Time duration of our methods

Stepsize   Matrix elements   Built-in command (sec)   Routine, 1 it. (sec)   2 iterations (sec)   3 iterations (sec)
0.1        141x141           2.183901                 0.016805               0.018968             0.022248
0.01       1401x1401         26.217114                0.774165               0.937791             0.963312
For step size 0.1, our routine needs about 130 times less time than the built-in command to
compute the eigenvalue and the corresponding eigenvector, while for step size 0.01 it is
about 33 times quicker than the standard routine.
Last but not least, we want to compare our calculation with the analytical solutions
of the harmonic oscillator problem. We can see that for the lower states we get
the values (n + 1/2) (since we work with a dimensionless model), as expected from our
theory. However, from the fourth eigenvalue on, we start having a small deviation from
the expected value in the third decimal. This can be explained by the fact that the
second-order derivative is approximated with a finite difference formula, which
introduces a specific error. Moreover, we force our solution to be zero outside
the last point in our grid, since this is necessary in order to make it possible to
normalize the wave function.
Our solutions are normalized and tend to be orthogonal to each other. Their inner
product is almost but not exactly zero. This can also be explained by the fact that we
have used an approximation for the second derivative, hence perhaps a more accurate
method could be used to get better results.
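The orthogonality statement can be checked directly on the discrete eigenvectors; for a real symmetric matrix, a symmetric eigensolver returns vectors whose mutual inner products deviate from $\delta_{ij}$ only at machine-precision level (a Python/NumPy sketch under the same assumed grid as above):

```python
import numpy as np

# Rebuild the Hamiltonian and check X^T X ~ I for the eigenvector matrix X.
h = 0.1
z = np.arange(-7.0, 7.0 + h / 2, h)
n = len(z)
c0, c1, c2 = 15 / (12 * h**2), -8 / (12 * h**2), 1 / (24 * h**2)
H = (np.diag(c0 + 0.5 * z**2)
     + c1 * (np.eye(n, k=1) + np.eye(n, k=-1))
     + c2 * (np.eye(n, k=2) + np.eye(n, k=-2)))

X = np.linalg.eigh(H)[1]                      # columns are the eigenvectors
deviation = np.max(np.abs(X.T @ X - np.eye(n)))
print(deviation)                              # tiny, but not exactly zero
```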
References