While MD solves the time-dependent development of the system studied and averaging is done over time, MC is based on the statistical-mechanics notion of averaging over ensembles. In MC simulations we choose an appropriate statistical-mechanics ensemble, with a distribution function describing the probability of occurrence of various states, and evaluate physical quantities in this ensemble. The ergodic theorem guarantees the equivalence of the MD and MC approaches.
The positions $X$ and velocities $\dot{X}$ of the particles define the $6N$-dimensional space, called the phase space of the system studied, and in the following we shall denote vectors in this space $Q \equiv (X, \dot{X})$.¹
If $F(Q)$ is the distribution function in the phase space, which determines the probability that the system configuration corresponds to the vector $Q$ (for example the Boltzmann distribution $\exp(-H/k_B T)$), then the average value of a physical quantity $A$ is

$$\langle A \rangle = Z^{-1} \int_{\text{phase space}} A(Q)\,F(Q)\,dQ, \tag{MC1.1}$$

where

$$Z = \int_{\text{phase space}} F(Q)\,dQ \tag{MC1.2}$$

is called the partition function; the integration extends over the whole phase space.
¹ In general, we could assume that there are $s_i$ degrees of freedom associated with each particle $i$, so that $X$ and $\dot{X}$ each have $\sum_{i=1}^{N} s_i$ components and the phase space is $2\sum_{i=1}^{N} s_i$-dimensional for this system; $dQ = dX\,d\dot{X}$.
When evaluating $\langle A \rangle$ we have to compute a very high-dimensional integral, and this is possible analytically only for a very limited number of problems. In principle, one could do it numerically, but the problem quickly becomes unmanageable with an increasing number of particles. Take as an example 100 particles, each with three degrees of freedom, in a cube of fixed dimensions, and consider integration over the accessible space, $X$, which for this system is 300-dimensional. To carry out a standard numerical integration over the cube we choose, for example, 10 points along each of the 300 coordinates; in the 300-dimensional space we then have $10^{300}$ points at which the integrand needs to be evaluated. This is obviously impossible.
What we can do instead is to evaluate the average value of a quantity $A$ as

$$\langle A \rangle \approx \frac{1}{M} \sum_{k=1}^{M} A(Q_k), \tag{MC2}$$

where the summation extends over $M$ randomly chosen points $Q_k$ in the phase space, each taken with the weight $p(Q_k) = Z^{-1} F(Q_k)$, which is the probability of occurrence of the state $Q_k$. For this purpose we need to be able to pick the points $Q_k$ with the probability $p(Q_k)$, and the Monte Carlo procedure is the method to do just this. It is a scheme for constructing an assembly of states in the phase space such that various states occur in this assembly with the probability $p(Q) = Z^{-1} F(Q)$. The states in this assembly are then used in formula (MC2). This formula becomes exact when $M \to \infty$, and the larger $M$ is, the better an approximation for $\langle A \rangle$ it provides.
As a simple illustration, consider the evaluation of the integral

$$\iint_{\text{quarter circle}} dx\,dy,$$

i.e. the area of the quarter circle OCA in Fig. 1, which we know is equal to $\pi/4$. This could be done numerically by a straightforward numerical integration using a grid of points inside the quarter circle. This integration can also be done employing the following stochastic, Monte Carlo-like process. We assign to the points inside the circle the weight $p = 1$ and to those outside the circle the weight $p = 0$. The quantity that we average is $A = 1$. Then

$$\iint_{\text{square OCBA}} 1 \cdot p \; dx\,dy = \iint_{\text{quarter circle}} dx\,dy$$

is the integral we want to evaluate. If we distribute $n_{tot}$ points randomly into the area of the square OCBA (see Fig. 1), then $n_{in}$ of these points will fall inside the circle.² Evaluating $\sum_{k=1}^{n_{tot}} A(Q_k)$ with the weight $p(Q_k)$ gives $n_{in}$, since the points outside the circle have the weight zero. Therefore

$$\iint_{\text{quarter circle}} dx\,dy \approx \frac{n_{in}}{n_{tot}}.$$

[Fig. 1. The quarter circle OCA inscribed in the unit square OCBA.]

² Such a random distribution of points within the square OCBA can be produced using a generator of random numbers within the interval (0, 1). In each trial two independent random numbers are generated, and these numbers represent the coordinates of a point inside the square OCBA.
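A minimal sketch of this estimate in Python (the function name and sample count are illustrative):

```python
import random

def quarter_circle_area(n_tot: int) -> float:
    """Estimate the area of the quarter circle (pi/4) by random sampling.

    Points are distributed uniformly in the unit square; those falling
    inside the circle carry weight p = 1, those outside p = 0.
    """
    n_in = 0
    for _ in range(n_tot):
        x, y = random.random(), random.random()  # a point in the square OCBA
        if x * x + y * y <= 1.0:                 # inside the quarter circle
            n_in += 1
    return n_in / n_tot

print(quarter_circle_area(1_000_000))  # approaches pi/4 ~ 0.7854 as n_tot grows
```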
How do we calculate an integral

$$\int_a^b f(x)\,dx,$$

where $f(x)$ is a general function? We can choose randomly a large number, $M$, of points $x_i$ in the interval $(a, b)$ and then

$$\int_a^b f(x)\,dx \approx \frac{b-a}{M} \sum_{i=1}^{M} f(x_i). \tag{MC3}$$
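A direct transcription of (MC3); the sharply peaked Gaussian integrand is chosen only to set up the comparison with importance sampling below:

```python
import math
import random

def mc_integrate_uniform(f, a: float, b: float, M: int) -> float:
    """Straightforward-sampling estimate (MC3): sample x uniformly in (a, b)."""
    total = sum(f(random.uniform(a, b)) for _ in range(M))
    return (b - a) * total / M

# A sharply peaked integrand of the kind sketched in Fig. 2 (parameters illustrative):
f = lambda x: math.exp(-((x - 0.5) ** 2) / (2.0 * 0.01 ** 2))
print(mc_integrate_uniform(f, 0.0, 1.0, 100_000))  # exact value is 0.01*sqrt(2*pi) ~ 0.02507
```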
This calculation uses so-called straightforward sampling, which does not take into account the shape of the function $f(x)$. However, $f(x)$ may be a very 'sharp' function, such as the one shown in Fig. 2. The contribution to the integral from the 'tails', i.e. the regions far away from the maximum, is then very small. In this case it is more efficient to sample the function more frequently at points near the maximum while limiting the sampling far away from the maximum. This can be achieved by non-uniform sampling, called importance sampling, which emphasizes the peak of the function and can be carried out as follows.
Fig. 2. Schematic picture of a function with a sharp peak, where the sampling has to be more frequent.
We construct a weight function $p(x) > 0$ that mimics the function $f(x)$, in the sense that it is large for those values of $x$ for which $f(x)$ is large and vice versa, and is normalized so that $\int_a^b p(x)\,dx = 1$. The integral can then be written

$$\int_a^b f(x)\,dx = \int_a^b \frac{f(x)}{p(x)}\,p(x)\,dx. \tag{MC4}$$

The quantity we sample is thus $f(x)/p(x)$, taken with the weight $p(x)$. We again choose randomly $M$ points $x_i$ in the interval $(a, b)$, but now with the probability $p(x_i)$, and

$$\int_a^b f(x)\,dx \approx \frac{1}{M} \sum_{i=1}^{M} \frac{f(x_i)}{p(x_i)}, \tag{MC5}$$

where the summation extends over points $x_i$ chosen with the weight $p(x_i)$ instead of uniformly. This is called importance sampling, since it prefers the points that give the dominant contribution to the integral.

This approach means that the points $x_i$ within the interval $(a, b)$ are chosen with the probability $p(x_i)$ rather than all with the same probability; hence the points used in the summation occur with the frequency $p(x_i)$.
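A sketch of (MC5) for the same peaked integrand, using a Gaussian sampling density $p(x)$ centred on the peak; the width `SIG`, the rejection-based sampler, and the normalization over $(a, b)$ are illustrative choices:

```python
import math
import random

A, B = 0.0, 1.0
MU, SIG = 0.5, 0.02          # parameters of the sampling density p (illustrative)

def p(x: float) -> float:
    """Gaussian density truncated to (A, B), normalized so its integral over (A, B) is 1."""
    norm = 0.5 * (math.erf((B - MU) / (SIG * math.sqrt(2.0)))
                  - math.erf((A - MU) / (SIG * math.sqrt(2.0))))
    gauss = math.exp(-((x - MU) ** 2) / (2.0 * SIG ** 2)) / (SIG * math.sqrt(2.0 * math.pi))
    return gauss / norm

def sample_p() -> float:
    """Draw x with density p by rejecting Gaussian samples falling outside (A, B)."""
    while True:
        x = random.gauss(MU, SIG)
        if A < x < B:
            return x

def mc_integrate_importance(f, M: int) -> float:
    """Importance-sampling estimate (MC5): average f(x)/p(x) over x drawn from p."""
    total = 0.0
    for _ in range(M):
        x = sample_p()
        total += f(x) / p(x)
    return total / M

f = lambda x: math.exp(-((x - 0.5) ** 2) / (2.0 * 0.01 ** 2))
print(mc_integrate_importance(f, 100_000))   # far smaller scatter than with (MC3)
```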
When we calculate the average value of a quantity $A$ according to equation (MC1.1), we can choose

$$p(Q) = Z^{-1} F(Q). \tag{MC6}$$

Then

$$\langle A \rangle = \int_{\text{phase space}} A(Q)\,p(Q)\,dQ$$

is determined according to (MC2), but now the summation extends over $M$ randomly chosen points $Q_i$ in the phase space, each taken with the weight $p(Q_i)$. This calculation will be most efficient if we can sum in (MC2) over a set of points $Q_i$ that has been constructed in such a way that a point $Q_i$ (which represents a phase-space configuration) occurs in this set with the probability $p(Q_i)$. For this purpose we need an algorithm that generates a set of phase-space states in accordance with the distribution $p(Q) = Z^{-1} F(Q)$. This is what Monte Carlo simulation achieves.
MONTE-CARLO SIMULATION
As explained above, the goal is to construct (select) from the phase space an assembly of states such that in this assembly various states occur with the probability $p(Q)$. For this purpose we devise a random walk (a Markov chain³) such that, starting from an initial state $Q_0$, other states are generated by transitions $Q \to Q'$ and, ultimately, in the steady state, they are distributed according to the probability (distribution function) $p(Q)$. The transitions $Q \to Q'$ occur in this process with a transition probability $\pi(Q, Q')$ that defines the nature of the process and must be chosen so as to attain the distribution $p(Q)$ in the steady state.

The transition probability $\pi(Q, Q')$ must satisfy the following conditions:

(i) $\pi(Q, Q') \geq 0$  (MC7.1)

(ii) $\sum_{Q'} \pi(Q, Q') = 1$  (MC7.2)

where the summation extends over all available states $Q'$. The meaning of this condition is that every state $Q$ is eventually attained in this random-walk process.

(iii) $\sum_{Q'} \pi(Q', Q)\,p(Q') = p(Q)$  (MC7.3)

where the summation again extends over all available states $Q'$. This is the self-consistency condition that defines the transition probability $\pi(Q, Q')$ such that in equilibrium the states in the phase space are distributed according to the prescribed distribution function $p(Q)$. This is the goal of the construction of this random walk.
The trick used in MC is to replace the last condition by the stronger condition of microscopic reversibility (detailed balance),

$$\pi(Q', Q)\,p(Q') = \pi(Q, Q')\,p(Q). \tag{MC8}$$

Clearly, equation (MC7.3) follows from equation (MC8): summing over $Q'$ in (MC8) we obtain
³ A Markov chain is a random walk in which the probability that the system is in the state $k$ depends only on the preceding state $k-1$ and not on any earlier states. A Markov chain is called ergodic if any state can be reached from any other state in a finite number of steps.
$$\sum_{Q'} \pi(Q', Q)\,p(Q') = p(Q) \sum_{Q'} \pi(Q, Q') = p(Q)$$

because $\sum_{Q'} \pi(Q, Q') = 1$, so (MC7.3) is satisfied. However, (MC7.3) could also be satisfied in a different way, and thus equation (MC8) does not follow from (MC7.3). This means that the use of (MC8) is just one approach to satisfying the more general condition (MC7.3).
A possible transition probability for $Q \to Q'$ that satisfies equation (MC8) is

$$\pi(Q, Q') = \frac{p(Q')}{p(Q)} \ \text{when } p(Q) > p(Q'), \qquad \pi(Q, Q') = 1 \ \text{when } p(Q) \leq p(Q'),$$
$$\pi(Q, Q) = 1 - \sum_{Q' \neq Q} \pi(Q, Q'). \tag{MC9}$$

The first equation determines the probability of the transition $Q \to Q'$ when the state $Q$ is more probable than the state $Q'$. The second equation states that the transition will always take place if the state $Q'$ is more probable than the state $Q$. The last equation, which follows from (MC7.2), determines the probability with which the system remains in the state $Q$ when it is already in this state.
The following is the proof that the choice (MC9) satisfies (MC8). According to (MC9), if $p(Q) > p(Q')$ then $\pi(Q, Q') = p(Q')/p(Q)$ and $\pi(Q', Q) = 1$. In this case

$$\pi(Q, Q')\,p(Q) = \frac{p(Q')}{p(Q)}\,p(Q) = p(Q') = \pi(Q', Q)\,p(Q').$$

If instead $p(Q) < p(Q')$ then $\pi(Q, Q') = 1$ and $\pi(Q', Q) = p(Q)/p(Q')$, and the same argument applies. Hence (MC8) is satisfied in both cases.
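A numerical sanity check of (MC8) and (MC9) on a toy three-state system (the target probabilities are invented, and a symmetric proposal probability $1/(n-1)$ is included so that each row of the transition matrix is properly normalized):

```python
import numpy as np

p = np.array([0.2, 0.5, 0.3])   # target distribution p(Q), illustrative
n = len(p)

# Build the Metropolis transition matrix (MC9), proposing each other
# state with equal probability 1/(n-1).
T = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            T[i, j] = min(1.0, p[j] / p[i]) / (n - 1)
    T[i, i] = 1.0 - T[i].sum()  # probability of staying in state i

# Detailed balance (MC8): p(Q) pi(Q, Q') == p(Q') pi(Q', Q).
assert np.allclose(p[:, None] * T, (p[:, None] * T).T)

# Self-consistency (MC7.3): p is stationary under T.
assert np.allclose(p @ T, p)
```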
The random walk is realized in practice by the cycle of steps spelled out in detail for the Metropolis method below; in particular:

(5) Generate a random number $\xi$ such that $0 < \xi < 1$.

(6) If $\pi(Q, Q') \geq \xi$, accept the new state, i.e. $Q \to Q'$, and go to (2).

The steps (5) and (6) correspond to making the transition $Q \to Q'$ with the probability $\pi(Q, Q')$, since the probability that $\pi(Q, Q') \geq \xi$ is equal to $\pi(Q, Q')$. Note that if $\pi(Q, Q') = 1$ then it is larger than any random number from the interval $(0, 1)$.
In any Monte Carlo study the system must first be allowed to relax, since the starting configuration generally does not correspond to the equilibrium state with the distribution $p(Q)$; in the case of the Boltzmann distribution this means that it may not be in thermodynamic equilibrium. How many steps are needed before equilibrium is attained must be tested empirically.
After equilibration, the average $\langle A \rangle = \int_{\text{phase space}} A(Q)\,p(Q)\,dQ$ is evaluated over the generated states as

$$\langle A \rangle \approx \frac{1}{M} \sum_{i=1}^{M} A(Q_i). \tag{MC10.1}$$

The fluctuation of a physical quantity $A$ is again defined generally as the root-mean-square (RMS) deviation

$$\sigma_A = \sqrt{\langle A^2 \rangle - \langle A \rangle^2}, \tag{MC10.2}$$

where

$$\langle A^2 \rangle \approx \frac{1}{M} \sum_{i=1}^{M} A^2(Q_i).$$
Errors in MC calculations
The question that always arises is how many MC steps are needed before a prescribed accuracy, $\varepsilon$, is attained for a quantity $A$, i.e. before $\left| \langle A \rangle_M - \langle A \rangle \right| < \varepsilon$. The results of Monte Carlo calculations are subject to systematic and statistical errors. Sources of systematic errors include the size of the relaxed block, the boundary conditions, and an insufficient number of MC steps; these errors can be alleviated by proper choices of the simulation conditions. The statistical errors cannot be avoided, but the variance, i.e. the average fluctuation away from the equilibrium value, can be evaluated as

$$\sigma_A^2 = \frac{1}{M} \sum_{i=1}^{M} \left( A(Q_i) - \langle A \rangle \right)^2. \tag{MC11}$$
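In code, the estimates (MC10.1), (MC10.2) and (MC11) amount to simple bookkeeping over the sampled values (a sketch; note that (MC10.2) squared and (MC11) are algebraically the same quantity):

```python
def mc_statistics(samples):
    """Mean <A> (MC10.1), RMS fluctuation sigma_A (MC10.2), and variance (MC11)
    from M sampled values A(Q_i); the last two coincide by construction."""
    M = len(samples)
    mean = sum(samples) / M
    mean_sq = sum(a * a for a in samples) / M
    sigma_a = (mean_sq - mean * mean) ** 0.5              # (MC10.2)
    variance = sum((a - mean) ** 2 for a in samples) / M  # (MC11)
    return mean, sigma_a, variance
```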
For a system in thermodynamic equilibrium the distribution function is the Boltzmann distribution

$$p(Q) = Z^{-1} \exp\!\left( -\frac{H(Q)}{k_B T} \right), \tag{MC12}$$

and the transition probability (MC9) becomes

$$\pi(Q, Q') = \exp\!\left( -\frac{\Delta H}{k_B T} \right) \ \text{when } \Delta H > 0, \qquad \pi(Q, Q') = 1 \ \text{when } \Delta H \leq 0, \tag{MC13}$$

where $\Delta H = H(Q') - H(Q)$ is the change of the energy $H$ during the transition $Q \to Q'$. The former case corresponds to an increase of the energy as the transition occurs and the latter to a decrease. This means that if the energy decreases the transition $Q \to Q'$ is always accepted, but if the energy increases it is accepted with the probability $\exp(-\Delta H / k_B T)$, which is always smaller than one.
In the phase space $Q \equiv (X, \dot{X})$, $X$ and $\dot{X}$ are independent generalized coordinates. The Hamiltonian is usually composed of the kinetic and potential energy such that only the kinetic energy depends on $\dot{X}$ while the potential energy is only a function of $X$. In this case

$$p(Q) = p_1(X)\,p_2(\dot{X}), \tag{MC14.1}$$

where

$$p_1(X) = Z_1^{-1} \exp\!\left( -\frac{E_p(X)}{k_B T} \right) \quad \text{and} \quad p_2(\dot{X}) = Z_2^{-1} \exp\!\left( -\frac{E_k(\dot{X})}{k_B T} \right). \tag{MC14.2}$$

$E_p$ is the potential (internal) energy of the system, $E_k$ the kinetic energy, and

$$Z_1 = \int_{\text{accessible space}} \exp\!\left( -\frac{E_p(X)}{k_B T} \right) dX \quad \text{and} \quad Z_2 = \int_{\text{accessible velocities}} \exp\!\left( -\frac{E_k(\dot{X})}{k_B T} \right) d\dot{X}. \tag{MC14.3}$$

If the quantity $A$ depends only on the positions $X$, then

$$\langle A \rangle = \int A(Q)\,p(Q)\,dQ = \int A(X)\,p_1(X)\,p_2(\dot{X})\,dX\,d\dot{X} = \int A(X)\,p_1(X)\,dX$$

since $\int p_2(\dot{X})\,d\dot{X} = 1$, and therefore

$$\langle A \rangle = Z_1^{-1} \int A(X) \exp\!\left( -\frac{E_p(X)}{k_B T} \right) dX. \tag{MC15}$$
Hence, the distribution of velocities need not be considered explicitly in a Monte Carlo study if $A$ is not an explicit function of the particle velocities. The sampling is done only in the accessible space of particle positions, $X$, with the transition probabilities

$$\pi(X, X') = \exp\!\left( -\frac{\Delta E_p}{k_B T} \right) \ \text{when } \Delta E_p > 0, \qquad \pi(X, X') = 1 \ \text{when } \Delta E_p \leq 0, \tag{MC16}$$

where $\Delta E_p = E_p(X') - E_p(X)$. In the following we assume that the kinetic energy depends only on the velocities and the potential energy only on the positions of the atoms, and thus equation (MC16) determines the transition probability.
Note on quantities dependent only on velocities
If a quantity $B$ depends only on the velocities, its average can be evaluated analytically, since

$$\langle B \rangle = \int B(\dot{X})\,p_2(\dot{X})\,d\dot{X} = Z_2^{-1} \int B(\dot{X}) \exp\!\left( -\frac{E_k(\dot{X})}{k_B T} \right) d\dot{X}.$$
For example, the stress tensor, given by equations (G15a, b) of the Section on General Aspects of Modeling, is composed of two parts, one depending only on the positions of the particles and the other only on their velocities. The average value of the kinetic part, which depends only on velocities and is given by (G14), is

$$\langle \tau_{\alpha\beta} \rangle = -\frac{1}{V Z_2} \int \sum_i m_i v_i^{\alpha} v_i^{\beta} \exp\!\left( -\sum_j \frac{m_j v_j^2}{2 k_B T} \right) dv_1^1\,dv_1^2\,dv_1^3 \cdots dv_N^1\,dv_N^2\,dv_N^3.$$

Using spherical coordinates in the space of velocities, where

$$dv_1^1\,dv_1^2\,dv_1^3 \cdots dv_N^1\,dv_N^2\,dv_N^3 = 4\pi (v_1)^2 (v_2)^2 \cdots (v_N)^2\,dv_1\,dv_2 \cdots dv_N,$$

and integrating over the angles, we get

$$\langle \tau_{\alpha\beta} \rangle = -\frac{\delta_{\alpha\beta}}{3V} \sum_{i=1}^{N} m_i \frac{\displaystyle\int_0^{\infty} v_i^4 \exp\!\left( -m_i v_i^2 / 2 k_B T \right) dv_i}{\displaystyle\int_0^{\infty} v_i^2 \exp\!\left( -m_i v_i^2 / 2 k_B T \right) dv_i} = -\frac{N}{V} k_B T\,\delta_{\alpha\beta},$$
as in equation (G16), which was derived directly using the equipartition theorem. In fact, if the quantity $A$ is the kinetic energy, then

$$\langle E_k \rangle = \frac{1}{2 Z_2} \int \sum_i m_i v_i^2 \exp\!\left( -\sum_j \frac{m_j v_j^2}{2 k_B T} \right) dv_1^1\,dv_1^2\,dv_1^3 \cdots dv_N^1\,dv_N^2\,dv_N^3.$$

Using spherical coordinates in the space of velocities, as above, and integrating over the angles, we get

$$\langle E_k \rangle = \frac{1}{2} \sum_{i=1}^{N} m_i \frac{\displaystyle\int_0^{\infty} v_i^4 \exp\!\left( -m_i v_i^2 / 2 k_B T \right) dv_i}{\displaystyle\int_0^{\infty} v_i^2 \exp\!\left( -m_i v_i^2 / 2 k_B T \right) dv_i} = \frac{3 N k_B T}{2},$$

which is the equipartition theorem.
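A numerical spot-check of this result, drawing each velocity component from the Maxwell-Boltzmann (Gaussian) distribution; the masses, temperature, and sample counts are arbitrary illustrative values in reduced units:

```python
import random

def mean_kinetic_energy(N: int, m: float, kBT: float, samples: int) -> float:
    """Estimate <E_k> for N particles by drawing each velocity component
    from the Maxwell-Boltzmann (Gaussian) distribution with variance kBT/m."""
    sigma = (kBT / m) ** 0.5
    total = 0.0
    for _ in range(samples):
        e_k = 0.0
        for _ in range(N):
            vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
            e_k += 0.5 * m * (vx * vx + vy * vy + vz * vz)
        total += e_k
    return total / samples

N, kBT = 10, 1.0
print(mean_kinetic_energy(N, m=1.0, kBT=kBT, samples=2000))  # ~ 3*N*kBT/2 = 15.0
```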
METROPOLIS METHOD
This algorithm assumes that the kinetic energy depends only on velocities and the potential energy depends only on positions of atoms, and thus equation (MC16) determines the transition probability. The algorithm then proceeds as follows:

(1) Specify an initial configuration $X_0$, for example the positions of all particles in the system studied.

(2) Starting from a state $X$, generate randomly a new state (configuration) $X'$. This is done using a random number generator to change randomly the coordinates of particles.
(3) Compute the potential energy difference $\Delta E_p = E_p(X') - E_p(X)$.

(4) If $\Delta E_p < 0$, accept the new configuration, i.e. $X \to X'$, and go to (2).

(5) If $\Delta E_p > 0$, compute $\exp(-\Delta E_p / k_B T)$.

(6) Generate a random number $\xi$ such that $0 < \xi < 1$.

(7) If $\exp(-\Delta E_p / k_B T) \geq \xi$, accept the new state, i.e. $X \to X'$, and go to (2).

(8) Otherwise retain the old configuration and return to (2).
In step (2) we can change randomly the positions of all particles, but the usual and more practical approach is to choose randomly one particle, decide whether to change its coordinates or not, and then move randomly to another particle and continue with its possible motion. The coordinates of the chosen particle are changed as

$$x \to x + \Delta \xi_1, \qquad y \to y + \Delta \xi_2, \qquad z \to z + \Delta \xi_3,$$

where $\Delta$ is the maximum allowed displacement and $\xi_i$ ($i = 1, 2, 3$) are random numbers.
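A compact sketch of this Metropolis loop for a particle system. The potential used here (harmonic wells) is only a stand-in for the real interaction model, the displacements are drawn uniformly from $(-\Delta, \Delta)$ (one common convention), and the temperature is an illustrative reduced value:

```python
import math
import random

KBT = 1.0       # k_B * T in reduced units (illustrative)
DELTA = 0.1     # maximum allowed displacement

def e_pot(positions):
    """Stand-in potential energy (harmonic wells); replace with the real model."""
    return sum(0.5 * (x * x + y * y + z * z) for (x, y, z) in positions)

def metropolis_sweep(positions):
    """One sweep: attempt a random displacement of each particle in turn,
    accepting or rejecting with the probability given by (MC16).
    Recomputing the full energy is wasteful but keeps the sketch clear;
    a production code would evaluate only the local energy change."""
    for k in range(len(positions)):
        old = positions[k]
        e_old = e_pot(positions)
        positions[k] = tuple(c + DELTA * random.uniform(-1.0, 1.0) for c in old)
        d_e = e_pot(positions) - e_old
        if d_e > 0 and math.exp(-d_e / KBT) < random.random():
            positions[k] = old    # reject the move: retain the old configuration

coords = [tuple(random.uniform(-1.0, 1.0) for _ in range(3)) for _ in range(10)]
for _ in range(1000):             # the first part of the run only equilibrates
    metropolis_sweep(coords)
```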
Consider now a lattice of spins $s_i = \pm 1$ with the potential energy

$$E_p = -J \sum_{\langle i,j \rangle} s_i s_j,$$

where the sum runs over pairs $i$ and $j$ that are nearest neighbors. This model represents a ferromagnetic system for $J$ positive and an antiferromagnetic system for $J$ negative (see Fig. 3).
[Fig. 3. Three varieties of spin states: non-magnetic disordered, ferromagnetic ($J > 0$), and antiferromagnetic ($J < 0$).]
Let us consider the ferromagnetic case. The magnetization (an order parameter) for a given configuration of spins, $K$, is defined as

$$m(K) = \frac{1}{N} \sum_{i=1}^{N} s_i.$$

The question is how the magnetization depends on temperature. This can be studied using MC as follows. We shall consider only single spin flips as possible transitions in the MC study; these are the transitions corresponding to the changes $s_k \to -s_k$. The change in the potential energy for such a transition is

$$\Delta E_p(s_k \to -s_k) = 2 J s_k \sum_{i \,\in\, \text{neighbors of}\ k} s_i.$$
MC Procedure
(1) Specify an initial configuration of spins.

(2) Choose randomly a lattice site $k$ and flip the spin on this site.

(3) Compute $\Delta E_p(s_k \to -s_k)$ as defined above.

(4) If $\Delta E_p < 0$, accept the new configuration.

(5) If $\Delta E_p > 0$, compute $\exp(-\Delta E_p / k_B T)$.

(6) Generate a random number $\xi$ such that $0 < \xi < 1$.

(7) If $\exp(-\Delta E_p / k_B T) \geq \xi$, accept the new state and go to (2).

(8) Otherwise retain the old configuration and return to (2).
This process is repeated $L$ times, where $L$ is a large number. The first $L_0$ steps are disregarded, since during them the system is still attaining equilibrium. The magnetization at a given temperature is then evaluated as the average over the individual states:

$$m = \frac{1}{L - L_0} \sum_{K > L_0} m(K).$$
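A minimal sketch of this procedure for the two-dimensional Ising model with $J = 1$ and periodic boundary conditions; the lattice size, temperature, and step counts are illustrative:

```python
import math
import random

def ising_magnetization(n: int, kBT: float, L: int, L0: int, J: float = 1.0) -> float:
    """Metropolis single-spin-flip MC on an n x n periodic lattice; returns
    the magnetization m averaged over the last L - L0 steps."""
    s = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    N = n * n
    spin_sum = sum(map(sum, s))               # running sum of all spins
    m_sum = 0.0
    for step in range(L):
        i, j = random.randrange(n), random.randrange(n)
        nn = s[(i - 1) % n][j] + s[(i + 1) % n][j] + s[i][(j - 1) % n] + s[i][(j + 1) % n]
        d_e = 2.0 * J * s[i][j] * nn          # energy change for flipping s[i][j]
        if d_e < 0 or math.exp(-d_e / kBT) >= random.random():
            s[i][j] = -s[i][j]                # accept the flip
            spin_sum += 2 * s[i][j]
        if step >= L0:
            m_sum += spin_sum / N             # m(K) for the current configuration
    return m_sum / (L - L0)

print(ising_magnetization(n=20, kBT=1.5, L=200_000, L0=50_000))  # |m| near 1 below T_c
```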
[Fig. 4. Magnetization $m$ as a function of $T/T_c$ obtained from the MC simulation.]
For an ordering alloy the degree of order is characterized by the long-range order parameter

$$\eta = \frac{f_\alpha - c_A}{1 - c_A}. \tag{MC22}$$

In the MC study of ordering, a trial step consists of an exchange of the configuration of atoms (for example, an interchange of randomly chosen atoms); after such an exchange the configuration is relaxed with respect to the potential energy. This relaxation can be performed using the Monte Carlo procedure for structural relaxation as described above or by employing molecular statics.
The following procedure is then carried out:
(1) Evaluate the change of the potential energy $\Delta E_p$ associated with the above exchange of the configuration of atoms.

(2) If $\Delta E_p < 0$, accept the new configuration and start again.

(3) If $\Delta E_p > 0$, compute $\exp(-\Delta E_p / k_B T)$.

(4) Generate a random number $\xi$ such that $0 < \xi < 1$.

(5) If $\exp(-\Delta E_p / k_B T) \geq \xi$, accept the new configuration and start again.

(6) Otherwise retain the old configuration and start again.
When thermodynamic equilibrium is attained at a given temperature, the long-range order parameter is evaluated by averaging over a sufficient number of MC steps using equation (MC22). Performing this calculation for various temperatures yields the dependence of the long-range order parameter on temperature. The result is shown in Fig. 5; the calculation determines the temperature at which the order-disorder transformation occurs.
[Fig. 5. Long-range order parameter $\eta$ as a function of temperature, showing the ordered and disordered regimes and the order-disorder transformation.]
When simulations are carried out at constant pressure $p$, the volume $V$ of the system varies and the distribution function is

$$p(X) = Z^{-1} \exp\!\left( -\frac{E_p(X) + pV}{k_B T} \right), \tag{MC17.1}$$

with the partition function

$$Z = \iint \exp\!\left( -\frac{E_p(X) + pV}{k_B T} \right) dX\,dV. \tag{MC17.2}$$
The Metropolis cycle then includes trial changes of the volume, $V \to V'$, alongside the trial moves of the atoms; with $\Delta \Phi$ denoting the corresponding change of $E_p(X) + pV$, the acceptance steps are:

If $\Delta \Phi < 0$, accept the new configuration, i.e. $V \to V'$, $X \to X'$, and go to (2).

If $\Delta \Phi > 0$, compute $\exp(-\Delta \Phi / k_B T)$.

Generate a random number $\xi$ such that $0 < \xi < 1$.

If $\exp(-\Delta \Phi / k_B T) \geq \xi$, accept the new state, $V \to V'$, $X \to X'$, and go to (2).

Otherwise retain the old configuration and volume and return to (2).
In the grand canonical ensemble the chemical potential $\mu$ is fixed and the number of particles $N$ varies. The partition function is

$$Z = \sum_{N} \frac{\lambda^N}{N!} \int \exp\!\left( -\frac{E_p(X^{(N)}) - \mu N}{k_B T} \right) dX^{(N)}, \tag{MC18.1}$$

where $X^{(N)}$ denotes a configuration of $N$ particles and $\lambda = (2\pi m k_B T / h^2)^{3/2}$, and the corresponding distribution function is

$$p(X^{(N)}) = Z^{-1} \frac{\lambda^N}{N!} \exp\!\left( -\frac{E_p(X^{(N)}) - \mu N}{k_B T} \right). \tag{MC18.2}$$

For transitions that do not change the number of particles the acceptance depends again only on $\Delta E_p = E_p(X') - E_p(X)$, since $N$ is fixed; this can be treated in the same way as before. In addition, there are transitions corresponding to changes of the number of particles, in particular $N \to N + 1$ and $N \to N - 1$, for which the relevant energy change is $\Delta E_p = E_p(X^{(N+1)}) - E_p(X^{(N)})$ or $\Delta E_p = E_p(X^{(N-1)}) - E_p(X^{(N)})$, respectively.