
MONTE CARLO

While MD solves the time-dependent evolution of the system studied, with averaging done over time, MC is based on the statistical-mechanics notion of averaging over ensembles. In MC simulations we choose an appropriate statistical-mechanics ensemble, with a distribution function describing the probability of occurrence of the various states, and evaluate physical quantities in this ensemble. The ergodic theorem guarantees the equivalence of the MD and MC approaches.

THE PRINCIPLE OF MONTE CARLO STUDIES


Let us consider a system of $N$ particles whose positions are determined by vectors $r_i$ and velocities by vectors $v_i = dr_i/dt = \dot{r}_i$. The accessible space for this system is the $3N$-dimensional space defined by the $3N$-dimensional vector $X = (r_1, r_2, \dots, r_N)$ that describes the possible spatial configurations of the system studied. Similarly, the accessible velocities of all the particles are described by the $3N$-dimensional vector $\dot{X} = (\dot{r}_1, \dot{r}_2, \dots, \dot{r}_N)$. The behavior of the system is determined by the Hamiltonian (kinetic plus potential energy) $H(X, \dot{X})$ that depends, in general, on both positions and velocities of the particles. Positions $X$ and velocities $\dot{X}$ define the $6N$-dimensional space, called the phase space of the system studied, and in the following we shall denote vectors in this space $Q \equiv (X, \dot{X})$.¹
If $F(Q)$ is the distribution function in the phase space, which determines the probability that the system configuration corresponds to the vector $Q$, for example the Boltzmann distribution $\exp(-H/k_BT)$, then the average value of a physical quantity $A$ is

$$\langle A \rangle = Z^{-1} \int_{\text{phase space}} A(Q)\,F(Q)\,dQ, \qquad \text{(MC1.1)}$$

where

$$Z = \int_{\text{phase space}} F(Q)\,dQ \qquad \text{(MC1.2)}$$

is called the partition function; the integration extends over the whole phase space and $dQ = dX\,d\dot{X}$.

¹ In general, we could assume that there are $s_i$ degrees of freedom associated with each particle $i$, so that $(s_1, s_2, \dots, s_N)$ defines an accessible $\sum_{i=1}^{N} s_i$-dimensional space for this system. $X$ and $\dot{X}$ then define the $2\sum_{i=1}^{N} s_i$-dimensional phase space.
When evaluating $\langle A \rangle$ we have to compute a very high-dimensional integral, and this is possible analytically only for a very limited number of problems. In principle, one could do it numerically, but the problem quickly becomes unmanageable with an increasing number of particles. Take as an example 100 particles, each with three degrees of freedom, in a cube of fixed dimensions, and consider integration over the accessible space, $X$, which is for this system 300-dimensional. To carry out a standard numerical integration over the cube we choose, for example, 10 points along each of the 300 coordinates, and thus in the 300-dimensional space we have $10^{300}$ points at which the integrand needs to be evaluated. This is obviously impossible.
What we can do instead is to evaluate the average value of a quantity $A$ as

$$\langle A \rangle \approx \frac{1}{M} \sum_{k=1}^{M} A(Q_k), \qquad \text{(MC2)}$$

where the summation extends over $M$ randomly chosen points in the phase space, each taken with the weight $p(Q_k) = Z^{-1}F(Q_k)$, which is the probability of the existence of the state $Q_k$. For this purpose we need to be able to pick the points $Q_k$ with the probability $p(Q_k)$, and the Monte Carlo procedure is the method to do just this. It is the scheme for constructing an assembly of states in the phase space such that the various states occur in this assembly with the probability $p(Q) = Z^{-1}F(Q)$. The states in this assembly are then used in the formula (MC2). This formula becomes exact when $M \to \infty$, and the larger $M$ is, the better the approximation for $\langle A \rangle$.

Example: Monte Carlo Integration


As a simple example we evaluate $\iint_{\text{quarter circle}} dx\,dy$, which determines the area of the quarter circle OCA in Fig. 1 and which we know is equal to $\pi/4$. This could be done by a straightforward numerical integration using a grid of points inside the quarter circle. The integration can also be done employing the following stochastic, Monte Carlo-like process. We assign to the points inside the circle the weight $p = 1$ and to those outside the circle the weight $p = 0$. The quantity that we average is $A = 1$. The integral (MC1.1) is then

$$\langle A \rangle = \int_{\text{square OCBA}} 1 \cdot p \; dx\,dy = \int_{\text{quarter circle}} 1 \cdot 1 \; dx\,dy,$$

which is the integral we want to evaluate. If we distribute $n_{tot}$ points randomly over the area of the square OCBA (see Fig. 1), then $n_{in}$ of these points will fall inside the circle.² When evaluating $\langle A \rangle$ according to (MC2) we have $M = n_{tot}$ and $\sum_k A(Q_k) = n_{in}$, since the points outside the circle have weight zero. Therefore

$$\iint_{\text{quarter circle}} dx\,dy \approx \frac{n_{in}}{n_{tot}}.$$

The ratio will converge towards $\pi/4$ as the number $n_{tot}$ increases.


Fig. 1. Schematic picture demonstrating Monte Carlo type integration: the unit square OCBA with the inscribed quarter circle OCA.

² Such a random distribution of points within the square OCBA can be produced using a generator of random numbers within the interval (0, 1). In each trial two independent random numbers are generated, and these numbers represent the coordinates of a point inside the square OCBA.
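A minimal Python sketch of this hit-or-miss estimate (the function name and the choice of one million trials are illustrative):

```python
import random

def quarter_circle_area(n_tot: int) -> float:
    """Estimate the area of the quarter circle (= pi/4) by hit-or-miss sampling."""
    n_in = 0
    for _ in range(n_tot):
        # Two independent random numbers from (0, 1) are the coordinates of a
        # trial point inside the square OCBA.
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:  # the point falls inside the quarter circle
            n_in += 1
    return n_in / n_tot           # n_in / n_tot converges to pi/4

print(quarter_circle_area(1_000_000))  # typically prints ~0.785
```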

How can we evaluate, using this approach, the integral $\int_a^b f(x)\,dx$, where $f(x)$ is a given function? We can choose randomly a large number, $M$, of points $x_i$ in the interval $(a, b)$ and then

$$\int_a^b f(x)\,dx \approx \frac{b-a}{M} \sum_{i=1}^{M} f(x_i). \qquad \text{(MC3)}$$
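A short sketch of the estimator (MC3), assuming a well-behaved integrand; the test integral $\int_0^{\pi} \sin x\,dx = 2$ is chosen only for illustration:

```python
import math
import random

def straightforward_sampling(f, a: float, b: float, m: int) -> float:
    """Estimate the integral of f over (a, b) from m uniformly sampled points, eq. (MC3)."""
    total = sum(f(a + (b - a) * random.random()) for _ in range(m))
    return (b - a) / m * total

print(straightforward_sampling(math.sin, 0.0, math.pi, 100_000))  # ~2.0
```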
This calculation uses so-called straightforward sampling, which does not take into account the shape of the function $f(x)$. However, $f(x)$ may be a very 'sharp' function, such as the one shown in Fig. 2. The contribution to the integral from the 'tails', i.e. the regions far away from the maximum, is then very small. In this case it is more efficient to sample the function more frequently at points near the maximum while limiting the sampling far away from it. This can be achieved by a non-uniform sampling, called importance sampling, which emphasizes the peak of the function and can be carried out as follows.

Fig. 2. Schematic picture of a function $f(x)$ with a sharp peak, where the sampling has to be more frequent.
We construct a weight function $p(x) > 0$ that mimics the function $f(x)$, in the sense that it is large for those values of $x$ for which $f(x)$ is large and vice versa, and that is normalized, $\int_a^b p(x)\,dx = 1$. The integral $\int_a^b f(x)\,dx$ can now be written as

$$\int_a^b \frac{f(x)}{p(x)}\,p(x)\,dx. \qquad \text{(MC4)}$$

The integration in (MC4) can be interpreted as integration of $f(x)/p(x)$ with the weight $p(x)$. We again choose randomly $M$ points $x_i$ in the interval $(a, b)$, but now with the weight $p(x_i)$, and, since $p(x)$ is normalized to unity,

$$\int_a^b f(x)\,dx \approx \frac{1}{M} \sum_{i=1}^{M} \frac{f(x_i)}{p(x_i)}, \qquad \text{(MC5)}$$

where the summation extends over points $x_i$ chosen with the weight $p(x_i)$ instead of uniformly. This is called importance sampling, since it prefers the points that give the dominant contribution to the integral. This approach means that points $x_i$ within the interval $(a, b)$ are chosen with the probability $p(x_i)$ rather than all with the same probability. Hence, the points used in the summation occur with the frequency $p(x_i)$.
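As an illustration of (MC5), the following sketch (an invented example) evaluates $\int_0^{\infty} x^2 e^{-x}\,dx = 2$ with the weight function $p(x) = e^{-x}$, which can be sampled by inverting its cumulative distribution; the interval is semi-infinite here merely for convenience:

```python
import math
import random

def importance_sampling(m: int) -> float:
    """Estimate integral_0^inf x^2 exp(-x) dx (exact value 2) by importance
    sampling with the normalized weight function p(x) = exp(-x)."""
    total = 0.0
    for _ in range(m):
        x = -math.log(1.0 - random.random())  # x drawn from p(x) = exp(-x)
        total += x * x                        # f(x)/p(x) = x^2 for f(x) = x^2 exp(-x)
    return total / m                          # (1/M) sum f(x_i)/p(x_i), eq. (MC5)

print(importance_sampling(100_000))  # ~2.0
```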
When we calculate the average value of a quantity $A$ according to equation (MC1.1), we can choose

$$p(Q) = Z^{-1}F(Q). \qquad \text{(MC6)}$$

Then $\langle A \rangle = \int_{\text{phase space}} A(Q)\,p(Q)\,dQ$, and in the numerical stochastic evaluation $\langle A \rangle$ is determined according to (MC2), but now the summation extends over $M$ randomly chosen points $Q_i$ in the phase space, each taken with the weight $p(Q_i)$. This calculation will be most efficient if we can sum in (MC2) over a set of points $Q_i$ constructed in such a way that a point $Q_i$ (which represents a phase-space configuration) occurs in this set with the probability $p(Q_i)$. For this purpose we need an algorithm that generates a set of phase-space states in accordance with the distribution $p(Q) = Z^{-1}F(Q)$. This is what Monte Carlo simulation achieves.

MONTE-CARLO SIMULATION
As explained above, the goal is to construct (select) from the phase space an assembly of states such that the various states occur in this assembly with the probability $p(Q)$. For this purpose we devise a random walk (a Markov chain³) such that, starting from an initial state $Q_0$, other states are generated by transitions $Q \to Q'$ and, ultimately, in the steady state, they are distributed according to the probability (distribution function) $p(Q)$. The transitions $Q \to Q'$ occur in this process with a transition probability $\pi(Q, Q')$ that defines the nature of this process and must be chosen so as to attain the distribution $p(Q)$ in the steady state.
The transition probability $\pi(Q, Q')$ must satisfy the following conditions:

(i) $\pi(Q, Q') \ge 0$, \qquad (MC7.1)

which simply means that the transition probability cannot be negative;

(ii) $\sum_{Q'} \pi(Q, Q') = 1$, \qquad (MC7.2)

where the summation extends over all available states $Q'$; the meaning of this condition is that every state $Q$ is eventually attained in this random-walk process;

(iii) $\sum_{Q'} \pi(Q', Q)\,p(Q') = p(Q)$, \qquad (MC7.3)

where the summation again extends over all available states $Q'$. This is the self-consistency condition that defines the transition probability $\pi(Q, Q')$ such that in equilibrium the states in the phase space are distributed according to the prescribed distribution function $p(Q)$. This is the goal of the construction of this random walk.

The trick used in MC is to replace the last condition by the stronger condition of microscopic reversibility

$$\pi(Q', Q)\,p(Q') = \pi(Q, Q')\,p(Q). \qquad \text{(MC8)}$$
Clearly, equation (MC7.3) follows from equation (MC8). By summation over $Q'$ in (MC8) we obtain

$$\sum_{Q'} \pi(Q', Q)\,p(Q') = \sum_{Q'} \pi(Q, Q')\,p(Q) = p(Q)$$

because, according to (MC7.2), $\sum_{Q'} \pi(Q, Q') = 1$, and thus equation (MC7.3) is satisfied. However, (MC7.3) could also be satisfied in a different way, and thus equation (MC8) does not follow from (MC7.3). This means that the use of (MC8) is just one approach to satisfying the more general condition (MC7.3).

³ A Markov chain is a random walk in which the probability that a system is in the state $k$ depends only on the preceding state $k-1$ but not on other states. A Markov chain is called ergodic if any state can be reached from any other state in a finite number of steps.
A possible transition probability for $Q \to Q'$ that satisfies equation (MC8) is

$$\pi(Q, Q') = \frac{p(Q')}{p(Q)} \quad \text{when } p(Q) > p(Q'),$$
$$\pi(Q, Q') = 1 \quad \text{when } p(Q) < p(Q'), \qquad \text{(MC9)}$$
$$\pi(Q, Q) = 1 - \sum_{Q' \ne Q} \pi(Q, Q').$$

The first equation determines the probability of the transition $Q \to Q'$ when the state $Q$ is more probable than the state $Q'$. The second equation states that the transition always takes place if the state $Q'$ is more probable than the state $Q$. The last equation, which follows from (MC7.2), determines the probability with which the system remains in the state $Q$ when it is already in this state.
The following is the proof that the choice (MC9) satisfies (MC8). According to (MC9):

If $p(Q) > p(Q')$, then $\pi(Q, Q') = p(Q')/p(Q)$ and $\pi(Q', Q) = 1$. In this case

$$\pi(Q, Q')\,p(Q) = p(Q') = \pi(Q', Q)\,p(Q')$$

since $\pi(Q', Q) = 1$. Hence equation (MC8) is satisfied.

If $p(Q) < p(Q')$, then $\pi(Q, Q') = 1$ and, according to (MC9), $\pi(Q', Q) = p(Q)/p(Q')$. Hence

$$\pi(Q', Q)\,p(Q') = p(Q) = \pi(Q, Q')\,p(Q)$$

since $\pi(Q, Q') = 1$, and equation (MC8) is again satisfied.

GENERAL MONTE-CARLO ALGORITHM


(1) Specify an initial point, $Q_0$, in the phase space of the system studied. This means we define a starting configuration, in general determined by the positions and velocities of the particles studied. The process should be independent of this choice.
(2) Starting from a state $Q$, generate randomly a new state $Q'$. This is done with the help of a random number generator used to change the coordinates and velocities randomly. Randomness is essential!
(3) Evaluate the transition probability $\pi(Q, Q')$ defined by (MC9).
(4) Generate a random number $\xi$ such that $0 < \xi < 1$.
(5) If $\pi(Q, Q') < \xi$, remain in the old state $Q$ and go to (2).
(6) If $\pi(Q, Q') \ge \xi$, accept the new state, i.e. $Q \to Q'$, and go to (2).

The steps (5) and (6) correspond to making the transition $Q \to Q'$ with the probability $\pi(Q, Q')$, since the probability that $\pi(Q, Q') \ge \xi$ is equal to $\pi(Q, Q')$. Note that if $\pi(Q, Q') = 1$ it is larger than any random number from the interval $(0, 1)$.
In any Monte Carlo study the system must first be allowed to relax, since the starting configuration generally does not correspond to the equilibrium state with the distribution $p(Q)$. In the case of the Boltzmann distribution this means that the starting configuration may not correspond to thermodynamic equilibrium. How many steps are needed before equilibrium is attained has to be tested empirically.
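The algorithm is easiest to see on a toy problem. The sketch below (a hypothetical three-state distribution, chosen purely for illustration) implements steps (2)-(6) with the transition probability (MC9); after equilibration the visit frequencies approach $p(Q)$:

```python
import random

# Target distribution p(Q) over three discrete "states"; only the ratios
# p(Q')/p(Q) enter the transition probability (MC9).
p = {0: 0.2, 1: 0.5, 2: 0.3}
states = list(p)

def mc_chain(n_steps: int, q0: int = 0) -> dict:
    """Random walk whose steady-state distribution is p, using rule (MC9)."""
    counts = dict.fromkeys(states, 0)
    q = q0
    for _ in range(n_steps):
        q_new = random.choice(states)      # step (2): propose a state at random
        pi = min(1.0, p[q_new] / p[q])     # step (3): transition probability (MC9)
        if pi >= random.random():          # steps (4)-(6): accept or reject
            q = q_new
        counts[q] += 1
    return {s: c / n_steps for s, c in counts.items()}

print(mc_chain(200_000))  # frequencies approach {0: 0.2, 1: 0.5, 2: 0.3}
```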

Averages and fluctuations


Meaningful averages can only be calculated after equilibration, when the distribution $p(Q)$ has been attained. The average value of a physical quantity $A$ is generally defined as $\langle A \rangle = \int_{\text{phase space}} A(Q)\,p(Q)\,dQ$. In the framework of a Monte Carlo study, when the states $Q_i$ are distributed according to $p(Q)$,

$$\langle A \rangle \approx \frac{1}{M} \sum_{i=1}^{M} A(Q_i), \qquad \text{(MC10.1)}$$

where $M$ is the number of Monte Carlo steps after equilibration.

The fluctuation of a physical quantity $A$ is again defined generally as the root-mean-square (RMS) deviation

$$\sigma_A = \sqrt{\langle A^2 \rangle - \langle A \rangle^2}, \qquad \text{(MC10.2)}$$

and

$$\langle A^2 \rangle \approx \frac{1}{M} \sum_{i=1}^{M} A^2(Q_i).$$

Errors in MC calculations

The question that always arises is how many MC steps are needed before a prescribed accuracy, $\varepsilon$, is attained for a quantity $A$, i.e. before $\left| \langle A \rangle_M - \langle A \rangle \right| < \varepsilon$, where $\langle A \rangle_M$ is the $M$-step estimate.

The results of Monte Carlo calculations are subject to systematic and statistical errors. Sources of systematic errors include the size of the relaxed block, the boundary conditions, and an insufficient number of MC steps. These errors can be alleviated by proper choices of the simulation conditions. The statistical errors cannot be avoided, but the variance, i.e. the average fluctuation away from the equilibrium quantities, can be evaluated as

$$\sigma^2(A) = \frac{1}{M} \sum_{i=1}^{M} \left( A_i - \langle A \rangle \right)^2, \qquad \text{(MC11)}$$

where $M$ is the number of MC steps.
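Given a sequence of post-equilibration measurements $A(Q_i)$, equations (MC10.1) and (MC10.2) reduce to a few lines; the Gaussian samples below merely stand in for real simulation data:

```python
import numpy as np

def mc_statistics(a: np.ndarray) -> tuple[float, float]:
    """Average (MC10.1) and RMS fluctuation (MC10.2) of samples A(Q_i)."""
    a_mean = a.mean()                      # <A>   = (1/M) sum A_i
    a2_mean = (a * a).mean()               # <A^2> = (1/M) sum A_i^2
    return a_mean, np.sqrt(a2_mean - a_mean ** 2)

samples = np.random.normal(1.0, 0.1, 10_000)  # stand-in for measured A(Q_i)
print(mc_statistics(samples))                 # ~ (1.0, 0.1)
```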

MONTE-CARLO SIMULATION WITH BOLTZMANN DISTRIBUTION OF STATES

CANONICAL ENSEMBLE (N, V, T)

Quantities conserved: the temperature T, the total number of particles N and the total volume V (see also MD). The distribution function is

$$p(Q) = Z^{-1} \exp\left( -\frac{H(Q)}{k_BT} \right), \qquad \text{(MC12)}$$

where $H = E_p + E_{kin}$ is the Hamiltonian of the system, i.e. its energy. Following equation (MC9),

$$\pi(Q, Q') = \exp\left( -\frac{\Delta H}{k_BT} \right) \text{ when } \Delta H > 0, \qquad \pi(Q, Q') = 1 \text{ when } \Delta H < 0, \qquad \text{(MC13)}$$

where $\Delta H = H(Q') - H(Q)$ is the change of the energy $H$ during the transition $Q \to Q'$. The former case corresponds to an increase of the energy during the transition and the latter to a decrease. This means that if the energy decreases the transition $Q \to Q'$ is always accepted, but if the energy increases it is accepted with the probability $\exp(-\Delta H / k_BT)$, which is always smaller than one.

In the phase space $Q \equiv (X, \dot{X})$, $X$ and $\dot{X}$ are independent generalized coordinates. The Hamiltonian is usually composed of the kinetic and potential energy such that only the kinetic energy depends on $\dot{X}$, while the potential energy is a function of $X$ only. In this case

$$p(Q) = p_1(X)\,p_2(\dot{X}), \qquad \text{(MC14.1)}$$

where

$$p_1(X) = Z_1^{-1} \exp\left( -\frac{E_p(X)}{k_BT} \right) \quad \text{and} \quad p_2(\dot{X}) = Z_2^{-1} \exp\left( -\frac{E_k(\dot{X})}{k_BT} \right). \qquad \text{(MC14.2)}$$

$E_p$ is the potential (internal) energy of the system, $E_k$ the kinetic energy, and

$$Z_1 = \int_{\text{accessible space}} \exp\left( -\frac{E_p(X)}{k_BT} \right) dX \quad \text{and} \quad Z_2 = \int_{\text{accessible velocities}} \exp\left( -\frac{E_k(\dot{X})}{k_BT} \right) d\dot{X}. \qquad \text{(MC14.3)}$$

If a physical quantity, $A$, depends on the temperature $T$ but not explicitly on the velocities $\dot{X}$, and the number of particles is fixed, then

$$\langle A \rangle = \int A(Q)\,p(Q)\,dQ = \int A(X)\,p_1(X)\,p_2(\dot{X})\,dX\,d\dot{X} = \int A(X)\,p_1(X)\,dX$$

since $\int p_2(\dot{X})\,d\dot{X} = 1$, and therefore

$$\langle A \rangle = \int A(X)\,p_1(X)\,dX = Z_1^{-1} \int A(X)\,\exp\left( -\frac{E_p(X)}{k_BT} \right) dX. \qquad \text{(MC15)}$$

Hence the distribution of velocities need not be considered explicitly in a Monte Carlo study if $A$ is not an explicit function of the particle velocities. The sampling is done only in the accessible space of particle positions, $X$, with the transition probabilities

$$\pi(X, X') = \exp\left( -\frac{\Delta E_p}{k_BT} \right) \text{ when } \Delta E_p > 0, \qquad \pi(X, X') = 1 \text{ otherwise.} \qquad \text{(MC16)}$$

In the following we assume that the kinetic energy depends only on the velocities and the potential energy only on the positions of the atoms, and thus equation (MC16) determines the transition probability.
Note on quantities dependent only on velocities

If a physical quantity, $B$, depends only on the velocities $\dot{X}$ and not on $X$, then

$$\langle B \rangle = \int B(Q)\,p(Q)\,dQ = \int B(\dot{X})\,p_1(X)\,p_2(\dot{X})\,dX\,d\dot{X} = \int B(\dot{X})\,p_2(\dot{X})\,d\dot{X}$$

since $\int p_1(X)\,dX = 1$, and thus

$$\langle B \rangle = \int B(\dot{X})\,p_2(\dot{X})\,d\dot{X} = Z_2^{-1} \int B(\dot{X})\,\exp\left( -\frac{E_k(\dot{X})}{k_BT} \right) d\dot{X}.$$

For example, the stress tensor, given by equations (G15a, b) of the Section on General Aspects of Modeling, is composed of two parts, one depending only on the positions of the particles and the other only on their velocities. The average value of the kinetic part, which depends only on velocities and is given by (G14), is

$$\langle \tau_{\alpha\beta} \rangle = -\frac{1}{V Z_2} \int \sum_i m_i v_i^{\alpha} v_i^{\beta}\, \exp\left( -\sum_j m_j v_j^2 / 2k_BT \right) dv_1^1\,dv_1^2\,dv_1^3 \cdots dv_N^1\,dv_N^2\,dv_N^3.$$

Using spherical coordinates in the space of velocities, where $dv_1^1\,dv_1^2\,dv_1^3 \cdots dv_N^1\,dv_N^2\,dv_N^3 = (4\pi)^N (v_1)^2 (v_2)^2 \cdots (v_N)^2\, dv_1\,dv_2 \cdots dv_N$, and integrating over the angles, we get

$$\langle \tau_{\alpha\beta} \rangle = -\frac{\delta_{\alpha\beta}}{3V} \sum_{i=1}^{N} m_i\, \frac{\int_0^{\infty} v_i^4\, \exp\left( -m_i v_i^2 / 2k_BT \right) dv_i}{\int_0^{\infty} v_i^2\, \exp\left( -m_i v_i^2 / 2k_BT \right) dv_i}.$$

After integration by parts,

$$\langle \tau_{\alpha\beta} \rangle = -\frac{N}{V}\, k_BT\, \delta_{\alpha\beta} = -\rho\, k_BT\, \delta_{\alpha\beta},$$

as in equation (G16), which was derived directly using the equipartition theorem. In fact, if the quantity $A$ is the kinetic energy, then

$$\langle E_k \rangle = \frac{1}{2 Z_2} \int \sum_i m_i v_i^2\, \exp\left( -\sum_j m_j v_j^2 / 2k_BT \right) dv_1^1\,dv_1^2\,dv_1^3 \cdots dv_N^1\,dv_N^2\,dv_N^3.$$

Using spherical coordinates in the space of velocities, with the same substitution as above, and integrating over the angles, we get

$$\langle E_k \rangle = \sum_{i=1}^{N} \frac{m_i}{2}\, \frac{\int_0^{\infty} v_i^4\, \exp\left( -m_i v_i^2 / 2k_BT \right) dv_i}{\int_0^{\infty} v_i^2\, \exp\left( -m_i v_i^2 / 2k_BT \right) dv_i},$$

and after integration by parts $\langle E_k \rangle = \frac{3Nk_BT}{2}$, which is the equipartition theorem.

METROPOLIS METHOD
This algorithm assumes that the kinetic energy depends only on the velocities and the potential energy only on the positions of the atoms, so that equation (MC16) determines the transition probability. The algorithm then proceeds as follows:
(1) Specify an initial configuration $X_0$, for example the positions of all particles in the system studied.
(2) Starting from a state $X$, generate randomly a new state (configuration) $X'$. This is done using a random number generator to change the coordinates of particles randomly.
(3) Compute the potential energy difference $\Delta E_p = E_p(X') - E_p(X)$.
(4) If $\Delta E_p < 0$, accept the new configuration, i.e. $X \to X'$, and go to (2).
(5) If $\Delta E_p > 0$, compute $\exp(-\Delta E_p / k_BT)$.
(6) Generate a random number $\xi$ such that $0 < \xi < 1$.
(7) If $\exp(-\Delta E_p / k_BT) \ge \xi$, accept the new state, i.e. $X \to X'$, and go to (2).
(8) Otherwise retain the old configuration and return to (2).

In step (2) we can change the positions of all particles randomly, but the usual and more practical approach is to choose one particle at random, decide whether to change its coordinates or not, then move randomly to another particle and continue with its possible motion.

Examples of the use of the Metropolis method


Structural relaxation and atomic vibrations
One atom is picked randomly in the simulation block with the help of a random number generator. The position of this atom is then changed in small increments according to the following prescription:

$$x \to x + \Delta \xi_1, \qquad y \to y + \Delta \xi_2, \qquad z \to z + \Delta \xi_3,$$

where $\Delta$ is the maximum allowed displacement and $\xi_i$ ($i = 1, 2, 3$) are random numbers from the interval between $-1$ and $+1$. The new configuration is then accepted or rejected following the Metropolis algorithm.

This type of configurational change simulates:
(i) local structural relaxations, which lead to a decrease of the potential energy;
(ii) vibrations of atoms.

The value of $\Delta$ must be chosen judiciously. If $\Delta$ is too small, a large fraction of the atomic displacements are accepted, but the phase space of the system is explored very slowly and the convergence to the equilibrium state is thus very slow. On the other hand, if $\Delta$ is too large, nearly all the trial moves are rejected and the convergence is again very slow. Experience shows that the value of $\Delta$ which leads to the fastest convergence corresponds to an acceptance ratio between 20% and 40%. Therefore, in MC codes the value of $\Delta$ is often adjusted during the simulation according to the current acceptance ratio, in order to attain an average acceptance ratio between 20% and 40%; a sketch of such a scheme follows below.
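The sketch below puts the pieces together for a toy system: single-atom trial moves of the form $x \to x + \Delta\xi$, Metropolis acceptance with (MC16), and a simple adaptive adjustment of $\Delta$. The Lennard-Jones pair potential, the box size, the temperature and the adjustment factors are all illustrative assumptions; a production code would also use periodic boundaries and recompute only the energy of the moved atom rather than the total energy.

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 0.5        # temperature in reduced (Lennard-Jones) units -- assumed
delta = 0.1     # maximum allowed displacement, adjusted on the fly

def potential_energy(pos: np.ndarray) -> float:
    """Total Lennard-Jones energy of all pairs (stand-in for the model potential)."""
    e = 0.0
    for i in range(len(pos) - 1):
        r = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        e += np.sum(4.0 * (r ** -12 - r ** -6))
    return e

pos = rng.random((20, 3)) * 5.0   # 20 atoms in a 5x5x5 box (no periodic boundaries)
energy = potential_energy(pos)
accepted = 0

for step in range(1, 20_001):
    i = rng.integers(len(pos))                         # pick one atom at random
    trial = pos.copy()
    trial[i] += delta * rng.uniform(-1.0, 1.0, 3)      # x -> x + delta*xi_1, etc.
    e_trial = potential_energy(trial)
    d_e = e_trial - energy
    if d_e < 0 or np.exp(-d_e / kT) >= rng.random():   # Metropolis rule (MC16)
        pos, energy = trial, e_trial
        accepted += 1
    if step % 1000 == 0:                               # steer acceptance to 20-40%
        ratio = accepted / step
        delta *= 1.1 if ratio > 0.4 else (0.9 if ratio < 0.2 else 1.0)

print(energy, accepted / 20_000)
```

Note that adjusting $\Delta$ on the fly formally breaks detailed balance, so it is usually done only during equilibration, with $\Delta$ held fixed when averages are accumulated.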
Ising model
Let us consider a lattice and associate with each site $i$ a spin $s_i$ that is either $+1$ or $-1$. The spins interact via the (exchange) coupling $J$, and with no external magnetic field applied the potential energy of this system is

$$E_p = -J \sum_{\langle i,j \rangle} s_i s_j,$$

where $i$ and $j$ are always nearest neighbors. This model represents a ferromagnetic system for $J$ positive and an antiferromagnetic system for $J$ negative (see Fig. 3).
Fig. 3. Three varieties of spin states: non-magnetic disordered, ferromagnetic ($J > 0$), and antiferromagnetic ($J < 0$).

Let us consider the ferromagnetic case. The magnetization (an order parameter) for a given configuration of spins, $K$, is defined as

$$m(K) = \frac{(\text{number of spins up}) - (\text{number of spins down})}{\text{total number of spins}}.$$

The question is how the magnetization depends on temperature. This can be studied using MC as follows. We consider only single spin flips as possible transitions in the MC study, i.e. transitions corresponding to the changes $s_k \to -s_k$. The change in the potential energy for such a transition is

$$\Delta E_p(s_k \to -s_k) = 2 J s_k \sum_{\substack{i = \text{neighbors} \\ \text{of } k}} s_i.$$

MC Procedure
(1) Specify an initial configuration of spins.
(2) Choose randomly a lattice site $k$ and flip the spin on this site.
(3) Compute $\Delta E_p(s_k \to -s_k)$ as defined above.
(4) If $\Delta E_p < 0$, accept the new configuration and go to (2).
(5) If $\Delta E_p > 0$, compute $\exp(-\Delta E_p / k_BT)$.
(6) Generate a random number $\xi$ such that $0 < \xi < 1$.
(7) If $\exp(-\Delta E_p / k_BT) \ge \xi$, accept the new state and go to (2).
(8) Otherwise retain the old configuration and return to (2).

This process is repeated $L$ times, where $L$ is a large number. The first $L_0$ steps are disregarded, since the system is first attaining equilibrium. The magnetization is then evaluated for a given temperature as the average over the individual states:

$$\bar{m} = \frac{1}{L - L_0} \sum_{K > L_0} m(K).$$

The temperature dependence of $\bar{m}$ calculated in this way is shown in Fig. 4. There is a transition temperature $T_c$ at which the spins become disordered. This is an entropic (configurational) effect that is automatically included when constructing the canonical ensemble.

Fig. 4. Magnetization vs. temperature for the Ising model of spins; the magnetization falls from 1 at low temperature towards 0 around $T/T_c = 1$.
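A compact sketch of the procedure above for the two-dimensional Ising model (the lattice size, temperature, step counts and periodic boundary conditions are illustrative choices; $|m|$ is averaged because the overall sign of $m$ can flip on a finite lattice):

```python
import numpy as np

rng = np.random.default_rng(1)
L, J, kT = 32, 1.0, 1.5                  # below the 2D critical point kT_c ~ 2.27 J
s = rng.choice([-1, 1], size=(L, L))     # (1) initial configuration of spins

def delta_e(i: int, j: int) -> float:
    """Energy change 2*J*s_k*(sum over nearest neighbors) for flipping s[i, j]."""
    nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
    return 2.0 * J * s[i, j] * nn

n_steps, n_equil, mags = 400_000, 100_000, []
for step in range(n_steps):
    i, j = rng.integers(L), rng.integers(L)          # (2) choose a site at random
    dE = delta_e(i, j)                               # (3) energy change of the flip
    if dE < 0 or np.exp(-dE / kT) >= rng.random():   # (4)-(8) Metropolis acceptance
        s[i, j] = -s[i, j]
    if step >= n_equil:                              # disregard the first L0 steps
        mags.append(abs(s.mean()))

print(np.mean(mags))   # near 1 for kT << kT_c, near 0 for kT >> kT_c
```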

Order-disorder transition in a binary alloy


Consider a binary alloy composed of species A and B in which the number of each species is fixed, and at 0 K the lowest-energy state is an ordered structure in which all the A atoms occupy the $\alpha$ sites and all the B atoms the $\beta$ sites of a given lattice. For example, in the B2 structure AB with the underlying bcc lattice (e.g. CuZn, NiAl), A atoms occupy the corners of the cube and B atoms the centers; in the L1₂ structure A₃B with the underlying fcc lattice (e.g. Ni₃Al), B atoms occupy the corners of the cube and A atoms the centers of the faces.

In order to describe the order-disorder transition of the alloy system we employ the long-range order parameter of Bragg and Williams, defined as

$$\eta = \frac{f_{\alpha} - c_A}{1 - c_A}, \qquad \text{(MC22)}$$

where $f_{\alpha}$ is the fraction of $\alpha$ sites occupied by A atoms and $c_A$ is the concentration of species A. This parameter is unity for perfect order, when $f_{\alpha} = 1$, and zero for the completely disordered state, when $f_{\alpha} = c_A$, since $\alpha$ and $\beta$ sites are then occupied by species A equally.
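For concreteness, a small helper that evaluates (MC22); the data layout (an array of site occupants plus a boolean mask marking the $\alpha$ sublattice) is a hypothetical choice:

```python
import numpy as np

def bragg_williams_eta(occupants: np.ndarray, alpha_mask: np.ndarray, c_a: float) -> float:
    """Long-range order parameter (MC22); occupants[i] is 'A' or 'B' for site i."""
    f_alpha = np.mean(occupants[alpha_mask] == "A")  # fraction of alpha sites holding A
    return (f_alpha - c_a) / (1.0 - c_a)

occ = np.array(list("ABABABAB"))            # perfectly ordered chain of A and B
alpha = np.arange(occ.size) % 2 == 0        # even sites form the alpha sublattice
print(bragg_williams_eta(occ, alpha, 0.5))  # 1.0 for perfect order, 0.0 when disordered
```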
MC Procedure for the study of the order-disorder transition
In each step two atoms are picked randomly in the block. Their positions are exchanged and the new configuration is relaxed structurally to attain a minimum of the potential energy. This relaxation can be performed using the Monte Carlo procedure for structural relaxation described above, or employing molecular statics. The following procedure is then carried out:
(1) Evaluate the change of the potential energy $\Delta E_p$ associated with the above exchange of the atoms.
(2) If $\Delta E_p < 0$, accept the new configuration and start again.
(3) If $\Delta E_p > 0$, compute $\exp(-\Delta E_p / k_BT)$.
(4) Generate a random number $\xi$ such that $0 < \xi < 1$.
(5) If $\exp(-\Delta E_p / k_BT) \ge \xi$, accept the new configuration and start again.
(6) Otherwise retain the old configuration and start again.

When thermodynamic equilibrium has been attained at a given temperature, the long-range order parameter is evaluated by averaging over a sufficient number of MC steps using equation (MC22). Performing this calculation for various temperatures, the dependence of the long-range order parameter on temperature is obtained. The result is shown in Fig. 5. The calculation determines the temperature at which the order-disorder transformation occurs.

Fig. 5. Long-range order parameter $\eta$ as a function of temperature obtained by the Monte Carlo simulation of a binary alloy; $\eta$ decreases from 1 in the ordered regime to 0 in the disordered regime.

ISOTHERMAL-ISOBARIC CANONICAL ENSEMBLE (N, p, T)


Quantities conserved: the temperature T, the total number of particles N, and the pressure p. The enthalpy, $H = E_p + pV$, rather than the internal potential energy $E_p$, enters all the formulas. Hence the distribution function is

$$p(X) = Z^{-1} \exp\left( -\frac{E_p(X) + pV}{k_BT} \right) \qquad \text{(MC17.1)}$$

and the partition function is

$$Z = \iint \exp\left( -\frac{E_p(X) + pV}{k_BT} \right) dX\,dV. \qquad \text{(MC17.2)}$$

Monte Carlo Procedure leading to the change in volume at a given pressure

We consider the volume as another variable, so that there are 3N variables determining the particle positions plus one, the volume. Proceed then as follows:
(1) Specify an initial configuration and an initial volume V.
(2) Choose randomly one of the variables, which is either the position of a particle or the volume. If it is the position of a particle, proceed just as in the case of the canonical ensemble by carrying out the structural relaxation. If it is the volume, proceed as follows:
(3) Generate randomly, using a random number generator, a new volume $V' = V + \Delta V$. The coordinates of the particles must be consistent with the new volume, and this is achieved by scaling all the coordinates by the factor $(V'/V)^{1/3} = (1 + \Delta V/V)^{1/3}$. This scaling produces new coordinates $X'$.
(4) Compute $\Delta \vartheta = E_p(X') - E_p(X) + p(V' - V)$.
(5) If $\Delta \vartheta < 0$, accept the new configuration, i.e. $V \to V'$, $X \to X'$, and go to (2).
(6) If $\Delta \vartheta > 0$, compute $\exp(-\Delta \vartheta / k_BT)$.
(7) Generate a random number $\xi$ such that $0 < \xi < 1$.
(8) If $\exp(-\Delta \vartheta / k_BT) \ge \xi$, accept the new state, $V \to V'$, $X \to X'$, and go to (2). Otherwise retain the old configuration and volume and return to (2).
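The volume move can be tested in isolation on a system with no interactions ($E_p \equiv 0$), for which the distribution (MC17.1) gives an exponential distribution of the volume with mean $k_BT/p$. The sketch below (reduced units, illustrative parameters) implements steps (3)-(8) for such volume moves only; a real simulation would also rescale the coordinates and recompute $E_p$:

```python
import numpy as np

rng = np.random.default_rng(2)
kT, p = 1.0, 2.0            # temperature and imposed pressure (reduced units)
V, dV_max = 1.0, 0.5        # current volume and maximum volume change
volumes = []

for step in range(200_000):
    V_new = V + dV_max * rng.uniform(-1.0, 1.0)        # (3) trial volume V' = V + dV
    if V_new > 0.0:
        d_theta = 0.0 + p * (V_new - V)                # (4) dE_p + p(V' - V); dE_p = 0 here
        if d_theta < 0 or np.exp(-d_theta / kT) >= rng.random():
            V = V_new                                  # (5)-(8) accept or reject
    if step >= 50_000:                                 # measure after equilibration
        volumes.append(V)

print(np.mean(volumes))   # tends to kT/p = 0.5 for this zero-energy system
```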

Order-disorder transition in a binary alloy at constant pressure

The calculation proceeds in exactly the same way as when the volume is constant, but $\Delta E_p$ is replaced by $\Delta(E_p + pV)$ and the change in volume has to be included in the structural relaxation in order to keep the pressure fixed.

GRAND CANONICAL ENSEMBLE (μ, V, T)

Quantities conserved: the temperature T, the total volume V, and the chemical potential μ. The number of particles is variable, and thus fluctuations of the concentration are allowed.

Instead of the internal potential energy $E_p$, the Gibbs potential $G = E_p - \mu N$, where $N$ is the number of particles, enters all the formulas. In particular, the partition function is now

$$Z = \sum_N \frac{\lambda^N}{N!} \int \exp\left( -\frac{E_p(X^{(N)}) - \mu N}{k_BT} \right) dX^{(N)}, \qquad \text{(MC17.3)}$$

where $X^{(N)}$ corresponds to a certain configuration with $N$ particles. The factor $\lambda = \left( h^2 / 2\pi m k_BT \right)^{-3/2}$ arises from the integration of the part associated with the kinetic energy over the velocities at varying numbers of particles; $h$ is Planck's constant. Hence the distribution function for the configuration $X^{(N)}$ is

$$p(X^{(N)}) = Z^{-1} \frac{\lambda^N}{N!} \exp\left( -\frac{E_p(X^{(N)}) - \mu N}{k_BT} \right). \qquad \text{(MC17.4)}$$

The Monte Carlo procedure now involves, in general, three different steps:
(i) changes of the configuration,
(ii) creation of particles,
(iii) destruction of particles.

For a given number of particles the probability of the transition $X \to X'$ again depends only on $\Delta E_p = E_p(X') - E_p(X)$, since $N$ is fixed; this can be treated in the same way as before. However, we also need probabilities for the transitions corresponding to changes of the number of particles, in particular $N \to N+1$ and $N \to N-1$. These probabilities will depend on the ratios of the probabilities of existence of the states with $N$, $N+1$ or $N-1$ particles:

$$N \to N+1: \quad \frac{p(X^{(N+1)})}{p(X^{(N)})} = \frac{\lambda}{N+1} \exp\left( -\frac{E_p(X^{(N+1)}) - E_p(X^{(N)}) - \mu}{k_BT} \right)$$

$$N \to N-1: \quad \frac{p(X^{(N-1)})}{p(X^{(N)})} = \frac{N}{\lambda} \exp\left( -\frac{E_p(X^{(N-1)}) - E_p(X^{(N)}) + \mu}{k_BT} \right) \qquad \text{(MC18)}$$
Monte Carlo Procedure

(1) Specify an initial configuration with N particles inside the volume V.
(2) Select randomly, with equal probability, one of the following procedures: move a particle, create a particle, destroy a particle.

Move particle
(1.1) Select randomly a particle and displace it. This corresponds to the configurational change $X^{(N)} \to X'^{(N)}$.
(1.2) Compute the potential energy change $\Delta E_p = E_p(X'^{(N)}) - E_p(X^{(N)})$.
(1.3) If $\Delta E_p < 0$, accept the configuration and go to (2).
(1.4) If $\Delta E_p > 0$, compute $\exp(-\Delta E_p / k_BT)$.
(1.5) Generate a random number $\xi$ such that $0 \le \xi \le 1$.
(1.6) If $\exp(-\Delta E_p / k_BT) \ge \xi$, accept the new state, i.e. $X^{(N)} \to X'^{(N)}$, and go to (2).
(1.7) Otherwise retain the old configuration and return to (2).
Create particle
(2.1) Select randomly coordinates inside the volume for a new particle and insert it at this position.
(2.2) Compute the potential energy change $\Delta E_p = E_p(X^{(N+1)}) - E_p(X^{(N)})$.
(2.3) Evaluate $\frac{\lambda}{N+1} \exp\left( \frac{\mu}{k_BT} \right) \exp\left( -\frac{\Delta E_p}{k_BT} \right)$.
(2.4) If $\frac{\lambda}{N+1} \exp\left( \frac{\mu}{k_BT} \right) \exp\left( -\frac{\Delta E_p}{k_BT} \right) > 1$, accept the new configuration and go to (2). Otherwise proceed as follows.
(2.5) Generate a random number $\xi$ such that $0 \le \xi \le 1$.
(2.6) If $\frac{\lambda}{N+1} \exp\left( \frac{\mu}{k_BT} \right) \exp\left( -\frac{\Delta E_p}{k_BT} \right) \ge \xi$, accept the creation of the particle and go to (2). Otherwise reject the creation of the particle and go to (2).
Destroy particle
(3.1) Select randomly a particle and remove it from the assembly.
(3.2) Compute the energy change $\Delta E_p = E_p(X^{(N-1)}) - E_p(X^{(N)})$.
(3.3) Evaluate $\frac{N}{\lambda} \exp\left( -\frac{\mu}{k_BT} \right) \exp\left( -\frac{\Delta E_p}{k_BT} \right)$.
(3.4) If $\frac{N}{\lambda} \exp\left( -\frac{\mu}{k_BT} \right) \exp\left( -\frac{\Delta E_p}{k_BT} \right) > 1$, accept the new configuration and go to (2). Otherwise proceed as follows.
(3.5) Generate a random number $\xi$ such that $0 \le \xi \le 1$.
(3.6) If $\frac{N}{\lambda} \exp\left( -\frac{\mu}{k_BT} \right) \exp\left( -\frac{\Delta E_p}{k_BT} \right) \ge \xi$, accept the destruction of the particle and go to (2). Otherwise reject the destruction of the particle and go to (2).
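For a system with no interactions ($\Delta E_p = 0$), the creation and destruction steps alone can be checked against an exact result: with the acceptance ratios (MC18) the number of particles settles into a Poisson distribution with mean $a = \lambda \exp(\mu / k_BT)$, so $\langle N \rangle \to a$. The sketch below lumps $\lambda$ and the chemical potential into the single illustrative constant $a$ and implements just these two moves:

```python
import numpy as np

rng = np.random.default_rng(3)
a = 10.0        # a = lambda * exp(mu / k_B T); dE_p = 0 for this ideal system
N = 0
counts = []

for step in range(500_000):
    if rng.random() < 0.5:                 # attempt creation, N -> N + 1
        if a / (N + 1) >= rng.random():    # acceptance ratio from (MC18)
            N += 1
    else:                                  # attempt destruction, N -> N - 1
        if N > 0 and N / a >= rng.random():
            N -= 1
    if step >= 100_000:                    # measure after equilibration
        counts.append(N)

print(np.mean(counts))   # approaches a = <N> (Poisson distribution with mean a)
```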

ISOTHERMAL-ISOBARIC ENSEMBLE WITH VARIABLE NUMBER OF PARTICLES (μ, p, T)

Quantities conserved: the temperature T, the pressure p, and the chemical potential μ. The number of particles is variable, and thus fluctuations of the concentration are allowed.

The procedure is the same as in the previous case, but $E_p + pV$, rather than just the internal potential energy $E_p$, enters all the formulas.
