
Thermal Physics of Bose & Fermi Gases

Based on lectures given by J.Forshaw at the University of Manchester Sept-Dec 07


Please e-mail me with any comments/corrections: jap@watering.co.uk
J.Pearson

January 2, 2008

Contents

1 Einstein's Model of a Solid 1

2 The Gibbs Factor 4


2.0.1 Example: CO Poisoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1 My Grand Partition Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

3 Identical Particles 8
3.1 Distinguishable Particles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.2 Indistinguishable Particles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.3 Pauli Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

4 The Bose-Einstein & Fermi-Dirac Distributions 9


4.1 Fermi-Dirac Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4.2 Bose-Einstein Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4.3 Spin Multiplicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

5 Classical Limit 11
5.1 Chemical Potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
5.1.1 Internal Energy & Heat Capacity . . . . . . . . . . . . . . . . . . . . . . . . . 15
5.2 Entropy of an Ideal Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

6 Fermi Gases 19

6.1 Ideal Fermi Gas at T = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6.2 Density of States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
6.2.1 3D Density of States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.2.2 Low T Corrections to N, U . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
6.3 Example: Electrons in Metals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
6.4 Example: Liquid 3 He . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
6.5 Example: Electrons in Stars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
6.5.1 White Dwarf Stars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

7 Bose Gases 38
7.1 Black Body Radiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
7.2 Spectral Energy Density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.2.1 Pressure of a Photon Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

8 Lattice Vibrations of a Solid 47

A Colloquial Summary I

B Calculating the Density of States IV


B.1 Energy Space: Non-Relativistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IV
B.2 Energy Space: Ultra-Relativistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V

C Deriving FD & BE Distributions VI

1 Einstein's Model of a Solid

Assume: each atom in a solid vibrates independently about an equilibrium position. The vibrations
are assumed to be simple harmonic, and all of the same frequency.
In a 3D solid, each atom can oscillate in 3 independent directions.
i.e. if there are N oscillators, then there are N/3 atoms.
Our system is a collection of N independent oscillators. Each oscillator has energy:

    \epsilon_i = \left(n_i + \tfrac{1}{2}\right)\hbar\omega    (1.1)

where n_i is the quantum number of the ith oscillator: n_i = 0, 1, 2, . . .

\omega = angular frequency of the oscillator - remember they all have the same frequency.
Now, we proceed by measuring all energies relative to the ground state (n_i = 0). Hence:

    \epsilon_i = n_i \hbar\omega    (1.2)

where \epsilon_i is the energy of the ith oscillator.


Now, we have N such oscillators; so the total energy is:

    U = (n_1 + n_2 + n_3 + . . . + n_N)\hbar\omega    (1.3)
      = n\hbar\omega    (1.4)

Hence, n represents the energy of the system, in units of \hbar\omega.

The quantum state of the solid as a whole is specified by the list (n_1, n_2, n_3, . . . , n_N).
Clearly, there are several quantum states corresponding to the same total energy. So, how many
states can we have with a system of a particular energy?
Let g(n, N) be the number of possible quantum states when the total energy is n\hbar\omega.
We can easily see/show that:

    g(n, N) = \frac{(N + n - 1)!}{n!\,(N - 1)!}    (1.5)

the proof of which is a straightforward combinatorial argument.


Now, the fundamental assumption is:
If a system is closed and in equilibrium, then it is equally likely to be in any of the accessible
quantum states.
So, the probability of finding the system in a particular state is just:

    \frac{1}{g(n, N)}    (1.6)

To summarise: g(n, N) is the total number of quantum states. The probability that an Einstein
solid is in any particular state is given by 1/g(n, N). These ideas lead directly to the definitions & the
concept of entropy & 2nd law of thermodynamics.
To get an idea of temperature, we'll consider 2 Einstein solids connected to each other so that they
can exchange energy.
U_A is the energy of solid A = n_A \hbar\omega.
U_B is the energy of solid B = n_B \hbar\omega.
Hence, the total energy is U = U_A + U_B; and is constant.
Now, the number of possible quantum states for A + B is:

    g_{A+B}(n, N)    (1.7)

    N \equiv N_A + N_B    (1.8)
    n \equiv n_A + n_B    (1.9)

Hence, N is the total number of oscillators, and n the total number of quanta. Then:

    g_{A+B}(n, N) = \sum_{n_A} g(n_A, N_A)\, g(n - n_A, N - N_A)    (1.10)

Intuitively, we say that provided N_A and N_B are large enough, the system will settle down into a
macrostate with n_A = \bar{n}_A. Remarkably, this is already present in our analysis.
We claim that the sum is dominated by the terms near n_A = \bar{n}_A, so that:

    g_{A+B}(n, N) \simeq g(\bar{n}_A, N_A)\, g(\bar{n}_B, N_B)    (1.11)

To see this, pick N_A = N_B = \tfrac{1}{2}N, for simplicity. Now, using Stirling's approximation:

    \ln n! \approx n \ln n - n    (1.12)

We can show that

    g\left(\tfrac{1}{2}N, n_A\right)\, g\left(\tfrac{1}{2}N, n_B\right) \propto e^{-(n_A - \frac{1}{2}n)^2 / 2\sigma^2}    (1.13)

where we have defined the width of the Gaussian

    \sigma^2 = \frac{n(N + n)}{2N}    (1.14)

This proof is done in Schroeder Ch. 2.
If n, N >> 1, then \sigma << \tfrac{1}{2}n. So, for example, if N = n = 10^{22}, then we have \sigma = 10^{11}.
Hence, the equilibrium state is very well defined, i.e. n_A = \tfrac{1}{2}n \sim 10^{22} \pm 10^{11}.
The cluster of macrostates around n_A = \tfrac{1}{2}n have a much bigger statistical weight than all other
macrostates, and by the fundamental assumption, are much more likely. Now, we can locate the equi-
librium configuration.
It occurs when g_A g_B is a maximum. Hence, we have the differential:

    d(g_A g_B) = 0    (1.15)
    \frac{d}{dn_A}(g_A g_B)\, dn_A = 0    (1.16)
    \frac{\partial g_A}{\partial n_A} g_B\, dn_A + g_A \frac{\partial g_B}{\partial n_B}\, dn_B = 0    (1.17)

Now, we see that dn_A = -dn_B (as n is fixed). Hence, we have

    \frac{\partial g_A}{\partial n_A} g_B - g_A \frac{\partial g_B}{\partial n_B} = 0    (1.18)
    \frac{\partial g_A}{\partial n_A} g_B = g_A \frac{\partial g_B}{\partial n_B}    (1.19)
    \frac{1}{g_A}\frac{\partial g_A}{\partial n_A} = \frac{1}{g_B}\frac{\partial g_B}{\partial n_B}    (1.20)

Hence, we find that something is equal in equilibrium.
We define temperature:

    \frac{1}{k_B T} = \frac{1}{g}\frac{\partial g}{\partial U}    (1.21)
                    = \frac{\partial \ln g}{\partial U}    (1.22)

This definition gives T units of Kelvin, and accords with our desire to have heat flow from hotter
to cooler bodies.
Let a hotter body lose energy \Delta U > 0. Then, we can write (from (1.22)):

    \Delta \ln g_{hot} = -\frac{\Delta U}{k_B T_{hot}}    (1.23)
    \Delta \ln g_{cold} = +\frac{\Delta U}{k_B T_{cold}}    (1.24)
    \Delta \ln(g_{hot}\, g_{cold}) = \Delta \ln g_{hot} + \Delta \ln g_{cold} > 0    (1.25)

which should be so, as the system moves towards equilibrium. Thus:

    \Delta \ln g_{hot} + \Delta \ln g_{cold} = \frac{\Delta U}{k_B}\left(\frac{1}{T_{cold}} - \frac{1}{T_{hot}}\right)    (1.26)

which is > 0 if T_{cold} < T_{hot}.
The necessity that \ln g increases as the system evolves towards the equilibrium state (\equiv state
with maximum g) is just the second law of thermodynamics:

    S \equiv k_B \ln g    (1.27)

where S is entropy. As we have that \Delta \ln g > 0, this implies that \Delta S > 0. Systems evolve to
states of higher statistical weight.

Aside The law of increase of entropy appears to signal a violation of time reversal invariance.
This is not actually so: imagine molecules in a gas.
Although our discussion followed an Einstein solid, it should be clear that our discussion is of much
broader generality.
So, we have:

    S = k_B \ln g    (1.28)
    \frac{1}{T} = \frac{\partial S}{\partial U}    (1.29)

Since we worked hard to get g(n, N), we may as well make use of it. Let's predict the heat capacity
of a solid.

    \frac{S}{k_B} = \ln g    (1.30)
                  \approx (N + n)\ln(N + n) - n \ln n - N \ln N    (1.31)
    \frac{1}{T} = \frac{\partial S}{\partial U} = \frac{1}{\hbar\omega}\frac{\partial S}{\partial n}    (1.32)
    \ln\left(\frac{N + n}{n}\right) = \frac{\hbar\omega}{k_B T}    (1.33)
    n(T) = \frac{N}{e^{\hbar\omega/k_B T} - 1}    (1.34)

Hence, we have the energy of the solid as a function of temperature: n(T); remembering that
U = n\hbar\omega. Hence, we have that the heat capacity C is:

    C = \frac{\partial U}{\partial T}
      = \hbar\omega \left.\frac{\partial n}{\partial T}\right|_N
      = N\hbar\omega\, \frac{\hbar\omega}{k_B T^2}\, \frac{e^{\hbar\omega/k_B T}}{(e^{\hbar\omega/k_B T} - 1)^2}
      = N k_B\, \frac{X^2 e^X}{(e^X - 1)^2}, \qquad X \equiv \frac{\hbar\omega}{k_B T}

Hence, for X << 1 (i.e. high T), we have the Dulong-Petit law: C = N k_B. For X >> 1 (i.e. low
T), we have that C \approx N k_B X^2 e^{-X}. So, graphically, we have a curve, starting from 0 at T = 0, which
increases to a constant at high temperatures.
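As a quick numerical illustration of this limiting behaviour (not part of the original notes; the values of X are my choice), the sketch below evaluates C/(N k_B) = X^2 e^X/(e^X - 1)^2 and shows the Dulong-Petit plateau at small X and the exponential suppression at large X:

```python
import numpy as np

def einstein_heat_capacity(X):
    """C / (N k_B) for the Einstein solid, with X = hbar*omega / (k_B T)."""
    return X**2 * np.exp(X) / np.expm1(X)**2   # expm1 avoids precision loss at small X

# Small X (high T) approaches the Dulong-Petit value 1;
# large X (low T) is exponentially suppressed, ~ X^2 e^{-X}.
for X in [0.01, 0.1, 1.0, 5.0, 10.0]:
    print(f"X = {X:5.2f}   C/(N k_B) = {einstein_heat_capacity(X):.4e}")
```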

2 The Gibbs Factor

We can go further (than the previously closed systems), and figure out the probability that a system
is in a particular quantum state.
If the system is closed, then we know the answer: the probability of the system being in any one
state is just 1/g.
Generally however, we are interested in systems which are not isolated (i.e. are not closed).
Suppose we have a really really big box (denoted the reservoir R), which is closed; and a smaller
box (our system S), within R, which is allowed to exchange particles and energy with R.
At equilibrium, the total number of states available to the system as a whole is:

    g_T = g_S\, g_R    (2.1)

where g_i is the number of accessible states of system i.
In equilibrium, g_T is a maximum. As g_T = g_T(U_S, N_S), we can hence write its differential:

    dg_T = 0    (2.2)
         = \frac{\partial g_T}{\partial U_S} dU_S + \frac{\partial g_T}{\partial N_S} dN_S    (2.3)

And, as g_T = g_R\, g_S, we use the product rule to get:

    \frac{\partial g_R}{\partial U_S} g_S\, dU_S + g_R \frac{\partial g_S}{\partial U_S}\, dU_S + \frac{\partial g_R}{\partial N_S} g_S\, dN_S + g_R \frac{\partial g_S}{\partial N_S}\, dN_S = 0    (2.4)
    \left(\frac{\partial g_R}{\partial U_S} g_S + g_R \frac{\partial g_S}{\partial U_S}\right) dU_S + \left(\frac{\partial g_R}{\partial N_S} g_S + g_R \frac{\partial g_S}{\partial N_S}\right) dN_S = 0    (2.5)

Now, using dU_S = -dU_R and dN_S = -dN_R, we have:

    \frac{1}{g_R}\frac{\partial g_R}{\partial U_R} = \frac{1}{g_S}\frac{\partial g_S}{\partial U_S}    (2.6)
    \frac{1}{g_R}\frac{\partial g_R}{\partial N_R} = \frac{1}{g_S}\frac{\partial g_S}{\partial N_S}    (2.7)

Now, (2.6) gives us the previous definition of temperature: T_R = T_S in equilibrium. The other
equation, (2.7), will give us that the chemical potential is the same in equilibrium:

    \mu_R = \mu_S    (2.8)
    \mu \equiv -T\frac{\partial S}{\partial N} = -\frac{k_B T}{g}\frac{\partial g}{\partial N}    (2.9)

since S = k_B \ln g gives \frac{\partial S}{\partial N} = \frac{k_B}{g}\frac{\partial g}{\partial N}.
So far, we have assumed that the volume of our system is constant. If this is not the case, all we
have is that g = g(U, N, V), and hence an added term \frac{\partial g}{\partial V} dV (for both S, R) in (2.5). This will give
us that, in equilibrium:

    \frac{1}{g_R}\frac{\partial g_R}{\partial V_R} = \frac{1}{g_S}\frac{\partial g_S}{\partial V_S}    (2.10)

which leads us to the definition of pressure:

    p = T\frac{\partial S}{\partial V}    (2.11)

and that p_S = p_R in equilibrium.
Now, our goal is to find P(N_S, U_S), the probability to find the system in a particular quantum
state, i.e. g_S = 1.
So,

    P(N_S, U_S) \propto g_R(N_T - N_S, U_T - U_S) \times 1    (2.12)

    P(N_S, U_S) \propto \exp\left[\frac{S_R(N_T - N_S, U_T - U_S)}{k_B}\right]    (2.13)

Now, we Taylor expand around (N_T, U_T), giving:

    S_R(N_T - N_S, U_T - U_S) = S_R(N_T, U_T) - N_S\frac{\partial S_R}{\partial N} - U_S\frac{\partial S_R}{\partial U}    (2.14)
                              = S_R(N_T, U_T) + \frac{\mu N_S}{T} - \frac{U_S}{T}    (2.15)

Neglecting terms in higher differentials, as the reservoir is big enough, its temperature T and
chemical potential \mu are independent of N_S, U_S. So, we have:

    P(N_S, U_S) \propto \exp\left[\frac{S_R(N_T, U_T)}{k_B} + \frac{\mu N_S}{k_B T} - \frac{U_S}{k_B T}\right]    (2.16)

Now, we notice that the first term in the exponential is a constant (or assumed to be so constant
that it is!); also, we now drop the subscripts, and write that U_S = \epsilon_S = \epsilon. This gives us:

    P(N, \epsilon) \propto \exp\left[\frac{\mu N - \epsilon}{k_B T}\right]    (2.17)

which is known as the Gibbs distribution function.


Notice that if N is fixed, then the \mu N/k_B T term is another constant, and we have that:

    P(\epsilon) \propto \exp\left(-\frac{\epsilon}{k_B T}\right)    (2.18)

which is the Boltzmann distribution function we saw last year.


To use the probability properly, we must normalise it. That is:

    \sum_N \sum_\epsilon P(N, \epsilon) = 1    (2.19)

So, to do this, we define the grand partition function Z:

    Z(\mu, T) \equiv \sum_N \sum_\epsilon \exp\left[\frac{\mu N - \epsilon}{k_B T}\right]    (2.20)

So that we now have the probability to find the system as a whole, in a particular state with energy
\epsilon, and number of particles N, is given by:

    P(N, \epsilon) = \frac{1}{Z}\exp\left[\frac{\mu N - \epsilon}{k_B T}\right]    (2.21)

Note, if the system has more than one type of particle, then the exponent changes thus:

    \exp\left[\frac{\mu N - \epsilon}{k_B T}\right] \longrightarrow \exp\left[\frac{\mu_1 N_1 + \mu_2 N_2 + \ldots - \epsilon}{k_B T}\right]    (2.22)

We can compute the average, in the usual way:

    \langle X \rangle = \sum_i P_i X_i    (2.23)
                      = \sum_{N, \epsilon} X(N, \epsilon)\, \frac{1}{Z}\exp\left[\frac{\mu N - \epsilon}{k_B T}\right]    (2.24)

2.0.1 Example: CO Poisoning

Suppose that our system of interest is a Haemoglobin molecule, in one of 3 states: unbound (1);
bound to O2 (2); bound to CO (3).
So, we can write the particle numbers, for each state, in terms of (N_Hb, N_O2, N_CO), with their
associated energies (given):

    (1): (1, 0, 0)    \epsilon_1 = 0
    (2): (1, 1, 0)    \epsilon_2 = -0.7 eV
    (3): (1, 0, 1)    \epsilon_3 = -0.85 eV

The chemical potentials are: \mu_Hb = don't care!, \mu_O2 = -0.6 eV, \mu_CO = -0.72 eV (again, values given).
T = 310 K.
So, to calculate Z, we have:

    Z = \exp\left[\frac{N_{1,Hb}\,\mu_{Hb} + N_{1,O_2}\,\mu_{O_2} + N_{1,CO}\,\mu_{CO} - \epsilon_1}{k_B T}\right]    (2.25)
      + \exp\left[\frac{N_{2,Hb}\,\mu_{Hb} + N_{2,O_2}\,\mu_{O_2} + N_{2,CO}\,\mu_{CO} - \epsilon_2}{k_B T}\right]    (2.26)
      + \exp\left[\frac{N_{3,Hb}\,\mu_{Hb} + N_{3,O_2}\,\mu_{O_2} + N_{3,CO}\,\mu_{CO} - \epsilon_3}{k_B T}\right]    (2.27)
      = e^{\mu_{Hb}/k_B T}\left[1 + e^{(\mu_{O_2} + 0.7\,\mathrm{eV})/k_B T} + e^{(\mu_{CO} + 0.85\,\mathrm{eV})/k_B T}\right]    (2.28)
      = e^{\mu_{Hb}/k_B T}\left[1 + 40 + 120\right] = 161\, e^{\mu_{Hb}/k_B T}    (2.29)

The common factor e^{\mu_{Hb}/k_B T} cancels when we form probabilities. Hence, we can write what the
probability of each state (1), (2) or (3) is, of occurring:

(1): Hb is unbound: P_1 = 1/161 \approx 0.6%.
(2): Hb is bound to O2: P_2 = 40/161 \approx 25%.
(3): Hb is bound to CO: P_3 = 120/161 \approx 75%.

2.1 My Grand Partition Function

I find the following formula easier to use, to write the probability of finding the system in a state
with energy \epsilon_j, with particle numbers N_i:

    P(N, \epsilon_j) = \frac{e^{-\epsilon_j/kT} \prod_i e^{N_i \mu_i/kT}}{Z}

Thus, the grand partition function Z is the sum over all possible energy states as well:

    Z = \sum_j e^{-\epsilon_j/kT} \prod_i e^{N_i \mu_i/kT}

3 Identical Particles

We shall be looking at cases where the system is a bunch of particles. We need to be able to
count quantum states for such system. We start with a simple system.

3.1 Distinguishable Particles

If we have 2 distinguishable particles (A, B), and 2 accessible quantum states (\epsilon_1, \epsilon_2), then we can
have 4 configurations: (AB, 0), (0, AB), (A, B), (B, A).
Hence, 4 allowed states of the system.

3.2 Indistinguishable Particles

If we have again 2 accessible states, but 2 identical particles, we now have that the possible config-
urations are: (AA, 0), (0, AA), (A, A). Hence, 3 allowed states. This is the case if we allow the two
particles to be in the same state at the same time. We denote these types of particles bosons.

The other possibility is that we don't allow the two particles to be in the same state. Hence, we
have that the only configuration is (A, A). We denote these types of particles fermions. That there
is only one state allowed is a consequence of the Pauli Principle, which we shall now derive.

3.3 Pauli Principle

Let \phi_i(x) be an energy eigenstate for a single particle in the system, so that:

    \hat{H}(x)\,\phi_i(x) = E_i\,\phi_i(x)

If our two particles do not interact, then \phi_i(x)\phi_j(y) is an energy eigenstate of the two particle
system, with E = E_i + E_j:

    \left[\hat{H}(x) + \hat{H}(y)\right] \phi_i(x)\phi_j(y) = E_i\,\phi_i(x)\phi_j(y) + E_j\,\phi_i(x)\phi_j(y)

If the particles are indistinguishable, then no observable can depend upon which particle is where.
Hence, we have that observables such as

    \langle \hat{O} \rangle = \int \Psi^*(x, y)\,\hat{O}(x, y)\,\Psi(x, y)\, d\tau

should be unchanged if we swap the positions x \leftrightarrow y.

Recall that any linear combination of 2 eigenstates with the same eigenvalue is also an eigenstate
with that eigenvalue. So, we want a linear combination:

    \Psi_{ij}(x, y) = a\,\phi_i(x)\phi_j(y) + b\,\phi_i(y)\phi_j(x)

    \left[\hat{H}(x) + \hat{H}(y)\right] \Psi_{ij}(x, y) = (E_i + E_j)\,\Psi_{ij}(x, y)

Now, we want:

    |\Psi_{ij}(x, y)|^2 = |\Psi_{ij}(y, x)|^2

This implies that:

    \Psi_{ij}(x, y) = e^{i\alpha}\,\Psi_{ij}(y, x)

where swapping over the spatial coordinates has the effect of picking up some phase e^{i\alpha}. Now, if we
swap back:

    \Psi_{ij}(x, y) = e^{i\alpha} e^{i\alpha}\,\Psi_{ij}(x, y)
    e^{2i\alpha} = 1
    e^{i\alpha} = \pm 1

So, we have that:

    \Psi_{ij}(x, y) = \pm\,\Psi_{ij}(y, x)

Now, there are two linear combinations of \phi_i(x)\phi_j(y) and \phi_i(y)\phi_j(x) which satisfy this requirement:

    \Psi_{ij}(x, y) = \phi_i(x)\phi_j(y) \pm \phi_i(y)\phi_j(x)    (3.1)

If the positive sign is taken, then the wavefunction is symmetric (under interchange of two identical
particles). We classify this type as bosons. If the negative sign is taken, then the wavefunction
is anti-symmetric, and we classify these as fermions.
We see that the Pauli principle is a direct consequence of the symmetry property for fermions:
if we have 2 identical fermions in the same state i, then:

    \Psi_{ii}(x, y) = \phi_i(x)\phi_i(y) - \phi_i(y)\phi_i(x) = 0
    |\Psi_{ii}(x, y)|^2 = 0

That is, there is no probability for the system to be found in such a state. Therefore, two identical
fermions can not be in the same quantum state. There is no such problem for bosons.
This is actually a pretty weird conclusion. If we have the idea that an electron's wavefunction is
never zero (just really small) anywhere, then any two electrons in the universe must be considered
as dependent, and we must conclude that these two electrons can never be in the exact same state.
That is, we cannot treat an electron sitting on the earth independently of one on the other side
of the galaxy. These two electrons simply can never be in the same state. We end up concluding
that they are allowed to sit very very close to being in the same state, but not quite. This can be
shown in potential well arguments. There is a very fine difference between the ground states of two
electrons whose wavefunctions overlap.
We ought to conclude by stating that bosons have integer spin, and fermions half-integer. This is
proved using relativistic quantum mechanics, and is hard to do!
If a collection of particles have an odd number of fermions, then the system is fermionic; even
number of fermions gives a bosonic system.

4 The Bose-Einstein & Fermi-Dirac Distributions

We now focus on an ideal gas of fermions or bosons.


We recall that the Gibbs distribution gives the probability to find the system S in a single quantum
state.
A single state of S is specified by the set \{n_i, \epsilon_i\}, where n_i is the number of particles in the energy
level \epsilon_i. So:

    P(\{n_i, \epsilon_i\}) = \frac{\exp[\mu(n_1 + n_2 + \ldots)/kT - (\epsilon_1 n_1 + \epsilon_2 n_2 + \ldots)/kT]}{\sum_{n_1, n_2, \ldots} \exp[\mu(n_1 + n_2 + \ldots)/kT - (\epsilon_1 n_1 + \epsilon_2 n_2 + \ldots)/kT]}    (4.1)
                        = \frac{e^{n_1(\mu - \epsilon_1)/kT}\, e^{n_2(\mu - \epsilon_2)/kT} \ldots}{\sum_{n_1} e^{n_1(\mu - \epsilon_1)/kT}\, \sum_{n_2} e^{n_2(\mu - \epsilon_2)/kT} \ldots}    (4.2)
                        = P_1(n_1)\, P_2(n_2) \ldots    (4.3)

where:

    P_i(n_i) \equiv \frac{e^{n_i(\mu - \epsilon_i)/kT}}{\sum_{n_i} e^{n_i(\mu - \epsilon_i)/kT}}

Thus, P_i(n_i) is the probability to find n_i particles in energy level \epsilon_i. What we have done is to show
that the probability to find the system in a particular state is just the product of the probabilities
of finding a particular number of particles in the particular state.
Now, we can figure out the mean number of particles in a particular energy level:

    \langle n(\epsilon_i) \rangle = \sum_{n_i} P_i(n_i)\, n_i

It is the calculation of this sum which we now consider.

4.1 Fermi-Dirac Distribution

Here, we just consider the fermionic case. We know that we can only have either one or zero
particles in each state. Thus, the sum over n is just for n = 0, 1. Hence:

    \langle n(\epsilon) \rangle_{FD} = \sum_{n=0,1} P(n)\, n    (4.4)
                                     = \frac{0 + e^{(\mu - \epsilon)/kT}}{1 + e^{(\mu - \epsilon)/kT}}    (4.5)
                                     = \frac{1}{e^{(\epsilon - \mu)/kT} + 1}    (4.6)

    \langle n(\epsilon) \rangle_{FD} = \frac{1}{e^{(\epsilon - \mu)/kT} + 1}    (4.7)

where \langle n(\epsilon) \rangle_{FD} is called the Fermi-Dirac Distribution.

4.2 Bose-Einstein Distribution

Here, we can have any number of particles in each state. Hence:

    \langle n(\epsilon) \rangle_{BE} = \frac{0 + e^{(\mu - \epsilon)/kT} + 2 e^{2(\mu - \epsilon)/kT} + \ldots}{1 + e^{(\mu - \epsilon)/kT} + e^{2(\mu - \epsilon)/kT} + \ldots}    (4.8)

Now, if we define:

    a \equiv e^{(\mu - \epsilon)/kT}

we see that the denominator is a geometric series:

    1 + a + a^2 + a^3 + \ldots = \frac{1}{1 - a}

and that the top is:

    a + 2a^2 + 3a^3 + \ldots    (4.9)
    = a(1 + 2a + 3a^2 + \ldots)    (4.10)
    = a\, \frac{d}{da}(1 + a + a^2 + \ldots)    (4.11)
    = \frac{a}{(1 - a)^2}    (4.12)

Hence, we have that:

    \langle n(\epsilon) \rangle_{BE} = \frac{a/(1 - a)^2}{1/(1 - a)}    (4.13)
                                     = \frac{1}{a^{-1} - 1}    (4.14)
                                     = \frac{1}{e^{(\epsilon - \mu)/kT} - 1}    (4.15)

    \langle n(\epsilon) \rangle_{BE} = \frac{1}{e^{(\epsilon - \mu)/kT} - 1}    (4.16)

where \langle n(\epsilon) \rangle_{BE} is called the Bose-Einstein Distribution.
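As a quick numerical aside (mine, not in the notes), the sketch below evaluates the two occupancies alongside the classical (Boltzmann) form e^{-(\epsilon-\mu)/kT}, illustrating the behaviour summarised in Figure 1: the Fermi-Dirac occupancy stays below 1, the Bose-Einstein one diverges as \epsilon approaches \mu, and all three agree when the occupancy is much less than 1. The values of x = (\epsilon - \mu)/kT are my choice.

```python
import numpy as np

def n_FD(x):        # x = (epsilon - mu) / kT
    return 1.0 / (np.exp(x) + 1.0)

def n_BE(x):        # only meaningful for x > 0 (epsilon > mu)
    return 1.0 / (np.exp(x) - 1.0)

def n_classical(x):
    return np.exp(-x)

for x in [0.1, 0.5, 1.0, 3.0, 6.0]:
    print(f"x = {x:4.1f}   FD = {n_FD(x):.4f}   BE = {n_BE(x):.4f}   classical = {n_classical(x):.4f}")
# For x >> 1 all three coincide (the classical limit); for small x they differ strongly.
```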

4.3 Spin Multiplicity

In a gas of particles with spin s, the mean number of particles per (spatial) state is hence given by:

    \langle n(\epsilon) \rangle = \frac{2s + 1}{e^{(\epsilon - \mu)/kT} \pm 1}

(+ for fermions, - for bosons).

5 Classical Limit

We have the classical limit when:

    \langle n(\epsilon) \rangle << 1

i.e. we have that:

    \langle n(\epsilon) \rangle \approx \exp\left(\frac{\mu - \epsilon}{kT}\right)

Using the approximation, we can start to calculate things.

Figure 1: Graph showing how the three distributions change. Notice that the Fermi-Dirac distri-
bution stays below 1.

5.1 Chemical Potential

We can develop insight into \mu by computing the mean number of particles in our system as a whole.
Suppose we have N particles, within a big cube, with sides of length L. Our system is within the big
cube. So, we have:

    N = \sum_i \langle n(\epsilon_i) \rangle
      = \sum_i \exp\left(\frac{\mu - \epsilon_i}{kT}\right)

Now, let us attempt the summation.

We need the single particle energy levels for (non-interacting) particles in a box of side L. To do
this, just solve the Schrodinger equation, to find:

    \epsilon(n_1, n_2, n_3) = \frac{\hbar^2 \pi^2}{2mL^2}(n_1^2 + n_2^2 + n_3^2)

where the integers n_i run from 1, 2, 3, . . . , \infty. Hence we have that the total number of particles in
the system is:

    N = \sum_{n_1, n_2, n_3} \exp\left(\frac{\mu - \epsilon(n_1, n_2, n_3)}{kT}\right)

We find typical values of n_1, n_2, n_3 by noting that \epsilon \approx kT; hence:

    n_1^2 + n_2^2 + n_3^2 \approx \frac{2mL^2 kT}{\hbar^2 \pi^2}

Now, if L = 1 m & m = 10^{-26} kg, we have that n_1, n_2, n_3 \sim 10^{19/2}.

Going back to evaluation of the summation: as we find that n_i >> 1, we will make the approxi-
mation of treating the energy levels as continuous:

    \sum_{n_1, n_2, n_3} \exp\left(\frac{\mu - \epsilon(n_1, n_2, n_3)}{kT}\right) \approx \int\!\!\int\!\!\int dn_1\, dn_2\, dn_3\, \exp\left(\frac{\mu - \epsilon(n_1, n_2, n_3)}{kT}\right)

So, we have that:

    N \approx \int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty dn_1\, dn_2\, dn_3\; e^{\mu/kT}\, e^{-\hbar^2\pi^2 n_1^2/2mL^2kT}\, e^{-\hbar^2\pi^2 n_2^2/2mL^2kT}\, e^{-\hbar^2\pi^2 n_3^2/2mL^2kT}
      = e^{\mu/kT}\left(\int_0^\infty dn\; e^{-\hbar^2\pi^2 n^2/2mL^2kT}\right)^3

Now, the integral is given by:

    \int_0^\infty dx\; e^{-\alpha x^2} = \frac{1}{2}\sqrt{\frac{\pi}{\alpha}}

Hence:

    N \approx e^{\mu/kT}\left(\frac{1}{2}\sqrt{\frac{2mL^2kT}{\pi\hbar^2}}\right)^3
      = \exp\left(\frac{\mu}{kT}\right) V \left(\frac{mkT}{2\pi\hbar^2}\right)^{3/2}

Rearranging, we get:

    \mu = kT \ln\left(\frac{n}{n_Q}\right)    (5.1)

where we have defined the number density & quantum density:

    n \equiv \frac{N}{V}    (5.2)
    n_Q \equiv \left(\frac{mkT}{2\pi\hbar^2}\right)^{3/2}    (5.3)

So, we have managed to define the chemical potential in terms of density quantities, i.e. local
quantities, independent of the big box.
Notice, for the classical limit, we have that e^{(\epsilon - \mu)/kT} >> 1. Thus, as \epsilon \approx kT, we have that \mu must be
very negative; hence in (5.1), this corresponds to n_Q >> n for classical physics; and n \gtrsim n_Q for
quantum physics.
Thus, the classical limit occurs when the system has a low density (n small) and/or high temperature
(T large).
And quantum physics becomes important for high density (n large) and/or low temperatures (T
small).
We can estimate n_Q by assuming it to be equal to the concentration when each particle occupies
a volume \lambda_Q^3, where \lambda_Q is the de Broglie wavelength.
Now, we have that \lambda_Q p = h, and that E \approx kT, i.e. p^2/2m \approx kT, for non-relativistic particles. Hence, estimating
gives: p \approx \sqrt{mkT}. Therefore, \lambda_Q \approx h/\sqrt{mkT}. Now, we want n_Q \approx 1/\lambda_Q^3, and we have hence achieved
the previous value of n_Q.
Hence, we may say that the quantum density n_Q is that at which particles occupy boxes the size of
their de Broglie wavelengths.
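As a rough numerical illustration (my numbers, not from the notes): the sketch below evaluates n_Q and \mu = kT ln(n/n_Q) for a gas of nitrogen molecules at room temperature and atmospheric pressure, showing that n << n_Q and that \mu is large and negative, i.e. firmly in the classical regime.

```python
import numpy as np

hbar = 1.054571e-34   # J s
k_B  = 1.380649e-23   # J / K
eV   = 1.602177e-19   # J

m = 28 * 1.660539e-27       # mass of an N2 molecule, kg (assumed example)
T = 300.0                   # K
p = 1.0e5                   # Pa

n   = p / (k_B * T)                               # number density from pV = NkT
n_Q = (m * k_B * T / (2 * np.pi * hbar**2))**1.5  # quantum concentration, eq. (5.3)
mu  = k_B * T * np.log(n / n_Q)                   # eq. (5.1)

print(f"n   = {n:.3e} m^-3")
print(f"n_Q = {n_Q:.3e} m^-3   (n/n_Q = {n/n_Q:.3e})")
print(f"mu  = {mu/eV:.3f} eV   (large and negative, as expected classically)")
```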

Comments

Don't forget the spin multiplicity factor. If the gas particles have spin s, then
n = (2s + 1)\, e^{\mu/kT}\, n_Q, so that \mu = kT \ln[n/(2s + 1)n_Q].

Actual values of \mu depend on what we choose as the zero of the energy scale. If \epsilon = \epsilon_0 +
\epsilon(n_1, n_2, n_3), then a change \mu \rightarrow \mu + \epsilon_0 would keep everything the same. So \mu = \epsilon_0 +
kT \ln(n/n_Q).

We have assumed that the gas so far is monatomic. Now, if we give the system a set {j} of
internal quantum numbers (e.g. angular momentum), then we can find the new chemical
potential \mu. We proceed by finding the average number of particles (in total) in the system, which
is just the sum over probabilities to find a number of particles in an energy state:

    N = \sum_i \langle n_i \rangle
      = \sum_i e^{(\mu - \epsilon_i)/kT}
      = e^{\mu/kT} \sum_i e^{-\epsilon_i/kT}
      = e^{\mu/kT}\, Z
      = \frac{n}{n_Q}\, Z

where we have defined the single-particle partition function Z \equiv \sum_i e^{-\epsilon_i/kT}. Now, from the definition n = N/L^3,
hence:

    N = \frac{N Z}{L^3 n_Q}    (5.4)
    Z = n_Q L^3    (5.5)

So, now, if we introduce internal degrees of freedom, we introduce another energy term \epsilon_j; hence:

    N = e^{\mu/kT} \sum_i e^{-\epsilon_i/kT} \sum_j e^{-\epsilon_j/kT}

    n = \frac{N}{L^3} = e^{\mu/kT}\, \frac{\sum_i e^{-\epsilon_i/kT}}{L^3} \sum_j e^{-\epsilon_j/kT}

    n = e^{\mu/kT}\, \frac{Z}{L^3}\, Z_{int}

    \mu = kT \ln\left(\frac{n}{n_Q Z_{int}}\right)

Hence, we see that if we introduce internal motion of the particles, described by the internal partition
function Z_{int}, the chemical potential is given by:

    \mu = kT \ln\left(\frac{n}{n_Q Z_{int}}\right)    (5.6)

5.1.1 Internal Energy & Heat Capacity

We know that the internal energy U is given by the sum over single particle states of the mean
number of particles in each energy level times that energy:

    U = \sum_i \langle n_i \rangle\, \epsilon_i
      = \sum_i e^{(\mu - \epsilon_i)/kT}\, \epsilon_i
      = e^{\mu/kT} \sum_i e^{-\epsilon_i/kT}\, \epsilon_i
      = \frac{n}{n_Q} \sum_i e^{-\epsilon_i/kT}\, \epsilon_i
      = \frac{N}{L^3}\frac{L^3}{Z} \sum_i e^{-\epsilon_i/kT}\, \epsilon_i
      = \frac{N}{Z} \sum_i e^{-\epsilon_i/kT}\, \epsilon_i

where we have used that Z = n_Q L^3 and n = N/L^3. Now, we notice that the last expression is
equal to the differential of the logarithm of the partition function:

    U = N kT^2 \frac{\partial \ln Z}{\partial T}
      = N kT^2 \frac{\partial}{\partial T} \ln(n_Q L^3)
      = \frac{3}{2} N kT

after doing the differentiation, and subbing in for n_Q.
We can also compute the heat capacity:

    C_V = \frac{\partial U}{\partial T} = \frac{3}{2} N k

where we have now been able to reproduce last year's results.

5.2 Entropy of an Ideal Gas

We must count all accessible quantum states. But it is not clear how to do that for a system with
variable particle number and energy.
We can compute the mean entropy by considering the following situation:

Suppose we have a big box, and inside the big box are m little boxes. One of these little boxes is our
system. Hence, our system is surrounded by (m - 1) replica systems, all in thermal and diffusive
equilibrium with each other. Each system is specified by specifying its quantum state.
Each distinct configuration of the boxes corresponds to 1 quantum state of the entire system (where
the entire system is the collection of m little boxes).
To compute the entropy of the entire system, we just need to count the number of ways of shuffling
the boxes. This is just:

    W = \sum_{m_1, m_2, \ldots} \frac{m!}{m_1!\, m_2! \ldots} = m! \sum_{m_1, m_2, \ldots} \frac{1}{m_1!\, m_2! \ldots}

where we have that m_i is the number of boxes in state i.

Hence, the entropy of the complete system is:

    S_m = k \ln W
        = k\left(m \ln m - \sum_i m_i \ln m_i\right)

where we have used Stirling's approximation, and noted that \sum_i m_i = m. Hence, we see that:

    m \ln m = \sum_i m_i \ln m

Therefore:

    S_m = k \sum_i m_i \ln\frac{m}{m_i}
        = -k m \sum_i \frac{m_i}{m} \ln\frac{m_i}{m}

But, as we have that m \rightarrow \infty, so we have that:

    \frac{m_i}{m} = fraction of all boxes in the ith state = p_i

where p_i is just the previous Gibbs factor:

    p_i = \frac{e^{(\mu N_i - \epsilon_i)/kT}}{\sum_i e^{(\mu N_i - \epsilon_i)/kT}}

where the sum is over all microstates of the system.
Therefore, we have derived that:

    S = -k_B \sum_i p_i \ln p_i    (5.7)

where we have made no distinction between classical or quantum gases: this holds for both.
So, for an ideal gas, we have that:

    p_i = \frac{e^{(\mu N_i - E_i)/kT}}{Z}

where we have that:

    N_i = n_1 + n_2 + n_3 + \ldots
    E_i = n_1\epsilon_1 + n_2\epsilon_2 + n_3\epsilon_3 + \ldots

Hence, let us compute Z generally.

    Z = \sum_{\{n_i\}} e^{n_1(\mu - \epsilon_1)/kT}\, e^{n_2(\mu - \epsilon_2)/kT} \ldots

where we have a summation over occupancies of single particle states. Hence, we can show that
this summation is the same as the product of the corresponding factors:

    Z = \prod_{\{\epsilon_i\}} \left(1 + e^{(\mu - \epsilon_i)/kT} + e^{2(\mu - \epsilon_i)/kT} + \ldots\right)    (5.8)
      = \prod_{\{\epsilon_i\}} \sum_{\{n_j\}} e^{n_j(\mu - \epsilon_i)/kT}    (5.9)

and we see that the sum is just the standard \sum_{i=0}^\infty x^i = \frac{1}{1 - x}, to give:

    Z = \prod_{\{\epsilon_i\}} \frac{1}{1 - e^{(\mu - \epsilon_i)/kT}}

Taking the logarithm:

    \ln Z = \ln \prod_{\{\epsilon_i\}} \frac{1}{1 - e^{(\mu - \epsilon_i)/kT}}
          = \sum_{\{\epsilon_i\}} \ln \frac{1}{1 - e^{(\mu - \epsilon_i)/kT}}
          = -\sum_{\{\epsilon_i\}} \ln\left(1 - e^{(\mu - \epsilon_i)/kT}\right)

Hence, we see that, for bosons, this sum is unrestricted; so:

    \ln Z^{boson} = -\sum_{\{\epsilon_i\}} \ln\left(1 - e^{(\mu - \epsilon_i)/kT}\right)    (5.10)

For fermions, the sum in (5.9) is just for n_j = 0, 1; and is hence trivial, and we find that, for
fermions:

    \ln Z^{fermion} = \sum_{\{\epsilon_i\}} \ln\left(1 + e^{(\mu - \epsilon_i)/kT}\right)    (5.11)

In the classical limit

    e^{(\mu - \epsilon)/kT} << 1

we have, using \ln(1 \pm X) \approx \pm X, that both the boson and fermion grand partition functions are the
same:

    \ln Z^{classical} \approx \sum_i e^{(\mu - \epsilon_i)/kT} = e^{\mu/kT} \sum_i e^{-\epsilon_i/kT}

i.e. Z^{classical} \approx Z^{boson} \approx Z^{fermion} in the classical limit.
Hence, in the classical limit, we can compute S = -k \sum_i p_i \ln p_i, where i runs over the states of the
gas. Below, \epsilon_j are the single particle energy levels:

    S = -k \sum_i p_i \ln p_i
      = -k \sum_i \frac{e^{(\mu N_i - E_i)/kT}}{Z} \ln \frac{e^{(\mu N_i - E_i)/kT}}{Z}
      = -k \sum_i \frac{e^{(\mu N_i - E_i)/kT}}{Z} \left[\frac{\mu N_i - E_i}{kT} - \ln Z\right]
      = -k \left(\frac{\mu}{kT}\sum_i p_i N_i - \frac{1}{kT}\sum_i p_i E_i - e^{\mu/kT} Z\right)
      = -k \left(\frac{\mu N}{kT} - \frac{U}{kT} - Z e^{\mu/kT}\right)

where we have used the (single particle) partition function Z = \sum_j e^{-\epsilon_j/kT}, and that the expressions for the average
particle number and energy were written: \sum_i N_i p_i = \langle N \rangle = N and \sum_i E_i p_i = \langle E \rangle = U.
Now, we have previously derived that \mu = kT \ln\frac{n}{n_Q}, and Z = n_Q V. Hence, we have:

    S = -k \left(N \ln\frac{n}{n_Q} - \frac{U}{kT} - n_Q V\, \frac{n}{n_Q}\right)

If we put:

    U = \frac{3}{2} N kT, \qquad nV = N

then:

    S = N k \left(\frac{5}{2} + \ln\frac{n_Q}{n}\right)    (5.12)

which is known as the Sackur-Tetrode equation.


Even though this calculation has been done in the classical limit, we have had to use the quantum
physics of identical particles to get this expression correct. A classical result, built on quantum
physics.
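As a numerical aside (my example, not in the notes): evaluating the Sackur-Tetrode formula for helium gas at room temperature and atmospheric pressure gives an entropy of order ten k_B per particle, a sensible order of magnitude.

```python
import numpy as np

hbar = 1.054571e-34   # J s
k_B  = 1.380649e-23   # J / K

def sackur_tetrode_per_particle(n, m, T):
    """Entropy per particle, S/(N k_B) = 5/2 + ln(n_Q / n), eq. (5.12)."""
    n_Q = (m * k_B * T / (2 * np.pi * hbar**2))**1.5
    return 2.5 + np.log(n_Q / n)

# Helium gas at T = 300 K, p = 1 atm (assumed illustrative values)
m_He = 4 * 1.660539e-27
T, p = 300.0, 1.013e5
n = p / (k_B * T)
print(f"S/(N k_B) = {sackur_tetrode_per_particle(n, m_He, T):.2f}")   # roughly 15
```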

We can now easily compute the pressure and C_p. We start with pressure:

    p = T \left.\frac{\partial S}{\partial V}\right|_{N,U}
      = N kT \left.\frac{\partial}{\partial V}\left(\ln n_Q + \ln\frac{V}{N}\right)\right|_{N,U}
      = N kT\, \frac{\partial}{\partial V} \ln V
      = \frac{N kT}{V}

    pV = N kT

where we have used that n_Q = n_Q(T) only, and that if we fix N and U, we therefore fix T. And finally,
the heat capacity at constant pressure:

    C_p = T \left.\frac{\partial S}{\partial T}\right|_{p,N}
        = N kT \left(\frac{1}{T} + \frac{3}{2T}\right)
        = \frac{5}{2} N k

Notice, we have also recovered a previously known result: C_p - C_V = N k.

6 Fermi Gases

We shall focus on a non-interacting gas of spin-1/2 particles. At sufficiently low temperatures, quan-
tum effects are crucial. So it remains to show when we can use the classical limit, and when
quantum physics becomes important.
Examples of such systems are:

Electrons in a metal free electron theory;

3 He atoms;

Electrons in white dwarf stars;

Neutrons in neutron stars;

Nucleons in a nucleus.

Example We will show that free electrons in a metal at room temperature will be in the quantum
regime, whereas hydrogen at STP will not.
We know we need to use quantum physics if n nQ .
Consider the electrons first:

If there is roughly 1 conduction electron per atom, in a metal of density 10^3 kg m^{-3}, then we can
compute the electron density:

    n \approx \frac{10^3\ \mathrm{kg\,m^{-3}}}{10^{-25}\ \mathrm{kg}} \approx 10^{28}\ \mathrm{m^{-3}}

where we have used an approximate atomic mass.
Now,

    n_Q = \left(\frac{mkT}{2\pi\hbar^2}\right)^{3/2}

So, we have quantum physics when:

    n \gtrsim n_Q
    T \lesssim \frac{2\pi\hbar^2}{mk}\, n^{2/3}
    T \lesssim 10^5\ \mathrm{K}

where we have used the mass of the electron. Notice, this upper temperature is well above room
temperature, so free electrons in a metal are always acting according to quantum physics.
For hydrogen, we compute the number density via pV = N kT, and knowing that n = N/V. Thus:

    n \approx \frac{p}{kT} \approx \frac{10^5}{10^{-23} \times 10^2} = 10^{26}\ \mathrm{m^{-3}}

    T \lesssim \frac{2\pi\hbar^2}{mk}\, n^{2/3} \approx 1\ \mathrm{K}

where we have used the mass of a hydrogen atom. Notice that this temperature is very small, so
quantum effects for hydrogen hardly ever come into play.

6.1 Ideal Fermi Gas at T = 0

Intuitively, we expect the gas to be in its ground state at T = 0: all fermions occupy the lowest single
particle states. So, up to some energy \epsilon_F, we expect the system to be full, and above \epsilon_F empty. All
particles in the system occupy states below some Fermi energy:

    \langle n(\epsilon) \rangle = 1 \qquad \epsilon < \epsilon_F
    \langle n(\epsilon) \rangle = 0 \qquad \epsilon > \epsilon_F

This is all using Fermi-Dirac statistics.
Notice that this is the case for T = 0. We get a step function. For T > 0, the step is smeared out
slightly.
Now, we know

    \langle n(\epsilon) \rangle = \frac{1}{e^{(\epsilon - \mu)/kT} + 1}

If \epsilon > \mu, then (at T = 0), we have that \langle n \rangle = 0.
Similarly, if \epsilon < \mu, then \langle n \rangle = 1.
Hence, we have the interpretation that the Fermi energy \epsilon_F is the value of the chemical potential
at T = 0:

    \epsilon_F \equiv \mu(T = 0)    (6.1)

At T = 0, all single particle states below \epsilon_F are filled, whilst all those above are empty.
We denote a system at T \approx 0 as being a degenerate Fermi gas, and one at T = 0 an ideal Fermi
gas.
So a gas of fermions that is cold enough that nearly all states below \epsilon_F are filled, and nearly all
states above are empty, is called a degenerate Fermi gas. This is just reiterating what has previously been said.
Now, if the gas is non-relativistic, we have the relations:

    \epsilon_F = \frac{p_F^2}{2m} = \frac{\hbar^2 k_F^2}{2m}
    k_B T_F \equiv \epsilon_F

where we have thus defined the Fermi temperature T_F, Fermi momentum p_F, and Fermi wavenum-
ber k_F in terms of the Fermi energy \epsilon_F. Notice that we must distinguish between the Boltzmann constant
k_B and the Fermi wavenumber k_F.
If the gas is ultra-relativistic (m_0 c << p):

    \epsilon_F = c p_F = \hbar c k_F

Clearly, \epsilon_F must depend upon the number of particles N, so that we have that the number of
particles below \epsilon_F is N.

Example Compute \epsilon_F for a 2D non-relativistic electron gas containing N electrons in an area A.

Let us construct a box, of side length L, so that L^2 = A. Then we have the Schrodinger equation,
with its solution:

    -\frac{\hbar^2}{2m}\nabla^2 \psi = \epsilon \psi
    \psi \propto \sin\frac{n_1 \pi x}{L} \sin\frac{n_2 \pi y}{L}
    \epsilon = \frac{\hbar^2 \pi^2}{2mL^2}(n_1^2 + n_2^2)

where n_1, n_2 > 0 and are integers.
We must remember that for each state, we can have two spin-half electrons.
Hence, we want N to be twice the total number of single particle states with \epsilon \leq \epsilon_F:

    n_1^2 + n_2^2 \leq \frac{2mL^2 \epsilon_F}{\hbar^2 \pi^2}

It is more convenient to work in terms of wavevectors k = (k_x, k_y), so that \psi \propto \sin(k_x x)\sin(k_y y).
We obviously have that k_x \equiv \frac{n_1 \pi}{L} and k_y \equiv \frac{n_2 \pi}{L}. Hence, we can write the condition as:

    \frac{\hbar^2 k^2}{2m} \leq \epsilon_F

where k^2 = k_x^2 + k_y^2.
Notice that the distance between adjacent states in k-space is just \frac{\pi}{L}, and hence that the area
occupied by one state is \left(\frac{\pi}{L}\right)^2.
Now, we want to know how many states there are with k_x^2 + k_y^2 \leq k_F^2. That is, the area of the
quarter-circle of radius k_F, in k-space.
Thus, the number of electrons (which is twice the number of single particle energy states) in such
an area is given by N = 2 x (area of quarter circle) / (area occupied by each state):

    N = 2\, \frac{\pi k_F^2 / 4}{(\pi/L)^2}

    k_F^2 = \frac{2\pi N}{L^2} = 2\pi n

where n is just the number density n \equiv \frac{N}{L^2}.

6.2 Density of States

Considering a 2D gas of fermions:

Suppose we have an annulus, in the positive quarter of k-space. How many states reside here?
The number of states is obviously the area of the annulus, divided by the area of one state.
If the number of states is dn, and the annulus runs from k to k + dk, then the area of the full circular annulus
is 2\pi k\, dk, and hence that of the quarter annulus is \frac{1}{4}\, 2\pi k\, dk. We already have that the area taken up
by one state is \left(\frac{\pi}{L}\right)^2. Now, it's important to note that we have been talking about the number of
states. If we have a spin-s particle, there are allowed to be 2s + 1 such particles in each state. We
hence need to multiply any answer by the number of particles allowed in each state. For electrons,
s = 1/2, so everything is multiplied by 2. We hence can write:

    dn = 2\, \frac{\frac{1}{4}\, 2\pi k\, dk}{(\pi/L)^2}
       = \frac{k\, dk\, L^2}{\pi}

    \frac{dn}{dk} = \frac{k L^2}{\pi}

We hence have that \frac{dn}{dk} is the density of states in k-space: the number of states dn per unit of k.
We can now rederive the Fermi energy for our 2D electron gas.
The number of particles N below the Fermi wavenumber k_F is the same as the total number of
particles in the system, at T = 0. We hence can say that the total number of particles is the
(continuous) sum of the density of states, over all states up to the Fermi-state:
    N = \int_0^{k_F} dn
      = \int_0^{k_F} \frac{dn}{dk}\, dk
      = \int_0^{k_F} \frac{k L^2}{\pi}\, dk
      = \frac{k_F^2 L^2}{2\pi}

    k_F^2 = \frac{2\pi N}{L^2}

Hence, using the relation:

    \epsilon_F = \frac{\hbar^2 k_F^2}{2m}

we can write \epsilon_F:

    \epsilon_F = \frac{\hbar^2}{2m}\frac{2\pi N}{L^2}
               = \frac{\pi \hbar^2}{m}\frac{N}{L^2}
               = \frac{\pi \hbar^2 n}{m}

Notice that the initial integral was for T = 0. If T \neq 0, then we can write the much more general
integral in terms of the mean number of particles:

    N = \int_0^\infty \frac{dn}{dk}\, \langle n \rangle\, dk    (6.2)

If you recall we had the condition that hni = 1 for k < kF , and zero elsewhere. This expression can
be very hard to integrate, especially for bosons!

6.2.1 3D Density of States

We do an example to calculate the density of states for a 3D gas of spinless non-interacting particles,
and use it to determine the chemical potential and energy of an ideal gas in the classical regime.
So, to begin, we know that the magnitude of the wavevector is given by k 2 = kx2 + ky2 + kz2 , and
3
that each state occupies a volume L .
We want to know how many states dn there are in the positive shell of k k + dk.
The volume of the entire shell is 4k 2 dk, and thus the volume of only the positive portion of the

shell is \frac{1}{8}\, 4\pi k^2\, dk. Hence, we can write:

    dn = \frac{\frac{1}{8}\, 4\pi k^2\, dk}{(\pi/L)^3}
       = \frac{L^3 k^2\, dk}{2\pi^2}

    \frac{dn}{dk} = \frac{k^2 L^3}{2\pi^2} = \frac{k^2 V}{2\pi^2}
Thus, we have derived the density of states for a 3D gas of spinless fermions.
To get the chemical potential, we need to continue to solve N for , via:
Z
dn
N = hnidk
0 dk
Now, we have that

    \langle n \rangle = \frac{1}{e^{(\epsilon - \mu)/k_B T} + 1}

which, in the classical limit (notice that the classical limit doesn't care if it's a boson or fermion),
reduces to:

    \langle n \rangle = e^{(\mu - \epsilon)/k_B T}
Hence:

V k2
Z
N= dke/kB T e/kB T
0 2 2
h2 k2

Notice, we also have that (k) = 2m , so the integral is pretty hard to do. So, an alternative
method, is to change variable, so right from the start, we have:
Z
dn
N = hnid
0 d
Hence, to do things using this method, we need the density of states in \epsilon-space, as opposed to
k-space. We can do this by the chain rule:

    \frac{dn}{d\epsilon} = \frac{dn}{dk}\frac{dk}{d\epsilon}
                        = \frac{k^2 V}{2\pi^2}\, \frac{1}{2}\sqrt{\frac{2m}{\hbar^2 \epsilon}}
                        = \frac{V m^{3/2}\sqrt{2}}{2\pi^2 \hbar^3}\, \sqrt{\epsilon}

where we have made use of \epsilon = \frac{\hbar^2 k^2}{2m} \Rightarrow k = \sqrt{\frac{2m\epsilon}{\hbar^2}}, and the original density of states \frac{dn}{dk}. We can
immediately see that, for a non-relativistic gas:

    \frac{dn}{d\epsilon} \propto \sqrt{\epsilon}

Thus, inserting this expression for dn/d\epsilon back into the integral for N:

m3/2 2V /kB T /kB T
Z
N= e e d
h3
2 2 0

Now, we start to try & simplfy the integral.


Initially, we notice that we can do the following:
Z Z r
/kB T
 
3/2  /kB T 
e d = (kB T ) e d
0 0 kB T kB T
Z
= (kB T )3/2 XeX dX X /kB T
0

3/2
= (kB T )
2
Where the actual integral has been looked up, and will be given.
Hence, we have:

m3/2 2V 3/2 /kB T
N = (kB T ) e
2 2 h3 2
 3
mkB T 2 /kB T
= V e
2h2
= V nQ e/kB T
N
n= = nQ e/kB T
V
n
= kB T ln
nQ

Which is exactly the result we had before!


Obviously, we can write down an expression for the mean energy:
Z
dn
U= hni d
0 d

Now, we have previously:


dn V m3/2 2
=a  a
d 2 2 h3

So:
Z
U = a e()/kB T  d
Z 0
= b 3/2 e/kB T d b ae/kB T
0
Z  3/2  
5/2  /kB T 
= b(kB T ) e d
kB T kB T
Z0
= b(kB T )5/2 X 3/2 eX dX X /kB T
0

3
= b(kB T )5/2
3/2
4
Vm 2 /kB T 5/2 3
= 3 e (kB T )
2 2 h 4
3/2
Vm 2 5/2 3 n
= 2 3 (kB T )
2 h 4 nQ
3
V m3/2 2 2h2 2

5/2 3 N
= (kB T )
2 2 h3 4 V mkB T
5/2 5/2
Vm 3/2 2kB T 3 N 23/2 3/2 h3
= 3/2
2 2 h3 4V m3/2 kB T 3/2
3
= N kB T
2
Which again, is a result we previously knew.
In these two examples, we have used the standard integrals:
Z
3
x3/2 ex dx =
4
Z0
x 1
xe dx =
0 2

Which will be given, if needed.


We can use these couple of examples as a prototype for finding the average value of something: add
up the average number of particles in a particular state, times the density of particles in a particular
state, multiply by the thing you want the average of. This gives the average of the quantity over
the whole system.
So, the average number of particles is given by:
Z
dn
hN i = N = hni dk (6.3)
0 dk

And the average internal energy of the system:


Z
dn
hU i = U = hni dk (6.4)
0 dk

In the expression for U , it is required to know if the gas is non-relativistic, relativistic or ultra-
relativistic. For non- and ultra-relatitistic, we have the expressions:
2 k2
h
non = ultra = h
ck
2m
Suppose we want the average velocity of a single particle. We compute this by finding the average
velocity of all particles, and divide by the number of particles:
1 dn
Z
hvi = hniv dk (6.5)
N 0 dk
2 k2
h
To get an expression for v, we note that kinetic energy is both equal to 2m and to 21 mv 2 . Hence,
v = hmk . Note that this is only for non-relativistic.
If we want to find the variance on a quantity, N , say, we merely need to compute:
2
N = hN 2 i hN i2 (6.6)

Back to Fermi gases:


Let's calculate the Fermi energy of a 3D electron gas.
We have the definition that \epsilon_F \equiv \mu(T = 0). So, the mean number of particles is simply given by:

    N = \int_0^{k_F} \frac{dn}{dk}\, dk

where we will use the previously derived \frac{dn}{dk} = \frac{V k^2}{2\pi^2} for spinless particles. We must use a spin
multiplicity factor of 2; and we have also used the fact that particles either reside under the Fermi
energy, or not at all: the step-function. Thus:

    N = \int_0^{k_F} \frac{2 V k^2}{2\pi^2}\, dk = \frac{V k_F^3}{3\pi^2}

Hence, we use n = \frac{N}{V}, to see that:

    k_F = \left(3\pi^2 n\right)^{1/3}    (6.7)

which obviously holds for non-relativistic or relativistic systems.

Now, for non-relativistic, we have that:

    \epsilon_F = \frac{\hbar^2 k_F^2}{2m}

Hence, we can write the Fermi energy in terms of the number density n:

    \epsilon_F = \frac{\hbar^2}{2m}\left(3\pi^2 n\right)^{2/3}    (6.8)
Now, we shall suppose (and subsequently prove) that if T < T_F (remembering that k_B T_F \equiv \epsilon_F),
then quantum effects are important. Hence, we have the classical regime if:

    k_B T >> \frac{\hbar^2}{2m}\left(3\pi^2 n\right)^{2/3}

or, that:

    n << \left(\frac{2mk_B T}{\hbar^2}\right)^{3/2} \frac{1}{3\pi^2}    (6.9)

Now, we have that the quantum concentration n_Q is given by:

    n_Q = \left(\frac{mk_B T}{2\pi\hbar^2}\right)^{3/2}

Hence, we see that (6.9) is equivalent to:

    n << \frac{8}{3\sqrt{\pi}}\, n_Q

Hence, we have that n << n_Q for classical, which is our previous definition, but derived by stating
that T >> T_F for classical.
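As a numerical illustration (my numbers: a copper-like conduction-electron density is assumed), the sketch below evaluates eq. (6.8) and the corresponding Fermi temperature, confirming the claim at the start of Section 6 that T_F for electrons in a metal is of order 10^4-10^5 K.

```python
import numpy as np

hbar = 1.054571e-34   # J s
k_B  = 1.380649e-23   # J / K
m_e  = 9.109384e-31   # kg
eV   = 1.602177e-19   # J

n = 8.5e28            # conduction electron density of copper, m^-3 (assumed value)

k_F   = (3 * np.pi**2 * n)**(1/3)          # eq. (6.7)
eps_F = hbar**2 * k_F**2 / (2 * m_e)       # eq. (6.8)
T_F   = eps_F / k_B
v_F   = hbar * k_F / m_e

print(f"eps_F = {eps_F/eV:.2f} eV,  T_F = {T_F:.2e} K,  v_F = {v_F:.2e} m/s (~{v_F/3e8:.3f} c)")
```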
Now, let's calculate the internal energy of a non-relativistic 3D gas of electrons. We have straight
away that:

    U = \int_0^{k_F} \frac{V k^2}{\pi^2}\, \frac{\hbar^2 k^2}{2m}\, dk = \frac{V}{\pi^2}\frac{\hbar^2}{2m}\frac{k_F^5}{5}

Now, from (6.7), we have:

    k_F = \left(3\pi^2 n\right)^{1/3} \qquad \Rightarrow \qquad N = \frac{V k_F^3}{3\pi^2}

Hence,

    U = \frac{3}{5} N\, \frac{\hbar^2 k_F^2}{2m}

But we also have the relation that \epsilon_F = \frac{\hbar^2 k_F^2}{2m}. Hence:

    U = \frac{3}{5} N \epsilon_F    (6.10)

Now, if we want to calculate the pressure p, we initially start by writing:

    dU = T\, dS - p\, dV

which, at T = 0, reduces to:

    p = -\left.\frac{\partial U}{\partial V}\right|_N

Hence:

    p = -\frac{\partial}{\partial V}\left(\frac{3}{5} N \epsilon_F\right)
      = -\frac{3}{5} N \frac{\partial \epsilon_F}{\partial V}

where we can do the differentiation:

    \frac{\partial \epsilon_F}{\partial V} = -\frac{2}{3}\frac{\epsilon_F}{V}

Hence:

    p = \frac{2}{5}\frac{N \epsilon_F}{V} = \frac{2}{5}\, n \epsilon_F

which we can rearrange into the equation of state for a Fermi gas at low temperature:

    pV = \frac{2}{5} N \epsilon_F    (6.11)

This is the equivalent of pV = N k_B T.
To get C_V, we take:

    C_V = \left.\frac{\partial U}{\partial T}\right|_V = 0

at T = 0, as U is independent of T there. This is a little useless, as it does not tell us what happens
as T \rightarrow 0. To find out this behaviour, we need to do some small-T corrections to U. To do this
properly, we need to evaluate the integral:

    U = \int_0^\infty \frac{dn}{dk}\, \epsilon\, \frac{1}{e^{(\epsilon - \mu)/k_B T} + 1}\, dk

which is hard.
We can estimate the correction by saying that only particles within k_B T of \epsilon_F move. So, the number
of excited particles is:

    \approx N\, \frac{k_B T}{\epsilon_F}

So, the additional energy is of the order:

    \approx N\, \frac{k_B T}{\epsilon_F}\, k_B T

Hence, U needs to be corrected by this factor:

    U = \frac{3}{5} N \epsilon_F + \alpha N \frac{(k_B T)^2}{\epsilon_F}

where \alpha is some constant. If this correction is done exactly, one finds that \alpha = \frac{\pi^2}{4}. Hence, if we
now calculate C_V using this corrected version of U, we find:

    C_V = \left.\frac{\partial U}{\partial T}\right|_V = \frac{\pi^2 N k_B^2 T}{2\epsilon_F}

6.2.2 Low T Corrections to N, U

Now, we have that:


Z
dn
N = hni d
d
Z0
dn 1
= ()/k
d
d e BT
Z0
dn 1
U =  d
0 d e()/kB T

We have previously derived that:

    \frac{dn}{d\epsilon} = \frac{3}{2} N \epsilon_F^{-3/2}\, \epsilon^{1/2}
So, to solve these integrals, we have objects of the form:
Z
f ()
I= ()/k
d
e BT
0

Now, we must be able to write the answer in terms of:


Z
I= f () d + corrections in T 0
0

So, we want to try to get our integral in that form.


Suppose we define:

z
kB T
Hence, we see that z( = 0) = /kB T , and  = kB T z + . I will now omit the subscript B from
Boltzmanns constant. Hence, we have:
Z
f (zkT + )
I = kT dz
/kT ez + 1
Which we can split into 2 integrals:
Z 0 Z
f (zkT + ) f (zkT + )
I = kT z
dz + kT dz
/kT e +1 0 ez + 1
And we can invert the firsts limits:
Z /kT Z
f ( zkT ) f (zkT + )
I = kT z
dz + kT dz
0 e +1 0 ez + 1
Now, if we write:
1 1
=1 z
ez+1 e +1
Then the integral becomes:
Z Z /kT Z /kT !
f (zkT + ) f ( zkT )
I = kT dz + kT f ( zkT ) dz dz)
0 ez + 1 0 0 ez + 1
Z Z /kT Z !
f ( zkT ) f (zkT + )
= kT f () d kT dz dz
0 0 ez + 1 0 ez + 1

Now, in the limit of T 0, z . Hence, the middle integral above can be written to have an
upper limit of : Z Z
f ( + zkT ) f ( zkT )
I= f () d + kT dz
0 0 ez + 1
The second integral can be Taylor expanded to:
Z
df () z
I2 = 2(kT )2 dz
d 0 ez + 1

Where we can look up the integral as being:
Z
z 2
dz =
0 ez + 1 12

Thus:

(kT )2 df ()
Z
I= f () d + 2
0 6 d

Now, applying this result to N:

    N = \frac{3}{2} N \epsilon_F^{-3/2} \int_0^\infty \frac{\epsilon^{1/2}}{e^{(\epsilon - \mu)/kT} + 1}\, d\epsilon

We hence see that f(\epsilon) = \epsilon^{1/2} and f'(\mu) = \frac{1}{2}\mu^{-1/2}. We hence have, for the integral I_N:

    I_N = \frac{2}{3}\mu^{3/2} + \frac{\pi^2 (kT)^2}{6}\, \frac{1}{2}\mu^{-1/2}

Thus:

    N = \frac{3}{2} N \epsilon_F^{-3/2} \left[\frac{2}{3}\mu^{3/2} + \frac{\pi^2 (kT)^2}{6}\, \frac{1}{2}\mu^{-1/2}\right]
      = N \left(\frac{\mu}{\epsilon_F}\right)^{3/2} \left[1 + \frac{\pi^2}{8}\left(\frac{kT}{\mu}\right)^2\right]

    1 = \left(\frac{\mu}{\epsilon_F}\right)^{3/2} \left[1 + \frac{\pi^2}{8}\left(\frac{kT}{\mu}\right)^2\right]

    \mu = \epsilon_F \left[1 + \frac{\pi^2}{8}\left(\frac{kT}{\mu}\right)^2\right]^{-2/3}
        = \epsilon_F \left[1 - \frac{\pi^2}{12}\left(\frac{kT}{\epsilon_F}\right)^2\right]

where the last step has been done by a binomial expansion, and noting that \mu(0) = \epsilon_F.
Now, doing so similarly for U, we find:

    U = \frac{3}{2} N \epsilon_F^{-3/2}\, I_U

    I_U = \int_0^\infty \frac{\epsilon^{3/2}}{e^{(\epsilon - \mu)/kT} + 1}\, d\epsilon
        = \frac{2}{5}\mu^{5/2} + \frac{\pi^2 (kT)^2}{6}\, \frac{3}{2}\mu^{1/2}

    U = \frac{3}{5} N \epsilon_F \left[1 + \frac{5\pi^2}{12}\left(\frac{kT}{\epsilon_F}\right)^2\right]

which gives the previously stated correction factor. The last step has been done by substituting in
the derived expression for \mu.
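A quick numerical cross-check (mine, not in the notes) of the standard integrals quoted in this and the previous subsection, using scipy quadrature:

```python
import numpy as np
from scipy.integrate import quad

# Integral used in the low-T (Sommerfeld) expansion: int_0^inf z/(e^z + 1) dz = pi^2/12
val, _ = quad(lambda z: z / (np.exp(z) + 1.0), 0, np.inf)
print(val, np.pi**2 / 12)          # both ~0.822467

# Integrals quoted in section 6.2.1:
val, _ = quad(lambda x: np.sqrt(x) * np.exp(-x), 0, np.inf)
print(val, np.sqrt(np.pi) / 2)     # both ~0.886227

val, _ = quad(lambda x: x**1.5 * np.exp(-x), 0, np.inf)
print(val, 3 * np.sqrt(np.pi) / 4) # both ~1.329340
```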

6.3 Example: Electrons in Metals

Now, if T < TF , we need to model electrons as behaving quantum mechanically.


For metals, we have data which show that TF 104 105 K, and vF 1%c. Hence, metals at
room temperature behave via non-relativistic quantum mechanics.
If we suppose that the conduction electrons in a metal form a gas of free electrons, then it will be a
non-relativistic degenerate fermi gas at room temperature. That is, occupancies of states below the
Fermi-energy is = 1. Now, electrons are actually (weakly) bound to the ionic lattice core, where the
binding will have the effect of dragging on the motion of electrons, and to thus increase the mass
of the electron, This effect will be ignored, but will be noted upon when we compare to data.
We have computed that quantum theory predicts:

    C^{quantum}_{electrons} = \frac{\pi^2}{2}\left(\frac{k_B T}{\epsilon_F}\right) N k_B

where we have ignored any contributions to C due to the ionic core. So, what would classical
theory predict? We have already done this, for the Einstein solid. It is just \frac{1}{2}k_B N per degree of
freedom. Thus:

    C^{class}_{electrons} = \frac{3}{2} N k_B

Now, if we include the contribution from the lattice, 3N k_B, where we have supposed 6 degrees of
freedom (!):

    C^{class}_{total} = 3N k_B + \frac{3}{2} N k_B = \frac{9}{2} N k_B

Now, experiment yields C \approx 3N k_B; which appears to be in agreement with the classical prediction
where there are no electrons.
We can explain this discrepancy by noting that C^{quantum}_{electrons} is small for T < T_F, that is, if \frac{k_B T}{\epsilon_F} << 1.
Putting typical numbers in, we find that the quantum correction to the classical result is of the order
1%. This would not show up in a crude experiment, but does in a more accurate experiment.
Heat capacities at low T have been accurately measured to go like:

    C \approx aT + bT^3

where bT^3 is due to ionic contributions, and aT from electron contributions.
Experimental data results in a value for a:

    a = 2.08 \times 10^{-3}\ \mathrm{J\,mol^{-1}\,K^{-2}}

We get a value, from potassium data:

    a = \frac{\pi^2}{2}\left(\frac{k_B}{\epsilon_F}\right) N k_B = 1.7 \times 10^{-3}\ \mathrm{J\,mol^{-1}\,K^{-2}}

So, our value is 20% off. This is, however, our prediction with no ionic corrections. Our result should
always be smaller than experiment, as ionic cores drag electrons back, effectively increasing their
mass:

    a \propto \frac{1}{\epsilon_F} \propto m

Thus, if m were 20% bigger when ionic interactions are taken into account, then the agreement would
be perfect.
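A back-of-the-envelope check of the quoted potassium value (my assumption: the free-electron Fermi energy of potassium is about 2.1 eV, i.e. T_F of order 2.5 x 10^4 K):

```python
import numpy as np

k_B  = 1.380649e-23     # J / K
N_A  = 6.022141e23      # mol^-1
eV   = 1.602177e-19     # J

eps_F = 2.1 * eV        # free-electron Fermi energy of potassium (assumed)
T_F   = eps_F / k_B

# a = (pi^2 / 2) (k_B / eps_F) N k_B, per mole of conduction electrons
a = (np.pi**2 / 2) * N_A * k_B**2 / eps_F
print(f"T_F = {T_F:.2e} K,  a = {a*1e3:.2f} x 10^-3 J mol^-1 K^-2")   # ~1.7e-3
```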

6.4 Example: Liquid 3 He


³He is a fermion.
Compute the Fermi energy and find the temperature below which quantum effects are important,
i.e. compute \epsilon_F and T_F, given that the density is \rho = 81 kg m^{-3}.
Hence, we compute the number density n:

    n = \frac{N}{V} = \frac{81}{3 \times 1.67 \times 10^{-27}} = 1.6 \times 10^{28}\ \mathrm{m^{-3}}

where we have divided the density by 3 times the mass of the proton. Hence, we have:

    \epsilon_F = \frac{\hbar^2}{2m}(3\pi^2 n)^{2/3} = 4.2 \times 10^{-4}\ \mathrm{eV}
    T_F = \frac{\epsilon_F}{k_B} = 5\ \mathrm{K}
    v_F = \sqrt{\frac{2\epsilon_F}{m}} = 160\ \mathrm{m\,s^{-1}}

where we have also computed the Fermi velocity, to check that the gas is non-relativistic (which it
blatantly is!).
So, for T < 5 K, we expect:

    C_V = \frac{\pi^2}{2}\left(\frac{k_B T}{\epsilon_F}\right) N k_B = (1.0\ \mathrm{K^{-1}})\, N k_B\, T

And again, experiment yields 2.8 K^{-1}, which is higher than our prediction.
Hence, interactions between ³He atoms are obviously not negligible.
Especially at T < 2 mK, where we have a discontinuity in the heat capacity at the transition into
a superfluid, where the ³He atoms pair up into bosonic pairs.
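These numbers are easy to verify; a quick check of the arithmetic (using the same rough inputs as above):

```python
import numpy as np

hbar = 1.054571e-34
k_B  = 1.380649e-23
eV   = 1.602177e-19
m_p  = 1.67e-27          # proton mass used in the estimate above, kg

rho = 81.0               # density of liquid 3He, kg m^-3
m   = 3 * m_p            # rough mass of a 3He atom

n     = rho / m
eps_F = hbar**2 * (3 * np.pi**2 * n)**(2/3) / (2 * m)
T_F   = eps_F / k_B
v_F   = np.sqrt(2 * eps_F / m)

print(f"n = {n:.2e} m^-3, eps_F = {eps_F/eV:.2e} eV, T_F = {T_F:.1f} K, v_F = {v_F:.0f} m/s")
```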

6.5 Example: Electrons in Stars

We start by asking: do the electrons in the Sun form a degenerate Fermi gas?
So, given that the core temperature is T \approx 10^7 K, we need to be able to compare it with the Fermi
temperature, which we need to calculate.
Note, if T = 10^7 K, then the thermal energy \epsilon = k_B T \approx 10^7 \times 10^{-23}\ \mathrm{J} \approx 10^3 eV. Now, this is a lot
higher than the binding energy of hydrogen (= 13.6 eV), hence there will not be much atomic hydrogen - it will
be mostly ionised. Now, we can compute T_F via:

    T_F = \frac{\epsilon_F}{k_B} = \frac{1}{k_B}\frac{\hbar^2}{2m}(3\pi^2 n)^{2/3}

where we have assumed non-relativistic electrons.
We can work out n via the mass of the Sun, if we assume that the Sun is only made up of protons
and electrons:

    n = \frac{N}{V} = \frac{M_\odot}{M_p + M_e}\frac{1}{V} \approx \frac{M_\odot}{M_p}\frac{1}{V}

where we have used the fact that M_e << M_p. We have also assumed one electron per proton.
Using M_\odot \approx 2 \times 10^{30} kg and R_\odot \approx 7 \times 10^8 m, we have:

    n \approx 10^{30}\ \mathrm{m^{-3}}

From which we can calculate:

    T_F \approx 10^5\ \mathrm{K}

which is two orders of magnitude less than the core temperature; therefore T > T_F, and therefore the
electrons in the Sun do not form a degenerate Fermi gas.

6.5.1 White Dwarf Stars

Stellar evolution starts with hydrogen gas collapsing under gravity to the point where pp fusion can
occur at T 107 K. The stars radius is stabilised by the outward radiation pressure from the gas of
ions, electrons and photons. The electrons are not (necessarily) degenerate. The outward radiation
pressure is stabilised (or stabilises) the inward gravitational attraction due to mass.
When the proton fuel runs out, the radiation pressure falls, and the star collapses under gravity
untill the core becomes hot enough for the He to ignite at T 108 K.
The process continues until no more nuclear fuel is left to burn. However, such stars are kept from
further collapse by the presence of an outward pressure due to the degenerate electron gas. These
types of stars are called white dwarfs.
Now, given the mass of a white dwarf, we ought to be able to compute its radius.
Consider a shell at radius r, thickness dr. Now, the inward force due to gravity is balanced by the
outward force due to degenerate electrons. We can compute the inward gravitational force:

    dF = \frac{G M(r)}{r^2}\, 4\pi r^2\, dr\, \rho(r)

where M(r) is the mass of the star inside a sphere of radius r. Notice that the volume of the shell
is 4\pi r^2\, dr, hence its mass is just 4\pi r^2\, dr\, \rho(r).
An outward force would come from a difference in pressure between the inner and outer surfaces of
the shell, which would be of the form:

    dF = -[p(r + dr) - p(r)]\, 4\pi r^2
       = -dp\, 4\pi r^2

which we can substitute in, to find:

    -dp\, 4\pi r^2 = \frac{G M(r)}{r^2}\, 4\pi r^2\, dr\, \rho(r)    (6.12)
    \frac{dp}{dr} = -\frac{G M(r)}{r^2}\, \rho(r)    (6.13)
    \int_{p(0)}^{p(R)} dp = -\int_0^R \frac{G M(r)}{r^2}\, \rho(r)\, dr    (6.14)

Now, to do these integrals properly requires a numerical calculation. So we use various approxi-
mations.
We assume that p(0) >> p(R), thus helping with the integral on the LHS. We shall also assume
that any integral of the density \rho(r) over r can be done with some average density \bar\rho. Now, we have that
the mass inside some sphere can be given by:

    M(r) = \int_0^r 4\pi r'^2 \rho(r')\, dr' = \frac{4\pi \bar\rho\, r^3}{3}

where we have used our approximation of average density. Thus, (6.14) becomes:

    p(0) = \int_0^R \frac{G M(r)}{r^2}\, \rho(r)\, dr
         = 4\pi G \bar\rho^2 \int_0^R \frac{1}{r^2}\frac{r^3}{3}\, dr
         = \frac{4\pi G \bar\rho^2 R^2}{6}

If we make the simple substitution that:

    \bar\rho = \frac{M}{V} = \frac{M}{\frac{4}{3}\pi R^3}

then we have an expression for the inward pressure due to gravity:

    p = 4\pi G\, \frac{R^2}{6}\left(\frac{M}{\frac{4}{3}\pi R^3}\right)^2 \propto \frac{M^2}{R^4}

Now, we have previously computed the pressure exerted by degenerate fermions:

    p = \frac{2}{5}\, n \epsilon_F = \frac{2}{5}\frac{\hbar^2}{2m}(3\pi^2)^{2/3}\, n^{5/3}

where we can compute n as we did for the Sun. We assume the core to be composed of helium,
which has 1 electron per 2 nucleons. We use u to denote the atomic mass unit:

    n = \frac{M}{2u}\, \frac{1}{\frac{4}{3}\pi R^3}

Therefore, we have that the pressure due to the degenerate electrons goes like:

    p \propto \frac{M^{5/3}}{R^5}
Thus, equating these two pressures (due to degeneracy and gravity) we obtain:

    R = \frac{\hbar^2}{5 m_e}\,(3\pi^2)^{2/3}\left(\frac{3}{8\pi u}\right)^{5/3}\frac{8\pi}{3G}\; M^{-1/3}

    \Rightarrow \quad R\, M^{1/3} \approx 4 \times 10^{16}\ \mathrm{m\,kg^{1/3}}

Data gives R\, M^{1/3} \approx 7 \times 10^{16} m kg^{1/3}, which is a pretty good agreement! The agreement becomes
absolute if the density integral is done properly.
Hence, we see that such a star is stable, and degeneracy pressure "always" wins. The reason for the
quotation marks is that if the mass is above a certain limit, electron degeneracy pressure cannot
sustain the equilibrium, and the star collapses further, into a neutron degenerate state.
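A quick evaluation of this constant, and of the implied radius of a one-solar-mass white dwarf, under the same approximations (my code; the constant above is reproduced to within rounding):

```python
import numpy as np

hbar  = 1.054571e-34
G     = 6.674e-11
m_e   = 9.109384e-31
u     = 1.660539e-27
M_sun = 2.0e30

# R * M^(1/3) from balancing electron degeneracy pressure against gravity (as derived above)
const = (hbar**2 / (5 * m_e)) * (3 * np.pi**2)**(2/3) \
        * (3 / (8 * np.pi * u))**(5/3) * (8 * np.pi / (3 * G))

print(f"R M^(1/3) = {const:.2e} m kg^(1/3)")
print(f"R(1 solar mass) = {const / M_sun**(1/3) / 1e3:.0f} km")
```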
We should check that the white dwarfs in the table are made of a degenerate core of electrons, and
are not-relativistic.
We know that the temperature of the star T 107 K. So, we need to compute TF .:
2
h
kB TF = F = (3 2 n)2/3
2m
To compute n, we do an order of magnitude calculation that is VERY rough:

    n \approx \frac{M}{2u}\, \frac{1}{\frac{4}{3}\pi r^3} \approx 10^{35}\ \mathrm{m^{-3}}

Hence, we find that T_F \approx 10^9\ \mathrm{K} > T. Hence, the electrons are degenerate.


How fast are the electrons? We proceed via:
1
F = mvF2 = kB T
2
Hence, we have: r
2kB T
vF = = 107 ms1
m
Which is a lot less than c, hence non-relativistic.
What happens for more massive stars? We note that the electrons will become ultra-relativistic,
2 2
and so we move from being able to use  = h2m
k
to having to use:

=h
ck

Hence, we recalculate the density of states:

4k 2 dk 1
dn = 2 2
8
L
dn Vk2
=
dk 2
dn dn dk V k2 1
= = 2
d dk d hc
V 2
=
(hc)3 2

Where we now have the density of states in energy-space. Notice that the density of states for
an ultra-relativistis gas of fermions is now proportional to 2 , whereas the density of states for

non-relativistic fermions was .
At T = 0, we can write that: 
U
p
V N
So, we compute the internal energy U via:
Z F
dn V 4F
U=  d =
0 d hc)3 2 4
(

And, to differentiate this properly, we need to figure out F (V ). We do this via:


Z F
dn
N = d
0 d
V 3F
=
(hc)3 2 3
 1/3
2 1/3 N
F (V ) = (3 ) hc
V
 4/3
V 2 4/3 4 N
U = (3 ) (hc)
(hc)3 2 4 V

Hence, we have pressure as being just:



U 1U
p =
V N 3V

One-third of the energy-density of the system. Hence looking at everything, we have that:

N 4/3
p
V 4/3
And we see that N M and V R3 . Thus:

M 4/3
p
R4

For a gas of ultra-relativistic fermions.
Now, this dependence of the degeneracy pressure is such that there exists a maximum mass for
which the pressures due to gravity and degeneracy balance. That is, there exists a maximum mass
for which degeneracy pressure can prevent collapse due to gravity. This maximum mass is 1.4 M_\odot:
the Chandrasekhar Mass.
As the star collapses, \epsilon_F rises. Eventually, particle physics becomes important and we get inverse
beta-decay p + e^- \rightarrow n + \nu_e, and all protons and electrons disappear leaving only neutrons; the
neutrinos fly out. There will then be a neutron degeneracy pressure which supports against
further collapse (until its critical mass of \approx 1.8 M_\odot is reached).

7 Bose Gases

We are interested in low T behaviour. As T 0 we expect even classically that particles will
occupy only the lowest energy level. So, how low must T be for macroscopic occupation of the
ground state? That is, the majority of particles in the ground state.
So, if we write down expressions for the two lowest possible energy states of a system:

    \epsilon_0 = \frac{\hbar^2}{2m}\left(\frac{\pi}{L}\right)^2 (1^2 + 1^2 + 1^2) \qquad \epsilon_1 = \frac{\hbar^2}{2m}\left(\frac{\pi}{L}\right)^2 (2^2 + 1^2 + 1^2)

Thus, the spacing between these levels is:

    \epsilon_1 - \epsilon_0 = \frac{3\hbar^2}{2m}\left(\frac{\pi}{L}\right)^2

For the ground state to contain most of the particles, we want:

    k_B T \lesssim \epsilon_1 - \epsilon_0

If L = 1 cm, m = 6.6 \times 10^{-27} kg, then:

    T \approx 10^{-14}\ \mathrm{K}

which is very low! This however, has been calculated using classical arguments, and the result is
very different if we use a quantum mechanical description of identical particles.
Recall:

    \langle n(\epsilon) \rangle_{BE} = \frac{1}{e^{(\epsilon - \mu)/k_B T} - 1}

Initially notice, when e^{(\epsilon - \mu)/k_B T} = 1 then the above becomes singular. We shall use the initial
approximation for the total number of particles:

    N \approx \int_0^\infty \frac{dn}{d\epsilon}\, \frac{1}{e^{(\epsilon - \mu)/k_B T} - 1}\, d\epsilon

It will become clear as to why this is only an approximation. One thing to note is that the density
of states is proportional to \sqrt{\epsilon}, and hence the contribution to this integral at the lowest single particle
energy state \epsilon_0 = 0 is zero. The exact form is actually purely a summation:

    N = \sum_i \langle n(\epsilon_i) \rangle

So, we start to analyse the distribution function:


If N is fixed, then as T falls, must therefore rise. If we have that  0, and if we take the
minimum energy level to be 0 , then we have 0 so that there are no negative-particle-number
occupancies (equivalent to saying there is always the requirement of  > 0) of any energy levels.
We can set 0 = 0, hence we have that 0.
Now, we see that there is a potential problem. cannot increase beyond 0 = 0 at some critical
temperature Tc . Hence, we have that = 0 at T = Tc . Thus, we can say (and this isnt an
approximation): Z
dn 1
N d
0 d e B Tc 1
/k

So, we have the interpretation that T = Tc is the lowest temperature in which hniBE works. We
say this because for T < TC , we have that > 0, which is impossible.
So, let's compute \langle n \rangle_{BE} for the lowest energy state \epsilon_0 = 0:

    \langle n(0) \rangle = \frac{1}{e^{-\mu/k_B T} - 1} \approx \frac{1}{1 - \frac{\mu}{k_B T} - 1} = -\frac{k_B T}{\mu} \equiv N_0

where we have used the Taylor expansion e^x = 1 + x + \ldots. Hence, we have that N_0 is the
number of particles in the ground state. Notice that N_0 \rightarrow \infty as \mu \rightarrow 0. Now, we also know that
N_0 \leq N. Hence -\frac{k_B T}{\mu} \leq N; thus

    |\mu| \geq \frac{k_B T}{N}

Hence, as we have that \mu \leq 0 and the above lower restraint, we have narrowed down the position
of \mu very finely. For a system of 10^{23} particles, \mu is very very close to zero, but not quite. In actual
fact, the splitting between \mu and \epsilon_0 is a lot less than that between \epsilon_0 and \epsilon_1.
We now have a new particle distribution function for T < T_c:

    \langle n \rangle = \frac{1}{e^{\epsilon/k_B T} - 1}

where we have now avoided the problem of needing \mu > 0.
So, to summarise thus far, we have shown that as T \rightarrow 0, the occupancy of the ground state, N_0,
tends to infinity. We also have that as T \rightarrow 0, \mu \rightarrow 0 very quickly for a macroscopic system.
So, we can now write down a correct expression for the number of particles in the system: we take
out of the summation just the term due to the ground state:

    N = N_0 + \int_0^\infty \frac{dn}{d\epsilon}\, \frac{1}{e^{\epsilon/k_B T} - 1}\, d\epsilon

That we have not changed the lower limit of the integral is not a problem: the integral is zero for
\epsilon = 0 anyway. Notice, we have also put \mu = 0, as the energy level splitting is massive compared
to \mu - \epsilon_0. Now, we can do this integral, after inserting the expression for the density of states
in energy-space (for non-relativistic bosons). So, how does N_0 vary for T < T_c, where we assume
\mu = 0? So, writing the integral:

    N = N_0 + \int_0^\infty \frac{V m^{3/2}}{\sqrt{2}\,\pi^2 \hbar^3}\, \frac{\epsilon^{1/2}\, d\epsilon}{e^{\epsilon/k_B T} - 1}    (7.1)
Now, we previously wrote that:

    N \approx \int_0^\infty \frac{dn}{d\epsilon}\, \frac{1}{e^{\epsilon/k_B T_c} - 1}\, d\epsilon
      = \frac{V m^{3/2}}{\sqrt{2}\,\pi^2 \hbar^3} \int_0^\infty \frac{\epsilon^{1/2}\, d\epsilon}{e^{\epsilon/k_B T_c} - 1}
      = \frac{V m^{3/2}}{\sqrt{2}\,\pi^2 \hbar^3} (k_B T_c)^{3/2} \int_0^\infty \frac{(\epsilon/k_B T_c)^{1/2}\, d(\epsilon/k_B T_c)}{e^{\epsilon/k_B T_c} - 1}
      = \frac{V m^{3/2}}{\sqrt{2}\,\pi^2 \hbar^3} (k_B T_c)^{3/2} \int_0^\infty \frac{X^{1/2}\, dX}{e^X - 1}

In fact, the integral can be looked up, to give:

    \int_0^\infty \frac{X^{1/2}\, dX}{e^X - 1} = 2.315

Thus, (7.1) can be written:

\int_0^\infty \frac{dn}{d\epsilon}\,\frac{d\epsilon}{e^{\epsilon/k_BT_c} - 1} = N_0 + \int_0^\infty \frac{dn}{d\epsilon}\,\frac{d\epsilon}{e^{\epsilon/k_BT} - 1}

Now, we notice a similarity between all the factors in both integrals, except for T and T_c, which we can factor out:

\alpha\,T_c^{3/2} = N_0 + \alpha\,T^{3/2} \qquad (7.2)

where

\alpha \equiv \frac{V m^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\,k_B^{3/2}\int_0^\infty \frac{X^{1/2}\,dX}{e^X - 1} = \frac{N}{T_c^{3/2}}
Therefore, inserting this into (7.2) gives a relation for how the number of particles in the ground state varies with T below T_c:

N_0 = N\left[1 - \left(\frac{T}{T_c}\right)^{3/2}\right]
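For example, at T = T_c/2 this gives N_0/N = 1 - (1/2)^{3/2} \approx 0.65, so roughly two thirds of the particles already sit in the ground state; at T = T_c/10 the ground-state fraction is about 0.97.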

We are also able to write down the condensation temperature:

T_c = \frac{1}{m k_B}\left(\frac{\sqrt{2}\,\pi^2\hbar^3}{2.315}\,\frac{N}{V}\right)^{2/3}

Thus, this is the temperature at which Bose-Einstein condensation (BEC) takes place. Evaluating the numerical constants, and writing n \equiv N/V:

T_c = 3.31\,\frac{\hbar^2 n^{2/3}}{m k_B}
Now, if we just write down the inequality for which we get BEC:

T < T_c
T \lesssim \frac{\hbar^2 n^{2/3}}{m k_B}
\left(\frac{m k_B T}{\hbar^2}\right)^{3/2} \lesssim n
n_Q \lesssim n

where, up to numerical factors of order one, we have recognised the definition of the quantum density.
So, T < T_c is not only the region where BEC takes place, but also the region where quantum effects become important. As soon as quantum effects set in (in an ideal Bose gas), a macroscopic fraction of the particles drops into the ground state.
For ⁴He, with \rho = 145\,\mathrm{kg\,m^{-3}}, we find T_c \approx 3\,\mathrm{K}, which is far greater than the estimate from the previous classical argument! Here n is computed from \rho via

n = \frac{N}{V} = \frac{\rho}{n_n m_n} = \frac{145}{4\times 1.67\times 10^{-27}} = 2.17\times 10^{28}\,\mathrm{m^{-3}}

where n_n = 4 is the number of nucleons (2p + 2n), each of mass m_n. T_c is then computed using this n, together with m being the mass of a helium-4 atom: m = 4m_n \approx 4u, where u is the atomic mass unit.
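A minimal numerical sketch of this estimate, using the density and nucleon mass quoted above, confirms the quoted value of T_c:

```python
hbar = 1.0545718e-34   # J s
kB = 1.380649e-23      # J / K
m_n = 1.67e-27         # kg, nucleon mass (value quoted above)
rho = 145.0            # kg / m^3, density of liquid helium-4 (value quoted above)

m = 4 * m_n            # mass of a helium-4 atom
n = rho / m            # number density of atoms

Tc = 3.31 * hbar**2 * n**(2.0 / 3.0) / (m * kB)
print(f"n  = {n:.2e} m^-3")   # ~2.2e28 m^-3
print(f"Tc = {Tc:.2f} K")     # ~3 K
```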
Experimentally, BEC has been observed in many systems; it was first achieved in 1995, using a dilute gas of rubidium atoms.

7.1 Black Body Radiation

Consider a gas of photons in thermal equilibrium. The gas is ideal (photons do not interact with one another, except under extreme early-universe conditions), ultra-relativistic (obviously), and has \mu = 0.
To see why \mu = 0, we note that dS = 0 in equilibrium; in particular,

\left(\frac{\partial S}{\partial N}\right)_{U,V} dN = 0

Now, a gas of photons cannot have a fixed number of particles (hence dN \neq 0), as, for example, the atoms of the walls are constantly absorbing and radiating them. Hence, we see that:

\left(\frac{\partial S}{\partial N}\right)_{U,V} = 0

However, we have the definition

\mu \equiv -T\left(\frac{\partial S}{\partial N}\right)_{U,V}

Therefore, we see that \mu = 0. Hence, we have:

\langle n(\epsilon)\rangle = \frac{1}{e^{\epsilon/k_BT} - 1}

Let's compute the energy density U/V of a gas of photons:

\frac{U}{V} = \frac{1}{V}\int_0^\infty \epsilon\,\frac{dn}{d\epsilon}\,\langle n(\epsilon)\rangle\,d\epsilon

Using the ultra-relativistic form for the density of states in energy-space (previously derived), with a spin multiplicity factor of 2 since there are two polarisation states of the photon, we have:

\frac{U}{V} = \frac{1}{V}\int_0^\infty 2\,\frac{V}{2\pi^2}\,\frac{\epsilon^2}{(\hbar c)^3}\,\frac{\epsilon}{e^{\epsilon/k_BT} - 1}\,d\epsilon
= \frac{1}{\pi^2(\hbar c)^3}\int_0^\infty \frac{\epsilon^3\,d\epsilon}{e^{\epsilon/k_BT} - 1}
= \frac{(k_BT)^4}{\pi^2(\hbar c)^3}\int_0^\infty \frac{(\epsilon/k_BT)^3\,d(\epsilon/k_BT)}{e^{\epsilon/k_BT} - 1}
= \frac{(k_BT)^4}{\pi^2(\hbar c)^3}\int_0^\infty \frac{X^3\,dX}{e^X - 1}
= \frac{(k_BT)^4}{\pi^2(\hbar c)^3}\,\frac{\pi^4}{15}

so that

\frac{U}{V} = \frac{\pi^2}{15(\hbar c)^3}\,(k_BT)^4
Now, to proceed, we shall discuss a little bit about blackbody radiation:
Suppose we have a box of photons in thermal equilibrium (whose energy-density we have just
computed). The box is completely isolated, except for a very small hole. Photons are able to leave
this hole, being in thermal equilibrium with the other photons still inside the box, hence blackbody
radiation. Any photons incident upon the hole, from outside, will be able to enter the hole, hence
a blackbody absorber. If all the photons inside the box were moving towards the side with the hole (of area A), the volume swept through the hole per second would be cA. However, the photons move isotropically, so averaging over the solid angle gives an effective volume \tfrac{1}{4}cA per second.
So, we ask the question: what is the power being radiated by the hole? This is just the total energy ejected per second:

P = \frac{1}{4}\,cA\,\frac{U}{V}

Hence, inserting our expression for the energy density:

P = \frac{\pi^2 k_B^4}{60\hbar^3 c^2}\,A\,T^4, \qquad \text{i.e.} \qquad \frac{P}{A} = \sigma T^4
where we have arrived at Stefan's law, with the Stefan-Boltzmann constant

\sigma = \frac{\pi^2 k_B^4}{60\hbar^3 c^2} \approx 5.67\times 10^{-8}\,\mathrm{W\,m^{-2}\,K^{-4}}
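As a check, Stefan's constant can be evaluated directly from the fundamental constants; the following minimal sketch also applies it to an illustrative temperature of 300 K:

```python
import math

hbar = 1.0545718e-34   # J s
kB = 1.380649e-23      # J / K
c = 2.99792458e8       # m / s

sigma = math.pi**2 * kB**4 / (60 * hbar**3 * c**2)
print(f"sigma = {sigma:.3e} W m^-2 K^-4")          # ~5.67e-8

# Illustrative application: power per unit area radiated by a black body at 300 K
T = 300.0
print(f"P/A at 300 K = {sigma * T**4:.0f} W/m^2")  # ~460 W/m^2
```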
Now, how is this power distributed over wavelengths? This will lead us to the Planck distribution, and to being able to predict the CMB.

Aside: CMB. The Cosmic Microwave Background arose in the very early universe. When the
temperature of the universe was so high that protons and electrons could not combine to form
hydrogen or any other elements, the ambient photons were constantly being scattered by this hot plasma. As the universe cooled, hydrogen formed, and photons stopped being scattered by the
protons and electrons. They had, however, been in thermal equilibrium with them. So, at the time
of recombination (where hydrogen formed), the photons suddenly had nothing to scatter off, so they
maintained their temperature/energy from when they were in thermal equilibrium. We are able to
measure this background radiation.
The CMB is a near-perfect blackbody radiation distribution, at a temperature of 2.7 K. At finer resolution, however (of order mK and below), we find anisotropies in the otherwise perfect blackbody spectrum. These anisotropies carry information about how the very first structures in the matter distribution formed: the initial density fluctuations present at recombination have left their mark in the form of the CMB anisotropies, and from them we can extract information about the early universe.

7.2 Spectral Energy Density

Now, let's calculate how the energy density U/V is distributed over wavelengths \lambda; that is, the spectral energy density.
We write u(\lambda)\,d\lambda for the energy per unit volume in the wavelength interval \lambda \to \lambda + d\lambda. Now, we previously calculated the total energy density:

\frac{U}{V} = \frac{1}{\pi^2(\hbar c)^3}\int_0^\infty \frac{\epsilon^3\,d\epsilon}{e^{\epsilon/k_BT} - 1} \qquad (7.3)

This is obviously the same as integrating u(\lambda)\,d\lambda over all \lambda. Hence:

\frac{U}{V} = \int_0^\infty u(\lambda)\,d\lambda \qquad (7.4)
= \frac{1}{\pi^2(\hbar c)^3}\int_0^\infty \frac{\epsilon^3\,d\epsilon}{e^{\epsilon/k_BT} - 1} \qquad (7.5)

That is to say,

u(\lambda)\,d\lambda = 2\,\frac{dn}{d\epsilon}\,\langle n\rangle\,\epsilon\,\frac{1}{V}\,d\epsilon

where the factor 2 comes from there being 2 polarisation states. Now, we can find u(\lambda) by changing variables. We have that \epsilon = h\nu = hc/\lambda (with h = 2\pi\hbar Planck's constant). Thus:

d\epsilon = -\frac{hc}{\lambda^2}\,d\lambda
Thus, inserting these expressions into (7.3) results in:

\frac{U}{V} = \int_0^\infty \frac{1}{e^{hc/\lambda k_BT} - 1}\left(\frac{hc}{\lambda}\right)^3 \frac{1}{\pi^2(\hbar c)^3}\,\frac{hc}{\lambda^2}\,d\lambda

where the minus sign from d\epsilon has been absorbed by reversing the limits of integration (as \epsilon runs from 0 to \infty, \lambda runs from \infty down to 0). If this is now compared with (7.4), we find (after cleaning up the above):

u(\lambda) = \frac{8\pi hc}{\lambda^5}\,\frac{1}{e^{hc/\lambda k_BT} - 1}

Thus, we have derived the energy density per unit wavelength, u(\lambda). This is known as the Planck formula. If we let \lambda \to \infty, then we can Taylor expand the exponential, and we end up with the classical Rayleigh-Jeans limit:

u(\lambda) = \frac{8\pi hc}{\lambda^5}\,\frac{\lambda k_BT}{hc} = \frac{8\pi k_BT}{\lambda^4}

Notice that this formula (which is purely classical) has a huge problem: it predicts u(\lambda \to 0) \to \infty, an infinite energy density at zero wavelength. This is, of course, ridiculous, and is known as the UV catastrophe.
We can find the wavelength which has maximum power associated with it; that is, a turning point of the u(\lambda) curve:

\frac{du}{d\lambda} = 0

The result will be Wien's displacement law. Actually differentiating u(\lambda) directly gets tedious, so we apply a trick. Write

u = \frac{f(x)}{\lambda^5}, \qquad x \equiv \frac{hc}{\lambda k_BT}
Hence, if we write

\frac{du}{d\lambda} = \frac{d}{d\lambda}\left(\frac{f(x)}{\lambda^5}\right)
= \frac{1}{\lambda^5}\frac{df}{dx}\frac{dx}{d\lambda} - \frac{5}{\lambda^6}f(x)
= -\frac{1}{\lambda^5}\,\frac{x}{\lambda}\,\frac{df}{dx} - \frac{5}{\lambda^6}f(x) = 0

(using dx/d\lambda = -x/\lambda), then the condition for the turning point becomes

x\,\frac{df}{dx} = -5f(x)

The solution of this equation is a particular value of x; that is, x = \mathrm{constant}, or equivalently \lambda T = \mathrm{constant}.
Therefore, we have derived that the wavelength carrying maximum power at a particular temperature is found from \lambda T = \mathrm{constant}. Thus, for two temperatures T_1, T_2, the maximum power occurs at wavelengths \lambda_1, \lambda_2, and if T_1 < T_2 then \lambda_1 > \lambda_2.
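To attach a number to this: for the Planck form f(x) \propto 1/(e^x - 1) with x \equiv hc/\lambda k_BT, the turning-point condition x\,df/dx = -5f reduces to x = 5(1 - e^{-x}), which is easily solved numerically (a minimal sketch):

```python
import math

h = 6.62607015e-34     # J s  (Planck's constant, h = 2*pi*hbar)
c = 2.99792458e8       # m / s
kB = 1.380649e-23      # J / K

# Solve x = 5*(1 - exp(-x)) by fixed-point iteration,
# where x = h*c / (lambda * kB * T) at the peak of u(lambda).
x = 5.0
for _ in range(50):
    x = 5.0 * (1.0 - math.exp(-x))

b = h * c / (x * kB)   # Wien's displacement constant, lambda_max * T
print(f"x at the peak     = {x:.4f}")                 # ~4.965
print(f"lambda_max * T    = {b:.3e} m K")             # ~2.9e-3 m K
print(f"CMB (2.7 K) peak  = {b / 2.7 * 1e3:.2f} mm")  # ~1 mm, in the microwave
```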
Sometimes we want u(\omega)\,d\omega, the spectral distribution in terms of angular frequency. We have that \epsilon = \hbar\omega, and thus:

u(\omega)\,d\omega = 2\,\frac{dn}{d\omega}\,\langle n(\omega)\rangle\,\frac{\hbar\omega}{V}\,d\omega

So, it remains to calculate the density of states in \omega-space:

\frac{dn}{d\omega} = \frac{dn}{dk}\,\frac{dk}{d\omega}

Using the relation \omega = ck, we can then write:

\frac{dn}{d\omega} = \frac{dn}{dk}\,\frac{1}{c}
= \frac{Vk^2}{2\pi^2}\,\frac{1}{c}
= \frac{V\omega^2}{2\pi^2c^2}\,\frac{1}{c}
= \frac{V\omega^2}{2\pi^2c^3}
Therefore, putting everything in:

u(\omega)\,d\omega = 2\,\frac{V\omega^2}{2\pi^2c^3}\,\frac{1}{e^{\hbar\omega/k_BT} - 1}\,\frac{\hbar\omega}{V}\,d\omega

Cleaning up:

u(\omega)\,d\omega = \frac{\hbar}{\pi^2c^3}\,\frac{\omega^3}{e^{\hbar\omega/k_BT} - 1}\,d\omega
Now, let us leave that alone.

7.2.1 Pressure of a Photon Gas

Let us now calculate the pressure due to a Bose (photon) gas in thermal equilibrium. To do so, we calculate S and differentiate it to get p. Recall that we have previously derived:

S = -k_B\sum_{\rm states}\frac{e^{(\mu N_s - E_s)/k_BT}}{\mathcal{Z}}\left(\frac{\mu N_s - E_s}{k_BT}\right) + k_B\ln\mathcal{Z}

where the sum is over all states of the system. We have also derived:

\ln\mathcal{Z} = -\sum_i \ln\left(1 - e^{(\mu-\epsilon_i)/k_BT}\right)

Now, we have that \mu = 0, so the expression for S simplifies somewhat:

S = k_B\sum_{\rm states}\frac{e^{-E_s/k_BT}}{\mathcal{Z}}\left(\frac{E_s}{k_BT}\right) + k_B\ln\mathcal{Z}

Notice that the sum in the first term is just \sum_s p_s E_s = U, so the first term equals U/T. The second term is simply

k_B\ln\mathcal{Z}\sum_s p_s = k_B\ln\mathcal{Z}

The expression for \ln\mathcal{Z} can be made continuous via:

\ln\mathcal{Z} = -\int_0^\infty \frac{dn}{d\epsilon}\,\ln\left(1 - e^{-\epsilon/k_BT}\right)d\epsilon

Which we evaluate:

\ln\mathcal{Z} = -2\,\frac{V}{2\pi^2(\hbar c)^3}\int_0^\infty \epsilon^2\ln\left(1 - e^{-\epsilon/k_BT}\right)d\epsilon
= -\frac{V}{\pi^2(\hbar c)^3}\,(k_BT)^3\int_0^\infty \left(\frac{\epsilon}{k_BT}\right)^2\ln\left(1 - e^{-\epsilon/k_BT}\right)d\!\left(\frac{\epsilon}{k_BT}\right)
= -\frac{V}{\pi^2(\hbar c)^3}\,(k_BT)^3\int_0^\infty X^2\ln\left(1 - e^{-X}\right)dX

The integral is evaluated to give:

\int_0^\infty X^2\ln\left(1 - e^{-X}\right)dX = -\frac{1}{3}\,\frac{\pi^4}{15}

Therefore, we see that:

S = k_B\sum_{\rm states}\frac{e^{-E_s/k_BT}}{\mathcal{Z}}\left(\frac{E_s}{k_BT}\right) + k_B\ln\mathcal{Z}
= \frac{U}{T} + k_B\ln\mathcal{Z}
= \frac{U}{T} + V\left(\frac{k_BT}{\hbar c}\right)^3\frac{\pi^2 k_B}{45}

Now, we have previously derived that:

\frac{U}{V} = \frac{\pi^2 k_B^4\,T^4}{15(\hbar c)^3}

Hence, inserting this and clearing up results in:

S = \frac{U}{T} + \frac{1}{3}\,\frac{U}{T} = \frac{4U}{3T}
Now, the pressure is found from

p = T\left(\frac{\partial S}{\partial V}\right)_U

To do this properly, we need to be careful and find S(U, V). We have just worked out (collecting all the constants into a single constant a):

U = aVT^4
Thus, we see that S = \frac{4}{3}aVT^3. Therefore:

S = \frac{4}{3}\,aV\left(\frac{U}{aV}\right)^{3/4} = \frac{4}{3}\,a^{1/4}\,U^{3/4}\,V^{1/4}

\left(\frac{\partial S}{\partial V}\right)_U = \frac{1}{3}\,a^{1/4}\,U^{3/4}\,V^{-3/4} = \frac{1}{3}\,a\left(\frac{U}{aV}\right)^{3/4} = \frac{1}{3}\,aT^3

p = T\left(\frac{\partial S}{\partial V}\right)_U = \frac{1}{3}\,aT^4 = \frac{1}{3}\,\frac{U}{V}
Hence, we have that the pressure exerted by a gas of photons is one third of the energy density of the photons, which is, incidentally, identical to the result found for the pressure due to ultra-relativistic electrons (fermions).
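To get a feel for the size of this pressure: writing U/V = aT^4 with a = 4\sigma/c \approx 7.6\times 10^{-16}\,\mathrm{J\,m^{-3}\,K^{-4}}, at T = 300 K we get p = aT^4/3 \approx 2\times 10^{-6}\,\mathrm{Pa}, utterly negligible compared with atmospheric pressure; radiation pressure only becomes significant at very high (for example, stellar-interior) temperatures.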
Now, for adiabatic expansions we have dS = 0. Classically, this corresponded to the result PV^\gamma = \mathrm{const}. Here, however, if we look at the expression for S and set it equal to a constant, we find, for a photon gas:

VT^3 = \mathrm{const}
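Applied to the expanding universe (whose expansion is adiabatic to a very good approximation), VT^3 = \mathrm{const} says that T \propto V^{-1/3}, i.e. the photon temperature falls inversely with the length scale of the expansion; this is why the CMB has cooled from roughly 3000 K at recombination to the 2.7 K we measure today.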

8 Lattice Vibrations of a Solid

We've thus far been discussing a gas of photons by specifying the occupancies of each energy level: systems were determined by the set \{n_i\}, where n_i was the number of photons in energy level \epsilon_i. Thus, we were able to compute the average internal energy of the system via:

U = \sum_i \langle n_i\rangle\,\epsilon_i \longrightarrow \int_0^\infty \frac{dn}{d\epsilon}\,\langle n\rangle\,\epsilon\,d\epsilon

We use this framework as an exact analogy for lattice vibrations of a solid.

Photons in energy level \epsilon_i = \hbar\omega_i can be viewed as quanta associated with a set of quantum harmonic oscillators. That is, we model a solid as a set of harmonic oscillators, each loaded with some number of quanta. The analogy is that the set of harmonic oscillators plays the role of the set of energy levels of a photon gas. In quantising the lattice vibrations, we introduce the term phonon as the analogue of the photon of a photon gas. A phonon is the quantum of lattice vibration: the phonon is the "sound particle", where the photon was the "light particle".
For a cube of some solid, with N atoms on some periodic lattice, there will be 3N normal modes.
This is the total number of frequencies the system is allowed to vibrate in; such is the definition of
a normal mode.
So, we have n_i phonons in energy level \epsilon_i; hence there is an energy n_i\hbar\omega_i on oscillator i. Just as there was no fixed number of photons in a photon gas, the number of phonons in a system is not fixed; thus \mu = 0 for lattice vibrations. We also use Bose-Einstein statistics.
In Einstein's model of a solid, which we discussed at the start of the course, it was assumed that all 3N oscillators vibrate with the same frequency; thus \omega_1 = \omega_2 = \ldots = \omega_{3N} \equiv \omega. So, for an Einstein solid:

U_E = \sum_{i=1}^{3N}\langle n_i\rangle_{BE}\,\epsilon_i
= \sum_{i=1}^{3N}\frac{\hbar\omega}{e^{\hbar\omega/k_BT} - 1}
= \frac{3N\hbar\omega}{e^{\hbar\omega/k_BT} - 1}
Which is a result that previously took us a lot longer to derive, as we previously had to physically
count all states available. From this, we can just take the differential w.r.t. T to find the heat
capacity.
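For reference, carrying out that differentiation explicitly (writing x \equiv \hbar\omega/k_BT) gives the Einstein heat capacity:

C_E = \frac{\partial U_E}{\partial T} = 3Nk_B\,x^2\,\frac{e^{x}}{\left(e^{x} - 1\right)^2}

which tends to 3Nk_B at high temperature and is exponentially suppressed at low temperature.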
Now, Debye realised that the 3N modes do not all oscillate with a single frequency, as Einstein had assumed. Rather, the allowed frequencies are those of harmonic waves in a box; thus we have wavenumbers k_i = n_i\pi/L, and we are able to write down a density of states. We use the relation k = \omega/u, where u is the speed of wave propagation, which is the speed of phonons, i.e. the speed of sound. Thus:

\frac{dn}{d\omega} = \frac{dn}{dk}\,\frac{dk}{d\omega}
= \frac{Vk^2}{2\pi^2}\,\frac{1}{u}
= \frac{V\omega^2}{2\pi^2u^3}

Now, the multiplicity factor we use is 3. This comes from the consideration that we are able to excite 2 transverse and 1 longitudinal sound wave in a 3D solid. Thus, the usable density of states is:

\frac{dn}{d\omega} = 3\,\frac{V\omega^2}{2\pi^2u^3}
So now, let's calculate the average energy of a Debye solid:

U_D = \int_0^{\omega_D}\frac{dn}{d\omega}\,\langle n\rangle\,\hbar\omega\,d\omega

Now, notice: for a photon gas the upper limit was infinity, because an (essentially) infinite range of frequencies was open to the system. We cannot assume this for lattice vibrations. So, we allow only \omega < \omega_D, that is \lambda > \lambda_D. Let's try to compute this cut-off.

Suppose the atoms are spaced by an amount d. Then the shortest wavelength that can be fully supported by the atoms is \lambda_D \approx 2d: trying to fit a shorter wavelength between adjacent atoms excites nothing, as there are no atoms in between to displace. Therefore, we can estimate the shortest wavelength that will be excited. The atomic spacing d is of order n^{-1/3}, the inverse cube-root of the number density, and \omega is of order u/\lambda (up to factors of 2\pi). Hence:

\omega_D \sim u\left(\frac{N}{V}\right)^{1/3}
We can calculate this exactly by noting that there should be exactly 3N modes. So:

3N = \int_0^{\omega_D}\frac{dn}{d\omega}\,d\omega = \frac{3V}{2\pi^2u^3}\,\frac{\omega_D^3}{3}

\Rightarrow\quad \omega_D = u\left(6\pi^2\,\frac{N}{V}\right)^{1/3}
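To get a feel for the size of this cut-off, here is a minimal numerical sketch; the number density and effective sound speed below are assumed values, chosen only to be roughly representative of a metal such as copper:

```python
import math

hbar = 1.0545718e-34   # J s
kB = 1.380649e-23      # J / K

# Illustrative values only (both numbers are assumptions for this example):
n = 8.5e28             # atoms per m^3
u = 2500.0             # effective speed of sound, m / s

omega_D = u * (6 * math.pi**2 * n)**(1.0 / 3.0)   # Debye cut-off frequency
theta_D = hbar * omega_D / kB                     # corresponding Debye temperature

print(f"omega_D = {omega_D:.2e} rad/s")   # ~4e13 rad/s
print(f"theta_D = {theta_D:.0f} K")       # a few hundred kelvin
```

The resulting Debye temperature \theta_D = \hbar\omega_D/k_B comes out at a few hundred kelvin, the right order of magnitude for tabulated Debye temperatures of common solids.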
Therefore, we have an expression for the Debye cut-off frequency \omega_D; notice that it is in good agreement with the previous estimate. So, going back to the calculation of the internal energy of a Debye solid, and inserting the expressions for the density of states and the BE distribution:

U_D = \frac{3V\hbar}{2\pi^2u^3}\int_0^{\omega_D}\frac{\omega^3\,d\omega}{e^{\hbar\omega/k_BT} - 1}
This integral is hard to do, and we are unable to express it in terms of the dimensionless integrals we have used for Bose gases (the upper limit is finite). So, let's compute the heat capacity directly:

C_D = \frac{\partial U}{\partial T}
= \frac{3V\hbar}{2\pi^2u^3}\int_0^{\omega_D}\omega^3\,\frac{e^{\hbar\omega/k_BT}}{\left(e^{\hbar\omega/k_BT} - 1\right)^2}\,\frac{\hbar\omega}{k_BT^2}\,d\omega

Now, put

x \equiv \frac{\hbar\omega}{k_BT}, \qquad x_D \equiv \frac{\hbar\omega_D}{k_BT}

Then:

C_D = \frac{3V\hbar}{2\pi^2u^3}\,\frac{(k_BT)^4}{\hbar^4}\,\frac{1}{T}\int_0^{x_D}\frac{x^4e^x}{(e^x - 1)^2}\,dx

This can be cleaned up, using the expression for \omega_D above, to:

C_D = 3Nk_B\,\frac{3}{x_D^3}\int_0^{x_D}\frac{x^4e^x}{(e^x - 1)^2}\,dx
Let us now look at the high and low temperature limits:
For high T, x is small over the whole integration range. Hence the exponential in the denominator is expanded as e^x \approx 1 + x, and the exponential in the numerator is just unity. Thus, the integral itself becomes:

\int_0^{x_D}\frac{x^4}{(1 + x - 1)^2}\,dx = \int_0^{x_D}x^2\,dx = \frac{1}{3}\,x_D^3

Hence, we see:

C_D \approx 3Nk_B\,\frac{3}{x_D^3}\,\frac{x_D^3}{3} = 3Nk_B

Therefore, the high-temperature heat capacity is a constant, C_D = 3Nk_B (the Dulong-Petit result).
At low temperatures, x_D \to \infty, so the upper limit of the integral goes to infinity. For large x the factor e^x/(e^x - 1)^2 behaves like e^{-x}, so the integral effectively becomes

\int_0^\infty x^4 e^{-x}\,dx

which is just a number, \gamma say (the exact value of the full integral \int_0^\infty x^4e^x/(e^x-1)^2\,dx is 4\pi^4/15). Then:

C_D \approx 3Nk_B\,\frac{3\gamma}{x_D^3} \propto T^3

Hence, we see that the low-temperature heat capacity goes as T^3. We find that experimental data sits on the Debye curve, rather than on that predicted by Einstein's approximation.

[Figure 2: Heat capacity predictions of Debye and Einstein; data sits on Debye's curve, as opposed to Einstein's.]
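As a numerical illustration of both limits, the following minimal sketch evaluates the Debye integral above for a few reduced temperatures T/\theta_D (the quadrature is just a simple midpoint rule):

```python
import math

def debye_heat_capacity_ratio(t):
    """Return C / (3 N kB) for a Debye solid at reduced temperature t = T / theta_D,
    by numerically evaluating the integral derived above (midpoint rule)."""
    xD = 1.0 / t
    n_steps = 20000
    dx = xD / n_steps
    integral = 0.0
    for i in range(n_steps):
        x = (i + 0.5) * dx
        integral += x**4 * math.exp(x) / (math.exp(x) - 1.0)**2 * dx
    return 3.0 * integral / xD**3

for t in (2.0, 1.0, 0.5, 0.1, 0.05):
    print(f"T/theta_D = {t:5.2f}   C/(3 N kB) = {debye_heat_capacity_ratio(t):.4f}")
# High T: the ratio tends to 1 (Dulong-Petit).
# Low T: the ratio tends to (4*pi^4/5) * (T/theta_D)^3, i.e. the T^3 law.
```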

A Colloquial Summary

This is a summary, based on the summary lecture.


The fundamental assumption pretty much everything is based upon is that for a closed system in
equilibrium, all microstates are equally likely. Now, if a particular macrostate has more microstates
than others, then it is more likely to be picked.
There may be more than one way (microstate) of finding the system with a particular number of
particles, energy, volume. These macroscopic quantities (N, U, V ) then define a macrostate.
A unique equilibrium macrostate usually exists: it is the macrostate with overwhelmingly the most microstates. Now, we define entropy via S = k_B\ln g, where g is the number of microstates in
a given macrostate. So, in equilibrium, S is a maximum; thus dS = 0, and therefore for any unconstrained macroscopic variable X we have \partial S/\partial X = 0 (this follows from expanding dS in terms of partial derivatives). Such derivatives are used to define temperature, chemical potential and pressure:

\frac{1}{T} \equiv \left(\frac{\partial S}{\partial U}\right)_{N,V}, \qquad \mu \equiv -T\left(\frac{\partial S}{\partial N}\right)_{U,V}, \qquad p \equiv T\left(\frac{\partial S}{\partial V}\right)_{U,N}
Now, let's consider an open system: one that can exchange particles and energy with a reservoir. We do this by considering our system of interest as part of a bigger system. Suppose our system of interest has particle number N_s and energy U_s. Then we have derived that the probability to find the system in such a state, in a single quantum state (microstate), is

P(N_s, U_s) = \frac{e^{(\mu N_s - U_s)/k_BT}}{\mathcal{Z}}

which is the Gibbs distribution.
From now on, we shall consider our systems to be composed of non-interacting particles. We consider
fermions and bosons. We can derive that the mean number of particles in a given quantum state is:

\langle n\rangle_{FD} = \frac{1}{e^{(\epsilon - \mu)/k_BT} + 1}, \qquad
\langle n\rangle_{BE} = \frac{1}{e^{(\epsilon - \mu)/k_BT} - 1}
These tell us what the average number of particles in a particular energy state is.

We have the classical limit where \langle n\rangle_{FD} \approx \langle n\rangle_{BE} \approx e^{(\mu - \epsilon)/k_BT}. The classical limit can be interpreted as the regime where the box defined by the de Broglie wavelength of a particle is very much smaller than the distance between the particles; equivalently, n \ll n_Q, where n_Q \equiv 1/\lambda_Q^3.
We write some general formulae for finding the particle number N, internal energy U of the system, and entropy S:

N = \int_0^\infty \frac{dn}{dk}\,\langle n\rangle\,dk

U = \int_0^\infty \frac{dn}{dk}\,\langle n\rangle\,\epsilon(k)\,dk

S = -k_B\sum_i p_i\ln p_i

The integral for N can be done, and then solved for the chemical potential \mu. We must specify whether to use the non-relativistic or the ultra-relativistic dispersion relation, \epsilon = \hbar^2k^2/2m or \epsilon = \hbar ck. We now consider fermions and bosons separately.

Fermions Fermions have anti-symmetric wavefunctions, which has the consequence of excluding
there ever being more than one fermion in a single particle energy state.
We have considered examples of electrons in metals and white dwarf stars.
We can use the approximation T = 0 to rewrite the particle distribution function. We say that the occupancy of (almost) all single-particle states below some \epsilon = \epsilon_F is total, and that (almost) no particles have energies above \epsilon_F. Hence the integral for the number of particles N has upper limit k_F, with \langle n\rangle = 1 in this range. From this we are able to derive equations for \epsilon_F and k_F, and use \epsilon_F = k_BT_F to find the Fermi temperature.
For T \neq 0, we imagine the step function crumbling over, to give the actual distribution. We use this idea in calculating low-temperature corrections to U: we say that the particles within k_BT of \epsilon_F move up in energy by an amount k_BT. This allows us to write

\Delta U \approx N\,\frac{k_BT}{\epsilon_F}\,k_BT

That the internal energy now depends on T gives the experimentally verified result of a linear heat capacity. Note that to zeroth order (the T = 0 step), U is independent of T and C = 0.

Bosons Bosons have symmetric wavefunctions, and hence have no restriction on the number in
any single particle energy state.
We see that if N is fixed then, as T falls, \mu rises. We have \mu < 0; however, if T keeps decreasing, \mu would have to go above zero (where zero is defined to coincide with the lowest energy state). When this happens, there would be a negative particle occupancy. To get around this, we say that below some critical temperature T_c we have \mu = 0. T_c is the minimum temperature for which we can ignore the ground-state occupation. We also find that below T_c the ground state is macroscopically occupied by particles. In fact, we define T_c by

N \approx \int_0^\infty \frac{dn}{d\epsilon}\,\frac{d\epsilon}{e^{\epsilon/k_BT_c} - 1}

Now, for T < T_c we are able to say that N = N_0 + N_e, where N_0 is the number of particles in the ground state and N_e is the number in excited states, with

N_e = \int_0^\infty \frac{dn}{d\epsilon}\,\frac{d\epsilon}{e^{\epsilon/k_BT} - 1}

Using these two integrals, we are able to calculate the number of particles in the ground state. We are also able to calculate T_c.
If we have a gas of photons in thermal equilibrium, we cannot expect the particle number to stay constant; however, we still have \mu = 0, from considerations of the entropy in equilibrium. This modifies the BE distribution to give the one appropriate to a gas of photons. We can easily compute the energy density of such a gas, and hence the power output per unit area, and can thus derive Stefan's law P = \sigma T^4. The distribution of this power over wavelengths (the spectral distribution) can
thus be calculated. We write u(\lambda)\,d\lambda as the energy per unit volume in a wavelength interval. We can write the same thing in terms of \epsilon:

u(\epsilon)\,d\epsilon = \frac{1}{V}\,\frac{dn}{d\epsilon}\,\langle n\rangle\,\epsilon\,d\epsilon

where \epsilon = h\nu and the particle distribution is the modified BE distribution with \mu = 0.
Now, we actually have that

\frac{U}{V} = \int u(\epsilon)\,d\epsilon

as the energy density. We are able to calculate the pressure exerted by a photon gas as being \frac{1}{3}\frac{U}{V}, which is the same as that due to ultra-relativistic electrons.

B Calculating the Density of States

The number of states dn of spinless particles in a thin shell in 3D k-space is given by the volume of the shell lying in the positive octant (which is \tfrac{1}{8} of the volume of the whole shell) divided by the volume taken up by one state, (\pi/L)^3. Thus:

dn = \frac{\tfrac{1}{8}\cdot 4\pi k^2\,dk}{(\pi/L)^3} = \frac{Vk^2}{2\pi^2}\,dk

We have used that L^3 = V. Hence, rearranging, we have the density of states, in k-space, of spinless particles:

\frac{dn}{dk} = \frac{Vk^2}{2\pi^2} \qquad (B.1)
This will be our starting point for all subsequent calculations. To get the density of states in energy-space, we require a relationship between the wavenumber k and energy \epsilon, and hence we must specify whether the gas is non-relativistic or ultra-relativistic (massless).
If the particles are fermions carrying an intrinsic spin s, then this is modified to:

\frac{dn}{dk} = (2s+1)\,\frac{Vk^2}{2\pi^2}
If the particles are photons, they carry spin 1; however, we actually use a factor of 2, as there are two polarisation states of the photon associated with each k-state. Hence, for photons:

\frac{dn}{dk} = 2\,\frac{Vk^2}{2\pi^2} = \frac{Vk^2}{\pi^2}

All subsequent calculations proceed assuming spinless particles.

B.1 Energy Space: Non-Relativistic

Suppose we want the density of states in energy space, that is, dn/d\epsilon. We use the chain rule of differentiation to write:

\frac{dn}{d\epsilon} = \frac{dn}{dk}\,\frac{dk}{d\epsilon}

The link between wavenumber k and energy \epsilon, for non-relativistic particles, is:

\epsilon = \frac{\hbar^2k^2}{2m} \qquad (B.2)
Hence, rearranging:

k = \left(\frac{2m\epsilon}{\hbar^2}\right)^{1/2}

Therefore:

\frac{dk}{d\epsilon} = \left(\frac{2m}{\hbar^2}\right)^{1/2}\frac{1}{2\sqrt{\epsilon}}

Now, we must also put dn/dk in terms of \epsilon, which is done using the non-relativistic expression (B.2):

\frac{dn}{dk} = \frac{Vk^2}{2\pi^2} = \frac{V}{2\pi^2}\,\frac{2m\epsilon}{\hbar^2}

Therefore, putting this all together:

\frac{dn}{d\epsilon} = \frac{dn}{dk}\,\frac{dk}{d\epsilon}
= \frac{V}{2\pi^2}\,\frac{2m\epsilon}{\hbar^2}\left(\frac{2m}{\hbar^2}\right)^{1/2}\frac{1}{2\sqrt{\epsilon}}
= \frac{V}{4\pi^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\sqrt{\epsilon}

So, the density of states, in energy-space, of a gas of non-relativistic particles is:

\frac{dn}{d\epsilon} = \frac{V}{4\pi^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\sqrt{\epsilon} \qquad (B.3)

B.2 Energy Space: Ultra-Relativistic

If the gas is ultra-relativistic, we must use the relation \epsilon = pc, hence:

\epsilon = \hbar ck \qquad (B.4)

Thus, we have the chain-rule expression:

\frac{dn}{d\epsilon} = \frac{dn}{dk}\,\frac{dk}{d\epsilon}

where we now see that

\frac{dk}{d\epsilon} = \frac{1}{\hbar c}

We also have that

\frac{dn}{dk} = \frac{Vk^2}{2\pi^2} = \frac{V\epsilon^2}{2\pi^2(\hbar c)^2}

Therefore, the density of states for an ultra-relativistic gas of spinless particles is:

\frac{dn}{d\epsilon} = \frac{V\epsilon^2}{2\pi^2(\hbar c)^3} \qquad (B.5)

C Deriving FD & BE Distributions

The Gibbs probability distribution is:

P(N, U) = \frac{e^{\beta(\mu N - U)}}{\mathcal{Z}}

where \beta \equiv 1/k_BT. This is the probability to find the system with N particles and internal energy U.
Now, if we have energy levels \epsilon_i, with n_i particles occupying level i, then:

N = \sum_i n_i = n_1 + n_2 + n_3 + \ldots

U = \sum_i n_i\epsilon_i = n_1\epsilon_1 + n_2\epsilon_2 + n_3\epsilon_3 + \ldots

That is, n_i is the number of crosses on a particular energy level. Hence, the Gibbs distribution becomes:

P(N, U) = \frac{e^{\beta\left(\mu\sum_i n_i - \sum_i n_i\epsilon_i\right)}}{\mathcal{Z}}
= \frac{e^{\beta\sum_i n_i(\mu - \epsilon_i)}}{\mathcal{Z}}
The grand partition function \mathcal{Z} is the sum over the n_i taking on all allowed values (i.e. there being 0, 1, 2, 3, \ldots particles in energy level 1, and again for every other energy level). That is:

\mathcal{Z} = \sum_{\{n_i\}} e^{\beta\sum_i n_i(\mu - \epsilon_i)}
= \sum_{\{n_i\}} e^{\beta(\mu n_1 - n_1\epsilon_1) + \beta(\mu n_2 - n_2\epsilon_2) + \beta(\mu n_3 - n_3\epsilon_3) + \ldots}
= \sum_{\{n_i\}} \prod_i e^{\beta(\mu n_i - n_i\epsilon_i)}

So, P(N, U) can be written:

P(N, U) = \frac{e^{\beta\sum_i n_i(\mu - \epsilon_i)}}{\sum_{\{n_i\}}\prod_i e^{\beta(\mu n_i - n_i\epsilon_i)}}
= \frac{\prod_i e^{\beta n_i(\mu - \epsilon_i)}}{\sum_{\{n_i\}}\prod_i e^{\beta(\mu n_i - n_i\epsilon_i)}}
= \frac{e^{\beta n_1(\mu - \epsilon_1)}\,e^{\beta n_2(\mu - \epsilon_2)}\,e^{\beta n_3(\mu - \epsilon_3)}\ldots}{\sum_{n_1}e^{\beta n_1(\mu - \epsilon_1)}\,\sum_{n_2}e^{\beta n_2(\mu - \epsilon_2)}\,\sum_{n_3}e^{\beta n_3(\mu - \epsilon_3)}\ldots}
= \frac{e^{\beta n_1(\mu - \epsilon_1)}}{\sum_{n_1}e^{\beta n_1(\mu - \epsilon_1)}}\cdot\frac{e^{\beta n_2(\mu - \epsilon_2)}}{\sum_{n_2}e^{\beta n_2(\mu - \epsilon_2)}}\cdot\frac{e^{\beta n_3(\mu - \epsilon_3)}}{\sum_{n_3}e^{\beta n_3(\mu - \epsilon_3)}}\ldots
= P(n_1, \epsilon_1)\,P(n_2, \epsilon_2)\,P(n_3, \epsilon_3)\ldots

So, the probability to find the system with N particles and internal energy U is the product of the probabilities of finding n_i particles in each energy level \epsilon_i.

So, the average number of particles in energy level \epsilon_i is given by:

\langle n(\epsilon_i)\rangle = \sum_{n_i = 0, 1, 2, \ldots} P(n_i, \epsilon_i)\,n_i

where

P(n_i, \epsilon_i) = \frac{e^{\beta n_i(\mu - \epsilon_i)}}{\sum_{n_i} e^{\beta n_i(\mu - \epsilon_i)}}

Let us suppose that we only allow either 0 or 1 particles in each energy state (the fermionic case); hence:

\langle n(\epsilon_i)\rangle = \sum_{n_i = 0, 1} P(n_i, \epsilon_i)\,n_i
= 0\cdot P(0, \epsilon_i) + 1\cdot P(1, \epsilon_i)
= \frac{e^{\beta(\mu - \epsilon_i)}}{e^0 + e^{\beta(\mu - \epsilon_i)}}
= \frac{e^{\beta(\mu - \epsilon_i)}}{1 + e^{\beta(\mu - \epsilon_i)}}
= \frac{1}{e^{\beta(\epsilon_i - \mu)} + 1}

We have thus derived the Fermi-Dirac distribution.
Suppose instead that we allow any number of particles in each energy level; that is, n_i can take on any value 0, 1, 2, \ldots, \infty. Then:

\langle n(\epsilon_i)\rangle = \sum_{n_i = 0}^{\infty} P(n_i, \epsilon_i)\,n_i
= \frac{0 + e^{\beta(\mu - \epsilon_i)} + 2e^{2\beta(\mu - \epsilon_i)} + \ldots}{1 + e^{\beta(\mu - \epsilon_i)} + e^{2\beta(\mu - \epsilon_i)} + \ldots}

Here, we have noted that the denominator (the grand partition function for this level) is a common factor in every term. To evaluate this, write a \equiv e^{\beta(\mu - \epsilon_i)}; the ratio is then

\frac{a + 2a^2 + 3a^3 + \ldots}{1 + a + a^2 + a^3 + \ldots}

Now, the geometric sum

\sum_{i=0}^{\infty} a^i = \frac{1}{1-a}

is a standard result, and differentiating it with respect to a (then multiplying by a) gives \sum_n n\,a^n = a/(1-a)^2. Using these, we find:

\langle n(\epsilon_i)\rangle = \frac{a/(1-a)^2}{1/(1-a)} = \frac{a}{1-a} = \frac{1}{e^{\beta(\epsilon_i - \mu)} - 1}

We have thus derived the Bose-Einstein distribution.
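As a quick sanity check of this result, the defining sums can also be evaluated numerically and compared against the closed form (a minimal sketch; the value of \beta(\epsilon - \mu) is arbitrary):

```python
import math

def n_BE_from_sums(x, n_max=2000):
    """<n> computed directly from the defining sums  (sum n a^n) / (sum a^n),
    with a = exp(-x) and x = beta*(epsilon - mu), truncated after n_max terms."""
    a = math.exp(-x)
    numerator = sum(n * a**n for n in range(n_max))
    denominator = sum(a**n for n in range(n_max))
    return numerator / denominator

x = 0.5   # an arbitrary value of beta*(epsilon - mu)
print(n_BE_from_sums(x))            # direct evaluation of the sums
print(1.0 / (math.exp(x) - 1.0))    # closed Bose-Einstein form, 1/(e^x - 1)
```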

