January 2, 2008
Contents

3 Identical Particles
  3.1 Distinguishable Particles
  3.2 Indistinguishable Particles
  3.3 Pauli Principle
5 Classical Limit
  5.1 Chemical Potential
    5.1.1 Internal Energy & Heat Capacity
  5.2 Entropy of an Ideal Gas
6 Fermi Gases
  6.1 Ideal Fermi Gas at T = 0
  6.2 Density of States
    6.2.1 3D Density of States
    6.2.2 Low T Corrections to N, U
  6.3 Example: Electrons in Metals
  6.4 Example: Liquid 3He
  6.5 Example: Electrons in Stars
    6.5.1 White Dwarf Stars
7 Bose Gases
  7.1 Black Body Radiation
  7.2 Spectral Energy Density
    7.2.1 Pressure of a Photon Gas
A Colloquial Summary
1 Einstein's Model of a Solid
Assume: each atom in a solid vibrates independently about an equilibrium position. The vibrations are assumed to be simple harmonic, and all of the same frequency.
In a 3D solid, each atom can oscillate in 3 independent directions; i.e. if there are N oscillators, then there are N/3 atoms.
Our system is a collection of N independent oscillators. Each oscillator has energy:

    \epsilon_i = \left( n_i + \tfrac{1}{2} \right) \hbar\omega    (1.1)

Measuring energies relative to the ground state:

    \epsilon_i = n_i \hbar\omega    (1.2)

The total energy of the solid is then:

    U = (n_1 + n_2 + n_3 + \ldots + n_N) \hbar\omega    (1.3)
      = n \hbar\omega    (1.4)

The number of ways of distributing n quanta among N oscillators is:

    g(n, N) = \frac{(N + n - 1)!}{n! \, (N - 1)!}    (1.5)
To summarise: g(n, N) is the total number of quantum states. The probability that an Einstein solid is in any particular state is given by 1/g(n, N). These ideas lead directly to the definition and concept of entropy, and to the second law of thermodynamics.
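As a quick numerical check of (1.5), the multiplicity is just a binomial coefficient; a minimal Python sketch (the function name is ours):

```python
from math import comb

def multiplicity(n, N):
    """g(n, N) = (N + n - 1)! / (n! (N - 1)!): the number of ways to
    distribute n energy quanta among N oscillators."""
    return comb(N + n - 1, n)

# A tiny Einstein solid: 3 quanta shared among 4 oscillators.
g = multiplicity(3, 4)
print(g)       # 20 microstates
print(1 / g)   # probability of any one particular microstate
```

For any macroscopic N the multiplicity is astronomically large, which is why we work with ln g.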
To get an idea of temperature, we'll consider two Einstein solids connected to each other so that they can exchange energy.
Let U_A = n_A \hbar\omega be the energy of solid A, and U_B = n_B \hbar\omega the energy of solid B. Hence, the total energy U = U_A + U_B is constant.
Let N = N_A + N_B be the total number of oscillators, and n = n_A + n_B the total number of quanta. Then the number of possible quantum states of A + B is:

    g_{A+B}(n, N) = \sum_{n_A} g(n_A, N_A) \, g(n - n_A, N - N_A)    (1.10)
Intuitively, we say that provided N_A and N_B are large enough, the system will settle down into a macrostate with n_A = \hat{n}_A, the most probable value. Remarkably, this is already present in our analysis: the sum (1.10) is completely dominated by its largest term,

    g_{A+B}(n, N) \approx g(\hat{n}_A, N_A) \, g(\hat{n}_B, N_B)    (1.11)

where n_B = n - n_A.
To locate this maximum, we use Stirling's approximation:

    \ln n! \approx n \ln n - n    (1.12)
At equilibrium, the statistical weight g_A g_B is maximised:

    d(g_A g_B) = 0    (1.15)

    \frac{d}{dn_A}(g_A g_B) \, dn_A = 0    (1.16)

    \frac{\partial g_A}{\partial n_A} g_B \, dn_A + g_A \frac{\partial g_B}{\partial n_B} \, dn_B = 0    (1.17)

Now, we see that dn_A = -dn_B (as n is fixed). Hence, we have:

    \frac{\partial g_A}{\partial n_A} g_B - g_A \frac{\partial g_B}{\partial n_B} = 0    (1.18)

    \frac{\partial g_A}{\partial n_A} g_B = g_A \frac{\partial g_B}{\partial n_B}    (1.19)

    \frac{1}{g_A} \frac{\partial g_A}{\partial n_A} = \frac{1}{g_B} \frac{\partial g_B}{\partial n_B}    (1.20)
Hence, we find that something is equal in equilibrium. We define temperature by:

    \frac{1}{k_B T} = \frac{1}{g} \frac{\partial g}{\partial U}    (1.21)

                    = \frac{\partial \ln g}{\partial U}    (1.22)

This definition gives T units of Kelvin, and accords with our desire to have heat flow from hotter to cooler bodies.
Let a hotter body lose energy \Delta U > 0. Then, from (1.22), we can write:

    \Delta \ln g_{hot} = -\frac{\Delta U}{k_B T_{hot}}    (1.23)

    \Delta \ln g_{cold} = +\frac{\Delta U}{k_B T_{cold}}    (1.24)

    \Delta \ln(g_{hot} \, g_{cold}) = \Delta \ln g_{hot} + \Delta \ln g_{cold} > 0    (1.25)

which should be so, as the system moves towards equilibrium. Thus:

    \Delta \ln g_{hot} + \Delta \ln g_{cold} = \frac{\Delta U}{k_B} \left( \frac{1}{T_{cold}} - \frac{1}{T_{hot}} \right)    (1.26)

which is > 0 if T_{cold} < T_{hot}.
The necessity that ln g increases as the system evolves towards the equilibrium state (i.e. the state with maximum g) is just the second law of thermodynamics:

    S = k_B \ln g    (1.27)

where S is entropy. As we have that \Delta \ln g > 0, this implies that \Delta S > 0: systems evolve to states of higher statistical weight.
Aside: The law of increase of entropy appears to signal a violation of time-reversal invariance. This is not actually so: the microscopic dynamics remain reversible; it is merely overwhelmingly improbable for, say, the molecules of a gas to retrace their paths.
Although our discussion followed an Einstein solid, it should be clear that it is of much broader generality.
So we have:

    S = k_B \ln g    (1.28)

    \frac{1}{T} = \frac{\partial S}{\partial U}    (1.29)
Since we worked hard to get g(n, N), we may as well make use of it. Let's predict the heat capacity of a solid.
    \frac{S}{k_B} = \ln g    (1.30)

                  = (N + n)\ln(N + n) - n \ln n - N \ln N    (1.31)

    \frac{1}{T} = \frac{\partial S}{\partial U} = \frac{1}{\hbar\omega} \frac{\partial S}{\partial n}    (1.32)

    \ln\frac{N + n}{n} = \frac{\hbar\omega}{k_B T}    (1.33)

    n(T) = \frac{N}{e^{\hbar\omega / k_B T} - 1}    (1.34)
Hence, we have the energy of the solid as a function of temperature via n(T), remembering that U = n\hbar\omega. Hence, the heat capacity C is:

    C = \frac{\partial U}{\partial T}
      = \hbar\omega \left( \frac{\partial n}{\partial T} \right)_N
      = N \hbar\omega \, \frac{\hbar\omega}{k_B T^2} \, \frac{e^{\hbar\omega/k_B T}}{\left( e^{\hbar\omega/k_B T} - 1 \right)^2}
      = N k_B \, \frac{X^2 e^X}{(e^X - 1)^2}, \qquad X \equiv \frac{\hbar\omega}{k_B T}

Hence, for X << 1 (i.e. high T), we recover the Dulong-Petit law: C = N k_B. For X >> 1 (i.e. low T), we have C = N k_B X^2 e^{-X}. So, graphically, we have a curve starting from 0 at T = 0, which increases to a constant at high temperatures.
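The two limits can be sanity-checked numerically; a minimal Python sketch (function name ours):

```python
import math

def einstein_heat_capacity(X):
    """Heat capacity per oscillator, in units of k_B:
    C / (N k_B) = X^2 e^X / (e^X - 1)^2, with X = hbar*omega / (k_B T)."""
    return X**2 * math.exp(X) / (math.exp(X) - 1.0)**2

# High-T (X << 1) limit: Dulong-Petit, C -> N k_B.
print(einstein_heat_capacity(0.01))   # very close to 1
# Low-T (X >> 1) limit: C ~ X^2 e^{-X}, exponentially suppressed.
print(einstein_heat_capacity(20.0))   # tiny
```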
We can go further (than the previously closed systems), and figure out the probability that a system
is in a particular quantum state.
If the system is closed, then we know the answer: the probability of the system being in any one state is just 1/g.
Generally however, we are interested in systems which are not isolated (i.e. are not closed).
Suppose we have a really big box (denoted the reservoir R), which is closed; and a smaller box (our system S) within R, which is allowed to exchange particles and energy with R.
At equilibrium, the total number of states available to the system as a whole is:

    g_T = g_S \, g_R    (2.1)

where g_i is the number of accessible states of system i.
In equilibrium, g_T is a maximum. As g_T = g_T(U_S, N_S), we can hence write its differential:

    dg_T = 0    (2.2)

         = \frac{\partial g_T}{\partial U_S} dU_S + \frac{\partial g_T}{\partial N_S} dN_S    (2.3)

And, as g_T = g_R g_S, we use the product rule to get:

    \frac{\partial g_R}{\partial U_S} g_S \, dU_S + g_R \frac{\partial g_S}{\partial U_S} \, dU_S + \frac{\partial g_R}{\partial N_S} g_S \, dN_S + g_R \frac{\partial g_S}{\partial N_S} \, dN_S = 0    (2.4)

    \left( \frac{\partial g_R}{\partial U_S} g_S + g_R \frac{\partial g_S}{\partial U_S} \right) dU_S + \left( \frac{\partial g_R}{\partial N_S} g_S + g_R \frac{\partial g_S}{\partial N_S} \right) dN_S = 0    (2.5)
Since dU_S and dN_S are independent, each bracket in (2.5) must vanish separately; call these two conditions (2.6) and (2.7). Now, (2.6) gives us the previous definition of temperature: T_R = T_S in equilibrium. The other equation, (2.7), will give us that the chemical potential is the same in equilibrium:

    \mu_R = \mu_S    (2.8)

where we define the chemical potential:

    \mu \equiv -T \frac{\partial S}{\partial N} = -\frac{T k_B}{g} \frac{\partial g}{\partial N}    (2.9)

As S = k_B \ln g, we have that \frac{\partial S}{\partial N} = k_B \frac{1}{g} \frac{\partial g}{\partial N}.
So far, we have assumed that the volume of our system is constant. If this is not the case, all we have is g(U, N, V), and hence an added term \frac{\partial g}{\partial V} dV (for both S and R) in (2.5). This will give us, in equilibrium:

    \frac{1}{g_R} \frac{\partial g_R}{\partial V_R} = \frac{1}{g_S} \frac{\partial g_S}{\partial V_S}    (2.10)

which leads us to the definition of pressure:

    p = T \frac{\partial S}{\partial V}    (2.11)

and that p_S = p_R in equilibrium.
Now, our goal is to find P(N_S, U_S), the probability to find the system in a particular quantum state (i.e. g_S = 1).
So, Taylor expanding the reservoir entropy:

    S_R(N_T - N_S, U_T - U_S) = S_R(N_T, U_T) - N_S \frac{\partial S_R}{\partial N} - U_S \frac{\partial S_R}{\partial U}    (2.14)

                              = S_R(N_T, U_T) + \frac{\mu N_S}{T} - \frac{U_S}{T}    (2.15)
Neglecting terms in higher differentials: as the reservoir is big enough, its temperature T and chemical potential \mu are independent of N_S, U_S. So, we have:

    P(N_S, U_S) \propto \exp\left( \frac{S_R(N_T, U_T)}{k_B} + \frac{\mu N_S}{k_B T} - \frac{U_S}{k_B T} \right)    (2.16)

Now, we notice that the first term in the exponential is a constant (or assumed to be so constant that it is!); also, we now drop the subscripts, and write U_S = \epsilon_S = \epsilon. This gives us:

    P(N, \epsilon) \propto \exp\left( \frac{\mu N - \epsilon}{k_B T} \right)    (2.17)

So the probability to find the system as a whole in a particular state with energy \epsilon and number of particles N is given by:

    P(N, \epsilon) = \frac{1}{Z} \exp\left( \frac{\mu N - \epsilon}{k_B T} \right)    (2.21)

where Z normalises the distribution. Note, if the system has more than one type of particle, then the exponent changes thus:

    \exp\left( \frac{\mu N - \epsilon}{k_B T} \right) \to \exp\left( \frac{\mu_1 N_1 + \mu_2 N_2 + \ldots - \epsilon}{k_B T} \right)    (2.22)
2.0.1 Example: CO Poisoning

Suppose that our system of interest is a haemoglobin molecule, in one of 3 states: unbound (1); bound to O2 (2); bound to CO (3).
So, we can write the particle numbers for each state in terms of (N_{Hb}, N_{O_2}, N_{CO}), with their associated energies (given):

    (1): (1, 0, 0),  \epsilon_1 = 0
    (2): (1, 1, 0),  \epsilon_2 = -0.7 eV
    (3): (1, 0, 1),  \epsilon_3 = -0.85 eV

The chemical potentials are: \mu_{Hb} = don't care!, \mu_{O_2} = -0.6 eV, \mu_{CO} = -0.7 eV (again, values given); and T = 310 K.
So, to calculate Z, we sum the Gibbs factor over the three states:

    Z = \exp\left( \frac{\mu_{Hb} N_1^{Hb} + \mu_{O_2} N_1^{O_2} + \mu_{CO} N_1^{CO} - \epsilon_1}{k_B T} \right)    (2.25)

      + \exp\left( \frac{\mu_{Hb} N_2^{Hb} + \mu_{O_2} N_2^{O_2} + \mu_{CO} N_2^{CO} - \epsilon_2}{k_B T} \right)    (2.26)

      + \exp\left( \frac{\mu_{Hb} N_3^{Hb} + \mu_{O_2} N_3^{O_2} + \mu_{CO} N_3^{CO} - \epsilon_3}{k_B T} \right)    (2.27)

    = e^{\mu_{Hb}/k_B T} + e^{(\mu_{Hb} + \mu_{O_2} + 0.7\,eV)/k_B T} + e^{(\mu_{Hb} + \mu_{CO} + 0.85\,eV)/k_B T}    (2.28)

The common factor e^{\mu_{Hb}/k_B T} cancels in every probability, so we may drop it, leaving the relative weights:

    Z = 1 + 40 + 120 = 161    (2.29)
Hence, we can write the probability of each state (1), (2) or (3) occurring:

    (1): Hb is unbound:     P_1 = e^{\mu_{Hb}/k_B T} / Z = 1/161.
    (2): Hb is bound to O2: P_2 = e^{(\mu_{Hb} + \mu_{O_2} + 0.7\,eV)/k_B T} / Z = 40/161 \approx 25\%.
    (3): Hb is bound to CO: P_3 = e^{(\mu_{Hb} + \mu_{CO} + 0.85\,eV)/k_B T} / Z = 120/161 \approx 75\%.
I find the following formula easier to use, to write the probability of finding the system in a state
with energy j , with N particles:
3 Identical Particles
We shall be looking at cases where the system is a bunch of particles. We need to be able to count quantum states for such a system. We start with a simple system.
If we have 2 distinguishable particles (A, B) and 2 accessible quantum states (\psi_1, \psi_2), then we can have 4 configurations: (AB, 0), (0, AB), (A, B), (B, A). Hence, 4 allowed states of the system.
If we again have 2 accessible states, but 2 identical particles, the possible configurations are: (AA, 0), (0, AA), (A, A). Hence, 3 allowed states. This is the case if we allow the two particles to be in the same state at the same time; we denote these types of particles bosons.
The other possibility is that we don't allow the two particles to be in the same state. Hence, the only configuration is (A, A). We denote these types of particles fermions. That there is only one state allowed is a consequence of the Pauli Principle, which we shall now derive.
Let \phi_i(x) be an energy eigenstate for a single particle in the system, so that:

    \hat{H}(x) \, \phi_i(x) = E_i \, \phi_i(x)

If our two particles do not interact, then \phi_i(x)\phi_j(y) is an energy eigenstate of the two-particle system, with E = E_i + E_j:

    \left[ \hat{H}(x) + \hat{H}(y) \right] \phi_i(x)\phi_j(y) = (E_i + E_j) \, \phi_i(x)\phi_j(y)

If the particles are indistinguishable, then no observable can depend upon which particle is where. Hence, we have that observables should be unchanged if we swap the positions x \leftrightarrow y:

    \langle \hat{O} \rangle = \int \psi^*(x, y) \, \hat{O}(x, y) \, \psi(x, y) \, d\tau
Now, we want:

    |\psi_{ij}(x, y)|^2 = |\psi_{ij}(y, x)|^2

This implies that:

    \psi_{ij}(x, y) = e^{i\theta} \, \psi_{ij}(y, x)

where swapping over the spatial coordinates has the effect of picking up some phase e^{i\theta}. Now, if we swap back:

    \psi_{ij}(x, y) = e^{i\theta} e^{i\theta} \, \psi_{ij}(x, y)
    \Rightarrow e^{2i\theta} = 1
    \Rightarrow e^{i\theta} = \pm 1

So, we have that:

    \psi_{ij}(x, y) = \pm \psi_{ij}(y, x)

Now, there are two linear combinations of \phi_i(x)\phi_j(y) and \phi_i(y)\phi_j(x) which satisfy this requirement:

    \psi_{ij}(x, y) = \phi_i(x)\phi_j(y) \pm \phi_i(y)\phi_j(x)    (3.1)

If the positive sign is taken, then the wavefunction is symmetric (under interchange of the two identical particles); we classify this type as bosons. If the negative sign is taken, then the wavefunction is anti-symmetric, and we classify these as fermions.
We see that the Pauli principle is a direct consequence of the symmetry property for fermions:
If we have 2 identical fermions in the same state i, then:

    \psi_{ii}(x, y) = \phi_i(x)\phi_i(y) - \phi_i(y)\phi_i(x) = 0
    \Rightarrow |\psi_{ii}(x, y)|^2 = 0

That is, there is no probability for the system to be found in such a state. Therefore, two identical fermions cannot be in the same quantum state. There is no such problem for bosons.
This is actually a pretty weird conclusion. If we have the idea that an electron's wavefunction is never zero (just really small) anywhere, then any two electrons in the universe must be treated as correlated, and we must conclude that these two electrons can never be in the exact same state. That is, we cannot treat an electron sitting on the Earth independently of one on the other side of the galaxy: these two electrons simply can never be in the same state. We end up concluding that they are allowed to sit very, very close to being in the same state, but not quite. This can be shown in potential-well arguments: there is a very fine difference between the ground states of two electrons whose wavefunctions overlap.
We ought to conclude by stating that bosons have integer spin, and fermions half-integer. This is
proved using relativistic quantum mechanics, and is hard to do!
If a collection of particles have an odd number of fermions, then the system is fermionic; even
number of fermions gives a bosonic system.
state.
A single state of S is specified by the set \{n_i, \epsilon_i\}, where n_i is the number of particles in the energy level \epsilon_i. So:

    P(\{n_i, \epsilon_i\}) = \frac{\exp[\mu(n_1 + n_2 + \ldots)/kT - (\epsilon_1 n_1 + \epsilon_2 n_2 + \ldots)/kT]}{\sum_{n_1, n_2, \ldots} \exp[\mu(n_1 + n_2 + \ldots)/kT - (\epsilon_1 n_1 + \epsilon_2 n_2 + \ldots)/kT]}    (4.1)

    = \frac{e^{n_1(\mu - \epsilon_1)/kT} \, e^{n_2(\mu - \epsilon_2)/kT} \cdots}{\sum_{n_1} e^{n_1(\mu - \epsilon_1)/kT} \, \sum_{n_2} e^{n_2(\mu - \epsilon_2)/kT} \cdots}    (4.2)

    = P_1(n_1) \, P_2(n_2) \cdots    (4.3)

where:

    P_i(n_i) \equiv \frac{e^{n_i(\mu - \epsilon_i)/kT}}{\sum_{n_i} e^{n_i(\mu - \epsilon_i)/kT}}

Thus, P_i(n_i) is the probability to find n_i particles in energy level \epsilon_i. What we have done is to show that the probability to find the system in a particular state is just the product of the probabilities of finding particular numbers of particles in the individual levels.
Now, we can figure out the mean number of particles in a particular energy level:

    \langle n(\epsilon_i) \rangle = \sum_{n_i} P_i(n_i) \, n_i
Here, we just consider the fermionic case. We know that we can only have either one or zero particles in each state. Thus, the sum over n is just for n = 0, 1. Hence:

    \langle n(\epsilon) \rangle_{FD} = \sum_{n=0,1} P(n) \, n    (4.4)

    = \frac{0 + e^{(\mu - \epsilon)/kT}}{1 + e^{(\mu - \epsilon)/kT}}    (4.5)

    = \frac{1}{e^{(\epsilon - \mu)/kT} + 1}    (4.6)

Hence:

    \langle n(\epsilon) \rangle_{FD} = \frac{1}{e^{(\epsilon - \mu)/kT} + 1}    (4.7)

where \langle n(\epsilon) \rangle_{FD} is called the Fermi-Dirac distribution.
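The key property of (4.7), that the occupancy never exceeds 1 and equals 1/2 exactly at \epsilon = \mu, is easy to check numerically; a minimal Python sketch (arbitrary energy units):

```python
import math

def fermi_dirac(eps, mu, kT):
    """Mean occupancy of a single-particle level at energy eps:
    <n> = 1 / (exp((eps - mu)/kT) + 1)."""
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

print(fermi_dirac(1.0, 1.0, 0.05))   # exactly 0.5 at eps = mu
print(fermi_dirac(0.5, 1.0, 0.05))   # ~1: well below mu, the level is full
print(fermi_dirac(1.5, 1.0, 0.05))   # ~0: well above mu, the level is empty
```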
For bosons, any occupancy is allowed, and the same sum gives:

    \langle n(\epsilon) \rangle_{BE} = \frac{0 + e^{(\mu - \epsilon)/kT} + 2e^{2(\mu - \epsilon)/kT} + \ldots}{1 + e^{(\mu - \epsilon)/kT} + e^{2(\mu - \epsilon)/kT} + \ldots}    (4.8)

Now, if we define:

    a \equiv e^{(\mu - \epsilon)/kT}

we see that the denominator is a geometric series:

    1 + a + a^2 + a^3 + \ldots = \frac{1}{1 - a}

and that the top is its derivative, times a:

    a + 2a^2 + 3a^3 + \ldots = \frac{a}{(1 - a)^2}

Hence:

    \langle n(\epsilon) \rangle_{BE} = \frac{a}{1 - a} = \frac{1}{e^{(\epsilon - \mu)/kT} - 1}

which is the Bose-Einstein distribution. In a gas of particles with spin s, the mean number of particles per state is hence given by:

    \langle n(\epsilon) \rangle = \frac{2s + 1}{e^{(\epsilon - \mu)/kT} \mp 1}

with the - sign for bosons and the + sign for fermions.
5 Classical Limit

Figure 1: Graph showing how the three distributions change. Notice that the Fermi-Dirac distribution stays below 1.
We can develop insight into \mu by computing the mean number of particles in our system as a whole.
Suppose we have N particles within a big cube with sides of length L. So, we have:

    N = \sum_i \langle n(\epsilon_i) \rangle = \sum_i \exp\left( \frac{\mu - \epsilon_i}{kT} \right)

using the classical (Boltzmann) limit of the distribution. For a particle in the box, the energy levels are:

    \epsilon(n_1, n_2, n_3) = \frac{\hbar^2 \pi^2}{2mL^2} \left( n_1^2 + n_2^2 + n_3^2 \right)

where the integers n_i run from 1, 2, 3, \ldots, \infty. Hence we have that the total number of particles in the system is:

    N = e^{\mu/kT} \sum_{n_1, n_2, n_3} \exp\left( -\frac{\epsilon(n_1, n_2, n_3)}{kT} \right)

The terms with \epsilon \lesssim kT dominate, i.e. those with:

    n_1^2 + n_2^2 + n_3^2 \lesssim \frac{2mL^2 kT}{\hbar^2 \pi^2}
Going back to evaluation of the summation: as we find that n_i >> 1, we will make the approximation of making the energy levels continuous:

    \sum_{n_1, n_2, n_3} \exp\left( -\frac{\epsilon(n_1, n_2, n_3)}{kT} \right) \approx \int\!\!\int\!\!\int dn_1 \, dn_2 \, dn_3 \, \exp\left( -\frac{\epsilon(n_1, n_2, n_3)}{kT} \right)

Each of the three Gaussian integrals can be done, giving N = e^{\mu/kT} n_Q V, where n_Q \equiv (mkT/2\pi\hbar^2)^{3/2} is the quantum concentration. Rearranging, we get:

    \mu = kT \ln\left( \frac{n}{n_Q} \right)    (5.1)
The same result can be seen via the de Broglie wavelength: a particle with thermal energy \sim kT has momentum p \sim \sqrt{mkT}. Therefore, \lambda_Q \sim \frac{h}{\sqrt{mkT}}. Now, taking n_Q \sim \frac{1}{\lambda_Q^3}, we have hence achieved the previous value of n_Q (up to numerical factors).
Hence, we may say that the quantum concentration n_Q is that at which particles occupy boxes the size of their de Broglie wavelengths.
Comments

- Don't forget the spin multiplicity factor. If the gas particles have spin s, then n = (2s + 1) \, e^{\mu/kT} \, n_Q, and hence \mu = kT \ln\left( \frac{n}{(2s + 1) n_Q} \right).

- Actual values of \mu depend on what we choose as the zero of the energy scale. If \epsilon = \epsilon_0 + \epsilon(n_1, n_2, n_3), then the change \mu \to \mu + \epsilon_0 would keep everything the same. So \mu = \epsilon_0 + kT \ln\left( \frac{n}{n_Q} \right).
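Formula (5.1) is easy to evaluate for a real gas; a minimal Python sketch, with illustrative numbers for helium near room temperature (the numbers are ours, not from the notes):

```python
import math

hbar = 1.054571817e-34   # J s
k_B  = 1.380649e-23      # J/K

def quantum_concentration(m, T):
    """n_Q = (m k T / (2 pi hbar^2))^(3/2), in m^-3."""
    return (m * k_B * T / (2 * math.pi * hbar**2)) ** 1.5

def chemical_potential(n, m, T):
    """mu = k T ln(n / n_Q), in joules; valid in the classical regime n << n_Q."""
    return k_B * T * math.log(n / quantum_concentration(m, T))

m_He = 6.64e-27          # kg
n = 2.5e25               # m^-3, roughly an ideal gas at 1 atm, 300 K
mu = chemical_potential(n, m_He, 300.0)
print(mu / 1.602e-19, "eV")   # negative, as expected for a classical gas
```

The negative result reflects n << n_Q: adding a particle to a dilute classical gas costs no energy but gains a lot of entropy.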
We have so far assumed that the gas is monatomic. Now, if we give the particles a set \{j\} of internal quantum numbers (e.g. angular momentum), then we can find the new chemical potential \mu. We proceed by finding the average total number of particles in the system, which is just the sum of the mean occupancies of the energy states:

    N = \sum_i n_i
      = \sum_i e^{(\mu - \epsilon_i)/kT}
      = e^{\mu/kT} \sum_i e^{-\epsilon_i/kT}
      = e^{\mu/kT} Z
      = \frac{n}{n_Q} Z

where we have defined the partition function Z \equiv \sum_i e^{-\epsilon_i/kT}. Now, from the definition n = N/L^3, we have:

    N = \frac{N Z}{L^3 n_Q}    (5.4)

    \Rightarrow Z = n_Q L^3    (5.5)
So, now, if we introduce internal degrees of freedom, we introduce another energy term \epsilon_j; hence:

    N = e^{\mu/kT} \sum_i e^{-\epsilon_i/kT} \sum_j e^{-\epsilon_j/kT}

    n = \frac{N}{L^3} = e^{\mu/kT} \frac{\sum_i e^{-\epsilon_i/kT}}{L^3} \sum_j e^{-\epsilon_j/kT}

    n = e^{\mu/kT} \frac{Z}{L^3} Z_{int}

    \Rightarrow \mu = kT \ln\left( \frac{n}{n_Q Z_{int}} \right)

Hence, we see that if we introduce internal motion of the particles, described by the internal partition function Z_{int}, the chemical potential is given by:

    \mu = kT \ln\left( \frac{n}{n_Q Z_{int}} \right)    (5.6)
We know that the internal energy U is given by the sum over states of the mean number of particles in each level, times the level energy:

    U = \sum_i \langle n_i \rangle \epsilon_i
      = \sum_i e^{(\mu - \epsilon_i)/kT} \epsilon_i
      = e^{\mu/kT} \sum_i e^{-\epsilon_i/kT} \epsilon_i
      = \frac{n}{n_Q} \sum_i e^{-\epsilon_i/kT} \epsilon_i
      = \frac{N}{L^3} \frac{L^3}{Z} \sum_i e^{-\epsilon_i/kT} \epsilon_i
      = \frac{N}{Z} \sum_i e^{-\epsilon_i/kT} \epsilon_i

where we have used that Z = n_Q L^3 and n = N/L^3. Now, we notice that the last expression is equal to the differential of the logarithm of the partition function:

    U = N kT^2 \frac{\partial \ln Z}{\partial T}
      = N kT^2 \frac{\partial}{\partial T} \ln(n_Q L^3)
      = \frac{3}{2} N kT

after doing the differentiation and substituting in for n_Q.
We can also compute the heat capacity:

    C_V = \frac{\partial U}{\partial T} = \frac{3}{2} N k

where we have now been able to reproduce last year's results.
We must count all accessible quantum states; but it is not clear how to do that for a system with variable particle number and energy.
We can compute the mean entropy by considering the following situation:
Suppose we have a big box, and inside the big box are m little boxes. One of these little boxes is our system. Hence, our system is surrounded by (m - 1) replica systems, all in thermal and diffusive equilibrium with each other. Each system is specified by specifying its quantum state.
Each distinct configuration of the boxes corresponds to one quantum state of the entire system (where the entire system is the collection of m little boxes).
To compute the entropy of the entire system, we just need to count the number of ways of shuffling the boxes, with m_i boxes in microstate i. This is just:

    W = \sum_{m_1, m_2, \ldots} \frac{m!}{m_1! \, m_2! \cdots} = m! \sum_{m_1, m_2, \ldots} \frac{1}{m_1! \, m_2! \cdots}
    S_m = k \ln W

        = k \left( m \ln m - \sum_i m_i \ln m_i \right)

where we have used Stirling's approximation, and noting that \sum_i m_i = m. Hence, we see that:

    m \ln m = \sum_i m_i \ln m

Therefore:

    S_m = -k \sum_i m_i \ln\frac{m_i}{m}

        = -km \sum_i \frac{m_i}{m} \ln\frac{m_i}{m}
Now, the fraction m_i/m is just the probability p_i of finding a system in microstate i, which, in thermal and diffusive equilibrium, is:

    p_i = \frac{e^{(\mu N_i - \epsilon_i)/kT}}{\sum_i e^{(\mu N_i - \epsilon_i)/kT}}

where the sum is over all microstates of the system.
Therefore, we have derived that:

    S = -k_B \sum_i p_i \ln p_i    (5.7)

where we have made no distinction between classical or quantum gases: this holds for both.
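As a quick check that (5.7) contains the earlier microcanonical result: for g equally likely microstates it reduces to S = k_B ln g. A minimal Python sketch:

```python
import math

def gibbs_entropy(probs, kB=1.0):
    """S = -k_B * sum_i p_i ln p_i (terms with p_i = 0 contribute nothing)."""
    return -kB * sum(p * math.log(p) for p in probs if p > 0)

g = 8
print(gibbs_entropy([1.0 / g] * g))   # equals ln 8 for equal probabilities
print(math.log(g))
```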
So, for an ideal gas, we have that:

    p_i = \frac{e^{(\mu N_i - E_i)/kT}}{Z}

where:

    N_i = n_1 + n_2 + n_3 + \ldots
    E_i = n_1 \epsilon_1 + n_2 \epsilon_2 + n_3 \epsilon_3 + \ldots

are written as sums over occupancies of single-particle states. Hence, we can show that the sum over microstates is the same as the product of the corresponding single-state factors:

    Z = \prod_{\{i\}} \left( 1 + e^{(\mu - \epsilon_i)/kT} + e^{2(\mu - \epsilon_i)/kT} + \ldots \right)    (5.8)

      = \prod_{\{i\}} \sum_{\{n_j\}} e^{n_j(\mu - \epsilon_i)/kT}    (5.9)

For bosons, we see that the sum is just the standard \sum_{i=0}^{\infty} x^i = \frac{1}{1 - x}, to give:

    Z = \prod_{\{i\}} \frac{1}{1 - e^{(\mu - \epsilon_i)/kT}}
For fermions, the sum in (5.9) is just for n_j = 0, 1, and is hence trivial; we find that, for fermions:

    \ln Z^{fermion} = \sum_{\{i\}} \ln\left( 1 + e^{(\mu - \epsilon_i)/kT} \right)    (5.11)
i.e. Z^{classical} \approx Z^{boson} \approx Z^{fermion} in the classical limit.
Hence, in the classical limit, we can compute S = -k \sum_i p_i \ln p_i, where i runs over the states of the gas. Below, \epsilon_j are the single-particle energy levels:

    S = -k \sum_i p_i \ln p_i

      = -k \sum_i \frac{e^{(\mu N_i - E_i)/kT}}{Z} \ln \frac{e^{(\mu N_i - E_i)/kT}}{Z}

      = -k \sum_i \frac{e^{(\mu N_i - E_i)/kT}}{Z} \left[ \frac{\mu N_i - E_i}{kT} - \ln Z \right]

      = -k \left( \frac{\mu}{kT} \sum_i N_i p_i - \frac{1}{kT} \sum_i E_i p_i - e^{\mu/kT} \sum_j e^{-\epsilon_j/kT} \right)

      = -k \left( \frac{\mu N}{kT} - \frac{U}{kT} - e^{\mu/kT} Z \right)

where (with a slight abuse of notation) Z \equiv \sum_j e^{-\epsilon_j/kT} is the single-particle partition function, \ln of the grand partition function reduces to e^{\mu/kT} Z in the classical limit, and the expressions for the average particle number and energy were written \sum_i N_i p_i = \langle N \rangle = N and \sum_i E_i p_i = \langle E \rangle = U.
Now, we have previously derived that \mu = kT \ln\frac{n}{n_Q}, and Z = n_Q V. Hence, we have:

    S = -k \left( N \ln\frac{n}{n_Q} - \frac{U}{kT} - \frac{n}{n_Q} n_Q V \right)

If we put:

    U = \frac{3}{2} N kT, \qquad nV = N

then:

    S = N k \left( \frac{5}{2} + \ln\frac{n_Q}{n} \right)    (5.12)
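Result (5.12) is the Sackur-Tetrode equation, and it can be evaluated for a real gas; a minimal Python sketch with illustrative numbers for helium (the numbers are ours, not from the notes):

```python
import math

hbar, k_B = 1.054571817e-34, 1.380649e-23   # SI units

def sackur_tetrode(n, m, T):
    """Entropy per particle, in units of k_B: S/(N k_B) = 5/2 + ln(n_Q / n)."""
    n_Q = (m * k_B * T / (2 * math.pi * hbar**2)) ** 1.5
    return 2.5 + math.log(n_Q / n)

# Helium at roughly 1 atm and 300 K:
print(sackur_tetrode(2.5e25, 6.64e-27, 300.0))   # ~15, i.e. S ~ 15 N k_B
```

The result, roughly 15 k_B per atom, is close to the measured molar entropy of helium gas.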
We can now easily compute the pressure and C_p. We start with pressure:

    p = T \left( \frac{\partial S}{\partial V} \right)_{N,U}

      = N kT \frac{\partial}{\partial V} \left[ \ln n_Q + \ln\frac{V}{N} \right]_{N,U}

      = N kT \frac{\partial \ln V}{\partial V}

      = \frac{N kT}{V}

    \Rightarrow pV = N kT

where we have used that n_Q = n_Q(T) only, and that if we fix N and U, we therefore fix T. And finally, the heat capacity at constant pressure (using n = p/kT, so that S = Nk\left[ \frac{5}{2} + \ln\frac{n_Q kT}{p} \right]):

    C_p = T \left( \frac{\partial S}{\partial T} \right)_{p,N}

        = N kT \left( \frac{3}{2T} + \frac{1}{T} \right)

        = \frac{5}{2} N k

Notice, we have also recovered a previously known result: C_p - C_V = N k.
6 Fermi Gases
We shall focus on a non-interacting gas of spin-1/2 particles. At sufficiently low temperatures, quantum effects are crucial. So it remains to show when we can use the classical limit, and when quantum physics becomes important.
Examples of such systems are: 3He atoms; nucleons in a nucleus.
Example: We will show that free electrons in a metal at room temperature will be in the quantum regime, whereas hydrogen at STP will not.
We know we need to use quantum physics if n \gtrsim n_Q. Consider the electrons first.
If there is roughly 1 conduction electron per atom, in a metal of density \sim 10^3 \, kg \, m^{-3}, then, using an approximate atomic mass of 10^{-25} kg, we can compute the electron density:

    n \approx \frac{10^3 \, kg \, m^{-3}}{10^{-25} \, kg} \approx 10^{28} \, m^{-3}

Now,

    n_Q = \left( \frac{mkT}{2\pi\hbar^2} \right)^{3/2}

So, we have quantum physics when:

    n \gtrsim n_Q \quad \Rightarrow \quad T \lesssim \frac{\hbar^2}{mk} n^{2/3} \approx 10^5 \, K

where we have used the mass of the electron. Notice, this upper temperature is well above room temperature, so free electrons in a metal are always acting according to quantum physics.
For hydrogen, we compute the number density via pV = N kT, knowing that n = N/V. Thus:

    n \approx \frac{p}{kT} \approx \frac{10^5}{10^{-23} \cdot 10^2} = 10^{26} \, m^{-3}

    T \lesssim \frac{\hbar^2}{mk} n^{2/3} \approx \frac{10^{-67} \cdot 10^{17}}{10^{-27} \cdot 10^{-23}} \, K \approx 1 \, K

where we have used the mass of a hydrogen atom. Notice that this temperature is very small, so quantum effects for hydrogen hardly ever come into play.
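The two order-of-magnitude estimates above can be redone with real constants; a minimal Python sketch (the crossover is an order-of-magnitude statement only, as in the text):

```python
import math

hbar, k_B = 1.054571817e-34, 1.380649e-23   # SI units

def crossover_temperature(n, m):
    """Temperature below which n ~ n_Q, i.e. T ~ (2 pi hbar^2 / m k) n^(2/3)."""
    return (2 * math.pi * hbar**2 / (m * k_B)) * n ** (2.0 / 3.0)

m_e, m_H = 9.109e-31, 1.673e-27   # electron and hydrogen-atom masses, kg
print(crossover_temperature(1e28, m_e))   # ~1e4-1e5 K: metal electrons are quantum at 300 K
print(crossover_temperature(1e26, m_H))   # below 1 K: hydrogen gas is classical
```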
Intuitively, we expect the gas to be in its ground state at T = 0: all fermions occupy the lowest single-particle states. So, up to some energy \epsilon_F, we expect the system to be full, and above \epsilon_F empty. All particles in the system occupy states below the Fermi energy:

    \langle n(\epsilon) \rangle = 1, \quad \epsilon < \epsilon_F
    \langle n(\epsilon) \rangle = 0, \quad \epsilon > \epsilon_F

This is all using Fermi-Dirac statistics.
Notice that this is the case for T = 0: we get a step function. For T > 0, the step is smeared out slightly.
Now, we know:

    \langle n(\epsilon) \rangle = \frac{1}{e^{(\epsilon - \mu)/kT} + 1}

If \epsilon > \mu, then (at T = 0) we have that \langle n \rangle = 0. Similarly, if \epsilon < \mu, then \langle n \rangle = 1.
Hence, we have the interpretation that the Fermi energy \epsilon_F is the value of the chemical potential at T = 0:

    \epsilon_F \equiv \mu(T = 0)    (6.1)

At T = 0, all single-particle states below \epsilon_F are filled, whilst all those above are empty.
We denote a system at T \approx 0 as being a degenerate Fermi gas, and one at T = 0 an ideal Fermi gas.
So a gas of fermions that is cold enough that nearly all states below \epsilon_F are filled, and nearly all states above are empty, is called a degenerate Fermi gas. This is just reiterating what has previously been said.
Now, if the gas is non-relativistic, we have the relations:

    \epsilon_F = \frac{p_F^2}{2m} = \frac{\hbar^2 k_F^2}{2m}, \qquad k_B T_F \equiv \epsilon_F

where we have thus defined the Fermi temperature T_F, Fermi momentum p_F, and Fermi wavenumber k_F in terms of the Fermi energy \epsilon_F. Notice that we must distinguish between the Boltzmann constant k_B and the Fermi wavenumber k_F.
If the gas is ultra-relativistic (m_0 c << p):

    \epsilon_F = c p_F = \hbar c k_F
Clearly, \epsilon_F must depend upon the number of particles N, so that the number of particles below \epsilon_F is N.
Let us construct a (2D) box of side length L, so that L^2 = A. Then we have the Schrodinger equation, with its solution:

    -\frac{\hbar^2}{2m} \nabla^2 \psi = \epsilon \psi

    \psi \propto \sin\left( \frac{n_1 \pi x}{L} \right) \sin\left( \frac{n_2 \pi y}{L} \right)

    \epsilon = \frac{\hbar^2 \pi^2}{2mL^2} \left( n_1^2 + n_2^2 \right)

where n_1, n_2 > 0 and are integers.
We must remember that for each state, we can have two spin-half electrons.
Hence, we want N to be twice the total number of single-particle states with \epsilon \leq \epsilon_F:

    n_1^2 + n_2^2 \leq \frac{2mL^2 \epsilon_F}{\hbar^2 \pi^2}
It is more convenient to work in terms of wavevectors k = (k_x, k_y), so that \psi \propto \sin(k_x x)\sin(k_y y). We obviously have that k_x = \frac{n_1 \pi}{L} and k_y = \frac{n_2 \pi}{L}. Hence, we can write the condition as:

    \frac{\hbar^2 k^2}{2m} \leq \epsilon_F

where k^2 = k_x^2 + k_y^2.
Notice that the distance between adjacent states in k-space is just \frac{\pi}{L}, and hence that the area occupied by one state is \left( \frac{\pi}{L} \right)^2.
Now, we want to know how many states there are with k_x^2 + k_y^2 \leq k_F^2. That is, states within the quarter-circle of radius k_F in k-space (a quarter, since k_x, k_y > 0).
Thus, the number of electrons (which is twice the number of single-particle states) in such an area is given by N = 2 \times (area of quarter circle) / (area occupied by each state):

    N = 2 \, \frac{\pi k_F^2 / 4}{(\pi/L)^2}

    \Rightarrow k_F^2 = \frac{2\pi N}{L^2} = 2\pi n

where n is just the number density n \equiv \frac{N}{L^2}.
particles in the system, at T = 0. We hence can say that the total number of particles is the (continuous) sum of the density of states, over all states up to the Fermi state:

    N = \int_0^{k_F} dn
      = \int_0^{k_F} \frac{dn}{dk} \, dk
      = \int_0^{k_F} \frac{k L^2}{\pi} \, dk
      = \frac{k_F^2 L^2}{2\pi}

    \Rightarrow k_F^2 = \frac{2\pi N}{L^2}
Hence, using the relation:

    \epsilon_F = \frac{\hbar^2 k_F^2}{2m}

we can write \epsilon_F:

    \epsilon_F = \frac{\hbar^2}{2m} \frac{2\pi N}{L^2}
               = \frac{\pi \hbar^2}{m} \frac{N}{L^2}
               = \frac{\pi \hbar^2 n}{m}
Notice that the initial integral was for T = 0. If T \neq 0, then we can write the much more general integral in terms of the mean occupancy:

    N = \int_0^\infty \frac{dn}{dk} \langle n \rangle \, dk    (6.2)

If you recall, at T = 0 we had the condition that \langle n \rangle = 1 for k < k_F, and zero elsewhere. This expression can be very hard to integrate, especially for bosons!
We do an example: calculate the density of states for a 3D gas of spinless non-interacting particles, and use it to determine the chemical potential and energy of an ideal gas in the classical regime.
So, to begin, we know that the magnitude of the wavevector is given by k^2 = k_x^2 + k_y^2 + k_z^2, and that each state occupies a volume \left( \frac{\pi}{L} \right)^3.
We want to know how many states dn there are in the positive octant of the shell k \to k + dk.
The volume of the entire shell is 4\pi k^2 \, dk, and thus the volume of only the positive portion of the shell is \frac{1}{8} 4\pi k^2 \, dk. Hence, we can write:

    dn = \frac{\frac{1}{8} 4\pi k^2 \, dk}{(\pi/L)^3}
       = \frac{L^3 k^2 \, dk}{2\pi^2}

    \Rightarrow \frac{dn}{dk} = \frac{k^2 L^3}{2\pi^2} = \frac{k^2 V}{2\pi^2}

Thus, we have derived the density of states for a 3D gas of spinless particles.
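The octant-counting argument can be verified by brute force: count lattice points (n_1, n_2, n_3 \geq 1) inside a sphere and compare with the continuum octant volume \frac{1}{8}\frac{4}{3}\pi K^3 = \frac{\pi K^3}{6}, which is exactly the integral of the density of states above (in units where L = \pi, so k_i = n_i). A minimal Python sketch:

```python
import math

def count_states(K):
    """Count states (n1, n2, n3 >= 1) with n1^2 + n2^2 + n3^2 <= K^2,
    i.e. lattice points in the positive octant of a sphere of radius K."""
    K2 = K * K
    return sum(1
               for n1 in range(1, K + 1)
               for n2 in range(1, K + 1)
               for n3 in range(1, K + 1)
               if n1*n1 + n2*n2 + n3*n3 <= K2)

K = 40
print(count_states(K), math.pi * K**3 / 6)   # agree to a few per cent
```

The small discrepancy is a surface term of relative size ~1/K, which vanishes for macroscopic boxes.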
To get the chemical potential, we need to solve for \mu in:

    N = \int_0^\infty \frac{dn}{dk} \langle n \rangle \, dk

Now, we have that:

    \langle n \rangle = \frac{1}{e^{(\epsilon - \mu)/k_B T} + 1}

which, in the classical limit (notice that the classical limit doesn't care whether it is a boson or fermion), reduces to:

    \langle n \rangle = e^{(\mu - \epsilon)/k_B T}

Hence:

    N = \int_0^\infty \frac{V k^2}{2\pi^2} \, e^{\mu/k_B T} e^{-\epsilon/k_B T} \, dk

Notice, we also have that \epsilon(k) = \frac{\hbar^2 k^2}{2m}, so the integral is pretty hard to do directly. So, an alternative method is to change variable, so that right from the start we have:

    N = \int_0^\infty \frac{dn}{d\epsilon} \langle n \rangle \, d\epsilon
Hence, to do things using this method, we need the density of states in \epsilon-space, as opposed to k-space. We can do this by the chain rule:

    \frac{dn}{d\epsilon} = \frac{dn}{dk} \frac{dk}{d\epsilon}

                         = \frac{k^2 V}{2\pi^2} \cdot \frac{1}{2} \sqrt{\frac{2m}{\hbar^2}} \frac{1}{\sqrt{\epsilon}}

                         = \frac{2m\epsilon}{\hbar^2} \frac{V}{2\pi^2} \cdot \frac{1}{2} \sqrt{\frac{2m}{\hbar^2}} \frac{1}{\sqrt{\epsilon}}

                         = \frac{V m^{3/2} \sqrt{2\epsilon}}{2\pi^2 \hbar^3}

where we have made use of \epsilon = \frac{\hbar^2 k^2}{2m} \Rightarrow k = \sqrt{\frac{2m\epsilon}{\hbar^2}}, and the original density of states \frac{dn}{dk}. We can immediately see that, for a non-relativistic gas:

    \frac{dn}{d\epsilon} \propto \sqrt{\epsilon}
Thus, inserting this expression for \frac{dn}{d\epsilon} back into the integral for N:

    N = \frac{\sqrt{2} \, m^{3/2} V}{2\pi^2 \hbar^3} \, e^{\mu/k_B T} \int_0^\infty \sqrt{\epsilon} \, e^{-\epsilon/k_B T} \, d\epsilon

Evaluating the integral (see the standard integrals below) recovers N = e^{\mu/k_B T} n_Q V, and hence \mu = k_B T \ln(n/n_Q) as before.
So, for the internal energy:

    U = \int_0^\infty \frac{dn}{d\epsilon} \langle n \rangle \, \epsilon \, d\epsilon
      = a \int_0^\infty \epsilon^{3/2} e^{(\mu - \epsilon)/k_B T} \, d\epsilon, \qquad a \equiv \frac{\sqrt{2} V m^{3/2}}{2\pi^2 \hbar^3}

      = b \int_0^\infty \epsilon^{3/2} e^{-\epsilon/k_B T} \, d\epsilon, \qquad b \equiv a \, e^{\mu/k_B T}

      = b \, (k_B T)^{5/2} \int_0^\infty X^{3/2} e^{-X} \, dX, \qquad X \equiv \frac{\epsilon}{k_B T}

      = b \, (k_B T)^{5/2} \, \frac{3\sqrt{\pi}}{4}

      = \frac{\sqrt{2} V m^{3/2}}{2\pi^2 \hbar^3} \, \frac{3\sqrt{\pi}}{4} \, (k_B T)^{5/2} \, \frac{n}{n_Q}

      = \frac{\sqrt{2} V m^{3/2}}{2\pi^2 \hbar^3} \, \frac{3\sqrt{\pi}}{4} \, (k_B T)^{5/2} \, \frac{N}{V} \left( \frac{2\pi\hbar^2}{m k_B T} \right)^{3/2}

      = \frac{3}{2} N k_B T

which, again, is a result we previously knew.
In these two examples, we have used the standard integrals:

    \int_0^\infty x^{3/2} e^{-x} \, dx = \frac{3\sqrt{\pi}}{4}

    \int_0^\infty x^{1/2} e^{-x} \, dx = \frac{\sqrt{\pi}}{2}
In the expression for U, it is required to know if the gas is non-relativistic, relativistic or ultra-relativistic. For the non- and ultra-relativistic cases, we have the expressions:

    \epsilon_{non} = \frac{\hbar^2 k^2}{2m}, \qquad \epsilon_{ultra} = \hbar c k

Suppose we want the average velocity of a single particle. We compute this by summing the velocities of all particles, and dividing by the number of particles:

    \langle v \rangle = \frac{1}{N} \int_0^\infty \frac{dn}{dk} \langle n \rangle \, v \, dk    (6.5)

To get an expression for v, we note that the kinetic energy is both equal to \frac{\hbar^2 k^2}{2m} and to \frac{1}{2}mv^2. Hence, v = \frac{\hbar k}{m}. Note that this is only for the non-relativistic case.
If we want to find the variance of a quantity, N say, we merely need to compute:

    \sigma_N^2 = \langle N^2 \rangle - \langle N \rangle^2    (6.6)
Or, that:

    n << \left( \frac{2m k_B T}{\hbar^2} \right)^{3/2} \frac{1}{3\pi^2}    (6.9)

Now, we have that the quantum concentration n_Q is given by:

    n_Q = \left( \frac{m k_B T}{2\pi\hbar^2} \right)^{3/2}

Hence, we see that (6.9) is equivalent to:

    n << \frac{8}{3\sqrt{\pi}} \, n_Q

Hence, we have that n << n_Q (up to a factor of order unity) for the classical regime. This is our previous definition, but here derived by stating that T >> T_F for the classical limit.
Now, let's calculate the internal energy of a non-relativistic 3D gas of electrons at T = 0. Including the factor of 2 for spin, the density of states is \frac{dn}{dk} = \frac{k^2 V}{\pi^2}, so we have straight away that:

    U = \int_0^{k_F} \frac{V k^2}{\pi^2} \frac{\hbar^2 k^2}{2m} \, dk = \frac{V}{\pi^2} \frac{\hbar^2}{2m} \frac{k_F^5}{5}

Now, from (6.7), we have:

    k_F = \left( 3\pi^2 n \right)^{1/3}, \qquad N = \frac{V k_F^3}{3\pi^2}

Hence:

    U = \frac{3}{5} N \frac{\hbar^2 k_F^2}{2m}

But we also have the relation \epsilon_F = \frac{\hbar^2 k_F^2}{2m}. Hence:

    U = \frac{3}{5} N \epsilon_F    (6.10)
Using dU = T \, dS - p \, dV at T = 0 (so that p = -\partial U/\partial V), and noting that \epsilon_F \propto n^{2/3} \propto V^{-2/3}, we find:

    p = -\frac{\partial U}{\partial V} = \frac{2}{5} \frac{N \epsilon_F}{V} = \frac{2}{5} n \epsilon_F

which we can rearrange into the equation of state for a Fermi gas at low temperature:

    pV = \frac{2}{5} N \epsilon_F    (6.11)

This is the equivalent of pV = N k_B T.
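The quantities above are easy to evaluate for conduction electrons in a real metal; a minimal Python sketch (the electron density is an illustrative value of ours, roughly that of copper):

```python
import math

hbar, k_B, m_e = 1.054571817e-34, 1.380649e-23, 9.109e-31   # SI units
eV = 1.602176634e-19

def fermi_energy(n, m):
    """epsilon_F = (hbar^2 / 2m) (3 pi^2 n)^(2/3) for a spin-1/2 gas."""
    return hbar**2 / (2 * m) * (3 * math.pi**2 * n) ** (2.0 / 3.0)

n = 8.5e28                  # m^-3, roughly copper's conduction-electron density
eF = fermi_energy(n, m_e)
print(eF / eV)              # a few eV
print(eF / k_B)             # Fermi temperature ~1e4-1e5 K
print(0.4 * n * eF)         # degeneracy pressure p = (2/5) n eF, tens of GPa
```

The huge degeneracy pressure, present even at T = 0, is what later holds up white dwarf stars.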
To get CV , we take:
U
CV = =0
T V
At T = 0, as U is independant of T at 0. This is a little useless, as it does not tell us what happens
as T 0. To find out this behavior, we need to do some small T corrections to U . To do this
properly, we need to evaluate the integral:
Z
dn 1
U= ()/k
dk
dk e BT + 1
0
Which is hard.
We can estimate the correction by saying that only particles within \sim k_B T of \epsilon_F are excited. So, the number of excited particles is:

    \sim N \frac{k_B T}{\epsilon_F}

each gaining energy of order k_B T, so the additional energy is of the order:

    N \frac{k_B T}{\epsilon_F} \, k_B T

Hence, U needs to be corrected by this amount:

    U = \frac{3}{5} N \epsilon_F + \alpha N \frac{(k_B T)^2}{\epsilon_F}

where \alpha is some constant. If this correction is done exactly, one finds that \alpha = \frac{\pi^2}{4}. Hence, if we now calculate C_V using this corrected version of U, we find:

    C_V = \left( \frac{\partial U}{\partial T} \right)_V = \frac{\pi^2 N k_B^2 T}{2\epsilon_F}
We have previously derived that:

    \frac{dn}{d\epsilon} = \frac{3}{2} N \epsilon_F^{-3/2} \, \epsilon^{1/2}

So, to solve these integrals, we have objects of the form:

    I = \int_0^\infty \frac{f(\epsilon)}{e^{(\epsilon - \mu)/k_B T} + 1} \, d\epsilon

Splitting the range at \mu and substituting \epsilon = \mu \pm z kT: in the limit T \to 0, z \to \infty, so the middle integral can be written with an upper limit of \infty:

    I = \int_0^\mu f(\epsilon) \, d\epsilon + kT \int_0^\infty \frac{f(\mu + zkT) - f(\mu - zkT)}{e^z + 1} \, dz

The second integral can be Taylor expanded to:

    I_2 = 2(kT)^2 \left. \frac{df}{d\epsilon} \right|_\mu \int_0^\infty \frac{z}{e^z + 1} \, dz
where we can look up the integral:

    \int_0^\infty \frac{z}{e^z + 1} \, dz = \frac{\pi^2}{12}

Thus:

    I = \int_0^\mu f(\epsilon) \, d\epsilon + \frac{(kT)^2 \pi^2}{6} \left. \frac{df}{d\epsilon} \right|_\mu

For the N integral, we have f(\epsilon) = \epsilon^{1/2}, and hence f'(\epsilon) = \frac{1}{2}\epsilon^{-1/2}. We hence have:

    I_N = \frac{2}{3} \mu^{3/2} + \frac{(kT)^2 \pi^2}{6} \frac{1}{2} \mu^{-1/2}

Thus:

    N = \frac{3}{2} N \epsilon_F^{-3/2} \left[ \frac{2}{3} \mu^{3/2} + \frac{(kT)^2 \pi^2}{6} \frac{1}{2} \mu^{-1/2} \right]

      = N \left( \frac{\mu}{\epsilon_F} \right)^{3/2} \left[ 1 + \frac{(kT)^2 \pi^2}{8\mu^2} \right]

    \Rightarrow 1 = \left( \frac{\mu}{\epsilon_F} \right)^{3/2} \left( 1 + \frac{\pi^2 (kT)^2}{8\mu^2} \right)

    \Rightarrow \mu = \epsilon_F \left( 1 + \frac{\pi^2 (kT)^2}{8\mu^2} \right)^{-2/3}

               = \epsilon_F \left( 1 - \frac{\pi^2}{12} \left( \frac{kT}{\epsilon_F} \right)^2 \right)

where the last step has been done by a binomial expansion, noting that \mu(0) = \epsilon_F.
Now, doing similarly for U, we find:

    U = \frac{3}{2} N \epsilon_F^{-3/2} \, I_U

    I_U = \int_0^\infty \frac{\epsilon^{3/2}}{e^{(\epsilon - \mu)/kT} + 1} \, d\epsilon
        = \frac{2}{5} \mu^{5/2} + \frac{(kT)^2 \pi^2}{6} \frac{3}{2} \mu^{1/2}

    \Rightarrow U = \frac{3}{5} N \epsilon_F \left( 1 + \frac{5\pi^2}{12} \left( \frac{kT}{\epsilon_F} \right)^2 \right)

which gives the previously stated correction factor. The last step has been done by substituting in the derived expression for \mu.
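The leading Sommerfeld term for I_N can be checked by direct numerical integration; a minimal Python sketch in units where \mu = 1 (the parameters are ours):

```python
import math

def fermi_integral(mu, kT, f, upper, steps=60000):
    """Midpoint-rule evaluation of I = int_0^upper f(e) / (exp((e - mu)/kT) + 1) de.
    Choose upper so the integrand's tail beyond it is negligible."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        e = (i + 0.5) * h
        total += f(e) / (math.exp((e - mu) / kT) + 1.0) * h
    return total

# Check I_N ~ (2/3) mu^(3/2) + (pi^2/12) (kT)^2 mu^(-1/2) for f(e) = sqrt(e):
mu, kT = 1.0, 0.05
numeric = fermi_integral(mu, kT, math.sqrt, upper=mu + 40 * kT)
sommerfeld = (2.0 / 3.0) * mu**1.5 + (math.pi**2 / 12.0) * kT**2 / math.sqrt(mu)
print(numeric, sommerfeld)   # agree to several decimal places
```

The residual difference is of order (kT)^4, the next term in the Sommerfeld expansion.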
6.3 Example: Electrons in Metals
Thus, if m were 20% bigger when ionic interactions are taken into account, then the agreement would be perfect.
6.4 Example: Liquid 3He

For liquid 3He:

    \epsilon_F = \frac{\hbar^2}{2m} \left( 3\pi^2 n \right)^{2/3} = 4.2 \times 10^{-4} \, eV

    T_F = \frac{\epsilon_F}{k_B} = 5 \, K

    v_F = \sqrt{\frac{2\epsilon_F}{m}} = 160 \, ms^{-1}

where we have also computed the Fermi velocity, to check that it is non-relativistic (which it blatantly is!).
So, for T < 5 K, we expect:

    C_V = \frac{\pi^2}{2} \frac{k_B T}{\epsilon_F} N k_B = (1.0 \, K^{-1}) \, N k_B T

And again, experiment yields 2.8 K^{-1}, which is higher than our prediction. Hence, interactions between 3He atoms are obviously not negligible; especially at T < 2 mK, where we have a discontinuity in the heat capacity at the transition into a superfluid, where the 3He atoms pair up into bosonic pairs.
6.5 Example: Electrons in Stars
We start by asking: do the electrons in the Sun form a degenerate Fermi gas? Given that the core temperature is T ≈ 10⁷ K, we need to compare it with the Fermi temperature, which we need to calculate.
Note, if T = 10⁷ K, then the thermal energy k_BT ≈ 10⁷ × 10⁻²³ J ≈ 10³ eV. This is a lot higher than the binding energy of hydrogen (13.6 eV), hence there will not be much atomic hydrogen: it will be mostly ionised. Now, we can compute T_F via:
$$ T_F = \frac{\epsilon_F}{k_B} = \frac{1}{k_B}\,\frac{\hbar^2}{2m}(3\pi^2 n)^{2/3} $$
Where we have assumed the electrons are non-relativistic.
We can work out n via the mass of the Sun, if we assume that the Sun is made up only of protons and electrons:
$$ n = \frac{N}{V} = \frac{M_\odot}{M_p+M_e}\,\frac{1}{V} \approx \frac{M_\odot}{M_p}\,\frac{1}{V} $$
Where we have used the fact that M_e << M_p. We have also assumed one electron per proton. Using M_⊙ ≈ 2×10³⁰ kg and R_⊙ ≈ 7×10⁸ m, we have:
$$ n \approx 10^{30}\ \text{m}^{-3} $$
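The estimate can be run through numerically; a short Python sketch with round constants (M_⊙ ≈ 2×10³⁰ kg and R_⊙ ≈ 7×10⁸ m as quoted above):

```python
import math

hbar, kB = 1.055e-34, 1.381e-23
me, mp = 9.11e-31, 1.67e-27
M_sun, R_sun = 2e30, 7e8

V = (4.0 / 3.0) * math.pi * R_sun**3
n = M_sun / (mp * V)           # one electron per proton
TF = (hbar**2 / (2 * me)) * (3 * math.pi**2 * n)**(2.0 / 3.0) / kB

print("%.0e" % n)              # ≈ 1e30 m^-3
print("%.0e" % TF)             # a few 1e5 K, well below the 1e7 K core
```

Since T_F comes out far below the core temperature, the Sun's electrons are not degenerate, consistent with the remark below.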
Stellar evolution starts with hydrogen gas collapsing under gravity to the point where pp-fusion can occur, at T ≈ 10⁷ K. The star's radius is stabilised by the outward radiation pressure from the gas of ions, electrons and photons; the electrons are not (necessarily) degenerate. The outward radiation pressure balances the inward gravitational attraction due to the star's mass.
When the proton fuel runs out, the radiation pressure falls, and the star collapses under gravity until the core becomes hot enough for the He to ignite, at T ≈ 10⁸ K.
The process continues until no more nuclear fuel is left to burn. However, such stars are kept from further collapse by the presence of an outward pressure due to the degenerate electron gas. These types of stars are called white dwarfs.
Now, given the mass of a white dwarf, we ought to be able to compute its radius.
Consider a shell at radius r, of thickness dr. The inward force due to gravity is balanced by the outward force due to the degenerate electrons. We can compute the inward gravitational force on the shell:
$$ dF = \frac{GM(r)}{r^2}\,4\pi r^2\,dr\,\rho(r) $$
Where M(r) is the mass of the star inside a sphere of radius r. Notice that the volume of the shell is 4πr²dr, hence its mass is just 4πr²dr ρ(r).
An outward force comes from the difference in pressure between the inner and outer surfaces of the shell, which would be of the form dF = −4πr²dp. Balancing the two and integrating from the centre to the surface:
$$ p(0) - p(R) = \int_0^R \rho(r)\,\frac{GM(r)}{r^2}\,dr \qquad (6.14) $$
Now, to do these integrals properly requires a numerical treatment; so we use various approximations instead.
We assume that p(0) >> p(R), which deals with the LHS. We shall also assume that any integral of the density ρ(r) over r may be done with some average density ρ̄. Now, the mass inside a sphere of radius r is:
$$ M(r) = \int_0^r 4\pi r'^2\rho(r')\,dr' = \frac{4\pi r^3\bar\rho}{3} $$
Where we have used our approximation of average density. Thus, (6.14) becomes:
$$ p(0) = \int_0^R \bar\rho\,\frac{GM(r)}{r^2}\,dr = 4\pi G\bar\rho^2\int_0^R \frac{1}{r^2}\,\frac{r^3}{3}\,dr = \frac{4\pi G\bar\rho^2 R^2}{6} $$
This is the central pressure required to hold the star up; it is supplied by the degenerate electrons:
$$ p = \frac{2}{5}\,n\epsilon_F = \frac{2}{5}\,n^{5/3}(3\pi^2)^{2/3}\,\frac{\hbar^2}{2m} $$
Where we can compute n as we did for the Sun. We assume the core to be composed of helium, which has one electron per proton; since half the nucleons are neutrons, the mass per electron is 2u, where u denotes the atomic mass unit:
$$ n = \frac{M}{2u\,\frac{4}{3}\pi R^3} $$
Therefore, the pressure due to the degenerate electrons goes like:
$$ p \propto \frac{M^{5/3}}{R^5} $$
Thus, equating these two pressures (due to degeneracy and gravity), we obtain:
$$ R = \frac{2}{5}(3\pi^2)^{2/3}\,\frac{\hbar^2}{2m}\left(\frac{3}{8\pi u}\right)^{5/3}\frac{8\pi}{3G}\,M^{-1/3} \equiv \kappa M^{-1/3} $$
$$ \kappa = 4\times10^{16}\ \text{m kg}^{1/3} $$
Data gives κ = 7×10¹⁶ m kg^{1/3}, which is pretty good agreement! The agreement becomes exact if the density integral is done properly.
Hence, we see that such a star is stable, and degeneracy pressure "always wins". The reason for the quotation marks is that if the mass is above a certain limit, electron degeneracy pressure cannot sustain the equilibrium, and the star collapses further, into a neutron degenerate state.
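The constant κ can be evaluated directly; a Python sketch of the equated-pressure result above (a one-solar-mass star is used purely as an illustration):

```python
import math

# κ in R = κ M^(-1/3), for a helium core with mass 2u per electron.
hbar, G = 1.055e-34, 6.674e-11
me, u = 9.11e-31, 1.661e-27

kappa = (2.0 / 5.0) * (3 * math.pi**2)**(2.0 / 3.0) * (hbar**2 / (2 * me)) \
        * (3.0 / (8 * math.pi * u))**(5.0 / 3.0) * (8 * math.pi / (3 * G))

M = 2e30                       # one solar mass, roughly
R = kappa * M**(-1.0 / 3.0)
print("%.1e" % kappa)          # ≈ 4e16 m kg^(1/3)
print("%.1e" % R)              # a few thousand kilometres
```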
We should check that the white dwarfs in the table are made of a degenerate core of electrons, and whether they are non-relativistic.
We know that the temperature of the star is T ≈ 10⁷ K. So, we need to compute T_F:
$$ k_BT_F = \epsilon_F = \frac{\hbar^2}{2m}(3\pi^2 n)^{2/3} $$
To compute n, we do an order-of-magnitude calculation that is VERY rough:
$$ n = \frac{M}{u\,\frac{4}{3}\pi r^3} \approx 10^{35}\ \text{m}^{-3} $$
This gives T_F of order 10⁹ K, far above 10⁷ K, so the core is indeed degenerate. However, ε_F is then no longer negligible compared with the electron rest energy, so we should treat the electrons as ultra-relativistic, with:
$$ \epsilon = \hbar ck $$
Hence, we recalculate the density of states (including the spin factor of 2):
$$ dn = 2\,\frac{\frac{1}{8}\,4\pi k^2\,dk}{(\pi/L)^3} \qquad\Rightarrow\qquad \frac{dn}{dk} = \frac{Vk^2}{\pi^2} $$
$$ \frac{dn}{d\epsilon} = \frac{dn}{dk}\,\frac{dk}{d\epsilon} = \frac{Vk^2}{\pi^2}\,\frac{1}{\hbar c} = \frac{V\epsilon^2}{(\hbar c)^3\pi^2} $$
Where we now have the density of states in energy-space. Notice that the density of states for an ultra-relativistic gas of fermions is now proportional to ε², whereas the density of states for non-relativistic fermions was proportional to √ε.
At T = 0, we can write that:
$$ p = -\left(\frac{\partial U}{\partial V}\right)_N $$
So, we compute the internal energy U via:
$$ U = \int_0^{\epsilon_F}\epsilon\,\frac{dn}{d\epsilon}\,d\epsilon = \frac{V}{(\hbar c)^3\pi^2}\,\frac{\epsilon_F^4}{4} $$
Differentiating (at fixed N, with ε_F = ℏc(3π²N/V)^{1/3}) gives p = U/3V: one-third of the energy-density of the system. Hence, looking at everything, we have that:
$$ p \propto \frac{N^{4/3}}{V^{4/3}} $$
And we see that N ∝ M and V ∝ R³. Thus:
$$ p \propto \frac{M^{4/3}}{R^4} $$
for a gas of ultra-relativistic fermions.
Now, this dependence for the degeneracy pressure is such that there exists a maximum mass for which the pressures due to gravity and degeneracy balance. That is, there exists a maximum mass for which degeneracy pressure can withstand collapse due to gravity. This maximum mass is 1.4 M_⊙: the Chandrasekhar mass.
As the star collapses, ε_F rises. Eventually, particle physics becomes important and we get inverse beta-decay, p + e⁻ → n + ν_e; all protons and electrons disappear, leaving only neutrons, while the neutrinos fly out. There is then a neutron degeneracy pressure which supports against further collapse (until its critical mass of 1.8 M_⊙ is reached).
7 Bose Gases
We are interested in low-T behaviour. As T → 0 we expect, even classically, that particles will occupy only the lowest energy level. So, how low must T be for macroscopic occupation of the ground state, that is, for the majority of particles to be in the ground state?
So, if we write down expressions for the two lowest possible energy states of a system:
$$ \epsilon_0 = \frac{\hbar^2}{2m}\left(\frac{\pi}{L}\right)^2(1^2+1^2+1^2) \qquad \epsilon_1 = \frac{\hbar^2}{2m}\left(\frac{\pi}{L}\right)^2(2^2+1^2+1^2) $$
Thus, the spacing between these levels is:
$$ \epsilon_1-\epsilon_0 = 3\,\frac{\hbar^2}{2m}\left(\frac{\pi}{L}\right)^2 $$
For the ground state to contain most of the particles, we want:
$$ k_BT \lesssim \epsilon_1-\epsilon_0 $$
$$ T \approx \frac{\epsilon_1-\epsilon_0}{k_B} \approx \frac{10^{-68}\times10}{10^{-26}\times10^{-4}\times10^{-23}} \approx 10^{-14}\ \text{K} $$
(for a box of side of order centimetres).
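The order-of-magnitude estimate can be sketched in Python; the helium atom in an L = 1 cm box is an assumed, illustrative choice of particle and box size:

```python
import math

# Spacing between the two lowest particle-in-a-box levels,
# ε1 - ε0 = 3 (ħ²/2m)(π/L)², converted to a temperature.
hbar, kB = 1.055e-34, 1.381e-23
m, L = 6.68e-27, 1e-2          # helium-4 atom, 1 cm box (assumptions)

gap = 3 * (hbar**2 / (2 * m)) * (math.pi / L)**2
T = gap / kB
print("%.0e" % T)              # of order 1e-14 K
```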
Which is very low! This, however, has been calculated using classical arguments, and the result is very different if we use a quantum mechanical description of identical particles.
Recall:
$$ \langle n(\epsilon)\rangle_{BE} = \frac{1}{e^{(\epsilon-\mu)/k_BT}-1} $$
Initially, notice that when e^{(ε−μ)/k_BT} = 1, the above becomes singular. We shall use the initial approximation for the total number of particles:
$$ N \approx \int_0^\infty \frac{dn}{d\epsilon}\,\frac{1}{e^{(\epsilon-\mu)/k_BT}-1}\,d\epsilon $$
It will become clear why this is only an approximation. One thing to note is that the density of states is proportional to √ε, and hence the integrand vanishes at the lowest single-particle energy state, ε₀ = 0. The exact form is actually purely a summation:
$$ N = \sum_i \langle n(\epsilon_i)\rangle $$
So, we have the interpretation that T = Tc is the lowest temperature at which the integral approximation for ⟨n⟩_BE works. We say this because for T < Tc, we would need μ > 0, which is impossible.
So, let's compute ⟨n⟩_BE for the lowest energy state ε₀ = 0:
$$ N_0 = \frac{1}{e^{-\mu/k_BT}-1} \approx \frac{1}{1-\frac{\mu}{k_BT}+\dots-1} = -\frac{k_BT}{\mu} $$
Where we have used the Taylor expansion e^x = 1 + x + x²/2 + .... Hence, N₀ is the number of particles in the ground state. Notice that N₀ → ∞ as μ → 0⁻. Now, we also know that N₀ ≤ N; hence −k_BT/μ ≤ N, thus:
$$ |\mu| \geq \frac{k_BT}{N} $$
Hence, as we have μ → 0⁻ together with the above lower bound, we have narrowed down the position of μ very finely. For a system of 10²³ particles, μ is very, very close to zero, but not quite. In actual fact, the splitting between μ and ε₀ is a lot less than that between ε₀ and ε₁.
We now have a new particle distribution function for T < Tc:
$$ \langle n\rangle = \frac{1}{e^{\epsilon/k_BT}-1} $$
Where we have now solved the problem of needing μ > 0.
So, to summarise thus far: we have shown that as T → 0, the occupancy of the ground state, N₀, becomes macroscopically large. We also have that as T → 0, μ → 0⁻ very quickly for a macroscopic system.
So, we can now write down a correct expression for the number of particles in the system: we take out of the summation just the term due to the ground state:
$$ N = N_0 + \int_0^\infty \frac{dn}{d\epsilon}\,\frac{1}{e^{\epsilon/k_BT}-1}\,d\epsilon $$
That we have not changed the lower limit of the integral is not a problem: the integrand is zero at ε = 0 anyway. Notice, we have also put μ = 0, as the energy-level splitting is massive compared to μ. Now, we can do this integral, after inserting the expression for the density of states in energy-space (for non-relativistic bosons). So, how does N₀ vary for T < Tc, where we assume μ = 0? Writing the integral:
$$ N = N_0 + \int_0^\infty \frac{Vm^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\,\frac{\epsilon^{1/2}\,d\epsilon}{e^{\epsilon/k_BT}-1} \qquad (7.1) $$
Now, we previously wrote that:
$$ N \approx \int_0^\infty \frac{dn}{d\epsilon}\,\frac{1}{e^{\epsilon/k_BT_c}-1}\,d\epsilon = \frac{Vm^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\int_0^\infty \frac{\epsilon^{1/2}\,d\epsilon}{e^{\epsilon/k_BT_c}-1} $$
$$ = \frac{Vm^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\,(k_BT_c)^{3/2}\int_0^\infty \frac{(\epsilon/k_BT_c)^{1/2}\,d(\epsilon/k_BT_c)}{e^{\epsilon/k_BT_c}-1} = \frac{Vm^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\,(k_BT_c)^{3/2}\int_0^\infty \frac{X^{1/2}\,dX}{e^X-1} $$
Now, we notice a similarity between all factors in both integrals, except for T and Tc, which we can factor out. Making the same change of variable in (7.1) gives:
$$ N = N_0 + \frac{Vm^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\,(k_BT)^{3/2}\int_0^\infty \frac{X^{1/2}\,dX}{e^X-1} \qquad (7.2) $$
Where:
$$ \frac{Vm^{3/2}}{\sqrt{2}\,\pi^2\hbar^3}\,k_B^{3/2}\int_0^\infty \frac{X^{1/2}\,dX}{e^X-1} = \frac{N}{T_c^{3/2}} $$
Therefore, inserting this into (7.2) gives a relation for how the number of particles in the ground state varies as T gets close to Tc:
$$ N_0 = N\left[1-\left(\frac{T}{T_c}\right)^{3/2}\right] $$
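The condensate fraction is easy to tabulate; a minimal Python sketch of the relation just derived:

```python
# N0/N = 1 - (T/Tc)^(3/2) for T < Tc; zero above Tc.
def condensate_fraction(T, Tc):
    return 1.0 - (T / Tc)**1.5 if T < Tc else 0.0

for frac in (0.0, 0.5, 0.9, 1.0):           # values of T/Tc
    print(frac, round(condensate_fraction(frac, 1.0), 3))
```

At T = 0 every particle is in the ground state; the fraction falls smoothly to zero at Tc.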
We are also able to write down the condensation temperature:
$$ T_c = \frac{1}{mk_B}\left(\frac{\sqrt{2}\,\pi^2\hbar^3}{2.315}\,\frac{N}{V}\right)^{2/3} $$
Thus, this is the temperature at which Bose-Einstein condensation (BEC) takes place. Putting various numbers in:
$$ T_c = 3.31\,\frac{\hbar^2 n^{2/3}}{mk_B} $$
Now, if we just write down the inequality for which we get BEC (dropping the numerical factor):
$$ T < T_c \qquad\Rightarrow\qquad T \lesssim \frac{\hbar^2 n^{2/3}}{mk_B} \qquad\Rightarrow\qquad \left(\frac{mk_BT}{\hbar^2}\right)^{3/2} \lesssim n \qquad\Rightarrow\qquad n_Q \lesssim n $$
Where we have recognised the definition of the quantum density n_Q.
So, T < Tc is not only the region where BEC takes place, but also the region where quantum effects become important. So, as soon as we get quantum effects (in an ideal Bose gas), all particles jump into the ground state.
For ⁴He, with ρ = 145 kg m⁻³, we have Tc ≈ 3 K, which is a lot greater than the previous classical estimate! n is computed from ρ via:
$$ n = \frac{N}{V} = \frac{\rho}{n_nm_n} = \frac{145}{4\times1.67\times10^{-27}} = 2.17\times10^{28}\ \text{m}^{-3} $$
Where n_n is the number of nucleons (2p + 2n), each having a mass m_n. Thus, Tc is finally computed by using this, and that m is the mass of a helium-4 atom, 4m_n ≈ 4u, where u is the atomic mass unit.
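The ⁴He numbers above can be checked with a few lines of Python:

```python
# Tc for liquid helium-4 from Tc = 3.31 ħ² n^(2/3) / (m k_B).
hbar, kB, u = 1.055e-34, 1.381e-23, 1.661e-27
rho = 145.0                    # density of liquid helium, kg/m^3
m = 4 * u                      # mass of a helium-4 atom
n = rho / m                    # number density

Tc = 3.31 * hbar**2 * n**(2.0 / 3.0) / (m * kB)
print("%.2e" % n)              # ≈ 2.2e28 m^-3
print(round(Tc, 1))            # ≈ 3 K
```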
Experimentally, BEC has been observed in many systems; it was seen for the first time in 1995, using a gas of rubidium atoms.
7.1 Black Body Radiation
Consider a gas of photons in thermal equilibrium. The gas is ideal (except in the early-universe phase), ultra-relativistic (obviously), and has μ = 0.
To see why μ = 0: we note that dS = 0 in equilibrium. Therefore, so is:
$$ \left(\frac{\partial S}{\partial N}\right)_{U,V}dN = 0 $$
Now, a gas of photons cannot have a fixed number of particles (hence dN ≠ 0), as, for example, atoms are constantly radiating them. Hence, we see that:
$$ \left(\frac{\partial S}{\partial N}\right)_{U,V} = 0 $$
However, we have the definition:
$$ -\frac{\mu}{T} = \left(\frac{\partial S}{\partial N}\right)_{U,V} $$
Therefore, we see that μ = 0. Hence, we have:
$$ \langle n(\epsilon)\rangle = \frac{1}{e^{\epsilon/k_BT}-1} $$
Let's compute the energy density U/V of a gas of photons:
$$ \frac{U}{V} = \frac{1}{V}\int_0^\infty \epsilon\,\langle n(\epsilon)\rangle\,\frac{dn}{d\epsilon}\,d\epsilon $$
Using the ultra-relativistic form for the density of states in energy-space (previously derived), with a spin multiplicity factor of 2, as there are two polarisation states per mode, we have:
$$ \frac{U}{V} = \frac{1}{V}\int_0^\infty \epsilon\,\frac{V\epsilon^2}{(\hbar c)^3\pi^2}\,\frac{d\epsilon}{e^{\epsilon/k_BT}-1} = \frac{1}{(\hbar c)^3\pi^2}\int_0^\infty \frac{\epsilon^3\,d\epsilon}{e^{\epsilon/k_BT}-1} $$
$$ = \frac{(k_BT)^4}{(\hbar c)^3\pi^2}\int_0^\infty \frac{X^3\,dX}{e^X-1} = \frac{(k_BT)^4}{(\hbar c)^3\pi^2}\,\frac{\pi^4}{15} $$
$$ \frac{U}{V} = \frac{\pi^2}{15(\hbar c)^3}\,(k_BT)^4 $$
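As a numerical illustration (the choice of T = 2.7 K is an assumed example, matching the CMB temperature mentioned later in the notes):

```python
import math

# Photon-gas energy density U/V = π² (k_B T)^4 / (15 (ħc)³).
hbar, c, kB = 1.055e-34, 2.998e8, 1.381e-23
T = 2.7                        # kelvin (illustrative choice)

u_density = math.pi**2 * (kB * T)**4 / (15 * (hbar * c)**3)
print("%.1e" % u_density)      # ≈ 4e-14 J/m^3
```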
Now, to proceed, we shall discuss a little about blackbody radiation.
Suppose we have a box of photons in thermal equilibrium (whose energy-density we have just computed). The box is completely isolated, except for a very small hole. Photons are able to leave through this hole, being in thermal equilibrium with the photons still inside the box: hence blackbody radiation. Any photons incident upon the hole from outside will enter it: hence a blackbody absorber. If all photons inside the box moved towards the side with the hole (of area A), then the volume of photons ejected per second would be just cA. However, not all photons move in that direction, so we must average over solid angle, which gives ¼cA.
So, we ask the question: what is the power radiated by the hole? This is just the total energy ejected per second:
$$ P = \frac{1}{4}\,cA\,\frac{U}{V} $$
Hence, inserting our expression for the energy density:
$$ P = \frac{\pi^2k_B^4}{60\hbar^3c^2}\,A\,T^4 = \sigma AT^4 $$
Where we have arrived at Stefan's law, with his constant:
$$ \sigma = 5.67\times10^{-8}\ \text{W m}^{-2}\,\text{K}^{-4} $$
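Stefan's constant follows directly from the fundamental constants; a quick Python check of the expression above:

```python
import math

# σ = π² k_B^4 / (60 ħ³ c²)
hbar, c, kB = 1.0546e-34, 2.998e8, 1.381e-23
sigma = math.pi**2 * kB**4 / (60 * hbar**3 * c**2)
print("%.3e" % sigma)          # ≈ 5.67e-8 W m^-2 K^-4
```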
Now, how is this power distributed over wavelengths? This will lead us to the Planck distribution, and to being able to predict the CMB.
Aside: CMB. The Cosmic Microwave Background arose in the very early universe. When the temperature of the universe was so high that protons and electrons could not combine to form hydrogen, or any other elements, the ambient photons were constantly scattered by this hot plasma. As the universe cooled, hydrogen formed, and photons stopped being scattered by the protons and electrons. They had, however, been in thermal equilibrium with them. So, at the time of recombination (when hydrogen formed), the photons suddenly had nothing to scatter off, and they maintained the thermal spectrum from when they were in equilibrium. We are able to measure this background radiation.
The CMB is a perfect blackbody radiation distribution, at a temperature of 2.7 K. On a finer resolution, however (of order mK), we find anisotropies in the otherwise perfect blackbody spectrum. These anisotropies give information about the density of the very first matter. Initial density fluctuations from recombination have left their mark in the form of the CMB anisotropies, and from this we can find information about the early universe.
7.2 Spectral Energy Density
We define the spectral energy density u(ε), the energy per unit volume per unit energy interval. That is to say:
$$ u(\epsilon)\,d\epsilon = 2\,\frac{dn}{d\epsilon}\,\langle n\rangle\,\frac{\epsilon}{V}\,d\epsilon \qquad (7.3) $$
Where the factor 2 comes from there being 2 polarisation states.
Now, we can find u(λ) by changing variables. We have that ε = hν = hc/λ. Thus:
$$ d\epsilon = -\frac{hc}{\lambda^2}\,d\lambda $$
Thus, inserting these expressions into (7.3) results in:
$$ \frac{U}{V} = \int_0^\infty \frac{1}{e^{hc/\lambda k_BT}-1}\left(\frac{hc}{\lambda}\right)^3\frac{1}{(\hbar c)^3\pi^2}\,\frac{hc}{\lambda^2}\,d\lambda $$
Where the minus sign has been absorbed by reversing the limits of integration. If this is now compared with (7.4), we find (after cleaning up the above):
$$ u(\lambda) = \frac{8\pi hc}{\lambda^5}\,\frac{1}{e^{hc/\lambda k_BT}-1} $$
Thus, we have derived the energy density at a particular wavelength: u(λ). This is known as the Planck formula. If we let λ → ∞, then we can Taylor expand the exponential, and we end up with the classical Rayleigh-Jeans limit:
$$ u(\lambda) \approx \frac{8\pi hc}{\lambda^5}\,\frac{k_BT}{hc/\lambda} = \frac{8\pi k_BT}{\lambda^4} $$
Notice, for this formula (which is purely classical) there is a huge problem: it predicts u(λ → 0) = ∞. So, an infinite energy density at zero wavelength. This is, of course, ridiculous, and is known as the UV catastrophe.
We can find the wavelength which has maximum power associated with it; that is, a turning point of the u(λ) curve:
$$ \frac{du}{d\lambda} = 0 $$
This is known as Wien's displacement law. However, to actually differentiate u(λ) gets tedious, so we apply a trick. Let:
$$ u \equiv \frac{f(x)}{\lambda^5}, \qquad x \equiv \lambda T $$
Hence, if we write:
$$ \frac{du}{d\lambda} = \frac{d}{d\lambda}\frac{f(x)}{\lambda^5} = \frac{1}{\lambda^5}\frac{df}{dx}\frac{dx}{d\lambda} - \frac{5}{\lambda^6}f(x) = \frac{T}{\lambda^5}\frac{df}{dx} - \frac{5}{\lambda^6}f(x) = 0 $$
Multiplying through by λ⁶ and using x = λT:
$$ x\frac{df}{dx} = 5f(x) $$
Hence, the solution to this equation is x = constant; that is, λT = constant.
Therefore, we have derived that the wavelength with maximum power ascribed to it, at a particular temperature, can be found from λT = constant. Thus, for two temperatures T₁, T₂, their maximum powers are at wavelengths λ₁, λ₂; if T₁ < T₂, then λ₁ > λ₂.
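The constant itself can be found numerically. Writing y = hc/(λk_BT), maximising u(λ) reduces to solving 5(1 − e^{−y}) = y (this reformulation is not in the notes, but follows from the Planck formula above); a simple fixed-point iteration converges quickly:

```python
import math

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

# Solve 5(1 - e^{-y}) = y by fixed-point iteration, starting near 5.
y = 5.0
for _ in range(50):
    y = 5.0 * (1.0 - math.exp(-y))

wien = h * c / (y * kB)        # the constant in λ_max T = const
print(round(y, 3))             # ≈ 4.965
print("%.2e" % wien)           # ≈ 2.9e-3 m K
```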
Sometimes we want u(ω)dω, the spectral distribution in terms of angular frequency. We have that ε = ℏω, and thus:
$$ u(\omega)\,d\omega = 2\,\frac{dn}{d\omega}\,\frac{\hbar\omega}{V}\,\langle n(\omega)\rangle\,d\omega $$
So, it remains to calculate the density of states in ω-space:
$$ \frac{dn}{d\omega} = \frac{dn}{dk}\,\frac{dk}{d\omega} $$
Using the relation ω = ck, we can then write:
$$ \frac{dn}{d\omega} = \frac{dn}{dk}\,\frac{1}{c} = \frac{Vk^2}{2\pi^2}\,\frac{1}{c} = \frac{V\omega^2}{2\pi^2c^3} $$
Therefore, putting everything in:
$$ u(\omega)\,d\omega = 2\,\frac{V\omega^2}{2\pi^2c^3}\,\frac{1}{e^{\hbar\omega/k_BT}-1}\,\frac{\hbar\omega}{V}\,d\omega $$
Cleaning up:
$$ u(\omega)\,d\omega = \frac{\hbar\omega^3}{\pi^2c^3\left(e^{\hbar\omega/k_BT}-1\right)}\,d\omega $$
Now, let us leave that alone, and calculate the pressure due to a Bose (photon) gas in thermal equilibrium. To do so, we calculate S, and differentiate it to get p. Recall that we have previously derived:
$$ S = k_B\ln\mathcal{Z} - \sum_{\text{states}}\frac{e^{(\mu N_s-E_s)/k_BT}}{\mathcal{Z}}\,\frac{\mu N_s-E_s}{T} $$
Where the sum is over all states of the system. We have also derived:
$$ \ln\mathcal{Z} = -\sum_i \ln\left(1-e^{-(\epsilon_i-\mu)/k_BT}\right) $$
Now, we have that μ = 0. We hence see that the expression for S simplifies somewhat:
$$ S = k_B\ln\mathcal{Z} + \sum_{\text{states}}\frac{e^{-E_s/k_BT}}{\mathcal{Z}}\,\frac{E_s}{T} $$
Notice that the sum is the same as writing (1/T)Σₛ pₛEₛ, which is just U/T; and, since the probabilities are normalised, the other term is simply k_B ln Z. Hence S = k_B ln Z + U/T.
The expression for ln Z can be made continuous via:
$$ \ln\mathcal{Z} = -\int_0^\infty \frac{dn}{d\epsilon}\,\ln\left(1-e^{-\epsilon/k_BT}\right)d\epsilon $$
Which we evaluate:
$$ \ln\mathcal{Z} = -\frac{V}{(\hbar c)^3\pi^2}\int_0^\infty \epsilon^2\ln\left(1-e^{-\epsilon/k_BT}\right)d\epsilon = -\frac{V(k_BT)^3}{(\hbar c)^3\pi^2}\int_0^\infty X^2\ln\left(1-e^{-X}\right)dX $$
The remaining integral is −π⁴/45, so:
$$ k_B\ln\mathcal{Z} = \frac{\pi^2Vk_B(k_BT)^3}{45(\hbar c)^3} = \frac{1}{3}\frac{U}{T}, \qquad\text{using}\qquad \frac{U}{V} = \frac{\pi^2(k_BT)^4}{15(\hbar c)^3} $$
Thus, we see that S = U/T + U/3T = (4/3)aVT³, where a is defined by U = aVT⁴. Therefore, eliminating T in favour of U:
$$ S = \frac{4}{3}aV\left(\frac{U}{aV}\right)^{3/4} = \frac{4}{3}a^{1/4}U^{3/4}V^{1/4} $$
$$ \left(\frac{\partial S}{\partial V}\right)_U = \frac{1}{3}a^{1/4}U^{3/4}V^{-3/4} = \frac{1}{3}a\left(\frac{U}{aV}\right)^{3/4} = \frac{1}{3}aT^3 $$
$$ p = T\left(\frac{\partial S}{\partial V}\right)_U = \frac{1}{3}aT^4 = \frac{1}{3}\frac{U}{V} $$
Hence, we have that the pressure exerted by a gas of photons is one third of the energy-density of the photons; which is, incidentally, identical to the result found for the pressure due to ultra-relativistic electrons (fermions).
Now, for adiabatic expansions, we have that dS = 0. Classically, this corresponded to the result pV^γ = const. Here, however, if we look at the expression for S and set it equal to a constant, we have:
$$ VT^3 = \text{const} $$
For a photon gas.
We've thus far been discussing a gas of photons by specifying the occupancies of each energy level: systems were determined by the set {nᵢ}, where nᵢ was the number of photons in energy level εᵢ. Thus, we were able to compute the average internal energy of the system via:
$$ U = \sum_i \langle n_i\rangle\,\epsilon_i \approx \int_0^\infty \epsilon\,\frac{dn}{d\epsilon}\,\langle n\rangle\,d\epsilon $$
We can treat the lattice vibrations of a solid in exactly the same way, in terms of phonons: a phonon is a sound particle, where a photon was a light particle.
For a cube of some solid, with N atoms on some periodic lattice, there will be 3N normal modes. This is the total number of frequencies the system is allowed to vibrate at; such is the definition of a normal mode.
So, we have nᵢ phonons in energy level εᵢ; hence an energy nᵢℏωᵢ in oscillator i. Just as we had no need (or idea) of a fixed number of photons in a system, the number of phonons in a system is also not fixed; thus μ = 0 for lattice vibrations. We also use Bose-Einstein statistics.
In Einstein's model of a solid, which we discussed at the start of the course, it was assumed that all 3N modes oscillate with the same frequency; thus ω₁ = ω₂ = ... = ω_{3N}. So, for an Einstein solid:
$$ U_E = \sum_{i=1}^{3N}\langle n_i\rangle_{BE}\,\epsilon_i = \sum_{i=1}^{3N}\frac{\hbar\omega}{e^{\hbar\omega/k_BT}-1} = \frac{3N\hbar\omega}{e^{\hbar\omega/k_BT}-1} $$
Which is a result that previously took us a lot longer to derive, as we previously had to physically count all available states. From this, we can just differentiate w.r.t. T to find the heat capacity.
Now, Debye realised that the 3N modes do not oscillate with a single frequency, as Einstein had assumed; rather, the allowed frequencies are those of harmonic waves in a box. Thus, we have wavenumbers kᵢ = nᵢπ/L, and so we are able to write down a density of states. We use the relation k = ω/u, where u is the speed of wave propagation, which is the speed of phonons, which is the speed of sound. Thus:
$$ \frac{dn}{d\omega} = \frac{dn}{dk}\,\frac{dk}{d\omega} = \frac{Vk^2}{2\pi^2}\,\frac{1}{u} = \frac{V\omega^2}{2\pi^2u^3} $$
Now, the multiplicity factor we use is 3. This comes from the consideration that we are able to excite 2 transverse and 1 longitudinal sound waves in a 3D cube. Thus, the usable density of states is:
$$ \frac{dn}{d\omega} = 3\,\frac{V\omega^2}{2\pi^2u^3} $$
So now, let's calculate the average energy of a Debye solid:
$$ U_D = \int_0^{\omega_D}\hbar\omega\,\frac{dn}{d\omega}\,\langle n\rangle\,d\omega $$
Now, notice: for a photon gas, the upper limit was infinity. That was because there was an (essentially) infinite range of frequencies open to the system. We cannot assume this for lattice vibrations. So, we assume that only ω < ω_D are excited; that is, λ > λ_D. So, let's try to compute this cut-off.
Suppose atoms are spaced by an amount d. Then the shortest wavelength that can be fully supported by the atoms is λ_D ≈ 2d. If we try to fit more than one wave between atoms, nothing is excited, as there are no atoms there; it is pointless. Therefore, we are able to estimate the shortest wavelength that will be excited. The atomic spacing d is of the order n^{−1/3}, the inverse cube-root of the number density. We see that ω is of the order u/λ (up to 2π). Hence:
$$ \omega_D \approx u\left(\frac{N}{V}\right)^{1/3} $$
We can calculate this exactly, by noting that there should be exactly 3N modes. So:
$$ 3N = \int_0^{\omega_D}\frac{dn}{d\omega}\,d\omega = \frac{3V}{2\pi^2u^3}\,\frac{\omega_D^3}{3} $$
$$ \omega_D = u\left(6\pi^2\frac{N}{V}\right)^{1/3} $$
Therefore, we have an expression for the Debye cut-off frequency ω_D. Notice, it's in good agreement with the previous estimate. So, going back to the calculation of the internal energy of a Debye solid, inserting the expressions for the density of states and the BE distribution:
$$ U = \frac{3V\hbar}{2\pi^2u^3}\int_0^{\omega_D}\frac{\omega^3\,d\omega}{e^{\hbar\omega/k_BT}-1} $$
This integral is hard to do, and we are unable to express it in the dimensionless terms we have used for Bose gases. So, let's compute the heat capacity directly:
$$ C_D = \frac{\partial U}{\partial T} = \frac{3V\hbar}{2\pi^2u^3}\int_0^{\omega_D}\omega^3\,\frac{e^{\hbar\omega/k_BT}}{\left(e^{\hbar\omega/k_BT}-1\right)^2}\,\frac{\hbar\omega}{k_BT^2}\,d\omega $$
Now, put:
$$ x \equiv \frac{\hbar\omega}{k_BT} $$
Then:
$$ C_D = \frac{3V\hbar}{2\pi^2u^3}\,\frac{(k_BT)^4}{\hbar^4}\,\frac{1}{T}\int_0^{x_D}\frac{x^4e^x}{(e^x-1)^2}\,dx $$
This can be cleaned up, after a lot of work, to:
$$ C_D = 3Nk_B\,\frac{3}{x_D^3}\int_0^{x_D}\frac{x^4e^x}{(e^x-1)^2}\,dx $$
Let us now look at the high- and low-temperature limits.
For high T, we see that x is small. Hence, the exponential in the denominator is expanded as e^x ≈ 1 + x, and the exponential in the numerator just to unity. Thus, the integral itself is:
$$ \int_0^{x_D}\frac{x^4}{(x+1-1)^2}\,dx = \int_0^{x_D}\frac{x^4}{x^2}\,dx = \frac{1}{3}x_D^3 $$
Hence, we see:
$$ C_D \approx 3Nk_B\,\frac{3}{x_D^3}\,\frac{x_D^3}{3} = 3Nk_B $$
Therefore, the high-temperature behaviour of the heat capacity is a constant: C_D = 3Nk_B.
At low temperatures, we see that x_D → ∞, so the upper limit of the integral goes to infinity. We see that the e^x/(e^x−1)² term goes to e^{−x} for large x. Therefore, the integral just becomes the constant:
$$ \int_0^\infty x^4e^{-x}\,dx $$
Hence, through the 1/x_D³ ∝ T³ prefactor, we see that the low-temperature heat capacity goes as T³. We find that experimental data sits on the Debye curve, as opposed to that predicted with Einstein's approximation.
Figure 2: Heat capacity predictions of Debye and Einstein. Data will sit on Debye's curve, as opposed to Einstein's.
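Both limits can be checked by integrating numerically; a Python sketch of the dimensionless heat capacity C_D/(3Nk_B) as a function of x_D = ℏω_D/k_BT:

```python
import math

# C_D/(3 N k_B) = (3/x_D³) ∫₀^{x_D} x⁴ eˣ/(eˣ-1)² dx, by midpoint rule.
def debye_c(xD, steps=20000):
    h = xD / steps
    total = 0.0
    for i in range(1, steps + 1):
        x = (i - 0.5) * h          # midpoint avoids the x = 0 endpoint
        ex = math.exp(x)
        total += x**4 * ex / (ex - 1.0)**2 * h
    return 3.0 * total / xD**3

print(round(debye_c(0.01), 3))     # high T (small x_D): -> 1, i.e. 3Nk_B
print(debye_c(50.0))               # low T (large x_D): small, the T³ law
```

At large x_D the result tends to (3/x_D³) × 4π⁴/15, i.e. the T³ behaviour derived above.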
A Colloquial Summary
The integral for N can be done, and then solved for the chemical potential μ. We must specify whether to use the non-relativistic or ultra-relativistic expression, ε = ℏ²k²/2m or ε = ℏck. We now consider fermions and bosons separately.
Fermions Fermions have anti-symmetric wavefunctions, which has the consequence of excluding there ever being more than one fermion in a single-particle energy state.
We have considered the examples of electrons in metals and white dwarf stars.
We can use an approximation at T = 0 to rewrite the particle distribution function. We say that the occupancy of (almost) all single-particle states below some ε = ε_F is total, and that (almost) no particles have energies above ε_F. Hence, the integral for the number of particles N has upper limit k_F, with ⟨n⟩ = 1 in this range. From this, we are able to derive equations for ε_F and k_F, and use ε_F = k_BT_F to find the Fermi temperature.
For T ≠ 0, we imagine the step function crumbling over, to give the actual distribution. We use this idea in calculating low-temperature corrections to U. We say that the particles within k_BT of ε_F move up in energy by an amount k_BT. This allows us to write:
$$ U \approx N\,\frac{k_BT}{\epsilon_F}\,k_BT $$
That the internal energy is now dependent upon T gives the experimentally verified result of linear heat capacity. Note, at T = 0, U is independent of T, and C = 0.
Bosons Bosons have symmetric wavefunctions, and hence have no restriction on the number in any single-particle energy state.
We see that if N is fixed then, as T falls, μ rises. We have μ < 0; however, if T keeps decreasing, μ would go above zero (where zero is defined as the lowest energy state). When this happens, there would be a negative particle occupancy. So, to get around this, we say that below some critical temperature, Tc, μ = 0. We say that Tc is the minimum temperature for which we can ignore the ground-state occupation. We also find that the ground state is macroscopically occupied by particles. In fact, we define:
$$ N \equiv \int_0^\infty \frac{dn}{d\epsilon}\,\frac{1}{e^{\epsilon/k_BT_c}-1}\,d\epsilon $$
Now, for T < Tc, we are able to say that N = N₀ + N_e, where N₀ is the number in the ground state and N_e is the number of particles in excited states, where:
$$ N_e = \int_0^\infty \frac{dn}{d\epsilon}\,\frac{1}{e^{\epsilon/k_BT}-1}\,d\epsilon $$
Using these two integrals, we are able to calculate the number of particles in the ground state. We are also able to calculate Tc.
If we have a gas of photons in thermal equilibrium, we cannot expect the particle number to stay constant; however, we still say that μ = 0, from considerations of the entropy in equilibrium. This modifies the BE distribution to give us that for a gas of photons. We are easily able to compute the energy density of such a gas, and hence the power output per unit area, and can thus derive Stefan's law, P = σT⁴. The distribution of this power over wavelengths (the spectral distribution) can thus be calculated. We write u(λ)dλ as the energy per unit volume in a wavelength interval. We can write the same thing in ε:
$$ u(\epsilon)\,d\epsilon = \frac{1}{V}\,\epsilon\,\frac{dn}{d\epsilon}\,\langle n\rangle\,d\epsilon $$
Where we write ε = hν and the particle distribution is the modified BE distribution with μ = 0. Now, we actually have that:
$$ \frac{U}{V} = \int u(\epsilon)\,d\epsilon $$
As the energy density. We are able to calculate the pressure exerted by a photon gas as being (1/3)(U/V), which is the same as that due to ultra-relativistic electrons.
B Calculating the Density of States
The number of states dn of spinless particles in a small 3D annulus in k-space is given by the volume of the annulus (of which we take 1/8, since only positive kᵢ are allowed) divided by the volume taken up by one state. Thus:
$$ dn = \frac{\frac{1}{8}\,4\pi k^2\,dk}{(\pi/L)^3} = \frac{Vk^2}{2\pi^2}\,dk $$
We have used that L³ = V. Hence, rearranging, we have the density of states, in k-space, of spinless particles:
$$ \frac{dn}{dk} = \frac{Vk^2}{2\pi^2} \qquad (B.1) $$
This will be our starting point for all subsequent calculations. To get the density of states in energy-space, we require a relationship between the wavenumber k and energy ε, and hence we must specify whether the gas is non-relativistic or ultra-relativistic (massless).
If the particles are fermions and carry an intrinsic spin s, then this is modified to:
$$ \frac{dn}{dk} = (2s+1)\,\frac{Vk^2}{2\pi^2} $$
If the particles are photons, they carry a spin of 1. However, we actually use a factor of 2, as there are two polarisation states of a photon associated with each state. Hence, for photons:
$$ \frac{dn}{dk} = 2\,\frac{Vk^2}{2\pi^2} = \frac{Vk^2}{\pi^2} $$
We proceed with all subsequent calculations assuming spinless particles. If the gas is non-relativistic:
$$ \epsilon = \frac{\hbar^2k^2}{2m} \qquad (B.2) $$
Hence, rearranging:
$$ k = \left(\frac{2m\epsilon}{\hbar^2}\right)^{1/2} $$
Therefore:
$$ \frac{dk}{d\epsilon} = \left(\frac{2m}{\hbar^2}\right)^{1/2}\frac{1}{2\sqrt{\epsilon}} $$
Now, we must also put dn/dk in terms of ε, which must be done using the non-relativistic expression (B.2):
$$ \frac{dn}{dk} = \frac{Vk^2}{2\pi^2} = \frac{V}{2\pi^2}\,\frac{2m\epsilon}{\hbar^2} $$
Therefore, putting this all together:
$$ \frac{dn}{d\epsilon} = \frac{dn}{dk}\,\frac{dk}{d\epsilon} = \frac{V}{2\pi^2}\,\frac{2m\epsilon}{\hbar^2}\left(\frac{2m}{\hbar^2}\right)^{1/2}\frac{1}{2\sqrt{\epsilon}} = \frac{V}{4\pi^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\sqrt{\epsilon} $$
If the gas is ultra-relativistic, we must use the relation ε = pc, hence:
$$ \epsilon = \hbar ck \qquad (B.4) $$
Following the same procedure as above:
$$ \frac{dn}{d\epsilon} = \frac{V\epsilon^2}{2\pi^2(\hbar c)^3} \qquad (B.5) $$
C Deriving FD & BE Distributions
That is, nᵢ is the number of crosses on a particular energy level. Hence, the Gibbs distribution becomes:
$$ P(N,U) = \frac{e^{(\mu\sum_in_i-\sum_in_i\epsilon_i)/k_BT}}{\mathcal{Z}} = \frac{e^{-\sum_in_i(\epsilon_i-\mu)/k_BT}}{\mathcal{Z}} $$
The grand partition function Z is the sum over nᵢ taking on all values (i.e. there being 1, 2, 3, ... particles in energy level 1, and again for all other energy levels). That is:
$$ \mathcal{Z} = \sum_{\{n_i\}}e^{-\sum_in_i(\epsilon_i-\mu)/k_BT} = \sum_{\{n_i\}}e^{-[n_1(\epsilon_1-\mu)+n_2(\epsilon_2-\mu)+n_3(\epsilon_3-\mu)+\dots]/k_BT} = \sum_{\{n_i\}}\prod_ie^{-n_i(\epsilon_i-\mu)/k_BT} $$
So, the probability of finding the system with N particles and internal energy U is the product of the probabilities of finding nᵢ particles in energy level εᵢ.
So, the average number of particles in energy level εᵢ is given by:
$$ \langle n(\epsilon_i)\rangle = \sum_{n_i=0,1,2,\dots}P(n_i,\epsilon_i)\,n_i $$
Where:
$$ P(n_i,\epsilon_i) = \frac{e^{-n_i(\epsilon_i-\mu)/k_BT}}{\sum_{n_i}e^{-n_i(\epsilon_i-\mu)/k_BT}} $$
Let us suppose that we only allow either 0 or 1 particles in each energy state (i.e. the fermionic case); hence:
$$ \langle n(\epsilon_i)\rangle = \sum_{n_i=0,1}P(n_i,\epsilon_i)\,n_i = 0\cdot P(0,\epsilon_i) + 1\cdot P(1,\epsilon_i) = \frac{e^{-(\epsilon_i-\mu)/k_BT}}{e^0+e^{-(\epsilon_i-\mu)/k_BT}} = \frac{1}{e^{(\epsilon_i-\mu)/k_BT}+1} $$
We have thus derived the Fermi-Dirac distribution.
Suppose instead that we allow any number of particles in each energy level; that is, nᵢ can take any value 0, ..., ∞:
$$ \langle n(\epsilon_i)\rangle = \sum_{n_i=0}^\infty P(n_i,\epsilon_i)\,n_i = \frac{0+e^{-(\epsilon_i-\mu)/k_BT}+2e^{-2(\epsilon_i-\mu)/k_BT}+\dots}{1+e^{-(\epsilon_i-\mu)/k_BT}+e^{-2(\epsilon_i-\mu)/k_BT}+\dots} $$
Here, we have noted that the denominator (the grand partition function) is a common factor in all terms. To evaluate this, we see that, with a ≡ e^{−(εᵢ−μ)/k_BT}, it is actually:
$$ \frac{a+2a^2+3a^3+\dots}{1+a+a^2+a^3+\dots} $$
Now, the sum
$$ \sum_{i=0}^\infty a^i = \frac{1}{1-a} $$
is a standard result; differentiating it with respect to a and multiplying by a gives the numerator, a/(1−a)². We use this to find:
$$ \langle n(\epsilon_i)\rangle = \frac{a/(1-a)^2}{1/(1-a)} = \frac{a}{1-a} = \frac{1}{e^{(\epsilon_i-\mu)/k_BT}-1} $$
We have thus derived the Bose-Einstein distribution.
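The series manipulation can be verified directly; a Python sketch with an arbitrary, hypothetical value of (ε − μ)/k_BT:

```python
import math

# With a = e^{-(ε-μ)/k_BT}, check that
# (a + 2a² + 3a³ + ...)/(1 + a + a² + ...) equals 1/(e^{(ε-μ)/k_BT} - 1).
beta_eps_mu = 0.7              # hypothetical value of (ε - μ)/k_BT > 0
a = math.exp(-beta_eps_mu)

num = sum(n * a**n for n in range(1, 200))   # truncated a/(1-a)²
den = sum(a**n for n in range(0, 200))       # truncated 1/(1-a)
series = num / den

closed = 1.0 / (math.exp(beta_eps_mu) - 1.0)
print(series, closed)          # the two agree
```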